the problem of estimating parameters of selected populations has wide practical applications in estimation of experimental data in agriculture , industry and medicine .some of the real world applications of this theory are the problem of estimating the average yield of a selected variety of plant with maximum yield ( kumar and kar , 2001 ) , estimating the average fuel efficiency of the vehicle with minimum fuel consumption ( kumar and gangopadhyay , 2005 ) and selecting the regimen with maximal efficacy or minimal toxicity from a set of regimens and estimating a treatment effect for the selected regimen ( sill and sampson , 2007 ) .the problem of estimation after selection has received considerable attention by many researches in the past three decades .interested readers are referred to , for example , gibbons et al .( 1977 ) for more details .some other contributions in this area include sarkadi ( 1967 ) , dahiya ( 1974 ) , kumar and kar ( 2001 ) , misra et al .( 2006a , b ) , kumar et al .( 2009 ) and nematollahi and motammed - shariati ( 2012 ) . for a summary of results , as well as a list of references until 2006 ,see misra et al .( 2006 a , b ) . in this paper, we introduce and develop the problem of estimation of the parameters of a dynamically selected population from a sequence of infinite populations which is not studied in the literature , according to the best of our knowledge .let be a sequence of random variables where is drawn from population with corresponding cumulative distribution function ( cdf ) and probability density function ( pdf ) .the traffic volume trend , daily temperatures , sequences of stock quotes , or sequences of estimators of interior water volume in a dam reservoir are examples of such sequences . suppose we want to estimate the parameter of the population corresponding to the largest value of the sequence yet seen , that is }^u=\theta_{t_n},\ ] ] where , with probability one , and for or similarly the parameter of the population corresponding to the smallest value of the sequence yet seen , that is }^l=\theta_{t'_n},\ ] ] where , with probability one , and for we want to estimate }^u ] .this happens for example , when we want to estimate the largest value of traffic volume or stock quotes yet seen , the temperature of the coldest day or the largest volume of the coming water into the dam reservoir , up to now . for simplicity , we denote }^u ] hereafter .we may write }=\sum_{j = n}^{\infty}\theta_ji_j(x_1,x_2,\ldots),\end{aligned}\ ] ] where the statistics and are called upper and lower records , respectively . in the sequence ,the sequences of partial maxima and upper record statistics are defined by and , respectively , where with probability 1 , and for .the record statistics could be viewed as the dynamic maxima of the original random variables .so , we call the problem of estimating } ] under the two models 1 and 2 , presented below .* model 1 : * let be a sequence of independent absolutely continuous random variables with pdf where is a complete sufficient statistic with the gamma)-distribution .some well - known members of the above family are : \1 .exponential( ) , with , and ; \2 .gamma( ) , with and ; \3 .normal(0, ) , with , , and ; \4 .inverse gaussian( ) , with , , and ; \5 .weibull( ) , with known , , , and ; \6 .rayleigh( ) , with , , and . 
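To make these definitions concrete, the following sketch (Python, with hypothetical parameter values) simulates a short sequence of independent exponential populations with different means, extracts the partial maxima, the upper record times and record values, and reports the dynamically selected parameter, that is, the parameter of the population attaining the running maximum.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical means theta_j of a non-identically distributed exponential sequence
theta = np.array([1.0, 2.5, 0.7, 3.2, 1.8, 4.0, 2.0, 5.5, 1.1, 3.9])
x = rng.exponential(scale=theta)            # X_j drawn from population j

partial_max = np.maximum.accumulate(x)      # M_n = max(X_1, ..., X_n)

# upper record times and record values: X_j is a record if it exceeds all earlier observations
record_times = [0]                          # 0-based indices; the first observation is trivially a record
for j in range(1, len(x)):
    if x[j] > x[record_times[-1]]:
        record_times.append(j)
record_values = x[record_times]

# dynamically selected parameter theta_[n]: parameter of the population
# holding the running maximum after n = len(x) observations
theta_selected = theta[np.argmax(x)]

print("partial maxima: ", np.round(partial_max, 3))
print("record times:   ", record_times)
print("record values:  ", np.round(record_values, 3))
print("theta_[n]:      ", theta_selected)
```

In this non-identically distributed setting the realized value of the selected parameter changes whenever a new record is set by a different population, which is exactly the dynamic selection described above.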
to estimate } ] under the gamma()-distribution with pdf by using the u - v method of robbins ( 1988 ) , we have the following lemma ( see also vellaisamy and sharma , 1989 ) .[ lem1 ] let be a sequence of independent random variables with densities defined in ( [ gamma ] ) .let be a real - valued function such that for \(ii ) then the functions satisfy = e_{{\boldsymbol\theta}}\left [ \theta_j u_j(\mathbf{x})\right],~j=1,2,\cdots.\end{aligned}\ ] ] the next result obtains the unbiased estimator of } ] , under sel function , which satisfies with } ] , under sel function , based on given by where is defined in .thus , to find an unbiased estimator of } ] , under the sel function , for the general family , can be obtained as where is the upper record value of the sequence . for a monotone function ( available in all of the above examples , except in the normal distribution ) , can be obtained simply as for an increasing and as for a decreasing .for example , for the rayleigh()-distribution , an unbiased estimator for } ] , under sel function , for the general family .assume to be known and let be the cumulative hazard function of .for the general family , an unbiased estimator of } ] is given by similarly , for the family , an unbiased estimator for } ] are indeed umvu estimators of } ] which depend on only through and , i.e. .then , we have the following results , under models 1 and 2 , respectively . [ risk ] under the model 1 and the sel function ,an unbiased estimator of the risk of an estimator of } ] is from lemma 2 and using similar argument as in the proof of theorem [ risk ] , we have }^2)&=\sum_{j = n}^{\infty}\theta_j^2\e(i_j(\mathbf(x)))\\ & = \sum_{j = n}^{\infty}\theta_j\e\left[\int_{-\infty}^{x_j}h(t)i_j(x_1,\ldots , x_{j-1},t , x_{j+1},\ldots)\;{\rm d}t\right]\\ & = \sum_{j = n}^{\infty}\e\left[\int_{-\infty}^{x_j}h(s)\int_{-\infty}^{s}h(t)i_j(x_1,\ldots , x_{j-1},t , x_{j+1},\ldots)\;{\rm d}t\;{\rm d}s\right]\\ & = \e\left[\int_{u_{n-1}}^{u_n}h(s)\int_{u_{n-1}}^{s}h(t)\;{\rm d}t\;{\rm d}s\right]\\ & = \e\left[\frac{h^2(u_n)-h^2(u_{n-1})}{2}-h(u_{n-1})(h(u_n)-h(u_{n-1}))\right]\\ & = \e\left[\frac{(h(u_n)-h(u_{n-1}))^2}{2}\right].\end{aligned}\ ] ] furthermore }v(u_{n},u_{n-1}))&=\sum_{j = n}^{\infty}\theta_j\e(i_j(\mathbf{x})v(x_j , u_{n-1}))\\ & = \sum_{j = n}^{\infty}\e\left(\int_{0}^{x_j}h(t)v(t , u_{n-1})\right.\\ & \qquad\times\left.i_j(x_1,\ldots , x_{j-1},t , x_{j+1},\ldots)\;{\rm d}t\right)\\ & = \e\left(\int_{u_{n-1}}^{u_n}h(t)v(t , u_{n-1})\;{\rm d}t\right).\end{aligned}\ ] ] this completes the proof . immediate corollary of theorem [ risk2 ] is as follows .[ c2]for the general family and under the sel function , + ( i ) an unbiased estimator of the risk of is ( ii ) the risk of is })=\e(\theta_{[n]}^2).\ ] ] the results for the general family can be obtained by replacing with in theorem [ risk2 ] and corollary [ c2 ] . since is a complete sufficient statistic for , the above unbiased estimators of }) ] .the following result presents the distribution of the unbiased estimator in the family .[ expnons ] in the general family , the following identities hold : \(i ) for every and , \(ii ) for every , and , let and , .we only prove part ( i ) . part ( ii )is proved in a simillar way .using the fact that and the lack of memory property of the exponential distribution , which is the required result. 
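The role played by the lack-of-memory property can be checked numerically in the simplest special case. For an iid exponential sequence with common mean θ (so that the selected parameter equals θ for every n), memorylessness implies that the spacing between consecutive upper records is again exponential with mean θ and is therefore unbiased for θ. The following Monte Carlo sketch verifies this classical fact; it is an illustration only, not the paper's general estimator for the full gamma family, whose exact form depends on the expressions above.

```python
import numpy as np

rng = np.random.default_rng(1)

def record_spacing(theta, n, seq_len=5000):
    """Spacing U_n - U_{n-1} between the (n-1)-th and n-th upper records (n >= 2)
    of an iid exponential(theta) sequence; None if fewer than n records occur."""
    x = rng.exponential(scale=theta, size=seq_len)
    records = [x[0]]                      # records[k] = U_{k+1}
    for v in x[1:]:
        if v > records[-1]:
            records.append(v)
            if len(records) >= n:
                break
    if len(records) < n:
        return None
    return records[n - 1] - records[n - 2]

theta, n, reps = 2.0, 3, 2000
spacings = [s for s in (record_spacing(theta, n) for _ in range(reps)) if s is not None]
print("mean record spacing:", np.mean(spacings))   # should be close to theta = 2.0
```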
the general family with pdf , we have thus , a natural estimator for } ] .so a risk comparison of the natural estimators with umvues of } ] , with that of the natural estimator }^2=\frac{u_n}{p} ] are less than those of }}^2 ] , under the white noise model 3 ..simulated bias and risk of the umvue and the natural estimator of }$ ] under three different models from gamma distribution for different values of and .[sim ] [ cols="^,^,^,^,^,^",options="header " , ] to generate random variables identically distributed as , one may generate an iid sample form standard exponential , namely , , and return .table [ tct ] presents the simulated values of -critical values of , , for , and , which are generated using r.14.1 package with iterations .the hypothesis is rejected at level as for the rainfall data we obtain , which is less than .therefore , is not rejected in favor of at level .the problem of estimating parameters of the dynamically selected populations can be extended to the bayesian context .moreover , the problem of unbiased estimation of the selected parameters under other loss functions is of interest .the distributional models which are not members of studied families can be studied separately , specially the discrete distribution .another problem is to find the two stage ( conditionally ) unbiased estimators of the parameters of the dynamically selected populations .these problems are treated in an upcoming work , to appear in subsequent papers .the authors thank the anonymous referee for his / her useful comments and suggestions on an earlier version of this manuscript which resulted in this improved version .doostparast , m. and emadi m. ( 2013 ) .evidential inference and optimal sample size determination on the basis of record values and record times under random sampling scheme , _ statistical methods & applications _ , doi:10.1007/s10260 - 012 - 0228-x .misra n. , vander meulen e.c . and branden k.v .( 2006a ) . on estimating the scale parameter of the selected gamma population under the scale invariant squared error loss function ._ journal of computational and applied mathematics _ ,* 186 * , 268 - 282 .misra n. vander meulen e.c . and brandan k.v .( 2006b ) . on some inadmissibility results for the scale parameters of selected gamma populations . _ journal of statistical planning and inference _ , * 136 * , 2340 - 2351 .nematollahi , n. and motammed - shariati , f. ( 2012 ) .estimation of the parameter of the selected uniform population under the entropy loss function , _ journal of statistical planning and inference _ , * 142 * , 2190 2202 .salehi m. ahmadi j. and balakrishnan , n. ( 2013 ) .prediction of order statistics and record values based on ordered ranked set sampling , _ journal of statistical computation and simulation _ , doi = 10.1080/00949655.2013.803194 .sill , m. w. and sampson , a. r. ( 2007 ) .extension of a two - stage conditionally unbiased estimator of the selected population to the bivariate normal case , _ communications in statistics - theory and methods _ , * 36 * , 801 813 .
We introduce the problem of estimating the parameters of a dynamically selected population in an infinite sequence of random variables, and apply it to statistical inference based on record values from a non-stationary scheme. We develop unbiased estimators of the parameters of the dynamically selected population and evaluate their risk. We compare them with natural estimators and obtain asymptotic results. Finally, we illustrate the applicability of the results using real data. *Keywords:* extreme value theory, general record models, partial maxima, Pfeifer model, selected population, uniformly minimum variance unbiased estimator.
in this paper , we study the temporal behavior of the distribution of stock prices for 24 stocks in the dow jones industrial average ( djia ) .this is done using a new method of measuring changes in the volatility and drifts of stocks with time . when this method is applied to time - series constructed from the daily close of stocks , changes as fast as one daycan be seen in both .given that it is not possible to accurately _ measure _ ( as oppose to _ predict _ ) intraday changes in the volatility using only daily - close data , for two of the 24 stocks we have been able to reach the maximum resolution ( known as the nyquist criteria ) of one day in the rate that the volatility can change , while for the great majority of the remaining stocks , we have come within one day of this maximum .we believe that this method can measure changes in the volatility and drift that occur during the trading day as well if intraday price data is used . buteven with only daily - close data , we have been extraordinarily successful at determining the temporal behavior of stocks in general , and of the volatility in particular , and in the process , we have furthered our understanding of the behavior of stock prices as a whole .we find that the stock prices of these 24 stocks can be well described by a stochastic process for which the volatility changes _ deterministically _ with time . on the one hand, this is a process where the yield at any one time is not correlated with the yield at any other time ; the process thus describes an efficiently priced stock . on the other hand , this is a process where the predicted kurtosis agrees with the sample kurtosis of the stock ; the process thus also provides a solution to the long standing problem of explaining how an efficiently priced stock can have a kurtosis that is so different from what is expected for a gaussian distribution .indeed , we find that abnormally large kurtoses are due solely to changes in the volatility of the stock with time .when this temporal behavior is accounted for in the daily yield , the kurtosis reduces dramatically in value , and now agrees well with model predictions .this finding is in agreement with rosenberg s ( 1972 ) observation that the kurtosis for nonstationary random variables is larger than than the kurtosis of individual random variables .we have also determined changes in the volatility of these stocks , and for three of the 24 stocks , variations of as fast as one day can be seen . for another 16 stocks ,this temporal resolution was two days or less , and for only five of the 24 stocks is this resolution longer than 2.5 days .the behavior of the drifts for all 24 stocks can also be determined using this method , and with the same resolution as their volatility .we find that the drift for the majority of the stocks is positive ; these drifts thus tend to augment the increase of the stock price caused by the random - walk nature of the stochastic process .this finding is not surprising , nor is it surprising that we find that the drift is much smaller than the volatility for all 24 stocks .what is surprising is that for three of the 24 stocks the drift is uniformly _negative_. for these stocks , the drift tends not to increase the stock price , but to depress it . that the stock price for these three stocks increase at all is because this drift is much smaller in the magnitude than the volatility . 
over the short term , growth in the prices of these stocksas they are for all 24 stocksis due to a random walk , and thus driven more by the volatility than the drift .indeed , this is the only reason that the prices of these stocks increase with time .finally , the distribution of the stock prices for the 24 djia stocks has been determined .when the temporal variation in the volatility is corrected for in the daily yield , we find that the resultant distribution for all but four of the stocks is described by a rademacher distribution with the probability that the yield increases on any one day being 1/2 .for the four other stocks , the distribution is described by a generalized rademacher distribution with the probability that the yield increases on any one day being slightly greater than the probability that it decreases .in 2005 , 403.8 billion shares were traded on the new york stock exchange ( nyse ) with a total value of 12 trillion dollars . at the nyse , traders , investors , and speculatorsbig and smallplace bets on the movement of stock prices , whether up or down .profits are made , or losses are reconciled , based on the changing price of the stock . as such , great effort is made to predict the movements of stock prices in the future , and thus much attentionwith attending analysisis focused on the price of stocks . in the cboe ,traders , investors , and speculators write or enter into contacts to purchase or sell a predetermined amount of stocks at a set time in the future .profits here are made , or losses reconciled , based on the degree of risk that the movement of the stock will be down when expected to be up , or up when expected to be down . here , it is not so much the price of the stock that matters .it is the amount of volatility in the stock , and predicting how stock prices may move in the future is much less important .indeed , the pricing of optionsthrough the black - scholes equation and its variantsis based on the argument that it is _ not _ possible to predict how the price of stocks will change in the future . in this pricing , it is taken for granted that the markets are efficient , and that earning returns which are in excess of the risk - free interest rate is not possible .all is random , and the increase in stock prices seen is due to a simple random walk with a ( small ) drift .great interest is thus paid in modeling the _ distribution _ of stock prices , and the application of these models to the pricing of options and derivatives .given the ] is the expectation value of over a gaussian distribution , and is the dirac delta function .we emphasize that while eq . may have a form that is similar to various stochastic volatility models of the stock market , for us is a _ deterministic _ function of time ; it does not have the random component that is inherent in stochastic volatility models . as usual , it is more convenient to work with ] . using eq . , we then conclude that = \sigma(t)^2\delta(t - t ' ) , \label{cont - auto}\ ] ] so that the autocorrelation function of the instantaneous yield vanishes unless ; the yield of the stock at any one time does not depend on the yield at any other time .our model thus describes a market for the stock that is efficient .this is to be expected . at each instant , , describes a gaussian distribution with drift , and volatility , , and it is well known that for a gaussian distribution the daily yield on any one day is not correlated with the daily yield on any other . 
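The statement that the yield at one time is uncorrelated with the yield at any other time can be checked directly on a daily-close series. The sketch below does this for a simulated price series (a stand-in for actual daily closes, with an assumed deterministic volatility shape) using the sample autocorrelation normalized by the series length, so that the lag-zero value equals the variance, as in the definition used later in the appendix.

```python
import numpy as np

rng = np.random.default_rng(2)

# stand-in for a daily-close series: geometric random walk with a slowly varying volatility
n_days = 2000
t = np.arange(n_days)
sigma = 0.01 * (1.0 + 0.5 * np.sin(2 * np.pi * t / 250))   # assumed deterministic sigma(t)
yields = 0.0002 + sigma * rng.standard_normal(n_days)       # daily yields
prices = 100 * np.cumprod(1 + yields)

daily_yield = np.diff(prices) / prices[:-1]

def autocorr(z, max_lag):
    """Sample autocorrelation divided by the series length: lag 0 equals the (biased) variance."""
    z = z - z.mean()
    n = len(z)
    return np.array([np.sum(z[: n - k] * z[k:]) / n for k in range(max_lag + 1)])

r = autocorr(daily_yield, max_lag=5)
print("lag 0 (variance):   ", r[0])
print("lags 1-5 / variance:", np.round(r[1:] / r[0], 4))   # should be close to zero
```

For an efficiently priced stock the ratios at nonzero lags should be statistically indistinguishable from zero.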
note that if the volatility was a function of as well as , or if it was itself a stochastic process , as it is taken to be in stochastic volatility models , we could not have moved the volatilities outside the expectation value to obtain eq . . in these cases , it is not clear whether the yield of the stock at any one time depends on the yield at any other time . formally , the solution to eq . is straightforward .if for all , divide through by , and then reparametize time by taking equation then simplifies to where and is still a gaussian random variable , but now in .equation is simply a stochastic process with drift and unit volatility ; its solution in terms of is well known .the solution to the original equation , eq . , can then be obtained , at least in principle , by integrating eq . , and then replacing with resulting function of . in practice , our task is much more difficult .we are _ not _ given a , and then asked to find the price , , of the stock at subsequent times .we are instead given a collection of stock prices collected over some length of time , and then asked to find the volatility .this is a much more difficult problem , but surprisingly , it is a solvable one , as we will see in the next subsection .the drift for the standardized yield in this subsection , we derive a recursions relation that is used to solve for the volatility as a function of time .this derivation is most conveniently done using a discretized version of the continuous stochastic process eq . considered above , and we consider as a continuous approximation to the discrete time series , , for , of stock prices collected at equal time intervals , ; this is usually taken as one trading day .the subscript enumerates the time step when the price of the stock was collected , and is an integer that runs from 0 to the total number of data points , . as such , , , , , and the volatility at .the instantaneous daily yield is then where .it is clear that is the yield of over the time period ; when is one trading day , is the daily yield .our task in this section is to determine _ given _ the time - series , and we do so by making use of the analysis in the previous section .we call the _ standardized _ yield of the stock price over a time period , and if is one trading day , we call it the standardized daily yield . since then from the discretized versions of eqs. and , we see that the distribution of standardized yields has a volatility of , or one , if is set to one trading day .the collection of standardized yields has a _ known _ volatility .consider now a subset of the time - series with elements , and the corresponding collection of standardized yields , , where runs now runs from to . because this subset was arbitrarily selected from a collection of standardized yields that has a volatility of , this subset must also have a volatility of .as such where we have included for completeness in eq . the standard error for the volatility given data points ( see stuart and ord 1994 ) to emphasize that the accuracy of eq . depends on .equation must be true for each .in particular , it must hold for , and thus we can write which is similar in form to eq . .this self - similar property of the distribution is used to determine , as we show below .we first expand eq . , and single out the terms , where we have dropped the error terms in eq . for clarity .using eq . in the second term of eq . 
and completing a square , we arrive at a surprisingly simple equation for , this is easily solved to give , where the sign of the root must be chosen so that for all .the standardized yield , , can then be calculated using eq . for each time step .equation gives a recursion relation for .a recursive approach to calculating the volatility similar in spirit to the one above is described in stuart and ord ( 1994 ) .that calculation is for volatilities that do not change with time , however , while in ours the volatility can do so explicitly .as we will see below , this introduces a number of complications .we note also that eq . differs markedly from autoregression approaches such as the ewma , arch , and garch in that depends nonlinearly on .equation gives a first - order recursion relation for , and thus given an initial , the values for for is determined . to determine this initial , we note that in the continuous process eq . holds .a similar relation must hold for the discretized yields . to determine this relation , we follow the same approach that led to eq . , and consider the following function the first term in eq . is the average of the standardized yield over the first terms in the time - series , and it corresponds to the discretization of the first term in continuous constraint eq .the second term is the quotient of the average daily yield calculated over the same period with the volatility evaluated at the end of this period , and it corresponds to the discretization of the second term in continuous constraint eq . . if can be chosen so that the mean of , can be minimized to zero at the 95% cl , then eq . will hold on average for the discretized yield .as usual , the 95% cl for this mean is calculated through the standard error , ^{1/2} ] . while for the continuous process would be a gaussian random variable , for the discrete process we will show that is a random variable for the generalized rademacher distribution described below .the standard way of calculating is to use a moving average over a window of days .however , just like for the volatility , calculating with a moving average will mean that variations in the drift faster than can not be clearly seen .we will instead calculate directly from , which is possible to do because the distribution of standardized yield is so simple .we first note that since changes to and is due to shifts in the distribution of standardized yield with time , these shifts must be due to the drift , , of the standardized yield .shifts in random variables are trivial changes to the distribution , however , and a drift that changes with time will not materially change the distribution of standardized yields .we next note that = 0 ] , and find that the population mean , ] , is easily calculated to be .\label{mk}\ ] ] the population variance is thus , while the population skewness is and the population kurtosis for the distribution is clearly , if , then , , and ; this is the rademacher distribution , which is a special case of the binomial distribution .when , we call this the generalized rademacher distribution . given the plot in fig . , we would expect that the distribution of standardized yields to be a rademacher distribution with at all time steps . 
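The quoted population moments of the generalized Rademacher distribution (a ±1 variable with P(+1) = p) follow from the standard formulas mean = 2p − 1, variance = 4p(1 − p), skewness = (1 − 2p)/√(p(1 − p)) and kurtosis = (p³ + (1 − p)³)/(p(1 − p)); these are elementary calculations rather than expressions copied from the text. At p = 1/2 they reduce to mean 0, variance 1, skewness 0 and kurtosis 1, the ordinary Rademacher case. A quick numerical cross-check:

```python
import numpy as np

def gen_rademacher_moments(p):
    """Population mean, variance, skewness and kurtosis of a +/-1 variable with P(+1) = p."""
    mean = 2 * p - 1
    var = 4 * p * (1 - p)
    skew = (1 - 2 * p) / np.sqrt(p * (1 - p))
    kurt = (p**3 + (1 - p) ** 3) / (p * (1 - p))
    return mean, var, skew, kurt

print(gen_rademacher_moments(0.5))    # (0.0, 1.0, 0.0, 1.0): the Rademacher case

# Monte Carlo cross-check for a slightly asymmetric case, e.g. p = 0.51
rng = np.random.default_rng(3)
p = 0.51
x = np.where(rng.random(2_000_000) < p, 1.0, -1.0)
z = x - x.mean()
m2, m3, m4 = (z**2).mean(), (z**3).mean(), (z**4).mean()
print(m2, m3 / m2**1.5, m4 / m2**2)   # should match gen_rademacher_moments(0.51)[1:]
```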
to show that that this is the case , we have calculated the sample skewness and the kurtosis of the standardized yield after the drift , , has been removed from .this has been done for all 24 stocks using the entire time - series for each .we then compared these sample skewness and kurtosis with the population skewness and kurtosis for the rademacher distribution using the t - test . for completeness, we have also calculated the probability , , for each stock by counting the total number of , and compared it to the rademacher value of using the chi - squared test .the results of these calculations and tests are given in table iii .we see that for all but four of the stocks the fit is exceedingly good ; the skewness , the kurtosis , and the probability all agree at the 95% cl .lrrrrrr gm & & .07 & & .92 & 0.500 & .01 + dd & & .15 & & .92 & 0.499 & .02 + vz & & .18 & & .92 & 0.501 & .03 + dis & & .54 & & .92 & 0.498 & .29 + axp & & .58 & & .92 & 0.497 & .34 + mmm & & .66 & & .93 & 0.503 & .43 + & & & & & & + pg & & .67 & & .93 & 0.502 & .45 + jnj & & .78 & & .94 & 0.503 & .60 + hpq & & .78 & & .94 & 0.504 & .61 + ko & & .94 & & .96 & 0.503 & .89 + c & & .98 & & .97 & 0.507 & .96 + ba & & .28 & & .04 & 0.495 & .64 + & & & & & & + aig & & .32 & & .04 & 0.507 & .73 + intc & & .34 & & .05 & 0.507 & .79 + pfe & & .42 & & .07 & 0.506 & .01 + cat & & .42 & & .07 & 0.505 & .01 + ge & & .49 & & .10 & 0.505 & .21 + aa & & .50 & & .70 & 0.494 & .26 + & & & & & & + msft & & .76 & & .19 & 0.512 & .12 + wmt & & .77 & & .19 & 0.510 & .13 + mrk & & .32 & & .40 & 0.509 & .37 + ibm & & .68 & & .55 & 0.509 & .21 + xom & & .86 & & .63 & 0.510 & .20 + mo & & .94 & & .66 & 0.510 & .67 + [ statistics ] although this agreement is not as good for altria , exxon , ibm , and merck , this is only because we were comparing it with the rademacher distribution with .we find that the distribution of standardized yields for these stocks is instead the generalized rademacher distribution with a slightly greater than .using eq . and the sample mean for these stocks , we have solved for a probability , , for the stocks .we find that this probability is in close agreement with those listed in table iii for these stocks , and when this is then used in eqs . and to predict values for the skewness and kurtosis , the predicted values are now in agreement with the sample skewness and kurtosis at the 95% cl .we also note that variance of the distribution calculated using the values for given in table 4 in eq . ranges from 0.9994 to 1.000 .this is in excellent agreement with the requirement that the variance of the standardized daily yield is one when is one trading day .we thus conclude that the distribution of standardized daily yields is a generalized rademacher distribution shifted by the drift , . for 20 of these stocks, we find that at the 95% cl .the probability that the daily yield increases is the same as the probability that it decreases . for the other four stocks , is slightly greater than , and the probability that the daily yield increases is slightly larger than the probability that it decreases .having determined the distribution for the standardized daily yield , we now turn our attention to determining the volatility of the stock .we find that while the recursion relation , eq . , is straightforwardly solved using the given in table ii , there is a great deal of noise associated with the resultant values for .this can be seen in fig . 
where we have plotted as a function of trading day the volatility obtained from eq . .although we can discern that there is an inherent structure in graph , this structure is buried within random fluctuations of .these fluctuations are due to random noise generated when eq. is solved , and they mask the functional dependence of on . in this section , we will extract this dependence from the noise .the presence of the noise in is inherent , but not because itself obeys a stochastic process , as is assumed in stochastic volatility models .if it were , then there will necessarily be a second stochastic differential equation for to augment eq . , and the two coupled equations would have to be solved simultaneously .certainly , eq . and the recursion relation eq . would not , in general , be solutions of the coupled stochastic differential equations , and it is this recursion relation that was used to obtain fig .rather , this noise is inherent in determining the volatility itself .note from eq . that . for a stochastic process of the form eq . where the volatility changes with time , at each time step , , is a random variable from a distribution with volatility . as need not equal for any two and , each can come from a _ different _ distribution . in the worst case, we will have only _ one _ out of any distribution with which to determine , and this can take any value from to with a probability determining would thus seem to be an impossible task . that it can nevertheless be doneis due to three observations .first , because is gaussian , there is a 68% probability that any value of will be within .it is for this reason that it is still possible to discern an overall functional dependence of on the trading day through the noise in fig .second , is a _ deterministic _ function of , and thus the value of the volatility at time step is related to its value at time step . given a sufficient number of thus a sufficient number of must be possible to construct a functional form for .third , using fourier analysis ( also called spectral analysis ) and signal processing techniques , it is possible to remove from fig . the noise that is obscuring the details of how depends on the trading day , and obtain a functional form for the volatility .noise in the discrete volatility , that fourier analysis provides an efficient way of removing the noise from fig . is based on the following theorem : * theorem : * if is a time series where is a gaussian random variable with zero mean , and volatility , , then the fourier sine , , and fourier cosine , , coefficients of the fourier transform of are gaussian random variables with zero mean and volatility .this theorem is well - known in signal analysis , and is an immediate consequence of parseval s theorem . a proof of this theorem , as well as a review of the discrete fourier transform , is given in appendix .it is because the volatility of the fourier sine and cosine coefficients for gaussian random variables are reduced by a factor of that it is possible to remove from the random noise . in general, this reduction in the coefficients does not occur if the are _ not _ random variables , and thus the structure in fig . can be resolved once the fourier transform of is taken . after this removalis accomplished , we can then take the inverse fourier transform to obtain , which we call the instantaneous volatility to differentiate it from the that comes directly from eq . 
.figure and c are plots of the fourier sine and fourier cosine coefficients of the discrete fourier transform of defined as they depend on an integer , which runs from to .as the fourier transform decomposes the time - series , , into components that oscillate with frequency day ( or , equivalently , with period days ) for ; the coefficients and are the amplitudes of these oscillations . in the graphs shown in figs . and , we can readily see that there is a component of the fourier coefficients for coca cola that varies randomly between .this is the noise floor .coefficients in this floor are the result of the fourier transform of the noise that mask the functional behavior of on the trading .this noise floor is similar for both the fourier sine and cosine coefficients , as can be see in detail in inset plot in fig . where the features of the plot of are magnified for between .it is also apparent from the graph that there are fourier coefficients that rise above the noise .while the most prominent of these is ( which is the average of over all trading days ) , such points exist for other coefficients as well .this is due to the structure in shown in fig . ; if there were no structure at all in the plot , then there would be no fourier coefficients that rise above the noise floor . the instantaneous volatility , , after noise removal by combining this observation with the near uniformity of the noise floor , we are able to filter out the noise component of , and construct a ( approximately ) noise - free instantaneous volatility , .a description of the process that we used , along with the statistical criteria used to determine the noise floor for the fourier sine and cosine coefficients , is given in detail in appendix .the effectiveness of the noise removal process can be seen in fig . where a plot of the instantaneous volatility for coca cola is shown . when this plot is compared to fig . , the amount of noise removed , and the success of the noise removal procedure , is readily apparent .indeed , out of a total of 21,523 fourier sine and cosine coefficients for , 10,734 fourier sine and 10,728 of fourier cosine coefficients were removed as noise ; only 59 points were kept to construct . while the graph of may appear to be noisy , this is because eight decades of trading days are plotted in the figure .much of this apparent noise disappears when the range of trading days plotted is narrowed , as can be seen in the inset figure . here ,the instantaneous volatility over a one - year period from december 29 , 2005 to december 29 , 2006 has been plotted . to compare the instantaneous volatility with the historical volatility , we have included in fig . a graph of historical volatility calculated from the daily yield using a 251-day moving average .it is immediately apparent that the historical volatility is generally larger than the instantaneous volatility ; at times it is dramatically so .it is also readily apparent that the historical volatility does not show nearly as much detail as the instantaneous volatility , as can be seen in the inset figure .a functional form for can be found for all 24 stocks .for coca cola , this expression has 59 terms ; we give only four of them here , t ) \bigg ] , \label{instantaneoussigma}\end{aligned}\ ] ] where rad / day is the fundamental angular frequency .the amplitude of the first term in the expression is the largest ; it is the average of over all the trading days in the time - series .the second largest amplitude is the sine term in eq . 
, and it is 25% the size of the first .all other amplitudes are smaller then this term , for most by a factor of 5 , and yet notice from fig . that these amplitudes are nonetheless sufficient to generate a instantaneous volatility that is far from a constant function . from the last term in eq . , we see the that shortest frequency of oscillations that make up is day .this is very close to the nyquist criteria of day for , which is the upper limit on the frequencies of the fourier components of .the underlying reason for such a limit is because the the original time - series , , was acquired once each trading day .we therefore can not _ measure _ oscillations with a period shorter than two trading days ; there simply is not enough information about the stocks to determine what happens within the trading day .( this is in contrast to _ predicting _ how the volatility may behave during the trading day , which certainly can be done . ) for each of the 24 stocks , the shortest period of the fourier components that make up the instantaneous volatility are listed in table iii , and we see that for all but 3 of the stocks our expression for comes very close to nyquist criteria . in the case of alcoa , caterpillar , and johnson & johnson , the shortest period has even reached it .lrrrrrr cat & 2.0 & 3.33 & 3.07 & 3.36 & -0.04 & 3.04 + jnj & 2.0 & 3.20 & 2.99 & 3.23 & -0.03 & 2.99 + aa & 2.0 & 3.40 & 3.00 & 3.47 & -0.02 & 3.00 + ge & 2.1 & 3.47 & 3.01 & 3.44 & -0.04 & 3.00 + hpq & 2.1 & 3.45 & 3.00 & 3.31 & -0.08 & 3.01 + dis & 2.1 & 3.42 & 2.99 & 3.64 & -0.03 & 3.03 + & & & & & & + xom & 2.2 & 3.79 & 3.11 & 3.61 & -0.05 & 3.03 + msft & 2.3 & 3.51 & 2.99 & 3.27 & -0.08 & 3.00 + pfe & 2.3 & 3.38 & 3.00 & 3.40 & -0.05 & 3.00 + axp & 2.4 & 3.56 & 3.17 & 3.33 & -0.04 & 3.14 + mrk & 2.4 & 3.33 & 3.03 & 3.95 & -0.03 & 3.00 + intc & 2.4 & 3.26 & 3.00 & 3.36 & -0.05 & 2.99 + & & & & & & + wmt & 2.5 & 3.47 & 2.96 & 3.29 & 0.11 & 2.99 + ko & 2.5 & 3.89 & 3.30 & 3.53 & 0.02 & 3.23 + ba & 2.7 & 3.37 & 3.00 & 3.30 & -0.01 & 2.99 + pg & 3.2 & 3.31 & 2.95 & 3.40 & -0.04 & 3.00 + aig & 3.2 & 3.44 & 3.00 & 3.32 & 0.01 & 3.00 + mmm & 3.5 & 3.03 & 2.54 & 3.06 & -0.04 & 2.58 + & & & & & & + ibm & 4.0 & 3.41 & 3.00 & 3.64 & 0.03 & 3.00 + vz & 5.2 & 3.28 & 2.99 & 3.23 & -0.09 & 3.00 + mo & 5.5 & 3.56 & 3.14 & 3.64 & -0.07 & 3.21 + c & 6.2 & 3.36 & 3.02 & 3.33 & 0.05 & 3.12 + dd & 9.6 & 3.81 & 3.31 & 3.64 & 0.00 & 3.12 + gm & 14.2 & 3.59 & 3.00 & 3.64 & -0.06 & 3.00 + [ fouriersummary ] in figs . and , we have graphed the instantaneous volatility as a function of trading day for all 24 stocks .they have been ordered into graphs where the degree of volatility are similar , with the stocks with roughly the highest volatility graphed last .analytical expressions for the other 23 stocks are not given as they are too lengthy .with now known and the drift for the standardized daily yield obtained previously , the drift for the daily yield , , can be found for all 24 stocks using the discretized version of eq . , , where the is the instantaneous volatility obtained above .since for all 24 stocks , .thus for all of the 24 stocks , the drift of the stock is smaller than the volatility of it .this is to be expected .if the drift of a stock is larger than the volatility , then future trends in the stock can be predicted with a certain degree of certainty ; the drift is , after all , a _ deterministic _ function of time . 
such trends could be seen by investors , and nearly riskless profits could be made .this clearly does not happen .it is instead very difficult to discern future trends in the price of stocks , and this is precisely because the volatility of the stock is so large . the instantaneous volatility for the djia stocks , i the instantaneous volatility for the djia stocks , iias a continuous process , we have found that the 24 djia stocks can be described as a stochastic process with a volatility that changes deterministically with time .it is a process for which the autocorrelation function of the yield vanishes at different times , and thus one that describes a stock whose price is efficiently priced . from the results of our calculation of the autocorrelation function of the daily yield for the 24 stocks ,this property of our stochastic process is in very good agreement with how these stocks are priced by the market .it is also a process for which the solution of the stochastic differential can be , at least formally , solved .this solution is valid only because the volatility is a deterministic function of time , however .if the volatility is depends on the stock price , or if the volatility itself is a stochastic process , the solution of the stochastic differential equation will not be so simple , and the autocorrelation function need not vanish at different times .it is , however , only after using the discretized stochastic process that we are able to validate our model . after correcting for the variability of the volatility by using the standardized daily yield, we have shown that for all 24 stocks the distribution of standardized daily yields is well described by the general rademacher distribution . indeed , we found that the abnormally large kurtosis is due to a volatility that changes with time . for 20 of the 24 stocks , the sample skewness , kurtosis , and probability distribution agrees with a rademacher distribution where at the 95% cl ; the probability that these stocks will increase on any one day is thus equal to the probability that it will decrease .the other four stocks agree with a generalized rademacher distribution and have a slightly greater than .for these stocks , the probability that the yield will increase on any one day is slightly higher than the probability that it will decrease .we conclude that our model is a very good description of the behavior of these stocks . that the kurtosis for the standardized daily yield is smaller than the kurtosis for the daily yield is in agreement with the results found by rosenberg ( 1972 ) .the daily yield is time dependent , and is thus a nonstationary random variable , while for the standardized daily yield , the time dependence due to the volatility has been taken account of .indeed , in many ways rosenberg ( 1972 ) presages the results of this work . by combining the properties of our continuous stochastic process for the stocks with noise removal techniques , we have been able to determine the time dependence of both the volatility and the drift of all 24 stocks . unlike the implied volatility, the volatility obtained here was obtained from the daily close directly without the need to fit parameters to the market price of options .the theory is thus self - contained . 
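As a rough illustration of the spectral noise-removal step used above, the sketch below builds a noisy volatility series, takes its discrete Fourier transform, zeroes every coefficient whose magnitude lies below a noise-floor threshold, and inverts the transform. The fixed threshold (a multiple of the coefficient standard deviation) is a simplified stand-in for the skewness/kurtosis criterion and positivity constraint detailed in the appendix, and the input series is synthetic, so this is not the author's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

# noisy "discrete volatility": a smooth deterministic part buried in noise (assumed shapes)
n = 4096
t = np.arange(n)
true_sigma = 0.012 + 0.003 * np.sin(2 * np.pi * t / 500) + 0.001 * np.cos(2 * np.pi * t / 37)
noisy_sigma = true_sigma + 0.01 * rng.standard_normal(n)

coeffs = np.fft.rfft(noisy_sigma)

# estimate a noise floor from the bulk of the coefficients and keep only those above it;
# this fixed multiple of the standard deviation stands in for the appendix's statistical criterion
floor = 3.5 * np.std(coeffs[1:])              # exclude the k = 0 (mean) term from the estimate
kept = np.abs(coeffs) > floor
kept[0] = True                                # always keep the mean
filtered = np.where(kept, coeffs, 0.0)

instantaneous_sigma = np.fft.irfft(filtered, n)
print("coefficients kept:", kept.sum(), "of", len(coeffs))
print("rms error before:", np.sqrt(np.mean((noisy_sigma - true_sigma) ** 2)))
print("rms error after: ", np.sqrt(np.mean((instantaneous_sigma - true_sigma) ** 2)))
```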
for alcoa , caterpillar , and johnson&johnson, the time dependence of the volatility can be determined down to a resolution of a single trading day , while for another 13 stocks , they can be determined to a resolution of less than 1 1/2 trading days .while other , more sophisticated signal analysis techniques can be used , given that the time - series is based on the daily close and thus the resolution is ultimately limited to a period of two trading days , we do not expect that it will be possible to dramatically improve on these results .only when intraday price data is used will we expect significant improvement to this resolution . indeed , with intraday data we expect that changes to the volatility that occur during the trading day can be seen .we have deliberately used large cap stocks in our analysis , and we take care to note that this approach to the analysis of the temporal behavior of stocks have only been shown to be valid for the 24 stocks we analyzed here . while we would expect it to be applicable to other large - cap stocks , whether our approach will also be valid when applied to mid- or small - cap stocks is still an open question .indeed , it will be interesting to see the range of stocks for which the volatility depends solely on time . with both the drift and the volatility determined down nearly to the single trading day level for most of the stocks , it is now be possible to calculate the autocorrelation function for both , as will as the correlation function between the drift and the volatility .in particular , the degree of influence that the volatility or drift on any one day has on the volatility or drift on any future day can be determined .this analysis is currently being done .the time - series for the 24 djia stocks analyzed here were obtained from the center for research in stock prices ( crsp ) .while the ending date for each series is december 29 , 2006 , the choice of the starting date is often different for different stocks .this choice of starting dates was not governed by a desire for uniformity , but rather by the desire to include as many trading days in the time series as possible , and thereby minimize standard errors .in addition , by maximizing the number of trading days included , we also demonstrate that our model is valid over the entire period for which the prices of the stock are available .although the daily close of stocks between the starting and ending dates are used as the basis of the time - series ( with dividends included in the price ) , a series of adjustments to the crsp data were made when the series were constructed .if the closing price of stock is listed by crsp as a negative numberan indication that the closing price was not available on that day , and the average of the last bid and ask prices was used insteadwe took the positive value of this number as the daily close on that day . if no record of the daily close was given for a particular trading day at allan indication that the bidding and asking prices were also not available for that datewe used the average of the closing price of the stock on the day preceding and the day following as the daily close for that day .next , the daily close of the stock prices were scaled to adjust for splits in the stock . 
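Before the split-adjustment example below, the first two adjustments just described, replacing a negative listed close by its positive value and filling a day with no record at all by the average of the neighboring closes, can be sketched as follows; the function name and toy numbers are hypothetical and do not reproduce CRSP's actual fields.

```python
import numpy as np
import pandas as pd

def clean_daily_close(close):
    """Apply the two adjustments described in the text to a raw daily-close series.

    close : pandas Series indexed by trading day, possibly containing negative values
            (bid/ask averages flagged by a negative sign) and NaNs (no record at all).
    """
    close = close.abs()                              # negative listed price -> use its positive value
    missing = close.isna()
    # fill a missing day with the average of the preceding and following closes
    filled = 0.5 * (close.ffill() + close.bfill())
    return close.where(~missing, filled)

raw = pd.Series([10.0, -10.2, np.nan, 10.8, 11.0])   # toy example, hypothetical values
print(clean_daily_close(raw).tolist())               # [10.0, 10.2, 10.5, 10.8, 11.0]
```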
for example , although the daily close for coca cola on december 12 , 1925 is listed by crsp as 48.25 , of the stock on december , 29 , 2006 by , doing so would result in stock prices that are 0.01 increments , doing so does not materially change the stock price , while still insuring that .we have not adjusted for inflation in our time - series , nor have we accounted for weekends , holidays , or any other days on which trading did not take place .we have instead concatenated the daily close on each trading day , one after another , when constructing the time - series .the time - series are thus a sequence of _ trading _ days , and not calendar days . while this concatenation is natural , issues of bias such as those studied by fleming , kirby , and ostdiek ( 2006 ) have not been taken into account .whether these issues are relevant for the stocks considered here we leave for further study .our focus is instead on the gross features of the stock price .finally , we list here the following particularities that occurred in our analysis of the 24 djia stocks . _boeing : _ when solving for , the term was greater than 27 , while all other terms was less than .this data point was an outlier , and since it is in the transient region for the stock , we have set this term equal to , which is the typical size of for . _merck : _ when solving for , the term was greater than 168 , while all other terms were three orders of magnitude smaller .this data point was replaced by , which is the typical size of for . _ exxon - mobil : _ when solving for , the term was greater than 158 , while the term was greater than 1500 .the data point was replaced by , and the data point was replaced by .in this section , we collect the expressions used here in calculating the mean , variance , skewness , kurtosis , and autocorrelation of the time - series , along with their respective standard errors . with the exception of the autocorrelation function ,these expressions are taken from stuart and ord ( 1994 ) .given a collection of data points , , the sample moments , , of order , , that are used in our analysis are defined as follows as usual , the sample skewness and kurtosis are defined as while the standard error of the mean and the variance is well known , the standard error in the sample skewness and kurtosis are not .for the skewness , this error is while for the sample kurtosis , the standard error is although standard errors are defined in terms of the population moments , these moments are not known a prior . following stuart and ord ( 1994 ) ,we have used instead the sample moments listed in eq . when calculating standard errors . for the time - series , , where , we define the autocorrelation of to be equation measures the correlation of the time - series at time step with the time - series at time step .this definition differs somewhat from the one given in kendall ( 1953 ) and in kendall , stuart and ord ( 1983 ) in that they divide by the product of the volatility of the time series , with the volatility of of the time - series , . it also differs substantially from the expression used in alexander ( 2001 ) , where a simplified expression for the autocorrelation function in kendall , stuart , and ord ( 1983 ) is used .we use eq . 
instead of the expressions given in kendall , stuart , and ord ( 1983 ) and alexander ( 2001 ) for two reasons .first , is simply the variance of the time - series , so that the volatility for a stock can be read off easily from its graph , as can be seen in fig .second , we will see below that the variance for is easily calculated when are gaussian random variables , and the standard error for can be readily determined .derivations of the standard error for the autocorrelation functions given in kendall , stuart , and ord ( 1983 ) and alexander ( 2001 ) , on the other hand , are more involved . to determine the standard error for , consider a time - series where the are gaussian random variables with mean zero and standard deviation , . then =0 ] .( here , is the kronecker delta with if while otherwise . ) consequently , = 0 ] for . expanding in a fourier series using eq . , we find that = \sum_{k , k'=-(n-1)/2}^{(n-1)/2}\xi^\omega_k \xi^\omega_{k'}n \delta_{k , -k ' } , \label{krondelta}\ ] ] where the last equality holds from eq . .thus , which is parseval s theorem for a discrete fourier series .following eq . , we express in eq .then eq . can be written as \right\ } , \label{fprob}\end{aligned}\ ] ] and the theorem is proved .it is straightforward to see that the converse is also true .namely , if and are gaussian random variables with zero mean and volatility , then is a gaussian random variable with zero mean and volatility , .notice from eq . that = 0 $ ] , and thus the two random variables are independent .notice also that while we began with degrees of freedom with the random variables , , we seem to have ended up with degrees of freedom for the random variables , and . from eq . we see , however , that and ; not all the variables in eq. are independent . when this redundancy is taken account of , we arrive back to degrees of freedom . given that the noise floor associated with the fourier sine and cosine coefficients is constant over all frequencies , , noise removal is straight forward .we need only remove from , the set of all fourier sine coefficients for , and , the set of all fourier cosine coefficients for , those coefficients whose amplitudes is less than the amplitudes , and , of the noise floor for the fourier sine and fourier cosine coefficients , respectively .the coefficients left over for the fourier sine coefficients , and for the fourier cosine coefficientscan then be used to construct the instantaneous volatility , , by summing the fourier series eq . .the noise floor amplitudes , and , are determined statistically .consider the set of coefficients that are removed : for the fourier sine coefficients and for the fourier cosine coefficients . because the distribution of the noise floor is gaussian , and be chosen so that the distributions of coefficients in and are gaussian as well .if either amplitude is chosen too large , then coefficients from or that make up the signal , , would be included in the noise distributions as noise .as these coefficients are supposed to be above the noise , they will skew and flatten the distribution ; the skewness and the kurtosis for the distribution of and of will then differ from their gaussian values if these coefficients are included . 
on the other hand , if either amplitude for the noise floor is chosen too _, then coefficients from or that make up the noise would be _ excluded _ from the noise distributions .as these coefficients would have populated the tails of the gaussian distribution , their removal will tend to _ narrow _ the distribution , and the kurtosis of the noise distributions will differ once again from its gaussian value .( because the coefficients are remove symmetrically about the horizontal zero line , a choice of the amplitude for the noise floor that is too small will not tend to change the skewness significantly . ) thus , and must be chosen so that the skewness and kurtosis of the distribution of coefficients in and is as close to their gaussian distribution values as possible . while the above procedure is straightforward , there is an additional constraint .the volatility can not be negative , and thus the resultant instantaneous volatility , , obtained after the noise floor is removed must the positive as well .this constraint is not trivial . for a number of stocks, a choice of and that results in noise distributions that are closest to a gaussian distribution also results in a that is negative on certain days . to obtain a that is non - negative , slightly larger amplitudes for the noise floorswere chosen , which resulted in a slightly larger skewness and kurtosis .this approach to removing the noise from the volatility , , has been successfully applied to all 24 stocks using a simple c++ program that implements an iterative search algorithm to determine and .the results of our numerical analysis are shown in table iv . there , we have listed the noise floor amplitudes , and , used for each of the 24 stocks .their values are given as multiples of the standard deviation of the distribution of the fourier coefficients in and .as these values range from 3.030 times the standard deviation to 3.948 times the standard deviation , 99.756% to 99.992% of the data points that make up a gaussian distribution can be included in these distributions if they are present in either or .listed also in table iv are the kurtosis for and . we have found that they range in value from 2.95 to 3.31 , and are thus very close to the gaussian distribution value of three for the kurtosis .the skewness of the noise of the distribution of was calculated as well , and was found to vary in value from -0.03 to 0.11 ; this also is very close to the gaussian distribution value of zero for the skewness .the skewness for the distribution of was also calculated , but we find that their values are to times smaller than the skewness for the distribution of , and there was no need to listed these values in the table .this extremely close agreement with the skewness of the gaussian distribution is because the fourier sine coefficients are antisymmetric about : .the average of any odd power of over in particular , the skewness of the distribution of automatically vanishes . for this reason ,the skewness for is exceedingly small .the author is also an adjunct professor at the department of physics , diablo valley college , pleasant hill , ca 94523 , and a visiting professor at the department of physics , university of california , berkeley , ca 94720 .he would like to thank peter sendler for his support and helpful criticisms while this research was being done .it is doubtful that this work would have been completed without his encouragement and , yes , prodding .
While the use of volatilities is pervasive throughout finance, our ability to determine the instantaneous volatility of stocks is nascent. Here we present a method for measuring the temporal behavior of stocks and show that the prices of 24 DJIA stocks follow a stochastic process that describes an efficiently priced stock with a volatility that changes deterministically with time. We find that the often observed, abnormally large kurtoses are due to temporal variations in the volatility. Using only daily-close prices, our method can resolve changes in the volatility and drift of the stocks as fast as a single day. Keywords: spectral analysis, noise reduction, Rademacher distribution.
the goal of the development of the code was to have a simple and efficient tool for the computation of adiabatic oscillation frequencies and eigenfunctions for general stellar models , emphasizing also the accuracy of the results .not surprisingly , given the long development period , the simplicity is now less evident .however , the code offers considerable flexibility in the choice of integration method as well as ability to determine all frequencies of a given model , in a given range of degree and frequency .the choice of variables describing the equilibrium model and oscillations was to a large extent inspired by .as discussed in section [ sec : eqmodel ] the equilibrium model is defined in terms of a minimal set of dimensionless variables , as well as by mass and radius of the model .fairly extensive documentation of the code , on which the present paper in part is based , is provided with the distribution packagejcd / adipack.n ] . provided an extensive review of adiabatic stellar oscillations , emphasizing applications to helioseismology , and discussed many aspects and tests of the aarhus package , whereas carried out careful tests and comparisons of results on polytropic models ; this includes extensive tables of frequencies which can be used for comparison with other codes .the equilibrium model is defined in terms of the following dimensionless variables : here is distance to the centre , is the mass interior to , is the photospheric radius of the model and is its mass ; also , is the gravitational constant , is pressure , is density , and , the derivative being at constant specific entropy .in addition , the model file defines and , as well as central pressure and density , in dimensional units , and scaled second derivatives of and at the centre ( required from the expansions in the central boundary condition ) ; finally , for models with vanishing surface pressure , assuming a polytropic relation between and in the near - surface region , the polytropic index is specified .the following relations between the variables defined here and more `` physical '' variables are often useful : we may also express the characteristic frequencies for adiabatic oscillations in terms of these variables . thus if is the buoyancy frequency , is the lamb frequency at degree and is the acoustical cut - off frequency for an isothermal atmosphere , we have where is the adiabatic sound speed , and is the pressure scale height , being the gravitational acceleration .finally it may be noted that the squared sound speed is given by these equations also define the dimensionless characteristic frequencies , and as well as the dimensionless sound speed , which are often useful . as is well known the displacement vector of nonradial ( spheroidal ) modescan be written in terms of polar coordinates as \exp ( - { { \rm i}}\omega t ) \right\ } \ ; . \nonumber\end{aligned}\ ] ] here is a spherical harmonic of degree and azimuthal order , being co - latitude and longitude ; is an associated legendre function , and is a suitable normalization constant .also , , , and are unit vectors in the , , and directions . finally , is time and is the angular frequency of the mode .similarly , e.g. , the eulerian perturbation to pressure may be written \ ; . \label{eq : e2.2}\ ] ] as the oscillations are adiabatic ( and only conservative boundary conditions are considered ) is real , and the amplitude functions , , , etc . 
can be chosen to be real .the equations of adiabatic stellar oscillations , in the nonradial case , are expressed in terms of the following variables : , results from the earlier use of an unconventional sign convention for ; now , as usual , is defined such that the perturbed poisson equation has the form , where is the eulerian density perturbation . ] here is the perturbation to the gravitational potential . also , we introduce the dimensionless frequency by corresponding to eqs [ eq : buoy ] [ eq : cutoff ] . these quantities satisfy the following equations : y_1 + ( a - 1 ) y_2 + \eta a y_3 \ ; , \\ \label{eq : ea.3 } x { { { \rm d}}y_3 \over { { \rm d}}x } & = & y_3 + y_4 \ ; , \\ \label{eq : ea.4 } x { { { \rm d}}y_4 \over { { \rm d}}x } & = & - a u y_1 - u { v_g \over \eta } y_2 \\ & & + [ l ( l + 1 ) + u(a - 2 ) + u v_g ] y_3 + 2(1 - u ) y_4 \ ; . \nonumber\end{aligned}\ ] ] here , and the notation is otherwise as defined in eq .[ eq : fivea ] . in the approximation , where the perturbation to the gravitational potential is neglected ,the terms in are neglected in eqs [ eq : ea.1 ] and [ eq : ea.2 ] and eqs [ eq : ea.3 ] and [ eq : ea.4 ] are not used .the dependent variables in the nonradial case have been chosen in such a way that for they all vary as for . for large a considerable ( and fundamentally unnecessary ) computational effortwould be needed to represent this variation sufficiently accurately with , e.g. , a finite difference technique , if these variables were to be used in the numerical integration .instead i introduce a new set of dependent variables by these variables are then in near the centre .they are used in the region where the variation in the is dominated by the behaviour , for , say , where is determined on the basis of the asymptotic properties of the solution . this transformation permits calculating modes of arbitrarily high degree in a complete model . for radial oscillations only and used , where is defined as above , and here the equations become y_1 + a y_2 \ ; . \label{eq : ea.6}\end{aligned}\ ] ] the equations are solved on the interval $ ] in . here ,in the most common case involving a complete stellar model , where is a suitably small number such that the series expansion around is sufficiently accurate ; however , the code can also deal with envelope models with arbitrary , typically imposing at the bottom of the envelope .the outermost point is defined by where is the surface radius , including the atmosphere ; thus , typically , .the centre of the star , , is obviously a singular point of the equations .as discussed , e.g. , by boundary conditions at this point are obtained from a series expansion , in the present code to second significant order . in the general casethis defines two conditions at the innermost non - zero point in the model . for radial oscillations , or nonradial oscillations in the cowling approximation, one condition is obtained .the surface in a realistic model is typically defined at a suitable point in the stellar atmosphere , with non - zero pressure and density . herethe simple condition of vanishing lagrangian pressure perturbation is implemented and sometimes used .however , more commonly a condition between pressure perturbation and displacement is established by matching continuously to the solution in an isothermal atmosphere extending continuously from the uppermost point in the model .a very similar condition was presented by . 
in addition , in the full nonradial case a condition is obtained from the continuous match of and its derivative to the vacuum solution outside the star . in full polytropic models , or other models with vanishing surface pressure , the surface is also a singular point . in this casea boundary condition at the outermost non - singular point is obtained from a series expansion , assuming a near - surface polytropic behaviour ( see * ? ? ? * for details ) .the code also has the option of considering truncated ( e.g. , envelope ) models although at the moment only in the cowling approximation or for radial oscillations . in this casethe innermost boundary condition is typically the vanishing of the radial displacement although other options are available .the numerical problem can be formulated generally as that of solving with the boundary conditions here the order of the system is 4 for the full nonradial case , and 2 for radial oscillations or nonradial oscillations in the cowling approximation .this system only allows non - trivial solutions for selected values of which is thus an eigenvalue of the problem .the programme permits solving these equations with two basically different techniques , each with some variants .the first is a shooting method , where solutions satisfying the boundary conditions are integrated separately from the inner and outer boundary , and the eigenvalue is found by matching these solutions at a suitable inner fitting point .the second technique is to solve the equations together with a normalization condition and all boundary conditions using a relaxation technique ; the eigenvalue is then found by requiring continuity of one of the eigenfunctions at an interior matching point .for simplicity i do not distinguish between and ( cf . section [ sec : eq ] ) in this section .it is implicitly understood that the dependent variable ( which is denoted ) is for and for .the numerical treatment of the transition between and has required a little care in the coding .it is convenient here to distinguish between = 2 and = 4 . for = 2 the differential eqs [ eq : e3.1 ]have a unique ( apart from normalization ) solution satisfying the inner boundary conditions [ eq : e3.2 ] , and a unique solution satisfying the outer boundary conditions [ eq : e3.3 ] .these are obtained by numerical integration of the equations .the final solution can then be represented as .the eigenvalue is obtained by requiring that the solutions agree at a suitable matching point , say .thus these equations clearly have a non - trivial solution only when their determinant vanishes , i.e. , when equation [ eq : e3.5 ] is therefore the eigenvalue equation .for = 4 there are two linearly independent solutions satisfying the inner boundary conditions , and two linearly independent solutions satisfying the outer boundary conditions .the former set is chosen by setting and the latter set is chosen by setting the inner and outer boundary conditions are such that , given and , and may be calculated from them ; thus eqs [ eq : e3.6 ] and [ eq : e3.7 ] completely specify the solutions , which are obtained by integrating from the inner or outer boundary .the final solution can then be represented as at the fitting point continuity of the solution requires that this set of equations only has a non - trivial solution if where , e.g. 
, .thus eq .[ eq : e3.9 ] is the eigenvalue equation in this case .clearly as defined in either eq .[ eq : e3.5 ] or eq .[ eq : e3.9 ] is a smooth function of , and the eigenfrequencies are found as the zeros of this function .this is done in the programme using a standard secant technique .however , the programme also has the option for scanning through a given interval in to look for changes of sign of , possibly iterating for the eigenfrequency at each change of sign .thus it is possible to search a given region of the spectrum completely automatically .the programme allows the use of two different techniques for solving the differential equations .one is the standard second - order centred difference technique , where the differential equations are replaced by the difference equations , \quad i = 1 , \ldots , i \ ; .\label{eq : e3.11}\ ] ] here i have introduced a mesh in , where is the total number of mesh points ; , and .these equations allow the solution at to be determined from the solution at .the second technique was proposed by ; here on each mesh interval we consider the equations with constant coefficients , where .these equations may be solved analytically on the mesh intervals , and the complete solution is obtained by continuous matching at the mesh points .this technique clearly permits the computation of solutions varying arbitrarily rapidly , i.e. , the calculation of modes of arbitrarily high order . on the other hand solving eqs [ eq : e3.12 ] involves finding the eigenvalues and eigenvectors of the coefficient matrix , and therefore becomes very complex and time consuming for higher - order systems .thus in practice it has only been implemented for systems of order 2 , i.e. , radial oscillations or nonradial oscillations in the cowling approximation .if one of the boundary conditions is dropped , the difference equations , with the remaining boundary condition and a normalization condition , constitute a set of linear equations for the which can be solved for any value of ; this set may be solved efficiently by forward elimination and backsubstitution ( e.g. , * ? ? ?* ) , with a procedure very similar to the so - called henyey technique ( e.g. , * ? ?* see also christensen - dalsgaard 2007 ) used in stellar modelling .the eigenvalue is then found by requiring that the remaining boundary condition , which effectively takes the role of , be satisfied .however , as both boundaries , at least in a complete model , are either singular or very nearly singular , the removal of one of the boundary conditions tends to produce solutions that are somewhat ill - behaved , in particular for modes of high degree .this in turn is reflected in the behaviour of as a function of .this problem is avoided in a variant of the relaxation technique where the difference equations are solved separately for and , by introducing a double point in the mesh . the solution is furthermore required to satisfy the boundary conditions [ eq : e3.2 ] and [ eq : e3.3 ] , a suitable normalization condition ( e.g. ) , and continuity of all but one of the variables at , e.g. , ( when = 2 clearly only the first continuity condition is used ) we then set and the eigenvalues are found as the zeros of , regarded as a function of . 
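Both the shooting and the relaxation technique reduce the eigenvalue problem to locating the zeros of a mismatch function of the frequency, by scanning for sign changes and refining with a secant iteration. The following sketch illustrates that outer iteration on a toy second-order problem, y'' + sigma^2 y = 0 with y(0) = y(1) = 0, whose eigenvalues are sigma = n*pi; it is not the ADIPLS code, and the integrator, fitting point and scan interval are arbitrary illustrative choices.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, y, sigma):
    # toy oscillation equations: y1' = y2, y2' = -sigma^2 y1
    return [y[1], -sigma**2 * y[0]]

def matching_determinant(sigma, x_fit=0.5):
    """Integrate one solution from each boundary to the fitting point and return
    the determinant whose zeros are the eigenvalues (the order-2 case above)."""
    inner = solve_ivp(rhs, (0.0, x_fit), [0.0, 1.0], args=(sigma,), rtol=1e-10, atol=1e-12)
    outer = solve_ivp(rhs, (1.0, x_fit), [0.0, 1.0], args=(sigma,), rtol=1e-10, atol=1e-12)
    y1i, y2i = inner.y[:, -1]
    y1o, y2o = outer.y[:, -1]
    return y1i * y2o - y2i * y1o

def secant(f, s0, s1, tol=1e-10, maxit=50):
    """Standard secant iteration for a zero of f, as used for the eigenvalue search."""
    f0, f1 = f(s0), f(s1)
    for _ in range(maxit):
        s2 = s1 - f1 * (s1 - s0) / (f1 - f0)
        if abs(s2 - s1) < tol:
            return s2
        s0, f0, s1, f1 = s1, f1, s2, f(s2)
    return s1

# scan a frequency interval for sign changes of the determinant, then refine each bracket
grid = np.linspace(1.0, 10.0, 200)
vals = np.array([matching_determinant(s) for s in grid])
brackets = [(a, b) for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if fa * fb < 0]
eigenvalues = [secant(matching_determinant, a, b) for a, b in brackets]
print(eigenvalues)                # approximately [pi, 2*pi, 3*pi]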
with this definition, may have singularities with discontinuous sign changes that are not associated with an eigenvalue , and hence a little care is required in the search for eigenvalues .however , close to an eigenvalue is generally well - behaved , and the secant iteration may be used without problems . as implemented herethe shooting technique is considerably faster than the relaxation technique , and so should be used whenever possible ( notice that both techniques may use the difference eqs [ eq : e3.11 ] and so they are numerically equivalent , in regions of the spectrum where they both work ) . for _ second - order systems _ the shooting technique can probably always be used ; the integrations of the inner and outer solutions should cause no problems , and the matching determinant in eq .[ eq : e3.5 ] is well - behaved . for _ fourth - order systems _, however , this needs not be the case . for modes where the perturbation to the gravitational potential has little effect on the solution , the two solutions and , and similarly the two solutions and , are almost linearly dependent , and so the matching determinant nearly vanishes for any value of .this is therefore the situation where the relaxation technique may be used with advantage .this applies , in particular , to the calculation of modes of moderate and high degree which are essential to helioseismology . to make full use of the increasingly accurate observed frequencies the computed frequencies should clearly at the very least match the observational accuracy , for a given model . only in this waydo the frequencies provide a faithful representation of the properties of the model , in comparisons with the observations. however , since the numerical errors in the computed frequencies are typically highly systematic , they may affect the asteroseismic inferences even if they are smaller than the random errors in the observations , and hence more stringent requirements should be imposed on the computations .also , the fact that solar - like oscillations , and several other types of asteroseismically interesting modes , tend to be of high radial order complicates reaching the required precision .the numerical techniques discussed so far are generally of second order .this yields insufficient precision in the evaluation of the eigenfrequencies , unless a very dense mesh is used in the computation ( see also * ? ? ?the code may apply two techniques to improve the precision .one technique ( cf .* ) uses the fact that the frequency approximately satisfies a variational principle .the variational expression may formally be written as where and are integrals over the equilibrium model depending on the eigenfunction , here represented by .the variational property implies that any error in induces an error in that is .thus by substituting the computed eigenfunction into the variational expression a more precise determination of should result .this has indeed been confirmed .the second technique uses explicitly that the difference scheme [ eq : e3.11 ] , which is used by one version of the shooting technique , and the relaxation technique , is of second order . consequently the truncation errors in the eigenfrequency and eigenfunction scale as .if and are the eigenfrequencies obtained from solutions with and meshpoints , the leading - order error term therefore cancels in \ ; . 
\label{eq : e3.18}\ ] ] this procedure , known as _ richardson extrapolation _ , was used by .it provides an estimate of the eigenfrequency that is substantially more accurate than , although of course at some added computational expense .indeed , since the error in the representation [ eq : e3.11 ] depends only on even powers of , the leading term of the error in is . even with these techniquesthe precision of the computed frequencies may be inadequate if the mesh used in stellar - evolution calculations is used also for the computation of the oscillations .the number of meshpoints is typically relatively modest and the distribution may not reflect the requirement to resolve properly the eigenfunctions of the modes . discussed techniques to redistribute the mesh in a way that takes into account the asymptotic behaviour of the eigenfunctions ; a code to do so , based on four - point lagrangian interpolation , is included in the adipls distribution package .on the other hand , for computing low - order modes ( as are typically relevant for , say , scuti or cephei stars ) , the original mesh of the evolution calculation may be adequate .it is difficult to provide general recommendations concerning the required number of points or the need for redistribution , since this depends strongly on the types of modes and the properties of the stellar model .it is recommended to carry out experiments varying the number and distribution of points to obtain estimates of the intrinsic precision of the computation ( e.g. , * ? ? ?* ; * ? ? ?in the latter case , considering simple polytropic models , it was found that 4801 points yielded a relative precision substantially better than for high - order p modes , when richardson extrapolation was used . in the discussion of the frequency calculation it is important to distinguish between _ precision _ and _ accuracy _ ,the latter obviously referring to the extent to which the computed frequencies represent what might be considered the ` true ' frequencies of the model .in particular , the manipulations required to derive eq . [ eq : varprinc ] and to demonstrate its variational property depend on the equation of hydrostatic support being satisfied .if this is not the case , as might well happen in an insufficiently careful stellar model calculation , the value determined from the variational principle may be quite precise , in the sense of numerically stable , but still unacceptably far from the correct value .indeed , a comparison between and provides some measure of the reliability of the computed frequencies ( e.g. * ? ? ?the programme finds the order of the mode according to the definition developed by and , based on earlier work by .specifically , the order is defined by here the sum is over the zeros in ( excluding the centre ) , and is the sign function , if and if . for a complete model that includes the centre for radial oscillations and for nonradial oscillations .thus the lowest - order radial oscillation has order .although this is contrary to the commonly used convention of assigning order 0 to the fundamental radial oscillation , the convention used here is in fact the more reasonable , in the sense that it ensures that is invariant under a continuous variation of from 0 to 1 . with this definition for p modes , for f modes , and for g modes , at least in simple models .it has been found that this procedure has serious problems for dipolar modes in centrally condensed models ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? 
?the eigenfunctions are shifted such that nodes disappear or otherwise provide spurious results when eq .[ eq : e4.1 ] is used to determine the mode order . a procedure that does not suffer from this difficulty has recently been developed by ; i discuss it further in section [ sec : develop ] . a powerful measure of the characteristics of a mode is provided by the _ normalized inertia_. the code calculates this as \rho r^2 { { \rm d}}r \over m [ \xi_r ( r_{\rm phot } ) ^2 + l(l+1 ) { \xi_{\rm h}}(r_{\rm phot } ) ^2 ] } \nonumber \\ & = & { \int_{x_1}^{{x_{\rm s } } } \left [ y_1 ^ 2 + y_2 ^ 2 / l ( l + 1 ) \right ] q u { { \rm d}}x / x \over 4 \pi [ y_1 ( x_{\rm phot } ) ^2 + y_2 ( x_{\rm phot } ) ^2/l(l+1 ) ] } \ ; .\end{aligned}\ ] ] ( for radial modes the terms in are not included . ) here and are the distance of the innermost mesh point from the centre and the surface radius , respectively , and is the fractional photospheric radius .the normalization at the photosphere is to some extent arbitrary , of course , but reflects the fact that many radial - velocity observations use lines formed relatively deep in the atmosphere .a more common definition of the inertia is where is the so - called _mode mass_. the code has the option to output the eigenfunctions , in the form of .in addition ( or instead ) the displacement eigenfunctions can be output in a form indicating the region where the mode predominantly resides , in an energetical sense , as ( for radial modes only is found ) .these are defined in such a way that { { \rm d}}x / x \over 4 \pi [ y_1 ( x_{\rm phot } ) ^2 + y_2 ( x_{\rm phot } ) ^2/l(l+1 ) ] } \ ; . \label{eq : e4.4}\ ] ] the form provided by the is also convenient , e.g. , for computing rotational splittings ( e.g. , * ? ? ? * ) , where is the frequency of a mode of radial order , degree and azimuthal order . for slow rotationthe splittings are obtained from first - order perturbation analysis as characterized by _ kernels _ , where in general the angular velocity depends on both and .the code has built in the option to compute kernels for first - order rotational splitting in the special case where depends only on .several revisions of the code have been implemented in preliminary form or are under development .a substantial improvement in the numerical solution of the oscillation equations , particularly for high - order modes , is the installation of a fourth - order integration scheme , based on the algorithm of .this is essentially operational but has so far not been carefully tested .comparisons with the results of the variational expression and the use of richardson extrapolation , of the same formal order , will be particularly interesting . as discussed by the use of ( or , as here , ) as one of the integration variables has the disadvantage that the quantity enters into the oscillation equations .in models with a density discontinuity , such as results if the model has a growing convective core and diffusion is neglected , has a delta - function singularity at the point of the discontinuity . in the adipls calculations thisis dealt with by replacing the discontinuity by a very steep and well - resolved slope .however , it would obviously be an advantage to avoid this problem altogether . 
this can be achieved by using instead the lagrangian pressure perturbation as one of the variables .implementing this option would be a relatively straightforward modification to the code and is under consideration .the proper classification of dipolar modes of low order in centrally condensed models has been a long - standing problem in the theory of stellar pulsations , as discussed in section [ sec : results ] .such a scheme must provide a unique order for each mode , such that the order is invariant under continuous changes of the equilibrium model , e.g. , as a result of stellar evolution . as a major breakthrough , takata in a series of papers has elucidated important properties of these modes and defined a new classification scheme satisfying this requirement .a preliminary version of this scheme has been implemented and tested ; however , the latest and most convenient form of the takata classification still needs to be installed .a version of the code has been established which computes the first - order rotational splitting for a given rotation profile , in addition to setting up the corresponding kernels .this is being extended by k. burke , sheffield , to cover also second - order effects of rotation , based on the formalism of .an important motivation for this is the integration , discussed by , of the pulsation calculation with the astec evolution code to allow full calculation of oscillation frequencies for a model of specified parameters ( mass , age , initial rotation rate , etc . ) as the result of a single subroutine call .i am very grateful to w. dziembowski and d. o. gough for illuminating discussions of the properties of stellar oscillations , and to a. moya and m. j. p. f. g. monteiro for organizing the comparisons of stellar oscillation and model calculations within the esta collaboration .i thank the referee for useful comments which , i hope , have helped improving the presentation .this project is being supported by the danish natural science research council and by the european helio- and asteroseismology network ( helas ) , a major international collaboration funded by the european commission s sixth framework programme .christensen - dalsgaard , j. , berthomieu , g. : theory of solar oscillations . in : cox , a. n. , livingston , w. c. , matthews , m. ( eds ) , _ solar interior and atmosphere _ , p. 401space science series , university of arizona press ( 1991 ) moya , a. , christensen - dalsgaard , j. , charpinet , s. , lebreton , y. , miglio , a. , montalbn , j. , monteiro , m. j. p. f. g. , provost , j. , roxburgh , i. , scuflaire , r. , surez , j. c. , suran , m. : inter - comparison of the g- , f- and p - modes calculated using different oscillation codes for a given stellar model .apss , this volume ( 2007 )
Development of the Aarhus adiabatic pulsation code started around 1978. Although the main features have been stable for more than a decade, development of the code is continuing, particularly concerning its numerical properties and output. The code has been provided as a generally available package and has seen substantial use at a number of installations. Further development of the package, including bringing the documentation closer to being up to date, is planned as part of the HELAS Coordination Action.
it was game theory research , seeking screening strategies to prevent the counterfeiting of websites i.e. phishing attacks .various ways for websites to counterfeit installed software behaviour were studied . in full screen mode , it was found that , browsers can counterfeit almost anything , including blue screens of death and formatting the hard drive . this is discussed in section ii . from an academic point of view, full screen counterfeiting eliminates several categories of installed software behaviour , as possible anti - counterfeiting solutions .one category of installed software behaviour was resistant to counterfeiting .in section iii i present an explanation of this category from the discipline of cryptography and another explanation from the discipline of game theory .every solution , in that category , was found to be a user - browser shared secret .basically mallory can not counterfeit what mallory does not know .the user - browser shared secret is not known by either bob or mallory .this means a user - browser shared secret can be used to extend tls to seek further authentication of bob by the computer user . on successful verification of a tls certificate s digital signature , the browser presents a dialogue , fig .1 . mallory can not counterfeit this dialogue without hacking in , to steal the shared secret .the computer user now authenticates ( 1 ) her browser dialogue and ( 2 ) bob , by accepting the shared secret and bob s identity credentials which came from the tls certificate .acceptance of these two proofs of identity is communicated by the user entering her login credentials . in section iv the possibility of bob authenticating alice was studied .my research found that the absence of trent in this process means that alice must authenticate bob first , during account setup .this makes alice authenticating bob more important than the other way around . at this pointi returned to researching alice s authentication of bob . our total dependence on alice authenticating bob requires us to make this act as straight forward as possible .the final solution proposed is a mixed strategy of ( 1 ) a user - browser shared secret to facilitate extending tls , for alice to authenticate bob , ( 2 ) standardisation of the login process via a browser created login window , ( 3 ) using central banks as trent for financial institutions and ( 4 ) utilisation of common knowledge and education to guide alice through the process i.e. to prevent phishing attacks alice must fulfil her role and authenticate bob .the idea is that a phishing attack is a game of incomplete information . that the user does not even know that a phishing attack is taking place .it is the successful counterfeiting of the website that does this .if we can devise a signalling strategy which can not be counterfeited then the computer user will know when a phishing attack is taking place .they will back away from the phishing website causing the phishing attack to fail .the idea was to add information , specifically an anti - counterfeiting signalling strategy which would be triggered after the browser has verified the digital signature on bob s tls certificate .i listed behaviour that installed software is capable of but websites are not capable of .the idea was your browser is installed software so it has this advantage over websites trying to counterfeit its behaviour .the following categories were proposed for research : 1 .drawing outside the browser canvas area .2 . creation of modal windows .file manipulation e.g. 
file creation , copying , renaming etc .this includes the possibility of formatting the hard disk , though we ca nt use that as evidence either .access to local data and operating system identifiers e.g. your username , your account login picture or whether or not you have accessed this website before .microsoft , user account control behaviour . 6 .existing best practice i.e. inspection of the tls certificate being used by your browser . in my original researchi dismissed or counterfeited every category except number 4 .every solution in category 4 is actually a secret shared between the computer user and their web browser .category 4 is discussed below in section iii .originally i dismissed category 3 believing it to be unworkable .however references [ 3 ] and sitekey [ 5 ] both use cookies to trigger their solutions .cookies actually fit category 3 .this is discussed further in section iv .a key component of this research was the study of screening strategies .while it was not my intention mallory , the counterfeiters , never wanted screening strategies to work .effectively i was researching an arms race where mallory would keep changing her behaviour to prevent my screening strategies from working .section vi outlines one interpretation of that arms race .the actual path that i followed was to study the categories listed above .there is no point in me documenting that research here because it is quite similar to discussions of screening strategies found in [ 1 ] and [ 2 ] .one phishing attack website that i stumbled upon requested a username and password . even though the genuine website was open access .this type of phishing is more social engineering than counterfeiting . during my researchthe most versatile type of phishing i stumbled upon was full screen counterfeiting .about ten years ago browsers stopped the javascript function window.open() from creating completely undecorated popup windows .the function still works , however an address bar is added to the window , as shown in fig .2 . without thisit would be easy to counterfeit browser controls and inspection of the tls certificate. within the past few years a fullscreen api has been added to html 5 / javascript .it s still in development so browser specific function names exist like mozrequestfullscreen ( ) and webkitrequestfullscreen ( ) .once a user performs an action which results in this function call , the browser switches into full screen mode .the transition to full screen happens whether or not the user has been warned in advance .yes the browser will show a warning , after the transition .however each browser has its own warning windows and websites can query your browser and deliver the appropriate counterfeit .3 shows a firefox warning .the thing about social engineering is that almost anything goes .consider a criminal harassing a user with warnings , like that shown in fig .4 . once the user is accustomed to hitting allow , the criminals can request full screen mode and hope the user falls for it .after that they can counterfeit the entire computer desktop , blue screens of death etc . for example , fig .5 shows six bitmap images .they are deliberately drawn to look fake . when assembled correctly they mimic a desktop .6 shows a computer desktop just before a full screen counterfeiting attack .once the user clicks on a button which executes javascript .the requestfullscreen ( ) function is called , along with code which alters the webpage to show the fake browser , and desktop , controls .this turns fig . 
6 into fig .note the substitution of the bitmaps from fig . 5 for the genuine browser and desktop .if the user does not notice that the warning message is different they may click on allow. there are a number of differences between the harassment window and the genuine warning window . along with the message being differentthe harassment window can not grey the entire desktop , nor can it be drawn at the top centre of the screen .if the bitmaps used in fig . 5 were realistic then fig . 6 and fig .8 would be almost identical . furthermore fig .7 would only appear odd / unusual because the greyed area extended beyond the perceived canvas area and because the warning window appeared outside of the perceived canvas area .these are very weak indicators of counterfeiting . from a researchers point of viewmany types of installed software behaviour can be counterfeited .including browser addons , inspection of tls certificates , and microsoft user account control behaviour . as such categories 5 and6 must be eliminated as suitable anti - counterfeiting solutions .furthermore we now need to be concerned with counterfeiting of blue screens of death , hackers / criminals blackmailing people with the threat of formatting their hard drives etc .the purpose here is to demonstrate these mechanisms .no user testing has been performed .the academic exercise of demonstrating that this is possible is sufficient to eliminate categories 5 and 6 .there is anecdotal evidence in [ 7 ] that these tactics will work . in the short termbest practice can be changed to ( 1 ) hit escape to exit full screen , ( 2 ) click on the padlock symbol and ( 3 ) read the tls certificate details window , to ensure you are connected to the correct website .also we can roll back the full screen api , just like the undecorated window functionality .i believe the system has been mistakenly identified as a two actor system .when , in fact , a third actor is present .the failure of tls to force proper authentication of bob allows mallory to masquerade as bob , to counterfeit his identity i.e. phishing attack .you might argue that the two actor model is incorrect , that users check for the presence of the padlock symbol .if they did then phishing attacks would not exist , or the conflict would escalate as outlined in section vi . fig .9 shows a schematic of the existing two actor model and my three actor model . to distinguish between the actors in the two modelsi have given them different names , as shown .in the existing system alice - browser verifies the digital signature on bob s tls certificate . on success alice - browser and bob proceed to implement tls . in my model hal - browserverifies the digital signature on bob s tls certificate . on success hal - browserturns to alice - human and invites her to further authenticate bob .he does this by displaying a window like that shown in fig .the problem is : this act is vulnerable to counterfeiting . in this context counterfeitingis referred to as a phishing attack . shown in fig .10 is a picture of a turtle which is a shared secret between alice - human and hal - browser .neither bob nor mallory know this secret .as such mallory can not counterfeit fig . 10 without hacking into hal - browser to steal the secret .hacking into thousands of computers to steal these secrets is an entirely different endeavour to tricking people into going to a fake website . 
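The user-browser shared secret just described can be sketched in a few lines of Python; this is a purely conceptual illustration, not part of the original proposal. A real implementation lives inside the browser, keeps the secret in the platform keystore, reuses the browser's own TLS verification and draws a dialogue that page content cannot cover; the function names and the choice of a random phrase as the secret are assumptions made here for illustration.

import secrets
import socket
import ssl

def create_shared_secret(store: dict) -> str:
    """Run once, e.g. at browser installation: the secret is known only to the user and the browser."""
    store["secret"] = secrets.token_urlsafe(12)     # in practice a user-chosen picture or phrase
    return store["secret"]

def authentication_dialogue(hostname: str, store: dict) -> None:
    """Only after the certificate chain verifies does the browser reveal the shared secret together
    with the certificate's identity credentials; Mallory cannot reproduce a secret she never saw."""
    ctx = ssl.create_default_context()              # verifies the chain against the trusted roots
    with socket.create_connection((hostname, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    subject = dict(item[0] for item in cert["subject"])
    issuer = dict(item[0] for item in cert["issuer"])
    print("Your shared secret:", store["secret"])   # proof the dialogue was drawn by the browser
    print("Site identity:", subject.get("commonName"), "- vouched for by:", issuer.get("commonName"))
    # only now is the user invited to enter her login credentials

store = {}
create_shared_secret(store)
# authentication_dialogue("example.com", store)     # requires network access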
once you correctly model the system as a three actor system .cryptographers know how to appropriately authenticate the three participants .as such fig .10 is a relatively obvious step for cryptographers .dhamija et al also use a user - display shared secret .they use it to protect a dedicated login window from counterfeiting .they do not appear to go beyond that and use it to present bob s identity credentials [ 4 ] . with my solution , by entering her login credentialsalice - human is accepting bob s identity credentials and her browser s shared secret .she is authenticating both bob and her web browser .hal - browser then proceeds to implement tls .hence fig .10 extends tls to ensure proper authentication of bob by alice - human .alice - human now knows she is looking at a dialogue created by her web browser i.e. it is not a counterfeit , a phishing attack .she can now examine the identity credentials presented and complete bob s authentication .i was approaching this as a game theorist seeking screening strategies to prevent counterfeiting . herefollows an outline of the game theory interpretation .anti - counterfeiting technologies and the screening strategy that accompany them go together like a lock and key pair .the research involved the study of each category , from section ii , to find screening strategies which would prevent phishing attacks .the definition of a screening strategy , from [ 2 ] is given since its language is used to frame the discussion that follows . from [ 2 ] : _ a screening strategy is a strategy used by a less informed player to elicit information from a more informed player . _ human interactive proofs ( captcha ) , turning tests and anti - counterfeiting technologies are all specific types of screening strategy . here too authentication , through the confirmation of a shared secret , constitutes a screening strategy .the less informed player is eliciting the identity of the more informed player .they are not eliciting the secret because they already know it .they want to know do you know what the secret is? this is why it s just a point of view that this is cryptography . as a game theoristi see a screening strategy .it elicits their identity , as the individual who knows the secret or someone else .furthermore , the fact that this works while other approaches fail indicates phishing attacks involve the counterfeiting of an identity , not a website .this is significant because it allows us to prevent any type of counterfeiting .it recasts counterfeiting as theft of intellectual property , patents , copyright , trademarks , designs etc .accompanied by identity theft .the purpose of the identity theft is to undermine law enforcement attempts which would otherwise prevent the intellectual property theft .this means authentication based solutions can be developed for any type of counterfeiting including manufactured goods like pharmaceutical drugs and currencies .references [ 3 ] and sitekey [ 5 ] both utilise cookies to trigger mutual authentication . in both cases ,the cookie is used to authenticate hal - browser to bob .bob then presents his shared secret , seeking authentication from alice - human .this behaviour resides outside of the shared secret category of solutions , just discussed .it s subtle , but this is a new category of behaviour which is resistant to counterfeiting . 
as a machine hal - browser can discern the difference between identical websites .he can then treat these websites differently , even if we humans ca nt see the difference .all solutions in this new category involve the web browser differentiating between websites and then responding in an installed software manner , which is not vulnerable to counterfeiting . herefollow our two examples : 1 .browsers correctly returning cookies to the websites that created them .2 . browsers correctly volunteering saved passwords for the websites they belong to . for the solution presented in fig .10 our browser responds with installed software behaviour which is not vulnerable to counterfeiting .however that response is not specific to each website .both cookies and saved passwords require the browser to distinguish between websites and provide a response which is specific to that website . to develop a mechanism for bob to authenticate alice , consider the following thought experiments . on account setupa bank can create a login page which will trigger the browser save password facility .now the bank specifies both the username and password. the password should be very high entropy , impossible to remember by alice - human .the idea is that the browser will remember it .since the browser can distinguish between websites it will never enter the data into a phishing website .create a cookie with a high entropy code stored inside it .this is equivalent to the password idea , just described i.e. the browser only hands over cookies to the websites that created them .the cookie could be created with multi factor authentication e.g. a code could be sent to alice - human s mobile phone .both solutions are equivalent and they are both vulnerable to the same kinds of attack .i refer to this as cookie as certificate since these solutions use cookies as a poor man s certificate . to identify hal - browser to bob . both [ 3 ] and sitekey utilise cookies as certificates .the thing is , they do nt have a secret private key and they do nt have a digital signature .basically they are a number , a password .all they have going for them is that the browser will only hand them over to the website that created them .mallory simply attacks the cookie creation process .while various solutions can be presented they tend to look like an arms race e.g. mallory could try a brute force attack generating lots of cookies .bob will respond in the usual ways . on closer inspection ,the asymmetric nature of tls makes it easier for alice - human to authenticate bob , than the other way around .alice - human can trust trent and proceed accordingly . in bob s interactions with alice - human no third party ( trent ) is vouching for anyone . for bob to use a cookie, he must create that cookie as part of an account registration process .whatever process is used , it will always be vulnerable to attack by mallory unless alice - human correctly authenticates bob during that account registration . basically , without trent, bob can never authenticate alice - human without her authenticating him first .the cookie creation process will always involve alice - human authenticating bob or it will be vulnerable to mitm .hence bob can not authenticate alice - human without her authenticating him first .this makes bob authenticating alice - human less important than the other way around .this is because tls is asymmetric .alice - human can enjoy the use of tls certificates .the user - browser secret combined with the tls certificate is fig .10 . 
That's all Alice-human needs. If Alice-human will always authenticate Bob first, then how Bob authenticates her is academic. Codes, pictures, mobile phone, email: it is all the same. He might as well just ask for a username and password. It is up to her not to type her credentials into a phishing website. The responsibility always falls on her to properly authenticate Bob first; after that, it is academic whether he accepts a username and password or some convoluted dance. Effectively this breaks the use of cookies as an anti-counterfeiting mechanism. Yes, they are an effective solution; however, they are dependent upon TLS and on Fig. 10. We are actually going in the wrong direction. Instead of worrying about Bob authenticating Alice-human, we should focus more on actually getting Alice-human to authenticate Bob. The original idea was quite similar to [3] and SiteKey; however, it was abandoned, as outlined above. Basically, an out-of-band, multi-factor authentication technique could be used to create a cookie, e.g. by sending a text message to Alice-human's phone. Fig. 12 would need to be used to create the cookie, just to ensure TLS is actually being used. Even then it would still fail, i.e. this system is vulnerable to all the attacks that work against SiteKey, and users would only see Fig. 12 once every three or four years. The reason why it is a nice solution, and why it is included here, is that Mallory would not be able to get the login screen up without the cookie. So even if Mallory has successfully harvested correct usernames and passwords, she would still need to create the cookie to get the login screen up. It is possible this solution will still be implemented; however, computer users still need to authenticate Bob. There is no getting away from this. This is the cookie-as-username idea. Fig. 11 shows Fig. 10 with this additional solution implemented. [Figure caption: the shared secret is now a photo of a ruler which looks like a giraffe; hence Fig. 12.] Before I dismissed cookies, I tried to overcome their shortcomings. Fig. 12 is a mock-up of a browser-created dialogue for the creation of a cookie. SiteKey, [3] and the cookie-as-username idea all require Fig. 12 to ensure a MITM attack is not used. Since Alice-human will only see this dialogue once every three or four years, this solution will fail. To overcome this problem we need Alice-human to be educated in the art. Since it is not practical to force people to take lessons, we need another way. The counterfeiting of currencies is hindered through common knowledge, i.e. people use genuine bank notes every day; when they encounter a new note, their everyday knowledge aids them in evaluating that note's authenticity. In the same way, Fig. 10 will become common knowledge, effectively Krugman's theory of passive learning [8]. Clearly, passive learning and low-involvement consumer behaviour are beyond the scope of this paper; the ideas outlined here need to be combined with user studies like [7][9] and with academic work in the field of education and passive learning. To educate Alice-human in the art, we can make Fig. 10 the second page in a login wizard. The first page can be used to present messages which will slowly educate the user, e.g. Fig. 13 and Fig. 14. Effectively, this passive learning approach will make the required knowledge common knowledge. The idea is to meet users half way, i.e. we should adjust our authentication solution to make it more accessible, and we should aid users in its use.
where flaws appear the solution can be assessed to ascertain why the system failed .if the authentication mechanism is flawed we can alter tls , if users are being tricked by some aspect of the system , we can adjust both the educational messages and the login process till they can discern genuine from counterfeit .dhamija et al found that some users did not know this type of fraud was possible .some users accepted websites which contained professional looking images and distrusted websites which were plain i.e. few links and few images . some distrusted webpages which did not originate from the bank s main homepage [ 7 ] .misconceptions like this can be addressed with informational messages on wizard page one .informing and educating does not mean annoying , so a i know this check box could be added to remove educational messages that the user already knows .eventually just the login page will appear or questions during setup could also remove it .the existing system is unhelpful for many reasons .users must remember to click on the padlock to examine the tls certificate . even then , they do nt really know if www.bobsonlinebanking.com should be used rather than www.bobsonlinebanking.ie .trent should be an actual trusted third party .the public probably do nt even know the name of any certificate authorities .identities presented in tls certificates are de facto worldwide trademarks .they are these businesses shopfront .yet we put all of these brand names behind a picture of a padlock . on inspecting fig .10 alice - human needs to remember three things , her user - browser shared secret , trent and bob . her secret will be easy because it s the same for every website .once the phishers start attacking browser weaknesses associated with tls , alice - human will need to remember trent .so why not make it easy for her ?if central banks sell tls certificates to banks that operate within their regulation domain then those certificates can state that central bank s name as trent .hence european central bank , federal reserve , bank of england , bank of japan etc . will all appear as trent in their respective countries .now alice - human has less to remember .regular ecommerce sites can continue as usual with tls certificates bought on the open market .also the central banks can outsource the certificate creation process . so the certificate authorities will not be out of pocket, they just wo nt have their name in lights on fig .effectively this is an extension of the existing extended validation certificates idea .the central banks would end up authenticating the identity of these banks .this is precisely trent s role .tell users to search for the padlock . 
with either commercially purchased tls certificates , or certificates they created themselves , the criminals can now display a padlock symbol .users are now told to click on the padlock and examine the tls certificate details to verify bob s identity .life is a lot more difficult for the phishers because they now need to trick users into full screen mode .however , once there , they can counterfeit anything .this gives them more options for theft / crime .authentication windows like fig .10 are introduced and users are directed to only enter their login credentials into these login windows .hal - browser only displays the authentication window fig .10 on successful verification of a tls certificate s digital signature .hence mallory can not persuade hal - browser to display the window without her own tls certificate .the criminals are likely to attack tls and browser weaknesses associated with tls certificates .anti - virus , private keys , shared secrets all have a role to play .alice - human s job is to complete bob s authentication .a mixed strategy of educating alice , making details of the process common knowledge and altering the authentication process can be used to bring alice - human into the process . at the end of the day ,only alice - human can authenticate bob .phishing attacks are not the counterfeiting of a website but the counterfeiting of an identity .the research found the system to be responsive to strategies which protect the website creator s identity , whereas website content could not be protected .in fact , the research found that web browsers can counterfeit almost anything .the asymmetric nature of tls makes alice - human authenticating bob more important than vice versa . without trent ,bob authenticating alice - human will always require her to authenticate him first , during account setup .otherwise a mitm attack will occur .this requirement undermines solutions like sitekey .the conclusion is that a mixed strategy is called for .a user - browser shared secret allows tls to be extended so alice - human can complete bob s authentication .the shared secret protects this act from counterfeiting , from phishing , since mallory can not counterfeit what mallory does not know .the login process should be standardised into a predictable login window .this makes details of the interaction common knowledge , guiding alice - human through the process .furthermore educational messages displayed in a login wizard will educate the public by making this knowledge common knowledge .this includes the possibility of central banks taking over the role of trent for financial institutions.the mixed strategy means meeting users half way. alice - human must fulfil her role and authenticate bob .we must make this task as easy as possible .protecting the authentication process from counterfeiting , through the use of a user - browser shared secret , using central banks for trent , standardising the login window , informing and educating users all bring us closer to alice fulfilling her role .s. schechter , r. dhamija , a. ozment and i. fischer , the emperor s new security indicators , 2007 ieee symposium on security and privacy ( sp 07 ) , 2007 .r. dhamija , j. tygar and m. hearst , why phishing works , proceedings of the sigchi conference on human factors in computing systems - chi 06 , 2006 .p. kumaraguru , s. sheng , a. acquisti , l. f. cranor , and j. 
Hong, "Lessons from a real world evaluation of anti-phishing training", eCrime Researchers Summit, Anti-Phishing Working Group, October 2008.
Phishing attacks occur because computer users fail to authenticate Bob. Authenticating Bob is the computer user's role; nobody else can carry out this task. I researched the ability of browsers to counterfeit the behaviour of installed software. The objective was to find a signalling strategy that would protect against counterfeiting, i.e. against phishing attacks. The research indicates that a user-browser shared secret cannot be counterfeited, because Mallory cannot counterfeit what Mallory does not know. After the browser has verified a TLS certificate's digital signature, it should create a two-page login wizard. The first page displays a random educational message to inform and educate users about the process. The second page shows the user (1) the user-browser shared secret, (2) the verified identity credentials from the TLS certificate and (3) the input fields for her login credentials. The shared secret prevents counterfeiting and hence phishing. Computer users can then authenticate Bob by examining the TLS certificate's identity credentials, while the educational messages communicate the issues and pitfalls involved. On accepting Bob as Bob, the user enters her login credentials and logs in. Keywords: phishing attacks, game theory, applied cryptography, authentication, security protocols, human factors.
inferring macroscopic properties of physical systems from their microscopic description is an ongoing work in many disciplines of physics , like condensed matter , ultra cold atoms or quantum chromo dynamics .the most drastic changes in the macroscopic properties of a physical system occur at phase transitions , which often involve a symmetry breaking process .the theory of such phase transitions was formulated by landau as a phenomenological model and later devised from microscopic principles using the renormalization group .one can identify phases by knowledge of an order parameter which is zero in the disordered phase and nonzero in the ordered phase .whereas in many known models the order parameter can be determined by symmetry considerations of the underlying hamiltonian , there are states of matter where such a parameter can only be defined in a complicated non - local way .these systems include topological states like topological insulators , quantum spin hall states or quantum spin liquids .therefore , we need to develop new methods to identify parameters capable of describing phase transitions . such methods might be borrowed from machine learning . since the 1990s this field has undergone major changes with the development of more powerful computers and artificial neural networks .it has been shown that such neural networks can approximate every function under mild assumptions .they quickly found applications in image classification , speech recognition , natural language understanding and predicting from high - dimensional data .furthermore , they began to outperform other algorithms on these tasks . in the last years physicists started to employ machine learning techniques .most of the tasks were tackled by supervised learning algorithms or with the help of reinforcement learning . supervisedlearning means one is given labeled training data from which the algorithm learns to assign labels to data points . after successful trainingit can then predict the labels of previously unseen data with high accuracy .in addition to supervised learning , there are unsupervised learning algorithms which can find structure in unlabeled data .they can also classify data into clusters , which are however unlabelled .it is already possible to employ unsupervised learning techniques to reproduce monte - carlo - sampled states of the ising model .phase transitions were found in an unsupervised manner using principal component analysis .we employ more powerful machine learning algorithms and transition to methods that can handle nonlinear data .a first nonlinear extension is kernel principal component analysis .the first versions of autoencoders have been around for decades and were primarily used for dimensional reduction of data before feeding it to a machine learning algorithm .they are created from an encoding artificial neural network , which outputs a latent representation of the input data , and a decoding neural network that tries to accurately reconstruct the input data from its latent representation .very shallow versions of autoencoders can reproduce the results of principal component analysis . in 2013 ,variational autoencoders have been developed as one of the most successful unsupervised learning algorithms .in contrast to traditional autoencoders , variational autoencoders impose restrictions on the distribution of latent variables .they have shown promising results in encoding and reconstructing data in the field of computer vision . 
in this workwe use unsupervised learning to determine phase transitions without any information about the microscopic theory or the order parameter .we transition from principal component analysis to variational autoencoders , and finally test how the latter handles different physical models .our algorithms are able to find a low dimensional latent representation of the physical system which coincides with the correct order parameter .the decoder network reconstructs the encoded configuration from its latent representation .we find that the reconstruction is more accurate in the ordered phase , which suggests the use of the reconstruction error as a universal identifier for phase transitions .whereas for physicists this work is a promising way to find order parameters of systems where they are hard to identify , computer scientists and machine learning researchers might find an interpretation of the latent parameters .the ising model is one of the most - studied and well - understood models in physics .whereas the one - dimensional ising model does not possess a phase transition , the two - dimensional model does .the hamiltonian of the ising model on the square lattice with vanishing external magnetic field reads with uniform interaction strength and discrete spins on each site .the notation indicates a summation over nearest neighbors . a spin configuration is a fixed assignment of a spin to each lattice site, denotes the set of all possible configurations .we set the boltzmann constant and the interaction strength for the ferromagnetic case and for the antiferromagnetic case .a spin configuration can be expressed in matrix form as lars onsager solved the two dimensional ising model in 1944 .he showed that the critical temperature is .for the purpose of this work , we assume a square lattice with length such that , and periodic boundary conditions .we sample the ising model using a monte - carlo algorithm at temperatures ] .the order parameter is defined analogously to the ising model magnetization , but with the -norm of a magnetization consisting of two components ._ principal component analysis_ is an orthogonal linear transformation of the data to an ordered set of variables , sorted by their variance .the first variable , which has the largest variance , is called the first principal component , and so on . the linear function , which maps a collection of spin samples to its first principal component , is defined as \ , \label{eq : pca}\end{aligned}\ ] ] where is the vector of mean values of each spin averaged over the whole dataset .further principal components are obtained by subtracting the already calculated principal components and repeating ._ kernel principal component analysis _ projects the data into a kernel space in which the principal component analysis is then performed . in this workthe nonlinearity is induced by a radial basis functions kernel . 
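A minimal sketch of the sampling and analysis pipeline described above is given below; it is not the authors' production code. The lattice size, the number of sweeps and the temperature grid are illustrative assumptions (the values used in the paper are not recoverable from the text). Configurations are sampled with a Metropolis algorithm at several temperatures around the critical point, and the first principal component of the flattened spin configurations is compared with the magnetization per site.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
L, J = 16, 1.0          # illustrative lattice size and ferromagnetic coupling; k_B = 1

def sweep(spins, beta):
    """One Metropolis sweep of the L x L Ising lattice with periodic boundary conditions."""
    for _ in range(spins.size):
        i, j = rng.integers(L, size=2)
        nb = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
        dE = 2.0 * J * spins[i, j] * nb          # energy change for flipping spin (i, j)
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]

def sample(T, n_samples=60, n_thermalize=300, n_decorrelate=5):
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_thermalize):
        sweep(spins, 1.0 / T)
    configs = []
    for _ in range(n_samples):
        for _ in range(n_decorrelate):
            sweep(spins, 1.0 / T)
        configs.append(spins.copy().ravel())
    return np.array(configs)

temperatures = np.linspace(1.0, 3.5, 6)          # brackets the critical temperature T_c ~ 2.269
data = np.vstack([sample(T) for T in temperatures])
magnetization = data.mean(axis=1)                # magnetization per site of each configuration

p1 = PCA(n_components=1).fit_transform(data).ravel()    # first principal component
print(np.corrcoef(p1, magnetization)[0, 1])             # close to +1 or -1: p1 tracks the magnetization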
_traditional neural network - based autoencoders _ consist of two artificial neural networks stacked on top of each other .the encoder network is responsible for encoding the input data into some latent variables .the decoder network is used to decode these parameters in order to return an accurate recreation of the input data , shown in .the parameters of this algorithm are trained by performing gradient descent updates in order to minimize the reconstruction loss ( reconstruction error ) between input data and output data ._ variational autoencoders _ are a modern version of autoencoders which impose additional constraints on the encoded representations , see latent variables in .these constraints transform the autoencoder to an algorithm that learns a latent variable model for its input data . whereas the neural networks of traditional autoencoders learn an arbitrary function to encode and decode the input data , variational autoencoders learn the parameters of a probability distribution modeling the data . after learning the probability distribution , one can sample parameters from it and then let the encoder network generate samples closely resembling the training data . to achieve this, variational autoencoders employ the assumption that one can sample the input data from a unit gaussian distribution of latent parameters .the weights of the model are trained by simultaneously optimizing two loss functions , a reconstruction loss and the kullback - leibler divergence between the learned latent distribution and a prior unit gaussian . in this workwe use autoencoders and variational autoencoders with one fully connected hidden layer in the encoder as well as one fully connected hidden layer in the decoder , each consisting of 256 neurons .the number of latent variables is chosen to match the model from which we sample the input data .the activation functions of the intermediate layers are rectified linear units .the activation function of the final layer is a _ sigmoid _ in order to predict probabilities of spin or in the ising model , or _tanh _ for predicting continuous values of spin components in the xy model .we do not employ any or dropout regularization .however , we tune the relative weight of the two loss functions of the variational autoencoder to fit the problem at hand .the kullback - leibler divergence of the variational autoencoder can be regarded as reguarization of the traditional autoencoder . in our autoencoderthe reconstruction loss is the cross - entropy loss between the input and output probability of discrete spins , as in the ising model .the reconstruction loss is the mean - squared - error between the input and the output data of continuous spin variables in the xy model . to understand why a variational autoencoder can be a suitable choice for the task of classifying phases , we recall what happens during training .the weights of the autoencoder learn two things : on the one hand , they learn to encode the similarities of all samples to allow for an efficient reconstruction . 
On the other hand, they learn a latent distribution of the parameters which encodes the most information possible to distinguish between different input samples. Let us translate these considerations to the physics of phase transitions. If all the training samples are in the unordered phase, the autoencoder learns the common structure of all samples. The autoencoder fails to learn any random entropy fluctuations, which are averaged out over all data points. However, in the ordered phase there exists a common order in samples belonging to the same phase. This common order translates to a nonzero latent parameter, which encodes correlations in each input sample. It turns out that in our cases this parameter is the order parameter corresponding to the broken symmetry. It is not necessary to find a perfect linear transformation between the order parameter and the latent parameter, as is the case in . A one-to-one correspondence is sufficient, such that one is able to define a function that maps these parameters onto each other and captures all discontinuities of the derivatives of the order parameter. We point out similarities between principal component analysis and autoencoders. Although both methods seem very different, they share common characteristics. Principal component analysis is a dimensionality reduction method which finds the linear projections of the data that maximize the variance. Reconstructing the input data from its principal components minimizes the mean squared reconstruction error. Although the principal components do not need to follow a Gaussian distribution, principal components have the highest mutual agreement with the data if it emerges from a Gaussian prior. Moreover, a single-layer autoencoder with linear activation functions closely resembles principal component analysis. Principal component analysis is much easier to apply and in general uses fewer parameters than autoencoders. However, it scales poorly to large datasets. Autoencoders based on convolutional layers can reduce the number of parameters; in extreme cases this number can even be lower than the number of parameters of principal component analysis. Furthermore, such autoencoders can promote locality of features in the data. [Figure: reconstructions from the latent parameter; the brightness indicates the probability of a spin being up. The first row shows reconstructions of sample configurations from the ferromagnetic Ising model. The second row corresponds to the antiferromagnetic Ising model. The third row is the prediction from the antiferromagnetic latent parameter with every second spin multiplied by -1, showing that the second row indeed predicts an antiferromagnetic state.] The four different algorithms can be applied to the Ising model to determine the role of the first principal components or the latent parameters. A clear correlation between these parameters and the magnetization is found for all four methods. However, the traditional autoencoder is inaccurate; this fact leads us to enhance traditional autoencoders to variational autoencoders. The principal component methods show the most accurate results, slightly better than the variational autoencoder. This is to be expected, since the former are modeled by fewer parameters. In the following results section, we concentrate on the variational autoencoder as the most advanced algorithm for unsupervised learning.
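Below is a minimal sketch of the kind of variational autoencoder described above: one fully connected hidden layer of 256 rectified-linear units in the encoder and in the decoder, a single latent variable, a sigmoid output for spin-up probabilities, and a loss combining the cross-entropy reconstruction term with the Kullback-Leibler divergence against a unit Gaussian prior. The relative loss weight `beta`, the optimizer, the learning rate, and the input dimension are assumptions for illustration, not the authors' exact settings, and the training data here is a random placeholder for the Monte Carlo samples.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Variational autoencoder with one 256-unit hidden layer on each side."""

    def __init__(self, n_in=28 * 28, n_hidden=256, n_latent=1):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.mu = nn.Linear(n_hidden, n_latent)      # mean of q(z|x)
        self.logvar = nn.Linear(n_hidden, n_latent)  # log-variance of q(z|x)
        self.dec = nn.Linear(n_latent, n_hidden)
        self.out = nn.Linear(n_hidden, n_in)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = F.relu(self.dec(z))
        return torch.sigmoid(self.out(h))   # probability of spin up per site

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1.0):
    # Reconstruction term: cross-entropy between input and output spin
    # probabilities (discrete spins mapped to {0, 1}).
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    # KL divergence between the learned Gaussian and the unit Gaussian prior.
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + beta * kld

# Illustrative training loop on placeholder data (spins mapped to {0, 1}).
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randint(0, 2, (128, 28 * 28)).float()   # stand-in for MC samples
for epoch in range(5):
    recon, mu, logvar = model(x)
    loss = vae_loss(recon, x, mu, logvar, beta=1.0)
    opt.zero_grad()
    loss.backward()
    opt.step()
```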
to begin with ,we choose the number of latent parameters in the variational autoencoder to be one . after training for 50 epochs and a saturation of the training loss ,we visualize the results in . on the left, we see a close linear correlation between the latent parameter and the magnetization . in the middlewe see a histogram of encoded spin configurations into their latent parameter .the model learned to classify the configurations into three clusters .having identified the latent parameter to be a close approximation to the magnetization allows us to interpret the properties of the clusters .the right and left clusters in the middle image correspond to an average magnetization of , while the middle cluster corresponds to the magnetization . employing a different viewpoint , from weconclude that the parameter which holds the most information on how to distinguish ising spin samples is the order parameter . on the right panel , the average of the magnetization , the latent parameter and the reconstruction loss are shown as a function of the temperature .a sudden change in the magnetization at defines the phase transition between paramegnetism and ferromagnetism . even without knowing this order parameter, we can now use the results of the autoencoder to infer the position of the phase transition . as an approximate order parameter, the average absolute value of latent parameter also shows a steep change at .the averaged reconstruction loss also changes drastically at during a phase transition .while the latent parameter is different for each physical model , the reconstruction loss can be used as a universal parameter to identify phase transitions . to summarize , without any knowledge of the ising model and its order parameter , but sample configurations, we can find a good estimation for its order parameter and the occurrence of a phase transition .it is a priori not clear how to determine the number of latent neurons in the creation of the neural network of the autoencoder . due to the lack of theoretical groundwork ,we find the optimal number by experimenting .if we expand the number of latent dimensions by one , see , the results of our analysis only change slightly .the second parameter contains a lot less information compared to the first , since it stays very close to zero .hence , for the ising model , one parameter is sufficient to store most of the information of the latent representation . while the ferromagnetic ising model serves as an ideal starting ground , in the next step we are interested in models where different sites in the samples contribute in a different manner to the order parameter .we do this in order to show that our model is even sensitive to structure on the smallest scales . for the magnetization in the ferromagnetic ising model ,all spins contribute with the same weight .in contrast , in the antiferromagnetic ising model , neighboring spins contribute with opposite weight to the order parameter .again the variational autoencoder manages to capture the traditional order parameter .the staggered magnetization is strongly correlated with the latent parameter , see .the three clusters in the latent representation make it possible to interpret different phases .furthermore , we see that all three averaged quantities - the magnetization , the latent parameter and the reconstruction loss - can serve as indicators of a phase transition . demonstrates the reconstruction from the latent parameter . 
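As a complement to the antiferromagnetic discussion above, the snippet below computes the ordinary and the staggered magnetization of a configuration; the staggered version weights every second site by -1 on a checkerboard, which is the order parameter the latent variable is reported to track. The temperature-binned averaging at the end mirrors how one would plot the magnetization, the mean absolute latent parameter, and the reconstruction loss against temperature; the configurations and temperatures used here are placeholders.

```python
import numpy as np

def magnetization(spins):
    """Mean spin of an L x L configuration of +/-1 values."""
    return spins.mean()

def staggered_magnetization(spins):
    """Checkerboard-weighted mean spin: order parameter of the AF Ising model."""
    L = spins.shape[0]
    i, j = np.indices((L, L))
    checkerboard = (-1.0) ** (i + j)   # +1 on one sublattice, -1 on the other
    return (checkerboard * spins).mean()

def binned_average(temperatures, values, n_bins=20):
    """Average `values` (e.g. |m|, |latent|, reconstruction loss) per temperature bin."""
    bins = np.linspace(temperatures.min(), temperatures.max(), n_bins + 1)
    idx = np.digitize(temperatures, bins) - 1
    return np.array([values[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(n_bins)])

# Placeholder usage: random configurations and temperatures.
rng = np.random.default_rng(1)
configs = rng.choice([-1, 1], size=(500, 28, 28))
temps = rng.uniform(1.0, 3.5, size=500)
abs_m_stag = np.array([abs(staggered_magnetization(c)) for c in configs])
print(binned_average(temps, abs_m_stag))
```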
in the first rowwe see the reconstruction from samples of the ferromagnetic ising model , the latent parameter encodes the whole spin order in the ordered phase .reconstructions from the antiferromagnetic ising model are shown in the second and third row .since the reconstructions clearly show an antiferromagnetic phase , we infer that the autoencoder encodes the spin samples even to the most microscopic level . in the xy modelwe examine the capabilities of a variational autoencoder to encode models with continuous symmetries . in models like the ising model , where discrete symmetries are present , the autoencoder only needs to learn a discrete set , which is often finite , of possible representations of the symmetry broken phase .if a continuous symmetry is broken , there are infinitely many possibilities of how the ordered phase can be realized .hence , in this section we test the ability of the autoencoder to embed all these different realizations into latent variables .the variational autoencoder handles this model equally well as the ising model .we find that two latent parameters model the phase transition best . the latent representation in the middle of shows the distribution of various states around a central cluster .the radial symmetry in this distribution leads to the assumption that a sensible order parameter is constructed from the -norm of the latent parameter vector . in, one sees the correlation between the magnetization and the absolute value of the latent parameter vector . averaging the samples for the same temperature hints to the facts that the latent parameter and the reconstruction loss can serve as an indicator for the phase transition .we have shown that it is possible to observe phase transitions using unsupervised learning .we compared different unsupervised learning algorithms ranging from principal component analysis to variational autoencoders and thereby motivated the need for the upgrade of the traditional autoencoder to a variational autoencoder .the weights and latent parameters of the variational autoencoder approach are able to store information about microscopic and macroscopic properties of the underlying systems .the most distinguished latent parameters coincide with the known order parameters .furthermore , we have established the reconstruction loss as a new universal indicator for phase transitions . we have expanded the toolbox of unsupervised learning algorithms in physics by powerful methods , most notably the variational autoencoder , which can handle nonlinear features in the data and scale very well to huge datasets . using these techniques, we expect to predict unseen phases or uncover unknown order parameters , e.g. in quantum spin liquids .we hope to develop deep convolutional autoencoders which have a reduced number of parameters compared to fully connected autoencoders and can also promote locality in feature selection . furthermore , since there exists a connection between deep neural networks and renormalization group , it may be helpful to employ deep convolutional autoencoders to further expose this connection . _ acknowledgments _ we would like to thank timo milbich , bjrn ommer , michael scherer , manuel scherzer and christof wetterich for useful discussions .we thank shirin nkongolo for proofreading the manuscript .s.w . 
acknowledges support by the Heidelberg Graduate School of Fundamental Physics.
We employ unsupervised machine learning techniques to learn latent parameters which best describe states of the two-dimensional Ising model and the three-dimensional XY model. These methods range from principal component analysis to artificial-neural-network-based variational autoencoders. The states are sampled using a Monte Carlo simulation above and below the critical temperature. We find that the predicted latent parameters correspond to the known order parameters. The latent representations of the states of the models in question are clustered, which makes it possible to identify phases without prior knowledge of their existence or of the underlying Hamiltonian. Furthermore, we find that the reconstruction loss function can be used as a universal identifier for phase transitions.
functional data are more and more frequently involved in statistical problems . developping statistical methods in this special frameworkhas been popularized during the last few years , particularly with the monograph by ramsay & silverman ( 2005 ) .more recently , new developments have been carried out in order to propose nonparametric statistical methods for dealing with such functional data ( see ferraty & vieu , 2006 , for large discussion and references ) .these methods are also called doubly infinite dimensional ( see ferraty & vieu , 2003 ). indeed these methods deal with infinite - dimensional ( i.e. functional ) data and with a statistical model which depends on an infinite - dimensional unknown object ( i.e. a nonparametric model ) .this double infinite framework motivates the appellation of nonparametric functional statistics for such kind of methods .our paper is centered on the functional regression model : where is a real random variable , is a functional random variable ( that is , takes values in some possibly infinite - dimensional space ) and where the statistical model assumes only smoothness restriction on the functional operator . at this point , it worth noting that the operator is not constrained to be linear .this is a functional nonparametric regression model ( see section [ notations ] for deeper presentation ) .the aim of this paper is to extend in several directions the current knowledges about functional nonparametric regression estimates presented in section [ notations ] . in section [ mse ]we give asymptotic mean squared expansions , while in section [ asnorm ] the limiting distribution is derived .the main novelty / difficuly along the statement of these results relies on the exact calculation of the leading terms in the asymptotic expressions .section [ exsbp ] points out how such results can be used when the functional variable belongs to standard families of continuous time process .the accuracy of our asymptotic results leads to interesting perspectives from a practical point of view : minimizing mean squared errors can govern automatic bandwidth selection procedure while the limiting distribution of the error is a useful tool for building confidence bands . to this end, we propose in section [ computfeatures ] a functional version of the wild bootstrap procedure , and we use it , both on simulated and on real functional datasets , to get some automatic rule for choosing the bandwidth .the concluding section [ conc ] contains some important open questions which emerge naturally from the theoretical results given in this paper , such as the theoretical study of the accuracy of the functional wild bootstrap procedure used in our applications .the model is defined in the following way .assume that is a sample of i.i.d .pairs of random variables . the random variables are real and the s are random elements with values in a functional space . 
in all the sequelwe will take for a separable banach space endowed with a norm .this setting is quite general since it contains the space of continuous functions , spaces as well as more complicated spaces like sobolev or besov spaces .separability avoids measurability problems for the random variables s .the model is classically written : where is the regression function mapping onto and the s are such that for all , and .estimating is a crucial issue in particular for predicting the value of the response given a new explanatory functional variable .however , it is also a very delicate task because is a nonlinear operator ( from into ) for which functional linear statistical methods were not planned . to provide a consistent procedure to estimate the nonlinear regression operator , we propose to adapt the classical finite dimensional nadaraya - watson estimate to our functional model .we set several asymptotic properties of this estimate were obtained recently .it turns out that the existing literature adresses either the statement of upper bounds of the rates of convergence without specification of the exact constants ( see chapter 6 in ferraty & vieu , 2006 ) , or abstract expressions of these constants which are unusable in practice ( as for instance in the recent work by masry , 2005 , which has been published during the reviewing process of this paper ) .our aim in this paper is to give bias , variance , means square errors and asymptotic distribution of the functional kernel regression estimate with exact computation of all the constants ( see section [ theorie ] ) .we will focus on practical purposes in section computfeatures .several assumptions will be made later on the kernel and on the bandwidth .remind that in a finite - dimensional setting pointwise mean squared error ( at ) of the estimate depends on the evaluation of the density ( at ) w.r.t .lebesgue s measure and on the derivatives of this density .we refer to schuster ( 1972 ) for an historical result about this topic . on infinite - dimensional spaces ,there is no measure universally accepted ( as the lebesgue one in the finite - dimensional case ) and there is need for developping a free - densityapproach . as discussed along section [ exsbp ] the problem of introducing a density for is shifted to considerations on the measure of small balls with respect to the probability of .only pointwise convergence will be considered in the forthcoming theoretical results . inall the following , is a fixed element of the functional space .let be the real valued function defined as , \ ] ] and be the c.d.f. of the random variable : note that the crucial functions and depends implicitely on consequently we should rather note them by and but , as is fixed , we drop this index once and for all .similarly , we will use in the remaining the notation instead of .let us consider now the following assumptions .* h0 :* and are continuous in a neighborhood of , and . * h1 * * :* exists . * h2 :* the bandwidth satisfies and , while the kernel is supported on ] as : for which the following assumption is made : * h3 :* for all , ] we denote the indicator function on the set 0,1\right ] ] . 
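A minimal numerical sketch of the functional Nadaraya-Watson estimator just described is given below. Curves are represented on a common grid, the semi-metric is the L2 norm of the difference between discretized curves, and the kernel is a quadratic kernel supported on [0,1]; these choices, as well as the bandwidth and the simulated curves, are illustrative assumptions rather than the paper's prescriptions.

```python
import numpy as np

def l2_norm(x_curve, chi_curve, grid):
    """L2 distance between two curves sampled on the same grid."""
    return np.sqrt(np.trapz((x_curve - chi_curve) ** 2, grid))

def kernel(u):
    """Quadratic kernel supported on [0, 1]."""
    return np.where((u >= 0) & (u <= 1), 1.0 - u ** 2, 0.0)

def functional_nw(chi, X_curves, Y, grid, h):
    """
    Functional Nadaraya-Watson estimate of r(chi) = E[Y | X = chi]:
        sum_i Y_i K(||X_i - chi|| / h) / sum_i K(||X_i - chi|| / h).
    """
    d = np.array([l2_norm(x, chi, grid) for x in X_curves])
    w = kernel(d / h)
    denom = w.sum()
    if denom == 0:
        raise ValueError("Bandwidth too small: no curve falls in the ball.")
    return float(np.dot(w, Y) / denom)

# Placeholder data: noisy sine curves, scalar response driven by the amplitude.
rng = np.random.default_rng(2)
grid = np.linspace(0.0, 1.0, 100)
amp = rng.uniform(0.5, 2.0, size=200)
X_curves = amp[:, None] * np.sin(2 * np.pi * grid)[None, :] \
           + 0.05 * rng.standard_normal((200, 100))
Y = amp + 0.1 * rng.standard_normal(200)

chi_new = 1.3 * np.sin(2 * np.pi * grid)    # new explanatory curve
print(functional_nw(chi_new, X_curves, Y, grid, h=0.5))
```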
a deeper discussion linking the above behavior of with small ball probabilities notions will be given in section [ exsbp ] .in both following subsections we will state some asymptotic properties ( respectively mean squared asymptotic evaluation and asymptotic normality ) for the functional kernel regression estimate .it is worth noting that all the results below can be seen as extensions to functional data of several ones already existing in the finite - dimensional case ( the literature is quite extensive in this field and the reader will find in sarda & vieu ( 2000 ) deep results as well as a large scope of references ) . with other words , our technique for proving both theorem theoremmse and theorem [ theoremasnorm ] is also adapted to the scalar or vector regression model since the abstract space can be of finite dimension ( even , of course , if our main goal is to treat infinite - dimensional cases ) .moreover , it turns out that the transposition to finite - dimensional situations of our key conditions ( see discussion in section [ exsbp ] below ) becomes ( in some sense ) less restrictive than what is usually assumed . with other words , the result of theorem theoremmse and theorem [ theoremasnorm ]can be directly applied to finite - dimensional settings , and will extend the results existing in this field ( see again sarda & vieu , 2000 ) to situation when the density of the corresponding scalar or multivariate variable does not exist or has all its successive derivatives vanishing at point ( see discussion in section [ dimfinie ] ) . all along this sectionwe assume that assumptions * h0-h3 * hold .let us first introduce the following notations : the following result gives asymptotic evaluation of the mean squared errors of our estimate .the asymptotic mean squared errors have a standard convex shape , with large bias when the bandwidth increases and large variance when decays to zero .we refer to the appendix for the proof of theorem [ theoremmse ] .[ theoremmse ] when * h0-h3 * hold , we have the following asymptotic developments : and let us denote the leading bias term by : before giving the asymptotic normality , one has to be sure that the leading bias term does not vanish .this is the reason why we introduce the following additional assumption : * h4 :* and .the first part of assumption * h4 * is very close to what is assumed in standard finite - dimensional literature .it forces the nonlinear operator not to be too smooth ( for instance , if is lipschitz of order , then ) .the second part of assumption * h4 * is specific to the infinite - dimensional setting , and the next proposition pr2 will show that this condition is general enough to be satisfied in some standard situations .this proposition will be proved in the appendix .[ pr2 ] i ) if 0,1\right ] } ( s) ] , and 0,1\right ] } ( s) ] ( else we have ) .moreover , since the rate of convergence depends on the function and for producing a reasonably usable asymptotic distribution it is worth having some estimate of this function .the most natural is its empirical counterpart : the pointwise asymptotic gaussian distribution for the functional nonparametric regression estimate is given in theorem [ theoremasnorm ] below which will be proved in the appendix .note that the symbol stands for convergence in distribution .[ theoremasnorm ] when * h0-h4 * hold , we have a simpler version of this result is stated in corollary [ coro1 ] below whose proof is obvious .the key - idea relies in introducing the following additional 
assumption: * H5: * which allows the bias term to be cancelled. [coro1] When * H0-H5 * hold, we have . In practice, the constants involved in Corollary [coro1] need to be estimated. In order to compute both constants explicitly, one may consider the simple uniform kernel and easily get the following result: under the assumptions of Corollary [coro1], if the kernel is the indicator function on [0,1] and both constants do not vanish on [0,1], then we arrive at : * Proof of Proposition [pr2] *: obvious.
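Since the constants in the limit theorems depend on the small-ball behaviour of X around the fixed curve chi, one natural plug-in (assumed here, as the exact formula is not recoverable from the text) is the empirical proportion of curves falling in the ball of radius h around chi, and in practice the bandwidth itself is often chosen by cross-validation, which is in the spirit of the bootstrap-based selection discussed in the paper. The sketch below illustrates both ideas; it reuses the `l2_norm` and `functional_nw` helpers and the placeholder curves from the previous sketch, and the bandwidth grid is an arbitrary illustrative choice.

```python
import numpy as np

def empirical_small_ball(chi, X_curves, grid, h):
    """Empirical proportion of curves X_i falling in the ball B(chi, h)."""
    d = np.array([l2_norm(x, chi, grid) for x in X_curves])
    return np.mean(d <= h)

def loo_cv_bandwidth(X_curves, Y, grid, bandwidths):
    """Leave-one-out cross-validation over a grid of candidate bandwidths."""
    n = len(Y)
    scores = []
    for h in bandwidths:
        errs = []
        for i in range(n):
            mask = np.arange(n) != i
            try:
                pred = functional_nw(X_curves[i], X_curves[mask], Y[mask], grid, h)
                errs.append((Y[i] - pred) ** 2)
            except ValueError:
                errs.append(np.inf)     # empty ball: penalize this bandwidth
        scores.append(np.mean(errs))
    return bandwidths[int(np.argmin(scores))], scores

bandwidths = np.linspace(0.2, 1.5, 8)
h_star, cv_scores = loo_cv_bandwidth(X_curves, Y, grid, bandwidths)
print("selected bandwidth:", h_star)
print("empirical small-ball probability at h*:",
      empirical_small_ball(chi_new, X_curves, grid, h_star))
```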
We consider the problem of predicting a real random variable from a functional explanatory variable. The problem is attacked by means of a nonparametric kernel approach which has recently been adapted to this functional context. We derive theoretical results through a detailed asymptotic study of the behaviour of the estimate, including mean squared convergence (with rates and precise evaluation of the constant terms) as well as the asymptotic distribution. Practical use of these results relies on the ability to estimate these constants. Some perspectives in this direction are discussed. In particular, a functional version of wild bootstrapping ideas is proposed and used on both simulated and real functional datasets. * Key words: * asymptotic normality, functional data, nonparametric model, quadratic error, regression, wild functional bootstrap.
automation of financial and legal processes requires enforcement of confidentiality and integrity of transactions . for practical integration with the existing manual systems , such enforcement should be transparent to users .for instance , a person continually signs paper - based documents ( e.g. , bank checks ) by hand , while his embedded handwritten signature images are used to secure the digitized version of the signed documents .such scenario can be realizable using biometric cryptosystems ( also known as bio - cryptographic systems ) by means of the offline handwritten signature images . in bio - cryptography , biometric signals like fingerprints , iris , face or signature images , etc ., secure private keys within cryptography schemes like digital signatures and encryption .biometric samples provide a more trusted identification tool when compared to simple passwords .for instance , a fingerprint is attached to a person and it is harder to impersonate than traditional passwords . despite its identification power , biometrics forms a challenging design problem due to its fuzzy nature .for instance , while it is easy for a person to replicate his password during authentication , it rarely happens that a person applies exact fingerprint each time .the main source of variability in physiological biometrics like fingerprint , face , iris , retina , etc . is the imperfect acquisition of the traits .on the other hand , behavioral biometrics like handwritten signatures , gait , and even voice , have intrinsic variability that is harder to cancel .fuzzy vault ( fv ) is a reliable scheme presented mainly to enable usage of fuzzy keys for cryptography .a fv decoder permits limited variations in the decryption key so that secrets can be decrypted even with variable keys . accordingly, this scheme fits the bio - cryptography implementations , where biometrics are considered as fuzzy keys by which private cryptographic keys are secured .since the fv scheme has been proposed , it has being extensively employed for bio - cryptography , where most implementations focused on physiological biometrics , e.g. , fingerprints , face and iris .fv implementations based on the behavioral handwritten signatures are few and mostly employed online signature traits , where dynamic features like pressure and speed are acquired in real time by means of special devices as electronic pens and tablets .static offline signature images , that are scanned after the signing process ends , however , integrate too much variability to cancel by a fv decoder .recently , the authors have proposed the first offline signature - based fuzzy vault ( osfv ) implementation - .this implementation is employed to design a practical digital signature system by means of handwritten signatures . in this paper , this implementation is reviewed and extended .in particular , we propose an extension to enhance the security and accuracy of the basic osfv system by adapting cryptographic key size for individual users .finally , system performance on the gpds public signature database , besides the private pucpr brazilian database , are presented and interpreted .the rest of the paper is organized as follows . 
in the next section ,the osfv implementation and its application to produce digital signatures by means of the handwritten signature images are reviewed .section iii describes the signature representation and lists some aspects for enhanced representations .section iv introduces some osfv variants for enhanced accuracy .section v lists some variants for enhanced security .the new variant that adapts key sizes for enhanced security and accuracy is described in section vi .the simulation results are presented in section vii .finally , some research directions and conclusions are discussed in section viii .the system proposed for osfv consists of two main sub - systems : enrollment and authentication ( see figure [ fig : figure6 ] ) . in the enrollment phase , some signature templates are collected from the enrolling user .these templates are used for the user representation selection , as described in section iii .the user representation selection process results in a user representations matrix , where is the vector of indexes of the selected features , is a vector of indexes mapping represented in -bits , and is the vector of expected variabilities associated with the selected features .this matrix is user specific and contains important information needed for the authentication phase .accordingly , is encrypted by means of a user password . both fv and passwordare then stored as a part of user bio - cryptography template ( ) .then , the user parameters and are used to lock the user cryptography key by means of a single signature template in a fuzzy vault . in the authentication phase ,user password is used to decrypt the matrix .then , the vectors and are used to decode the fv by means of user query signature sample .finally , user cryptographic key is released to the user so he can use it to decrypt some confidential information or digitally signs some documents .the enrollment sub - system uses the user templates , the password , and the cryptography key to generate a bio - cryptography template ( bct ) that consists of the fuzzy vault and the encrypted user representation matrix .the user representation selection module generates the matrix as described in section iii .the osfv encoding module ( illustrated in figure [ fig : figure7 ] ) describes the following processing steps : 1 .the virtual indexes are quantized in -bits and produces a vector .2 . the user feature indexes are used to extract feature representation from the signature template . 
this representation is then quantized in -bits and produces a vector .the features are encoded to produce the locking set , where consists of -bits fv points 4 .the cryptography key of size where : + + is split into parts of -bits each , that constitutes a coefficient vector .a polynomial of degree is encoded using , where .the polynomial is evaluated for all points in and constitutes the set where .chaff ( noise ) points are generated , where , i \in [ 1,t ] ] .a chaff point is composed of two parts : the index part and the value part .two groups of chaff points are generated .chaffs of have their indexes equal to the indexes of the genuine points .the chaff points and the genuine point that have the same index part are all equally spaced by a distance , eliminating the possibility to differentiate between the chaffs and the genuine point .chaffs of have their index part differs than that of the genuine points as the number of chaffs in is limited by the parameters and , so to inject higher quantity of chaffs we define as a chaff groups ratio , where : + + where and are the amount of chaff features belong to and , respectively . chaffs are generated with indexes different than the genuine indexes .hence , the fv size is given by : + + so , the total number of chaffs is given by : + 7 .the genuine set , and the chaff set are merged to constitute the fuzzy vault , where , , and , .the authentication sub - system uses the user query sample and the password , to decode the fuzzy vault and restore the user cryptography key .first the password is used to decrypt the matrix .then the vectors , and are used to decode the fv by means of the query .the osfv decoding module ( illustrated in figure [ fig : figure9 ] ) describes the following processing steps : 1 .the virtual indexes are quantized in -bits and produces a vector .2 . the user feature indexes are used to extract feature representation from . this representation is then quantized in -bits and produces a vector .the features are encoded to produce the unlocking set , where .hence , the unlocking elements are represented in a field .the unlocking set is used to filter the chaff points from the fv .an adaptive matching method is applied to match unlocking and locking points .items of are matched against all items in .this process results in a matching set , where represents the projection of the matching features on the polynomial space .chaff filtering is done as follows .if the feature indexes are correct , then all elements of will have corresponding elements in .so , all of chaffs of will be filtered out .then , each of the remaining fv points will be compared to corresponding points extracted from the query sample .an adaptive matching method is applied : for every feature , a matching window is adapted to the feature modeled variability , where .a fv point is considered matching with an unlocking point , if they reside in the same matching window .i.e. , .the matching set is used to reconstruct a polynomial of degree by applying the rs decoding algorithm .the coefficients of are assembled to constitute the secret cryptography key . 
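To make the locking and unlocking steps above concrete, here is a heavily simplified, illustrative fuzzy vault over a prime field: the key is split into polynomial coefficients, genuine feature values are projected onto the polynomial, chaff points are added, and decoding keeps vault points within a matching window of a query feature and reconstructs the polynomial by Lagrange interpolation. Lagrange interpolation over exact matches is only a stand-in for the Reed-Solomon decoding used in the actual system, and the field size, window, and feature values are made-up toy parameters; the toy also keeps chaff outside the genuine matching windows, loosely mirroring the equal-spacing of chaff described above.

```python
import random
from functools import reduce

P = 2**16 + 1          # toy prime field (assumption; not the paper's field)
random.seed(0)

def lagrange_interpolate(points, prime):
    """Recover ascending-order coefficients of the degree len(points)-1 polynomial."""
    def poly_mul(a, b):
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] = (out[i + j] + ai * bj) % prime
        return out
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        num, den = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = poly_mul(num, [(-xj) % prime, 1])
                den = (den * (xi - xj)) % prime
        factor = (yi * pow(den, -1, prime)) % prime
        for k, c in enumerate(num):
            coeffs[k] = (coeffs[k] + factor * c) % prime
    return coeffs

def encode_vault(key_parts, genuine_features, n_chaff=50, gap=4):
    """Lock the key: genuine points lie on the key polynomial; chaff points do not."""
    def poly_eval(coeffs, x):
        return reduce(lambda acc, c: (acc * x + c) % P, reversed(coeffs), 0)
    vault = [(f, poly_eval(key_parts, f)) for f in genuine_features]
    used = set(genuine_features)
    while len(vault) < len(genuine_features) + n_chaff:
        cx = random.randrange(P)
        if cx in used or any(abs(cx - g) <= gap for g in genuine_features):
            continue                       # keep chaff away from genuine windows
        used.add(cx)
        vault.append((cx, random.randrange(P)))   # random y: off the polynomial
    random.shuffle(vault)
    return vault

def decode_vault(vault, query_features, degree, window=2):
    """Unlock: keep vault points within `window` of a query feature, then interpolate."""
    matched = [(x, y) for (x, y) in vault
               if any(abs(x - q) <= window for q in query_features)]
    # A real decoder uses Reed-Solomon to tolerate chaff hits; this toy simply
    # interpolates the first degree+1 matched points.
    if len(matched) < degree + 1:
        raise ValueError("not enough matching points")
    return lagrange_interpolate(matched[:degree + 1], P)

key_parts = [1234, 5678, 910, 1112, 1314]             # degree-4 toy polynomial
genuine = [1000, 2000, 3000, 4000, 5000, 6000, 7000]  # toy template features
vault = encode_vault(key_parts, genuine)
query = [999, 2001, 3002, 3999, 5001, 6000, 7001]     # genuine-like query
print(decode_vault(vault, query, degree=4) == key_parts)
```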
in ,the osfv implementation is employed to produce digital signatures using offline handwritten signatures .this methodology facilitates the automation of business processes , where users continually employ their handwritten signatures for authentication .users are isolated from the details related to the generation of digital signatures , yet benefit from enhanced security .figure [ signature ] illustrates the osfv - based digital signature framework .the user fv that is constructed during enrollment is used to sign user documents offline as follows . when a user signs a document by hand , his handwritten signature image is employed to unlock his private key .the unlocked key produces a digital signature by encrypting some message extracted from the document ( e.g. , check amount ) .the encrypted message is considered as a digital signature and it is attached to the digitized document .any party who possesses the user public key can verify the digital signature , where verification of the digital signature implies authenticity of the manuscript signature and integrity of the signed document ( e.g. , check amount did not change ) .according to aforementioned osfv implementation , the fv points encode some features extracted from the signature images .it is obvious that accuracy of a fv system relies on the feature representation .representations of intra - personal signatures should sufficiently overlap so that matching errors lie within the error correction capacity of the fv decoder . on contrary , representations of inter - personal signatures should sufficiently differ so that matching errors are higher than the error correction capacity of the fv decoder .accordingly , the authors proposed to design signature representations adapted for the fv scheme by applying a feature selection process in a feature - dissimilarity space . in this space ,features are extracted from each pair of template and query samples and the pair - wise feature distances are used as space dimensions . to illustrate this approach , see figure [ fig : figure12 ] . in this example , three signature images are represented : is the template signature , is a genuine query sample and is a forgery query sample . in the left side ,signatures are represented in the fv feature encoding space , where a fv point encodes a feature index and its value . for simplicity ,only two features ( and ) are shown , while the full representation consists of dimensions .on the right side , signatures are represented in the feature dissimilarity space . in this space ,a feature is replaced by its distance from a reference value .for instance , and are replaced by their dissimilarity representations , where , and .accordingly , while a point in the feature encoding space represents a signature image , a point in the feature dissimilarity space represents the dissimilarity between two different signature images .the point represents the dissimilarity between the genuine signature and the template , and a point represents the dissimilarity between the forgery signature and the template , where = ( ) , and = ( ) . in this example , and are discriminant features . 
for instance , for all genuine query samples like , and for all forgery query samples like , .unfolding these discriminant dissimilarity features to the original feature encoding space produces discriminant features in the encoding feature space , where the distance between two feature instances is used to determine their similarity .for instance , a genuine feature ( like ) lies close to the template feature , so they are similar , where closeness here implies that both features reside in a matching window .features extracted from a forgery image ( like ) do not resemble the template feature , as they reside outside the matching window .aforementioned description of the process to design representations is generic .some extensions are reviewed and compared below .shortage of user samples for training is addressed by designing a global writer - independent ( wi ) representation .a large number of signature images from a development database are represented in the feature - dissimilarity space of high dimensionality , and feature selection process runs to produce a global space of reduced dimensionality .such global approach permits designing fv systems for any user even who provides a single signature sample during enrollment . for performance improvement, the global representation is specified for individual users once enough number of enrolling samples becomes available . to this end, training samples are firstly represented in the global representation space , then an additional training step runs to produce a local writer - dependent ( wd ) representation that discriminates the specific user from others .simulation results have shown that local representations enhanced fv decoding accuracy by about 30% , where the average error rate ( aer ) is decreased from 25% in case of global representations to 17.75% for the local ones . in , the extended shadow code ( esc ) feature extraction method is adapted for the fv implementation .these features consist in the superposition of a bar mask array over a binary image of handwritten signatures . each baris assumed to be a light detector related to a spatially constrained area of the 2d signal .this method is powerful in detecting various levels of details in the signature images by varying the extraction scale .for instance , an image could be split to of horizontal and vertical cells , respectively , and shadow codes are extracted within individual cells .the higher the number of cells , the higher the resolution of detectors .the authors observed that designing fvs based on a single extraction scale results in varying performance for the different users .for instance , while high resolution scales are fine with users whose signatures are easy to forge or those who have high similarities with others , the low resolutions are better for users whose signatures integrate high variabilities .accordingly , a multi - scale feature fusion method is proposed , where different feature vectors are extracted based on different extraction scales and they are combined to produce a high - dimensional representation .this representation is then processed through the wi and wd design phased and produces the final local representation that encodes in the fv . besides fusing feature vectors that are extracted based on different scales , it is possible to fuse different types of features . 
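As an illustration of the multi-scale idea, the sketch below computes cell-wise ink densities of a binarized signature image at several grid resolutions and concatenates them into a single high-dimensional vector. This is only a rough, density-based stand-in for the extended shadow code detectors described above; the grid sizes and the input image are placeholder choices.

```python
import numpy as np

def grid_features(image, n_rows, n_cols):
    """Fraction of ink pixels in each cell of an n_rows x n_cols grid."""
    h, w = image.shape
    row_edges = np.linspace(0, h, n_rows + 1, dtype=int)
    col_edges = np.linspace(0, w, n_cols + 1, dtype=int)
    feats = []
    for r in range(n_rows):
        for c in range(n_cols):
            cell = image[row_edges[r]:row_edges[r + 1],
                         col_edges[c]:col_edges[c + 1]]
            feats.append(cell.mean() if cell.size else 0.0)
    return np.array(feats)

def multi_scale_features(image, scales=((2, 3), (4, 6), (8, 12))):
    """Concatenate cell features extracted at several grid resolutions."""
    return np.concatenate([grid_features(image, r, c) for r, c in scales])

# Placeholder binary signature image (1 = ink, 0 = background).
rng = np.random.default_rng(3)
signature = (rng.random((120, 300)) < 0.1).astype(float)
rep = multi_scale_features(signature)
print(rep.shape)   # low- and high-resolution detectors, concatenated
```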
in , the directional probability density function ( dpdf )features are fused with esc features to constitute a huge dimensional representation ( of 30,201 dimensionality ) .this representation is reduced through the wi and wd training steps and produced a concise representation of only 20 features .it is shown that injecting the additional feature type increased the fv decoding accuracy by about 22% ( aer is reduced from 17.75% to 13.75% ) .the aforementioned approach provides a practical scenario to produce representations with low intra - personal and high inter - personal variabilities which is mandatory feature for fv systems .however , the authors observed that the margin between the intra and the inter classes differs when using different signature prototypes ( templates ) for fv encoding .accordingly , a prototype selection method is proposed .the wd representation is projected to a dissimilarity space where distances to different user prototypes are the space constituents .then , a feature selection process runs in the dissimilarity space and locates the best prototype .this method has enlarged the separation between the intra and inter clusters significantly ( area under roc curve ( auc ) is increased from 0.93 to 0.97 ) .although accuracy of an osfv system relies mainly on quality of the feature representation , the proposed implementation provides additional opportunities for enhanced accuracy by applying some other design variants as described in this section .the results mentioned so far report accuracy of fv decoders that apply strict matching approach .two fv points are matching only if they have identical values .accuracy of a fv decoder is enhanced by applying the adaptive matching method , where the feature variability matrix is used for matching so that corresponding fv points are considered matching if their difference lies within the expected variability of their encoding feature ( see figure [ fig : figure3 ] ) .this method increased accuracy by about 27% ( aer is reduced from 13.75% to 10.08% ) . instead of decoding a single fv token ,it is possible to decode several fvs for enhanced performance . in case that some fvs are correctly decoded , the decrypted key is released to the user based on the majority vote rule .this method has increased detection accuracy by about 18% ( aer is reduced from 10.08% to 8.21% ) .the limited discriminative power of fvs is alleviated by using an additional password , so that the false accept rate ( far ) is reduced without significantly affecting the false reject rate ( frr ) .for the results reported so far , it was assumed that the user password is compromised . however , to report the actual performance of the system we have to consider the case when an attacker neither possesses a correct password nor a genuine signature sample . in this case, he can not decrypt the ur model and hence he randomly guesses the feature indexes .it is shown that the additional password has increased detection accuracy by about 65% ( aer is reduced from 8.21% to 2.88% ) . using additional passwords for enhanced system accuracycomes with the expense of the user inconvenience . in ,a novel user - convenient approach is proposed for enhancing the accuracy of signature - based biometric cryptosystems .since signature verification ( sv ) systems designed in the original feature space have demonstrated higher discriminative power to detect impostors , they can be used to improve the fv systems . 
instead of using an additional password ,the same signature sample is processed by a sv classifier before triggers the fv decoders ( see figure [ sv - fv ] ) . using thiscascaded approach , the high far of fv decoders is alleviated by the higher capacity of sv classifiers to detect impostors .this method has increased detection accuracy by about 35% ( aer is reduced from 10.08% to 6.55% ) . when multiple fvs are fused ,the aer is decreased by 31.30% ( from 8.21% to 5.64% ) .security of the osfv implementation is analyzed in terms of the brute - force attack .assume an attacker could compromise the fv without possessing neither valid password nor genuine signature sample . in this case , the attacker tries to separate enough number of genuine points ( ) from the chaff points .security of a fv is given by : where is the chaff group ratio , is the number of genuine points in the fv , is the degree of the encoding polynomial and is the chaff separation distance .high value of implies a high number of g2 features which are compromised in case that the password is compromised .the parameter should be concise as it impacts the accuracy and complexity of the fv .accordingly , entropy of the system can be increased through using different values of the parameters : and . however , there is a trade - off between system security and its recognition accuracy that could be alleviated by applying the following approaches . in the traditional chaff generation method ,equal - spaced chaff points are generated with a separation factor .in such case , there is a trade - off between security and robustness .for instance , with small separation , e.g. , , there are 40 fv points generated with the same index ( 1 genuine + 39 chaff points ) . in this case , a high number of chaffs is generated and results in high system entropy of about 68-bits and low accuracy of about aer .the adaptive chaff generation method enables the injection of high number of chaff with minimal impact on the fv decoding robustness . to this end , the feature variability vector is used during the fv locking phase so that chaff points are generated adaptively according to feature variability . for each feature , ( see figure [ fig : figure3 ] ) . by this method , it is less likely that an unlocking element equates a chaff element .for or instance , the same entropy ( 68-bits ) could be achieved with a minimal impact on system robustness ( aer = 10.52% ) . according to eq.[s ] ,the longer the cryptographic key size the higher entropy of the fv .however , this comes with expense of the accuracy . in , different key sizes ( ks )are tried ( 128 , 256 , 512 , 1024-bits ) and it is shown that different key sizes result in different performance for the different users .this observation motivates adapting the key length for each user as proposed in the following section ,in , functionality of a fv decoder is formulated as a simple dissimilarity threshold as follows : = -1 pt where a fv encoded by a template can be correctly decoded by a query only if the total dissimilarity between and is less than the error correction capacity of the fv decoder . 
here , is the dissimilarity part that results from the variability between the two samples , and is the dissimilarity part that results from wrong matches with chaff points .the methods discussed so far aimed to optimize the dissimilarity parts of eq.[fvd ] .for instance , the multi - scale and multi - type feature extraction approach results in separating intra - personal and inter - personal dissimilarity ranges .selection of robust templates ( prototypes ) and applying adaptive matching enlarged this separation .also , impact of the chaff error is minimized by presenting the adaptive chaff method . with applying all these methods ,however , accuracy of a signature - based fv is still below the level required for practical applications . accordingly, performance is increased by applying some complex and user inconvenient solutions like ensemble of fvs and using additional passwords or cascading sv and fv systems . herewe investigate a new room for enhancing fvs by optimizing the error correction capacity which is given by : = -1 pt it is obvious that this parameter relies on the fv encoding size and the encoding polynomial degree .also , from eq.[ks ] , we see that determines the key size .accordingly , we select user specific key sizes through changing the parameter so that for a specific user covers the range of his expected signature variability . to this end, we set for a user to his maximum intra - personal variability .based on the resulting user - specific error correction capacity , the parameter is determined using eq.[fv ] and user key size is computed using eq.[ks ] .once appropriate key size is computed for a user , his key is enlarged through injecting some padding bits in the original key during fv encoding . during authentication ,the enlarged key is reconstructed and the padding bits are removed to produce the original cryptographic key .all aforementioned performance results are reported for the pucpr brazilian signature database . herewe test the system for the public gpds-300 database as well .this database contains signatures of users , that were digitized as 8-bit greyscale at resolution of 300 dpi and contains images of different sizes ( that vary from pixels to pixels ) .all users have 24 genuine signatures and 30 simulated forgeries .it is split into two parts .the first part contains signatures of the first 160 users .a subset of this part is used to design the local representation and the remaining of this part is used for performance evaluation .the second part contains signatures of the last 140 users and it is used to design the global representation . see for a similar experimental protocol for both databases .table [ table : impact of using a user password ] shows results for the two databases for fixed and adaptive key sizes .it is obvious that employing the adaptive key size approach decreased the far significantly with low impact on the frr .for instance , the aer for the pucpr database in decreased by about 21% ( from 10.08 to 7.94 ) . also , performance of the system for the gpds database is comparable to state - of - the - art traditional sv systems ( aer is about 15% ) that employ more complex classifiers .moreover , the proposed method also enhances system security as it is possible to increase the key size , and hence the polynomial degree , without much impact on the accuracy . 
For instance, Figure [fig:adaptive] shows the adapted polynomial degrees for different users in the PUCPR database and the corresponding user variability. It is clear that users with more stable signatures have their cryptographic keys enlarged more than users with less stable signatures. According to Eq. [s], the system entropy of the standard OSFV implementation (with fixed keys of size 128 bits and a fixed polynomial degree) is about 45 bits. With the adaptive key-size method applied, the average is about 9.6 bits (see Figure [fig:adaptive]), which provides an average entropy of about 51 bits. [Table: impact of using a user password as a second authentication measure.] In this paper, a recently published offline signature-based FV implementation is reviewed. Several variants of the system are listed and compared for enhanced accuracy and security. A novel method to adapt cryptographic key sizes for different users is proposed and shown to enhance accuracy and security. The performance is also validated on a public signature database, where results comparable to those of more complex SV systems in the literature are reported. Although the proposed key adaptation method is sound, there is a need for a more intelligent tuning technique that takes into consideration the similarities with simulated forgeries, for higher forgery detection. This study listed many new approaches that have been applied successfully to signature-based bio-cryptography. We believe that these methods should be investigated for other biometrics, which might advance the state of the art in the area of bio-cryptosystems.
An offline signature-based fuzzy vault (OSFV) is a bio-cryptographic implementation that uses handwritten signature images as biometrics, instead of traditional passwords, to secure private cryptographic keys. Having a reliable OSFV implementation is the first step towards automating financial and legal authentication processes, as it provides greater security for confidential documents by means of the embedded handwritten signatures. The authors have recently proposed the first OSFV implementation, which is reviewed in this paper. In this system, a machine learning approach based on the dissimilarity representation concept is employed to select a reliable feature representation adapted to the fuzzy vault scheme. Some variants of this system are proposed for enhanced accuracy and security. In particular, a new method that adapts the user key size is presented. The performance of the proposed methods is compared using the Brazilian PUCPR and GPDS signature databases, and the results indicate that the key-size adaptation method achieves a good compromise between security and accuracy. While the average system entropy is increased from 45 bits to about 51 bits, the AER (average error rate) is decreased by about 21%.
quantum computing based on qubits has attracted considerable attention ( see , e.g. , ) .there are several candidates to realize quantum computers , such as using nuclear spins in molecules , photons , trapped ions , superconducting circuit and quantum dots ( see , e.g. , ) .however , it is still a great challenge to build a large - scale quantum computer .quantum computers can significantly outperform classical computers in doing some specific tasks .for example , two important quantum algorithms are shor s and grover s .algorithm can factorize a large integer in polynomial time , offereing an exponential speed - up over classical computation .algorithm gives a quadratic speed - up in searching database .this search algorithm has been found to be very useful in other related problems . to date , the study of quantum algorithms is a very active area of research ( see , e.g. , ) . using three coupled harmonic oscillators ,we have recently proposed an alternative approach ( without using qubits ) for quantum factorization .we consider these three harmonic oscillators to be coupled together via nonlinear interactions .to factorize an integer , this approach involves only three steps : initialization , time evolution , and conditional measurement . in this approach ,the states of the first two harmonic oscillators are prepared in a number - state basis , while the state of the third oscillator is prepared in a coherent state .the states of the first two harmonic oscillators encode the trial factors of the number .the nonlinear interactions between the oscillators produce coherent states that simultaneously rotate in phase space with different effective frequencies , which are proportional to the product of two trial factors . in this way , _ all _ possible products of any two trial factors can be _ simultaneously _ computed , and then they are `` written '' to the rotation frequencies of the coherent states in _ a single step_. this saves considerable computational resources .the resulting state of the first two oscillators is the factors state by performing a conditional measurement of a coherent state rotating with an effective frequency which is proportional to .however , the probability of obtaining this coherent state becomes low when is large . in this paper, we can circumvent this limitation by using an iterative method for increasing the chance of finding the states of the factors .this amplitude - amplification method involves a number of iterations , where each iteration is very similar to the factoring approach we recently proposed .we show that the number of iterations is of order of .thus , using this method , _ the factors of a large integer can be obtained , with a high probability , in linear time _the performance of this approach is even better than that of shor s algorithm , which factorizes a number in polynomial time .now we briefly describe this amplitude - amplification method for quantum factorization using three coupled harmonic oscillators .let us now consider the first step of our approach .initially , the first two harmonic oscillators are in a number - state basis and the third oscillator is in a coherent state . let the three coupled harmonic oscillators evolve for a period of time .the detection is then conditioned on a coherent state with a rotation frequency being proportional to .the probability of finding this coherent state can be adjusted by choosing both an appropriate period of time evolution and magnitude of the coherent state . 
herewe find that this probability is not small . indeed, the probability of finding the factors state can be increased by a factor which is the reciprocal of the probability of obtaining this coherent state .the resulting states of the first two oscillators , after the first step , are used as new input states in the second step of our approach .also , the state of the third oscillator is now prepared as a coherent state with the same , or higher , magnitude . by repeating the same procedure described in the first step , we can obtain the states of the factors with a much higher probability .we then iterate these procedures times , until the probability of finding the factors state is close to one . as an example of how this method works ,we show how to factorize the integer . herethe probabilities of obtaining coherent states , with rotation frequencies proportional to , are larger than 0.1 in each iteration .the probability of finding the factors can reach nearly one after 12 iterations .in addition , this amplitude - amplification method can be applied to search problems by suitably controlling nonlinear interactions between the harmonic oscillators and making appropriate conditional measurements .this approach can search a `` target '' from possible inputs in _linear time_. it surpasses grover s algorithm which only provides a quadratic speed - up for searching .since np - complete problems such as 3-sat , the traveling salesman problem , etc , can be mapped into search problems .this implies that _ np - complete problems can be exponentially sped up_. ( color online ) schematics of harmonic - oscillator quantum computation .there are two groups of coupled harmonic oscillators : of them ( in blue ) in the left side , and of them ( in red ) in the right side .this system can be `` programmed '' to find solutions of a system of functions in eq .( [ nonlinear_equation ] ) , by appropriately controlling nonlinear interactions between the oscillators .initially , all trial solutions are prepared for the collective state of the oscillators .all possible answers of the functions are simultaneously computed and then are `` written '' to the rotation frequencies of the coherent states of the coupled oscillators . by repeating the procedures of time evolution and conditional measurement on the oscillators , the states of the solutions of can be obtained with a high probability . finally , the solutions can be obtained by measuring the resulting state of the oscillators ., height=340 ] we also generalize this method of amplitude amplification by using a system of ( more than three ) coupled harmonic oscillators . in fig .[ hoqcomputer ] , the harmonic - oscillator quantum computation is schematically depicted .this can be used for solving a system of linear or nonlinear functions with integer unknowns which are subject to constraints .this is very useful for integer programming which is an important tool in operational research . to obtain the solutions of the functions, we require to perform conditional measurements of coherent states of the oscillators which are used for checking the constraints .this paper is organized as follows : in sec .ii , we introduce a system of coupled harmonic oscillators . in sec .iii , we study the quantum dynamics of the coupled harmonic oscillators starting with a product state of number states and a coherent state . 
in sec .iv , we propose an amplitude - amplification method to factorize an integer using three coupled harmonic oscillators .we discuss the convergence and performance of this factoring algorithm .for example , we show how to factor the number using this approach . in sec .v , we study the application of this amplitude - amplification method to quantum searching . in sec .vi , we generalize this amplitude - amplification method by using a system of coupled harmonic oscillators .we apply this method to solving linear or nonlinear functions with integer unknowns .appendix a shows details of this generalized method .we close this paper with a discussion and conclusions .we consider a system of coupled harmonic oscillators .the hamiltonian of the -th harmonic oscillator is written as where is the frequency of the harmonic oscillator , is the mass of the particle , and . the operators and are the position and momentum operators , which satisfy the commutation relation , =i\hbar ] .the hamiltonian of the three harmonic oscillators can be expressed in terms of the annihilation and creation operators : here we have ignored the constant term .we consider the harmonic oscillators coupled to each other via nonlinear interactions .such nonlinear interactions can be described by the hamiltonian as where are linear or nonlinear - operator functions ( excluding divisions ) of the number operators , for , and . the total hamiltonian can be written as the hamiltonians and commute with each other , i.e , }=0 ] is equal to one .after the first iteration , the probability of finding the factors of the number is increased by a factor , which is the inverse of the probability as seen from eqs .( [ pr_e_1 ] ) and ( [ rho^1_c ] ) . the probability amplification [ see also eqs .( [ varrho1 ] ) and ( [ varrho2 ] ) ] is thus inversely proportional to the probability of obtaining the coherent state .after the first iteration , we now obtain the reduced density matrix of the first two harmonic oscillators as we consider the state in eq .( [ rho2 ] ) of the oscillators 1 and 2 as an input state for the second iteration .the coherent state of the third harmonic oscillator is prepared in a coherent state , with a magnitude .the nonlinear interactions between the three harmonic oscillators are then turned on for a time .the state evolves as where .next , a conditional measurement is applied to the system at the time .the probability of obtaining the coherent state becomes ,\\ & = & \frac{1}{c_1}\sum_{n , m}{p}^{nm}_{nm}\;|{\epsilon}^{(1)}_{nm}|^2\;|{\epsilon}^{(2)}_{nm}|^2.\end{aligned}\ ] ] the conditioned state can be written as },\\ & = & \frac{1}{c_2}\sum_{n , m , n',m'}\tilde{p}^{n'm'}_{nm}(\tilde{t}_2){\epsilon}^{(1)}_{nm}{\epsilon}^{(1)*}_{n'm'}{\epsilon}^{(2)}_{nm}{\epsilon}^{(2)*}_{n'm ' } where is the normalization constant , the probability of finding the factors is enhanced by a factor after the second iteration .the coefficients is less than one for any product . from eqs .( [ c_1 ] ) and ( [ c_2 ] ) , it can be seen that .therefore , the probability of finding the factors is now higher after one additional iteration .similarly , we now iterate the procedure times . 
after iterations ,the reduced density matrix of the oscillators 1 and 2 can be written as where the state of the third harmonic oscillator is now prepared in a coherent state , with a magnitude .let the three coupled harmonic oscillators evolve for a time .this gives by performing a conditional measurement , the state becomes },\\ & = & \frac{1}{c_{l}}\sum_{n , m , n',m'}\tilde{p}^{n'm'}_{nm}(\tilde{t}_l)\prod^{l}_{l=1}{\epsilon}^{(l)}_{nm}{\epsilon}^{(l)*}_{n'm'}|n , m\rangle\langle{n',m'}|\otimes|\alpha^{(l)}_n(t_l)\rangle_3\;{}_3\langle\alpha^{(l)}_n(t_l)|.\end{aligned}\ ] ] after the -th step , the probability of finding the factors is increased by a factor . from eq .( [ c_l-1 ] ) , the probability of obtaining the coherent state can be written as the entire iterative procedure is now completed . the convergence and performance of this methodwill be discussed in the following subsections .we now study the convergence of this iterative method .we first consider the magnitude of the coherent state for each iteration as thus , we have for any product of and being not equal to , the coefficient is less than one and decreasing for higher , and the evolution time is non - zero and appropriately chosen .when the number of iterations tends to infinity , the product of the coefficients tends to zero , the coefficients are equal to one for any product of two factors and being equal to , i.e. , when .now we consider the probability of finding a pair of factors , and , after the -th iteration , which is since the coefficient is less than one , the normalization constant in eq .( [ c_l-1 ] ) is decreasing , i.e. , .therefore , the probability of finding a pair of factors increases after an additional iteration .this shows that this iterative method is convergent . from eqs .( [ c_l-1 ] ) and ( [ lime ] ) , it is very easy to show that in the limit of large number of iterations , we can obtain the state of the factors where and , with are factors of ( ) .this shows that the state of the factors can be achieved by employing this iterative method , if a sufficiently large number of iterations is used .we can now estimate the number of the iterations required to achieve a probability of order of one for factoring .we investigate the amplification ratio of the two probabilities of finding the factors and after the -th and the -th iterations . from eq .( [ pr_l ] ) , we have note that this ratio is just the reciprocal in eq .( [ pr_e_l ] ) of the probability of obtaining the coherent state .practically , this probability can not be too small .for example , let the amplification ratio be roughly equal to for each iteration , and let the probability of the factors state before the iterations be of order , where is a positive number .after iterations , the probability of finding the states of the factors can be increased by a factor . herewe require therefore , we obtain that the number of necessary iterations is of order of .this shows that this approach can factor an integer with a high probability and this in the linear time . in the limit of large , the probabilities in eq .( [ pr_e_l ] ) tend to one because approaches the sum of probabilities of the factors states in eq .( [ limit_c ] ) .the probability of finding the factors will be slowly increased after the number of iterations is reached . 
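The iterative conditioning analysed above can be mimicked with a short classical simulation. This is only a sketch, not the authors' code: it tracks just the diagonal probabilities p(n, m) of the trial-factor register (coherences are ignored), and it uses the analytic overlap of two coherent states, |<alpha e^{-i n m w t}|alpha e^{-i N w t}>|^2 = exp(-2|alpha|^2 (1 - cos((nm - N) w t))), as the weight for passing the conditional measurement. The values of |alpha|^2, the frequency scale, and the random evolution times are illustrative choices.

```python
# Classical sketch of the iterative amplitude amplification for factoring.
# Only the diagonal probabilities p(n, m) of the trial-factor register are
# tracked; the analytic coherent-state overlap
#   exp(-2 |alpha|^2 (1 - cos((n m - N) w t)))
# plays the role of the conditional-measurement weight.
import numpy as np

rng = np.random.default_rng(1)
N = 35                                   # number to factor (example)
trial = np.arange(3, N // 3 + 1)         # trial factors 3, 4, ..., N // 3
n, m = np.meshgrid(trial, trial, indexing="ij")
p = np.ones_like(n, dtype=float)
p /= p.sum()                             # uniform initial distribution

alpha2 = 4.0                             # |alpha|^2 of the probe coherent state
omega = 2 * np.pi / (N * 10)             # effective frequency scale (arbitrary)

for it in range(12):
    t = rng.uniform(0.5, 1.5) / omega    # random evolution time each iteration
    eps2 = np.exp(-2 * alpha2 * (1 - np.cos((n * m - N) * omega * t)))
    pr_click = (p * eps2).sum()          # prob. of the conditional measurement
    p = p * eps2 / pr_click              # renormalised post-measurement weights
    pr_factors = p[(n * m) == N].sum()
    print(f"iter {it + 1:2d}: P(click)={pr_click:.3f}, P(factors)={pr_factors:.3f}")

best = np.unravel_index(np.argmax(p), p.shape)
print("most probable pair:", trial[best[0]], "x", trial[best[1]])
```

With these illustrative parameters the weight concentrates on the factor pairs of N within a handful of iterations, in line with the amplification ratio argument above; the quantum version additionally keeps the off-diagonal terms that this classical bookkeeping discards.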
in this section ,we study how to factorize an integer with an initial pure state using this factoring algorithm .we consider the initial state of the first two harmonic oscillators as the superposition of number states , i.e. , are two normalization constants . herewe consider trial factors from 3 , , .the probability of finding the product of two factors is of order of .for example , now we show how to factor the integer . for simplicity, we now take , which is the lowest order of nonlinearity .the hamiltonian can be written as where is the nonlinear strength .the stronger nonlinear strengths and high - order nonlinearity can significantly shorten the required time evolution of the system .but the role of nonlinearity is not directly relevant to the number of the required iterations for the amplitude amplification ..[table1 ] this table shows the fidelities , the probabilities pr of obtaining the coherent states in eq .( [ coherent_state ] ) , and the evolution time for the iterations . here is the density matrix of the coherent state in eq .( [ coherent_state ] ) and is measured in units of . [cols="^,^,^,^,^,^,^,^ " , ] we take as the evolution time for the -th iteration , where is a uniformly distributed random number on the interval ] , and is the effective rotation frequency of the coherent state in phase space , we now set the frequency of as an even multiple of . if is even , then the effective frequency is even .otherwise , the effective frequency is odd .let the system evolve for a time as for an even , the state of the third harmonic oscillator evolves as and , for an odd , the coherent states of the third harmonic oscillator are either or , which depend on the parity of .a conditional measurement of the coherent state is made in each iteration . by repeating the procedures of the time evolution and the conditional measurement of the coherent state , the solutions [ i.e. 
, is an even number ] can be obtained , with a high probability , in linear time .here we have assumed that the `` black - box '' operation can be completed in polynomial time in the worst case .this shows that search problems can be exponentially sped up .we schematically summarize this method for quantum searching in figs .[ searchqcircuit ] and [ qcir_iteration_search ] .( a ) given a system of functions , it is easy to evaluate them by just substituting several integer variables as inputs of .( b ) however , it is extremely hard to do the `` opposite '' : to find out the integer variables , , from a known output of the functions ., height=264 ] we generalize the amplitude - amplification method using more than three coupled harmonic oscillators .this enables us to solve general linear or nonlinear problems which require to find out the non - negative integer variables subject to constraints such that where is a system of linear or nonlinear functions ( without any division of integer variables ) , while are real numbers for .here we assume that the solutions exist in this system of functions in eq .( [ nonlinear_equation ] ) , and each input integer variable is bounded by a number .figure [ amproblem2 ] schematically illustrates this problem .it is very easy to obtain the output of the functions by directly substituting the integer variables into .however , it is extremely hard to do the opposite , namely , to find out the inverse solutions from the `` output '' of the functions .a simple example with two arbitrary positive integers : easy to multiply , but hard to factorize .now we provide a way to solve this problem by using a system of coupled harmonic oscillators .we now consider a hamiltonian of the same form as in eq .( [ hamiltonian ] ) , where are the linear or nonlinear - operator functions of the number operators in eq .( [ nonlinear_equation ] ) ( by setting integer variables to the number operators ) for and . without loss of generality ,the total state of a system of coupled harmonic oscillators is first prepared as a superposition of products of number states .each of these product states , , represents a trial solution in eq .( [ nonlinear_equation ] ) .while a system of coupled harmonic oscillators is prepared in the product state of coherent states , we apply the time - evolution operator to the product state of and also to . from eq .( [ u_inputstate ] ) , we obtain where the amplitude is and is the rotation frequency , by quantum parallelism , all possible functions can be simultaneously computed .the value of has been `` written '' to the rotation frequency of the coherent state of the oscillator in the system of harmonic oscillators . by performing the conditional measurements of the coherent states of the harmonic oscillators depending on constraints ,we can obtain the solutions in eq .( [ nonlinear_equation ] ) , where is the number of solutions . 
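The parity-based search conditioning described earlier in this section can be sketched in the same classical way. The assumptions here are illustrative, not taken from the paper: a marked input yields an even multiple of the base frequency, so after half a period the probe coherent state returns to |alpha>, while unmarked inputs land on |-alpha>, whose residual overlap with |alpha> is exp(-4|alpha|^2); the search-space size, amplitude, and oracle are hypothetical.

```python
# Classical sketch of the parity-based search amplification (illustrative
# parameters).  Marked inputs keep the probe at |alpha> after half a period;
# unmarked inputs end up at |-alpha>, with |<alpha|-alpha>|^2 = exp(-4|alpha|^2).
# Conditioning on |alpha> therefore amplifies the marked inputs.
import numpy as np

def oracle(x):
    # Hypothetical "black-box" predicate: a single marked item (here x = 70).
    return x == 70

M = 128                                  # search-space size
p = np.full(M, 1.0 / M)                  # uniform distribution over inputs
alpha2 = 1.0                             # modest probe amplitude
leak = np.exp(-4 * alpha2)               # |<alpha|-alpha>|^2 for unmarked items

marked = np.array([oracle(x) for x in range(M)])
eps2 = np.where(marked, 1.0, leak)

for it in range(6):
    pr_click = (p * eps2).sum()
    p = p * eps2 / pr_click
    print(f"iter {it + 1}: P(target) = {p[marked].sum():.4f}")
```

The same conditioning logic carries over to the constrained-function setting just described, with one probe oscillator per constraint; the difficulty addressed next is that the initial weight of the solution states is small when the number of trial solutions is large.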
however , the probability of obtaining these solutions is low for a large number of trial solutions .we generalize this amplitude - amplification method to the case of using multi - harmonic - oscillator .this can be used to efficiently increase the probability of obtaining the solutions .in fact , this generalized method is very similar to the method we described in sec .appendix a presents details of this amplitude - amplification method for solving general linear or nonlinear problems .we can show that the number of iterations is of order of , where is the number of trial solutions for each nonlinear function in eq .( [ nonlinear_ham ] ) and .let us now provide more details on this amplitude - amplification method to solve a system of linear or nonlinear functions as in eq .( [ nonlinear_equation ] ) by using a system of coupled harmonic oscillators .this generalization is very similar to the method discussed in sec .iv . for completeness , we show all the mathematical details in the following subsections. the total state of a system of harmonic oscillators , in the number - state basis , is initially prepared , i.e. , where the are the probabilities of the states , while , and .the states can be prepared in arbitrary states , including pure states or mixed states .our purpose is to find the target states from the ensemble such that where is the number of solutions and are the functions in eq .( [ nonlinear_ham ] ) .initially , the probability of obtaining the target states is very low . using this method, this probability can be efficiently increased after a sufficiently large number of iterations .the amplitude - amplification method is discussed in the following subsections . in the first iteration ,the states of a system of oscillators are prepared in the product state of coherent states , for . by applying the time - evolution operator [ is the hamiltonian in eq .( [ hamiltonian ] ) ] to the initial state , it becomes where ^{m_1,\ldots , m_a}_{m_1',\ldots , m_a'},\\ we consider the coherent states , corresponding to the target states , which have their rotation frequencies subject to the constraints in eq .( [ nonlinear_equation ] ) , namely where is the number of iterations , and .this means that the coherent states , corresponding to the non - target states have rotation frequencies which are larger than .the coherent states of the system of harmonic oscillators can act as `` markers '' for the target states and the other states . a measurement operator for the conditional measurements for the oscillators is defined as where and is the number of iterations .a conditional measurement of the coherent states with the rotation frequencies is performed on the system .the probability of obtaining the product of coherent states becomes ,\\ \label{apppr_e_1 } & = & \sum_{m_1,\ldots , m_a}p^{m_1,\ldots , m_a}_{m_1,\ldots , m_a}\;\prod_{k , x}|{\epsilon}^{(k1)}_{x , m_1,\ldots , m_a}|^2,\end{aligned}\ ] ] where the coefficient , for , \},~~~~{\rm for}~~l\geqslant{1}.\end{aligned}\ ] ] the value of the probability can be adjusted by appropriately choosing the evolution time and the magnitude . 
the density matrix of the conditioned state can be written as : },\\ \label{apprho1_c } & = & \frac{1}{\mathcal{c}_1}\sum_{\substack{m_1,\ldots , m_a\\ m_1',\ldots , m_a'}}\tilde{p}^{m_1,\ldots , m_a}_{m_1',\ldots , m_a'}(t_1)\:\lambda^{m_1,\ldots , m_a}_{m_1',\ldots , m_a'}(k,1)\prod_{j}|m_j\rangle_j\:{}_{j'}\langle{m_j'}|\:\otimes\prod_{k , x}|\alpha^{(k1)}_{x}(t_1)\rangle_k\;{}_k{\langle}\alpha^{(k1)}_{x}(t_1)|,\nonumber\\\end{aligned}\ ] ] where and are from eqs .( [ apppr_e_1 ] ) and ( [ apprho1_c ] ) , the probabilities of target states are increased by a factor , which is the inverse of the probability .after a single iteration , we now obtain the reduced density matrix of the first two harmonic oscillators as we consider this state of the oscillators as an input state for the second iteration .the states of the harmonic oscillators are prepared in the product of coherent states , with magnitudes . after applying a unitary operator ,the state becomes where . at the time , a conditional measurement then applied to the system .the probability of obtaining the product state of the coherent states becomes ,\\ & = & \frac{1}{\mathcal{c}_1}\sum_{m_1,\ldots , m_a}\tilde{p}^{m_1,\ldots , m_a}_{m_1,\ldots , m_a}\;\lambda^{m_1,\ldots , m_a}_{m_1,\ldots , m_a}(k,2).\end{aligned}\ ] ] the conditioned state can then be written as },\\ & = & \frac{1}{\mathcal{c}_2}\sum_{\substack{m_1,\ldots , m_a\\ m_1',\ldots , m_a'}}\tilde{p}^{m_1,\ldots , m_a}_{m_1',\ldots , m_a'}(\tilde{t}_2)\:\lambda^{m_1,\ldots , m_a}_{m_1',\ldots , m_a'}(k,2)\prod_{j}|m_j\rangle_j\;{}_{j'}\langle{m_j'}|\otimes\prod_{k , x}|\alpha^{(k2)}_x(t_2)\rangle_k\;{}_k\langle\alpha^{(k2)}_x(t_2)|,\nonumber\\\end{aligned}\ ] ] where is the normalization constant the probability of finding the factors is enhanced by a factor after the second iteration . from eqs .( [ appc_1 ] ) and ( [ appc_2 ] ) , the normalization constant is larger than the constant because the coefficient is less than one .therefore , the probabilities of the target states increase after the second iteration .after iterations , the reduced density matrix of the oscillators can be written as where the states of the harmonic oscillators are prepared in the product of the coherent states , for .let the coupled harmonic oscillators evolve for a time .the state evolves as by performing a conditional measurement , the state becomes },\\ & = & \frac{1}{\mathcal{c}_{l}}\sum_{\substack{m_1,\ldots , m_a\\ m_1',\ldots , m_a'}}\tilde{p}^{m_1,\ldots , m_a}_{m_1',\ldots , m_a'}(\tilde{t}_{l})\:\lambda^{m_1,\ldots , m_a}_{m_1',\ldots , m_a'}(k , l)\prod_{j}|m_j\rangle_j\;{}_{j}\langle{m_j'}| \otimes\prod_{k , x}|\alpha^{(kl)}_x(t_l)\rangle_k\;{}_k\langle\alpha^{(kl)}_x(t_l)|.\nonumber\\\end{aligned}\ ] ] after the -th iteration , the probability of finding the target states is increased by a factor . from eq .( [ appc_l-1 ] ) , the probability of obtaining the product of coherent states is we now analyze the performance of this amplitude amplification procedure . after the -th iteration , the probability of finding the target state , the amplification ratio of the two probabilities and is this ratio is the reciprocal in eq .( [ apppr_e^k_l ] ) of the probability of obtaining the coherent states . the amplification ratio is taken to be roughly equal to for each iteration . 
herewe assume that the number of trial solutions is about for each integer variable .thus , the probability of the target state before the iterations are of order of if the uniform distribution of the initial state in eq .( [ appinitialden ] ) is assumed .after iterations , the probability of the target state can be increased by a factor . herewe require therefore , we obtain that the number of necessary iterations is of order of . in each iteration , it is required at most steps to perform the conditional measurements for oscillators .this approach can solve the functions in eq .( [ nonlinear_equation ] ) in .ideal quantum algorithms usually assume that quantum computing is performed continuously by a sequence of unitary transformations. however , there always exist idle finite time intervals between consecutive operations in a realistic quantum computing process . during these delays, coherent errors will accumulate from the dynamical phases of the superposed wave functions .reference explores the sensitivity of shor s quantum factoring algorithm to such errors .those results clearly show a severe sensitivity of shor s factorization algorithm to the presence of ( the experimentally unavoidable ) delay times between successive unitary transformations .specifically , in the presence of these coherent errors , the probability of obtaining the correct answer decreases _ exponentially _ with the number of qubits of the work register .a particularly simple phase - matching approach was proposed in to avoid or suppress these coherent errors when using shor s algorithm to factorize integers .the robustness of that phase - matching condition was evaluated analytically and numerically for the factorization of several integers : 4 , 15 , 21 , and 33 . in spite of these , and many other efforts , shor s algorithm has been found to be problematic , in its implementation .thus , it is imperative to explore very different approaches to factorize integers .this is one of the goals of this work .note that the method for factorization presented here is totally different from the integer factoring approach in ref . , which is based on gauss sums .the standard gauss - sum approach can find a factor of by scanning through all numbers from 2 to .it therefore does not provide a speed - up over classical computation .we also note the following very recent proposal to perform quantum computation using a spin chain .this method is also related to recent methods for hamiltonian identification using networks of spin - half particles and even general many - body systems described by quadratic hamiltonians . in this approach , quantum computation and estimating coupling strengths in a hamiltonian can be done by just locally controlling a few quantum registers .indeed , in our approach presented here and also in their methods , very few resources are required and only a few quantum registers are necessary to be operated for quantum information processing .the approaches described above are quite different from the method presented in our work , where the solutions are directly `` written '' to the coherent states and then the probabilities of these solutions are increased through a number of iterations . 
moreover , the philosophies behind these two approaches are entirely different .moreover , let us now we briefly discuss the difference between this harmonic - oscillator - based quantum computation and other existing models of quantum computation .our proposal here obviously differs from the conventional quantum - circuit based model , which only uses quantum gates for quantum computation .a very different model of quantum computation , relying on measurements of single qubits , is called one - way quantum computation . to perform a specific task using a one - way quantum computer , an appropriate sequence of measurmentsis performed on qubits in the cluster , which is initially prepared in a highly - entangled state .apart from using harmonic oscillators instead of qubits , there are two main differences between this harmonic - oscillator and one - way quantum computations .first , in this harmonic - oscillator quantum computation , the specific nonlinear interactions between oscillators are prepared for tackling the specific nonlinear problem .second , it is only required to perform conditional measurements on the same set of harmonic oscillators throughout the entire computation .we believe that this idea of amplitude amplification may be generalized to quantum computation using discrete quantum states such as qubits .let us briefly summarize this approach of using the harmonic - oscillator quantum computer described here . in general , this method can be used for solving linear or nonlinear functions with integer unknowns subject to a number of constraints .a system of coupled harmonic oscillators are used for computing the functions , while another system of coupled oscillators act as `` markers '' for solutions and non - solutions of these functions .initially , a system of coupled harmonic oscillators are prepared in the number - state basis for all possible solutions , and a system of coupled oscillators are prepared in a product state of coherent states . by quantum parallelism , the answers of all possible solutions can be simultaneously computed .then , all the answers are `` written '' to the rotation frequencies of the coherent states of the oscillators .we can obtain the solutions by performing a conditional measurement of coherent states subject to the constraints .however , the probability of obtaining the solutions is low .this can be resolved by iterating the procedures of time evolution and the conditional measurement .the number of iterations is of order of , where is the input size of the trial solutions .the solutions can be obtained with a high probability .we have proposed a new algorithm for amplitude amplification using only a system of coupled harmonic oscillators .we have shown that this approach can be used for factoring integers , and the factors of an integer can be obtained , with a high probability , in linear time . as an example , using this approach , we show how to factorize an integer of order of within 12 iterations . in each iteration ,the probability of obtaining this coherent state , with the rotation frequency being proportional to , is not less than 0.1 .this approach can also be applied to search - based problems . moreover ,the solutions to search problems can be achieved in polynomial time .this implies that np - complete problems can be exponentially sped up .we stress out that the nonlinear interactions between the coupled oscillators and conditional measurements are essential in this approach . 
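Since the nonlinear couplings and conditional measurements are the essential ingredients, it may help to see the kind of "three-body" number-operator coupling the factoring step relies on in concrete form. The following QuTiP sketch uses illustrative cutoffs, coupling strength, and amplitude (not the paper's parameters) and assumes an interaction of the form H_int = chi * n1 * n2 * n3, under which the third mode's coherent state rotates at a rate proportional to the product of the Fock numbers held by the first two modes.

```python
# Minimal QuTiP sketch (illustrative parameters) of a three-body
# number-operator coupling  H_int = chi * n1 * n2 * n3.
# With the first two modes in Fock states |n>, |m>, the third mode's coherent
# state rotates at frequency chi * n * m, which is how products of trial
# factors get "written" onto phase-space rotation frequencies.
import numpy as np
from qutip import basis, coherent, num, tensor

chi, t = 0.1, 2.0
n_trial, m_trial = 2, 3                  # trial factors encoded as Fock states
dims = (4, 5, 25)                        # truncations of the three oscillators
alpha = 2.0

H = chi * tensor(num(dims[0]), num(dims[1]), num(dims[2]))
psi0 = tensor(basis(dims[0], n_trial), basis(dims[1], m_trial),
              coherent(dims[2], alpha))

psi_t = ((-1j * H * t).expm()) * psi0    # exact evolution (H is diagonal)

# The third mode should now be (up to truncation) the rotated coherent state
# |alpha * exp(-i * chi * n * m * t)>.
target = tensor(basis(dims[0], n_trial), basis(dims[1], m_trial),
                coherent(dims[2], alpha * np.exp(-1j * chi * n_trial * m_trial * t)))
print("overlap with rotated coherent state:", abs(psi_t.overlap(target)))
```

Engineering precisely this kind of coupling in a physical platform is the implementation question taken up next.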
by appropriately controlling nonlinear interactions between the coupled harmonic oscillators ,the functions with integer inputs can be evaluated in a single operation . to implement this harmonic - oscillator quantum computation ,it is necessary to engineer `` many - body '' interactions of the system of harmonic oscillators .for example , to perform quantum factorization , it is required to generate `` three - body '' interactions between the harmonic oscillators .we have briefly discussed the possible implementations in ref .one of the promising candidates is neutral atoms or polar molecules trapped in optical lattices .the `` three - body '' interactions can be tuned by external fields .bearing in mind that `` two - body '' interactions is only necessary for the implementation of quantum searching .this type of nonlinear interactions has been widely studied various physical systems such as optical kerr mediums , two - component bose - einstein condensates of atomic gases , trapped ions within a cavity , cavity qed , superconducting resonator and optomechanical systems . the `` proof - of - principle '' experiment can be realized with current technology .however , it is challenging to engineer `` many - body '' nonlinear interactions for solving multi - variables functions in sec .we hope that future theoretical and experimental advances can overcome these issues .fn acknowledges partial support from the laboratory of physical sciences , national security agency , army research office , defense advanced research projects agency , air force office of scientific research , national science foundation grant no .0726909 , jsps - rfbr contract no .09 - 02 - 92114 , grant - in - aid for scientific research ( s ) , mext kakenhi on quantum cybernetics , and the funding program for innovative r&d on science and technology ( first ) .h. mack , m. bienert , f. haug , m. freyberger and w. p. schleich , phys .status solidi b * 233 * , 408 ( 2002 ) ; m. mehring , k. mller , i. sh .averbukh , w. merkel and w. p. schleich , phys .rev . lett . * 98 * , 120502 ( 2007 ) .
Using three coupled harmonic oscillators, we have recently proposed [arXiv:1007.4338] an alternative approach for the quantum factorization of an integer. However, the probability of obtaining the factors becomes low when this number is large. Here, we circumvent this limitation by using a new iterative method which can efficiently increase the probability of finding the factors. We show that the factors of a number can be obtained, in linear time, with a high probability. This amplitude-amplification method can be applied to search problems. This implies that search-based problems, including NP-complete problems, can be exponentially sped up. In addition, we generalize the method of amplitude amplification by using a system of coupled harmonic oscillators. This can be used for solving a system of linear or nonlinear functions with integer inputs subject to a number of constraints. The total number of oscillators in the system is equal to the number of integer variables in the functions plus the number of constraints.
the information contained by an individual finite object ( like a finite binary string ) is objectively measured by its kolmogorov complexity the length of the shortest binary program that computes the object .such a shortest program contains no redundancy : every bit is information ; but is it meaningful information ? if we flip a fair coin to obtain a finite binary string , then with overwhelming probability that string constitutes its own shortest program .however , also with overwhelming probability all the bits in the string are meaningless information , random noise . on the other hand ,let an object be a sequence of observations of heavenly bodies .then can be described by the binary string , where is the description of the laws of gravity , and the observational parameter setting , while is the data - to - model code accounting for the ( presumably gaussian ) measurement error in the data . this way we can divide the information in into meaningful information and data - to - model information .the main task for statistical inference and learning theory is to distil the meaningful information present in the data .the question arises whether it is possible to separate meaningful information from accidental information , and if so , how . in statistical theory ,every function of the data is called a `` statistic '' of the data . the central notion in probabilistic statistics is that of a `` sufficient '' statistic , introduced by the father of statistics r.a .fisher : `` the statistic chosen should summarise the whole of the relevant information supplied by the sample .this may be called the criterion of sufficiency in the case of the normal curve of distribution it is evident that the second moment is a sufficient statistic for estimating the standard deviation . '' for traditional problems , dealing with frequencies over small sample spaces , this approach is appropriate .but for current novel applications , average relations are often irrelevant , since the part of the support of the probability density function that will ever be observed has about zero measure .this is the case in , for example , complex video and sound analysis .there arises the problem that for individual cases the selection performance may be bad although the performance is good on average .there is also the problem of what probability means , whether it is subjective , objective , or exists at all . to simplify matters , and because all discrete data can be binary coded , we consider only data samples that are finite binary strings .the basic idea is to found statistical theory on finite combinatorial principles independent of probabilistic assumptions , as the relation between the individual data and its explanation ( model ) .we study extraction of meaningful information in an initially limited setting where this information be represented by a finite set ( a model ) of which the object ( the data sample ) is a typical member . 
using the theory of kolmogorov complexity, we can rigorously express and quantify typicality of individual objects .but typicality in itself is not necessarily a significant property : every object is typical in the singleton set containing only that object .more important is the following kolmogorov complexity analog of probabilistic minimal sufficient statistic which implies typicality : the two - part description of the smallest finite set , together with the index of the object in that set , is as concise as the shortest one - part description of the object .the finite set models the regularity present in the object ( since it is a typical element of the set ) .this approach has been generalized to computable probability mass functions .the combined theory has been developed in detail in and called `` algorithmic statistics . '' here we study the most general form of algorithmic statistic : recursive function models . in this setting the issue of meaningful information versus accidental information is put in its starkest form ; and in fact , has been around for a long time in various imprecise forms unconnected with the sufficient statistic approach : the issue has sparked the imagination and entered scientific popularization in as `` effective complexity '' ( here `` effective '' is apparently used in the sense of `` producing an effect '' rather than `` constructive '' as is customary in the theory of computation ) .it is time that it receives formal treatment .formally , we study the minimal length of a total recursive function that leads to an optimal length two - part code of the object being described .( `` total '' means the function value is defined for all arguments in the domain , and `` partial '' means that the function is possibly not total . ) this minimal length has been called the `` sophistication '' of the object in in a different , but related , setting of compression and prediction properties of infinite sequences . that treatment is technically sufficiently vague so as to have no issue for the present work .we develop the notion based on prefix turing machines , rather than on a variety of monotonic turing machines as in the cited papers . below we describe related work in detail and summarize our results .subsequently , we formulate our problem in the formal setting of computable two - part codes .kolmogorov in 1974 proposed an approach to a non - probabilistic statistics based on kolmogorov complexity .an essential feature of this approach is to separate the data into meaningful information ( a model ) and meaningless information ( noise ) .cover attached the name `` sufficient statistic '' to a model of which the data is a `` typical '' member . in kolmogorovs initial setting the models are finite sets . as kolmogorov himself pointed out , this is no real restriction : the finite sets model class is equivalent , up to a logarithmic additive term , to the model class of computable probability density functions , as studied in .related aspects of `` randomness deficiency '' were formulated in and studied in . 
despite its evident epistemological prominence in the theory of hypothesis selection and prediction ,only selected aspects of the theory were studied in these references .recent work can be considered as a comprehensive investigation into the sufficient statistic for finite set models and computable probability density function models .here we extend the approach to the most general form : the model class of total recursive functions .this idea was pioneered by who , unaware of a statistic connection , coined the cute word `` sophistication . ''the algorithmic ( minimal ) sufficient statistic was related to an applied form in : the well - known `` minimum description length '' principle in statistics and inductive reasoning . in another paper ( chronologically following the present paper )we comprehensively treated all stochastic properties of the data in terms of kolmogorov s so - called structure functions .the sufficient statistic aspect , studied here , covers only part of these properties .the results on the structure functions , including ( non)computability properties , are valid , up to logarithmic additive terms , also for the model class of total recursive functions , as studied here .it will be helpful for the reader to be familiar with initial parts of . in , kolmogorov observed that randomness of an object in the sense of having high kolmogorov complexity is being random in just a `` negative '' sense .that being said , we define the notion of sophistication ( minimal sufficient statistic in the total recursive function model class ) .it is demonstrated to be meaningful ( existence and nontriviality ) .we then establish lower and upper bounds on the sophistication , and we show that there are objects the sophistication achieves the upper bound .in fact , these are objects in which all information is meaningful and there is ( almost ) no accidental information .that is , the simplest explanation of such an object is the object itself . in the simplersetting of finite set statistic the analogous objects were called `` absolutely non - stochastic '' by kolmogorov .if such objects have high kolmogorov complexity , then they can only be a random outcome of a `` complex '' random process , and kolmogorov questioned whether such random objects , being random in just this `` negative '' sense , can occur in nature .but there are also objects that are random in the sense of having high kolmogorov complexity , but simultaneously are are typical outcomes of `` simple '' random processes .these were therefore said to be random in a `` positive '' sense .an example are the strings of maximal kolmogorov complexity ; those are very unsophisticated ( with sophistication about 0 ) , and are typical outcomes of tosses with a fair coin a very simple random process .we subsequently establish the equivalence between sophistication and the algorithmic minimal sufficient statistics of the finite set class and the probability mass function class .finally , we investigate the algorithmic properties of sophistication : nonrecursiveness , upper semicomputability , and intercomputability relations of kolmogorov complexity , sophistication , halting sequence .a _ string _ is a finite binary sequence , an element of .if is a string then the _ length _ denotes the number of bits in .we identify , the natural numbers , and according to the correspondence here denotes the _empty word_. 
thus , .the emphasis is on binary sequences only for convenience ; observations in any alphabet can be so encoded in a way that is ` theory neutral ' . below we will use the natural numbers and the strings interchangeably .a string is a _ proper prefix _ of a string if we can write for .a set is _ prefix - free _ if for any pair of distinct elements in the set neither is a proper prefix of the other .a prefix - free set is also called a _ prefix code _ and its elements are called _code words_. an example of a prefix code , that is useful later , encodes the source word by the code word this prefix - free code is called _ self - delimiting _ , because there is fixed computer program associated with this code that can determine where the code word ends by reading it from left to right without backing up . this way a composite code message can be parsed in its constituent code words in one pass , by the computer program .( this desirable property holds for every prefix - free encoding of a finite set of source words , but not for every prefix - free encoding of an infinite set of source words . for a single finite computer program to be able to parse a code message the encoding needs to have a certain uniformity property like the code . )since we use the natural numbers and the strings interchangeably , where is ostensibly an integer , means the length in bits of the self - delimiting code of the string with index . on the other hand , where is ostensibly a string , means the self - delimiting code of the string with index the length of . using this code we define the standard self - delimiting code for to be .it is easy to check that and .let denote a standard invertible effective one - one encoding from to a subset of .for example , we can set or .we can iterate this process to define , and so on . for definitions , notation , and an introduction to kolmogorov complexity ,see .informally , the kolmogorov complexity , or algorithmic entropy , of a string is the length ( number of bits ) of a shortest binary program ( string ) to compute on a fixed reference universal computer ( such as a particular universal turing machine ) .intuitively , represents the minimal amount of information required to generate by any effective process .the conditional kolmogorov complexity of relative to is defined similarly as the length of a shortest program to compute , if is furnished as an auxiliary input to the computation . for technical reasons we use a variant of complexity , so - called prefix complexity , which is associated with turing machines for which the set of programs resulting in a halting computation is prefix free .we realize prefix complexity by considering a special type of turing machine with a one - way input tape , a separate work tape , and a one - way output tape .such turing machines are called _ prefix _ turing machines . if a machine halts with output after having scanned all of on the input tape , but not further , then and we call a _ program _ for .it is easy to see that is a _prefix code_. 
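The self-delimiting codes referred to above can be made concrete with a short sketch. Since the display formulas were lost in extraction, the code below assumes the standard construction from the Kolmogorov-complexity literature: bar(x) = 1^{l(x)} 0 x, and the shorter code x' that prefixes x with bar() applied to the binary representation of its length; the function names are ours.

```python
# Sketch of standard self-delimiting (prefix) codes, assuming the usual
# construction from the Kolmogorov-complexity literature:
#   bar(x) = 1^{len(x)} 0 x          (about 2*len(x) + 1 bits)
#   x'     = bar(binary(len(x))) x   (about len(x) + 2*log2(len(x)) bits)
# Both can be decoded left to right without knowing where the input ends.

def bar(x: str) -> str:
    """Self-delimiting code: len(x) ones, a zero, then x itself."""
    return "1" * len(x) + "0" + x

def decode_bar(stream: str):
    """Return (decoded word, remaining stream)."""
    n = stream.index("0")                # count the leading ones
    return stream[n + 1:n + 1 + n], stream[n + 1 + n:]

def prime(x: str) -> str:
    """Shorter prefix code: encode len(x) with bar(), then append x."""
    return bar(format(len(x), "b")) + x

def decode_prime(stream: str):
    length_bits, rest = decode_bar(stream)
    n = int(length_bits, 2)
    return rest[:n], rest[n:]

# A composite message parses unambiguously in one left-to-right pass:
msg = prime("01101") + prime("111")
w1, rest = decode_prime(msg)
w2, rest = decode_prime(rest)
print(w1, w2, repr(rest))                # -> 01101 111 ''
```

The one-pass parse of the composite message is exactly the uniformity property the text requires of a prefix code usable by a single fixed decoding program.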
a function from the natural numbers to the natural numbers is _ partial recursive _ , or _, if there is a turing machine that computes it : for all for which either or ( and hence both ) are defined .this definition can be extended to ( multi - tuples of ) rational arguments and values .let be a standard enumeration of all prefix turing machines with a binary input tape , for example the lexicographical length - increasing ordered syntactic prefix turing machine descriptions , , and let be the enumeration of corresponding functions that are computed by the respective turing machines ( computes ) .these functions are the partial recursive functions of effectively prefix - free encoded arguments .the kolmogorov complexity of is the length of the shortest binary program from which is computed by such a function .the _ prefix kolmogorov complexity _ of is where the minimum is taken over and . for the development of the theorywe actually require the turing machines to use _ auxiliary _ ( also called _ conditional _ ) information , by equipping the machine with a special read - only auxiliary tape containing this information at the outset .then , the _ conditional version _ of the prefix kolmogorov complexity of given ( as auxiliary information ) is is defined similarly as before , and the unconditional version is set to . from now on, we will denote by an inequality to within an additive constant , and by the situation when both and hold .let be the standard enumeration of turing machines , and let be a standard universal turing machine satisfying for all indices and programs .we fix once and for all and call it the _ reference universal prefix turing machine_. the shortest program to compute by is denoted as ( if there is more than one of them , then is the first one in standard enumeration ) .it is a deep and useful fact that the shortest effective description of an object can be expressed in terms of a _ two - part code _ : the first part describing an appropriate turing machine and the second part describing the program that interpreted by the turing machine reconstructs .the essence of the theory is the invariance theorem , that can be informally stated as follows : for convenience , in the sequel we simplify notation and write for .rewrite here the minima are taken over and .the last equalities are obtained by using the universality of with . as consequence , , and differ by at most an additive constant depending on the choice of .it is standard to use instead of as the definition of _ prefix kolmogorov complexity _ , .however , we highlighted definition to bring out the two - part code nature . by universal logical principles ,the resulting theory is recursively invariant under adopting either definition or definition , as long as we stick to one choice .if stands for a literal description of the prefix turing machine in standard format , for example the index when , then we can write .the string is a shortest self - delimiting program of bits from which can compute , and subsequent execution of the next self - delimiting fixed program will compute from . altogether , this has the effect that . if minimizes the expression above , then , and hence , and .it is straightforward that , and therefore we have .altogether , . 
replacing the minimizing by the minimizing and by , we can rewrite the last displayed equation as expression emphasizes the two - part code nature of kolmogorov complexity : using the regular aspects of to maximally compress .suppose we consider an ongoing time - series and we randomly stop gathering data after having obtained the initial segment we can encode this by a small turing machine representing `` the repeating pattern is 01 , '' and which computes , for example , from the program `` 13 . '' intuitively , the turing machine part of the code squeezes out the _ regularities _ in .what is left are irregularities , or _ random aspects _ of relative to that turing machine .the minimal - length two - part code squeezes out regularity only insofar as the reduction in the length of the description of random aspects is greater than the increase in the regularity description . in this setupthe number of repetitions of the significant pattern is viewed as the random part of the data .this interpretation of as the shortest length of a two - part code for , one part describing a turing machine , or _ model _, for the _ regular _ aspects of and the second part describing the _ irregular _ aspects of in the form of a program to be interpreted by , has profound applications .the `` right model '' is a turing machine among the ones that halt for all inputs , a restriction that is justified later , and reach the minimum description length in ( [ eq.kcmdl ] ) .this embodies the amount of useful information contained in .it remains to decide which such to select among the ones that satisfy the requirement .following occam s razor we opt here for the shortest one a formal justification for this choice is given in .the main problem with our approach is how to properly define a shortest program for that divides into parts such that represents an appropriate .the following central notions are used in this paper .the _ information in about _ is . by the symmetry of information ,a deep result of , rewriting according to symmetry of information we see that and therefore we call the quantity the _ mutual information _ between and .instead of the model class of finite sets , or computable probability density functions , as in , in this work we focus on the most general form of algorithmic model class : total recursive functions .we define the different model classes and summarize the central notions of `` randomness deficiency '' and `` typicality '' for the canonical finite set models to obtain points of reference for the related notions in the more general model classes .the model class of _ finite sets _ consists of the set of finite subsets .the _ complexity of the finite set _ is length ( number of bits ) of the shortest binary program from which the reference universal prefix machine computes a listing of the elements of and then halts .that is , if , then .the _ conditional complexity _ of given , is the length ( number of bits ) in the shortest binary program from which the reference universal prefix machine , given literally as auxiliary information , computes . for every finite set containing we have indeed , consider the selfdelimiting code of consisting of its bit long index of in the lexicographical ordering of code is called _ data - to - model code_. 
its length quantifies the maximal `` typicality , '' or `` randomness , '' data ( possibly different from ) can have with respect to this model .the lack of typicality of with respect to is measured by the amount by which falls short of the length of the data - to - model code , the _ randomness deficiency _ of in , defined by for , and otherwise .data is _ typical with respect to a finite set _ , if the randomness deficiency is small . if the randomness deficiency is close to 0 , then there are no simple special properties that single it out from the majority of elements in .this is not just terminology .let . according to common viewpoints in probability theory , each property represented by defines a large subset of consisting of elements having that property , and , conversely , each large subset of represents a property . for probabilistic ensembles we take high probability subsets as properties ;the present case is uniform probability with finite support .for some appropriate fixed constant , let us identify a property represented by with a subset of of cardinality .if is close to 0 , then satisfies ( that is , is an element of ) _ all _ properties ( that is , sets ) of low kolmogorov complexity . the precise statements and quantifications are given in , and we do not repeat them here . the model class of _ computable probability density functions _ consists of the set of functions ] .we define if , and otherwise .\(iii ) the set of models are the total recursive functions mapping to .we define , and if no such exists .if is a model class , then we consider _ distortion balls _ of given radius centered on : this way , every model class and distortion measure can be treated similarly to the canonical finite set case , which , however is especially simple in that the radius not variable .that is , there is only one distortion ball centered on a given finite set , namely the one with radius equal to the log - cardinality of that finite set .in fact , that distortion ball equals the finite set on which it is centered .let be a model class and a distortion measure .since in our definition the distortion is recursive , given a model and diameter , the elements in the distortion ball of diameter can be recursively enumerated from the distortion function . giving the index of any element in that enumeration we can find the element .hence , .on the other hand , the vast majority of elements in the distortion ball have complexity since , for every constant , there are only binary programs of length available , and there are elements to be described .we can now reason as in the similar case of finite set models . with data and , if , then belongs to every large majority of elements ( has the property represented by that majority ) of the distortion ball , provided that property is simple in the sense of having a description of low kolmogorov complexity .the _ randomness deficiency _ of with respect to model under distortion is defined as data is _ typical _ for model ( and that model `` typical '' or `` best fitting '' for ) if if is typical for a model , then the shortest way to effectively describe , given , takes about as many bits as the descriptions of the great majority of elements in a recursive enumeration of the distortion ball .so there are no special simple properties that distinguish from the great majority of elements in the distortion ball : they are all typical or random elements in the distortion ball ( that is , with respect to the contemplated model ) . 
continuing example [ ex.11 ] by applying to different model classes : \(i ) _ finite sets : _ for finite set models , clearly .together with we have that is typical for , and best fits , if the randomness deficiency according to satisfies .\(ii ) _ computable probability density functions : _ instead of the data - to - model code length for finite set models , we consider the data - to - model code length ( the shannon - fano code ) .the value measures how likely is under the hypothesis . for probability models , define the conditional complexity as follows .say that a function approximates if for every and every positive rational .then is defined as the minimum length of a program that , given and any function approximating as an oracle , prints .clearly .together with , we have that is typical for , and best fits , if .the right - hand side set condition is the same as , and there can be only such , since otherwise the total probability exceeds 1 . therefore , the requirement , and hence typicality , is implied by . define the randomness deficiency by altogether , a string is _typical for a distribution _ , or is the _ best fitting model _ for , if .\(iii ) _ total recursive functions : _ in place of for finite set models we consider the data - to - model code length ( actually , the distortion above ) define the conditional complexity as the minimum length of a program that , given and an oracle for , prints .clearly , . together with , we have that is typical for , and best fits , if .there are at most - many satisfying the set condition since .therefore , the requirement , and hence typicality , is implied by .define the randomness deficiency by altogether , a string is _typical for a total recursive function _ , and is the _ best fitting recursive function model _ for if , or written differently , note that since is given as conditional information , with and , the quantity represents the number of bits in a shortest _ self - delimiting _ description of .we required in the conditional in .this is the information about the radius of the distortion ball centered on the model concerned .note that in the canonical finite set model case , as treated in , every model has a fixed radius which is explicitly provided by the model itself . but in the more general model classes of computable probability density functions , or total recursive functions , models can have a variable radius .there are subclasses of the more general models that have fixed radiuses ( like the finite set models ) .\(i ) in the computable probability density functions one can think of the probabilities with a finite support , for example for , and otherwise .\(ii ) in the total recursive function case one can similarly think of functions with finite support , for example for , and for .the incorporation of te radius in the model will increase the complexity of the model , and hence of the minimal sufficient statistic below .a _ statistic _ is a function mapping the data to an element ( model ) in the contemplated model class . with some sloppiness of terminologywe often call the function value ( the model ) also a statistic of the data .the most important concept in this paper is the sufficient statistic . 
for an extensive discussion of this notion for specific model classessee .a statistic is called sufficient if the two - part description of the data by way of the model and the data - to - model code is as concise as the shortest one - part description of .consider a model class .a model is a _sufficient statistic _ for if [ lem.v2 ] if is a sufficient statistic for , then , that is , is typical for .we can rewrite .the first three inequalities are straightforward and the last equality is by the assumption of sufficiency . altogether , the first sum equals the second sum , which implies the lemma .thus , if is a sufficient statistic for , then is a typical element for , and is the best fitting model for .note that the converse implication , `` typicality '' implies `` sufficiency , '' is not valid .sufficiency is a special type of typicality , where the model does not add significant information to the data , since the preceding proof shows . using the symmetry of informationthis shows that this means that : \(i ) a sufficient statistic is determined by the data in the sense that we need only an -bit program , possibly depending on the data itself , to compute the model from the data .\(ii ) for each model class and distortion there is a universal constant such that for every data item there are at most sufficient statistics . _finite sets : _ for the model class of finite sets , a set is a sufficient statistic for data if _ computable probability density functions : _ for the model class of computable probability density functions , a function is a sufficient statistic for data if for the model class of _ total recursive functions _ , a function is a _ sufficient statistic _ for data if following the above discussion , the meaningful information in is represented by ( the model ) in bits , and the meaningless information in is represented by ( the noise in the data ) with in bits .note that , since the two - part code for can not be shorter than the shortest one - part code of bits , and therefore the -part must already be maximally compressed . by lemma [ lem.v2 ] , , is typical for , and hence .consider the model class of total recursive functions . a _minimal sufficient statistic _ for data is a sufficient statistic for of minimal prefix complexity .its length is known as the _ sophistication _ of , and is defined by .recall that the _ reference _ universal prefix turing machine was chosen such that for all and .looking at it slightly more from a programming point of view , we can define a pair to be a _ description _ of a finite string , if prints and is a turing machine computing a function so that . for the notion of minimal sufficient statistic to be nontrivial , it should be impossible to always shift , if and with , always information information from to and write , for example , with with .if the model class contains a fixed universal model that can mimic all other models , then we can always shift all model information to the data - to-(universal ) model code .note that this problem does nt arise in common statistical model classes : these do not contain universal models in the algorithmic sense . 
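before turning to the role of universal models in the next paragraphs, the sufficiency condition above, that the two-part code via the model is about as short as the shortest one-part code, can be probed heuristically. the python sketch below is an assumption-laden illustration only: a general-purpose compressor stands in for K, a plain byte string stands in for the model, and the conditional complexity given the model is mimicked by compressing with that string as a preset dictionary.

```python
# heuristic probe of K(model) + K(x | model) versus K(x), with zlib as a crude
# stand-in for K and a preset dictionary playing the role of conditioning.
import os
import zlib

def clen(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def clen_given(data: bytes, model: bytes) -> int:
    c = zlib.compressobj(level=9, zdict=model)
    return len(c.compress(data) + c.flush())

record = b"station=%03d temp=%05.2f humidity=%02d\n"
x = b"".join(record % (i, 20 + i / 7, i % 90) for i in range(300))

good_model = record * 27                 # captures the regular structure of x
bad_model = os.urandom(len(good_model))  # same length, but irrelevant to x

print("one-part:", clen(x))
for name, f in (("structured model", good_model), ("irrelevant model", bad_model)):
    print(name, "two-part:", clen(f) + clen_given(x, f))
```

with a real compressor in place of K no model comes out exactly sufficient, so the numbers only indicate the direction of the comparison: the irrelevant model pays its own complexity without shortening the conditional part, while the structured model adds little to the total. the next paragraphs return to the question of which model classes make this separation nontrivial.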
first we show that the partial recursive recursive function model class , because it contains a universal element , does not allow a straightforward nontrivial division into meaningful and meaningless information .assume for the moment that we allow all partial recursive programs as statistic .then , the sophistication of all data is .let the index of ( the reference universal prefix turing machine ) in the standard enumeration of prefix turing machines be .let be a turing machine computing .suppose that .then , also .this shows that unrestricted partial recursive statistics are uninteresting .naively , this could leave the impression that the separation of the regular and the random part of the data is not as objective as the whole approach lets us hope for . if we consider complexities of the minimal sufficient statistics in model classes of increasing power : finite sets , computable probability distributions , total recursive functions , partial recursive functions , then the complexities appear to become smaller all the time eventually reaching zero .it would seem that the universality of kolmogorov complexity , based on the notion of partial recursive functions , would suggest a similar universal notion of sufficient statistic based on partial recursive functions .but in this case the very universality trivializes the resulting definition : because partial recursive functions contain a particular universal element that can simulate all the others , this implies that the universal partial recursive function is a universal model for all data , and the data - to - model code incorporates all information in the data .thus , if a model class contains a universal model that can simulate all other models , then this model class is not suitable for defining two - part codes consisting of meaningful information and accidental information .it turns out that the key to nontrivial separation is the requirement that the program witnessing the sophistication be _total_. that the resulting separation is non - trivial is evidenced by the fact , shown below , that the amount of meaningful information in the data does not change by more than a logarithmic additive term under change of model classes among finite set models , computable probability models , and total recursive function models .that is , very different model classes all result in the same amount of meaningful information in the data , up to negligible differences .so if deterioration occurs in widening model classes it occurs all at once by having a universal element in the model class .apart from triviality , a class of statistics can also possibly be vacuous by having the length of the minimal sufficient statistic exceed .our first task is to determine whether the definition is non - vacuous .we will distinguish sophistication in different description modes : [ lem.exists ] for every finite binary string , the sophistication satisfies . by definition of the prefix complexitythere is a program of length such that .this program can be partial .but we can define another program where is a program of a constant number of bits that tells the following program to ignore its actual input and compute as if its input were . clearly , is total and is a sufficient statistic of the total recursive function type , that is , .the previous lemma gives an upper bound on the sophistication .this still leaves the possibility that the sophistication is always , for example in the most liberal case of unrestricted totality . 
but this turns out to be impossible .[ h - sophi ] ( i ) for every , if a sufficient statistic satisfies , then and .\(ii ) for as a variable running through a sequence of finite binary strings of increasing length , we have \(iii ) for every , there exists an of length , such that every sufficient statistic for that satisfies has .\(iv ) for every there exists an of length such that .\(i ) if is a sufficient statistic for , then since , given an bit program we can retrieve both and and also from .therefore , we can retrieve from .that shows that .this proves both the first statement , and the second statement follows by ( [ eq.pd ] ) .\(ii ) an example of very unsophisticated strings are the individually random strings with high complexity : of length with complexity .then , the _ identity _ program with for all is total , has complexity , and satisfies .hence , witnesses that .this shows ( [ eq.liminf ] ) .\(iii ) consider the set .by we have .let .since there are strings of length , there are strings of length not in .let be any such string , and denote .then , by construction and by definition . let be a sufficient statistic for .then , . by assumption, there is an -bit program such that .let witness by with .define the set . clearly , . since can be retrieved from and the lexicographical index of in , and , we have .since we can obtain from we have . on the other hand ,since we can retrieve from and the index of in , we must have , which implies .altogether , therefore , .we now show that we can choose so that , and therefore . for every length , there exists a of complexity such that a minimal sufficient finite set statistic for has complexity at least , by theorem iv.2 of .since is trivially a sufficient statistic for , it follows .this implies .therefore , we can choose for a large enough constant so as to ensure that .consequently , we can choose above as such a .since every finite set sufficient statistic for has complexity at least that of an finite set minimal sufficient statistic for , it follows that .therefore , , which was what we had to prove .\(iv ) in the proof of ( i ) we used . without using this assumption , the corresponding argument yields .we also have and . since we can retrieve from and its index in , the same argument as above shows , and still following the argument above , . since we have .this proves the statement .the useful ( [ eq.pcondx ] ) states that there is a constant , such that for every there are at most that constant many sufficient statistics for , and there is a constant length program ( possibly depending on ) , that generates all of them from .in fact , there is a slightly stronger statement from which this follows : there is a universal constant , such that for every , the number of such that and , is bounded above by .let the prefix turing machine compute . since and , the combination ( with self - delimiting ) is a shortest prefix program for . from ,exercise 3.3.7 item ( b ) on p. 
205 , it follows that the number of shortest prefix programs is upper bounded by a universal constant .previous work studied sufficiency for finite set models , and computable probability mass functions models , .the most general models that are still meaningful are total recursive functions as studied here .we show that there are corresponding , almost equivalent , sufficient statistics in all model classes .[ lem.explimpl ] \(i ) if is a sufficient statistic of ( finite set type ) , then there is a corresponding sufficient statistic of ( probability mass function type ) such that , , and .\(ii ) if is a sufficient statistic of of the computable total probability density function type , then there is a corresponding sufficient statistic of of the total recursive function type such that , , and .\(i ) by assumption , is a finite set such that and .define the probability distribution for and otherwise . since is finite , is computable . since , and , we have .since is a computable probability mass function we have , by the standard shannon - fano code construction that assigns a code word of length to .since by ( [ eq.soi ] ) we have it follows that . hence , .therefore , by ( [ eq.soi ] ) , and , by rewriting in the other way according to ( [ eq.soi ] ) , .\(ii ) by assumption , is a computable probability density function with and .the witness of this equality is a shortest program for and a code word for according to the standard shannon - fano code , , with .given , we can reconstruct from by a fixed standard algorithm .define the recursive function from such that .in fact , from this only requires a constant length program , so that is a program that computes in the sense that for all .similarly , can be retrieved from .hence , and .that is , is a sufficient statistic for . also , is a total recursive function .since we have , and .this shows that , and since can by definition be reconstructed from and a program of length , it follows that equality must hold .consequently , , and hence , by ( [ eq.soi ] ) , and .we have now shown that a sufficient statistic in a less general model class corresponds directly to a sufficient statistic in the next more general model class .we now show that , with a negligible error term , a sufficient statistic in the most general model class of total recursive functions has a directly corresponding sufficient statistic in the least general finite set model class .that is , up to negligible error terms , a sufficient statistic in any of the model classes has a direct representative in any of the other model classes .let be a string of length , and be a total recursive function sufficient statistic for .then , there is a finite set such that . by assumptionthere is an -bit program such that . for each ,let .define .we can compute by computation of , on all arguments of at most bits , since by assumption is total .this shows . since , we have .moreover , . since , , where we use the sufficiency of to obtain the last inequalitywe investigate the recursion properties of the sophistication function . 
in , gcs gave an important and deep result ( [ eq.gacs ] ) below , that quantifies the uncomputability of ( the bare uncomputability can be established in a much simpler fashion ) .for every length there is an of length such that : note that the right - hand side holds for every by the simple argument that and hence .but there are s such that the length of the shortest program to compute almost reaches this upper bound , even if the full information about is provided .it is natural to suppose that the sophistication function is not recursive either .the following lemma s suggest that the complexity function is more uncomputable than the sophistication .the function is not recursive .given , let be the least such that . by theorem [ h - sophi ] we know that there exist such that for , hence exists. assume by way of contradiction that the sophistication function is computable .then , we can find , given , by simply computing the successive values of the function .but then , while by lemma [ lem.exists ] and by assumption , which is impossible .the _ halting sequence _ is the infinite binary characteristic sequence of the halting problem , defined by if the reference universal prefix turing machine halts on the input : , and 0 otherwise .[ lem.compks ] let be a total recursive function sufficient statistic of .\(i ) we can compute from and , up to fixed constant precision , which implies that .\(ii ) if also , then we can compute from , up to fixed constant precision , which implies that .\(i ) since is total , we can run on all strings in lexicographical length - increasing order .since is total we will find a shortest string such that .set . since , and by assumption , ,we now can compute .\(ii ) follows from item ( i ) .given an oracle that on query answers with a sufficient statistic of and a as required below .then , we can compute the kolmogorov complexity function and the halting sequence . by lemma [ lem.compks ]we can compute the function , up to fixed constant precision , given the oracle ( without the value ) in the statement of the theorem .let in the statement of the theorem be the difference between the computed value and the actual value of . in ,exercise 2.2.7 on p. 175 , it is shown that if we can solve the halting problem for plain turing machines , then we can compute the ( plain ) kolmogorov complexity , and _vice versa_. the same holds for the halting problem for prefix turing machines and the prefix turing complexity .this proves the theorem .[ lem.chisoph ] there is a constant , such that for every there is a program ( possibly depending on ) of at most bits that computes and the witness program from .that is , . with some abuse of notationwe can express this as .by definition of sufficient statistic , we have . by ( [ eq.pcondx ] ) the number of sufficient statistics for is bounded by an independent constant , and we can generate all of them from by a length program ( possibly depending on ) . then , we can simply determine the least length of a sufficient statistic , which is .there is a subtlety here : lemma [ lem.chisoph ] is nonuniform .while for every we only require a fixed number of bits to compute the sophistication from , the result is nonuniform in the sense that these bits may depend on . given a program , how do we verify if it is the correct one ? 
trying all programs of length up to a known upper bound, we do not know whether they halt or, if they halt, whether they halt with the correct answer. the question that arises is whether there is a single program that computes the sophistication and its witness program for all . in , a much more difficult question is answered in a strong negative sense: there is no algorithm that for every , given , approximates the sophistication of to within precision . for every of length , and the program that witnesses the sophistication of , we have . for every length , there are strings of length , such that . let witness the : that is , , and . using the conditional version of ( [ eq.soi ] ), see , we find that in lemma [ lem.compks ], item (i), we show , hence also . by lemma [ lem.chisoph ], , hence also . substitution of the constant terms in the displayed equation shows . this shows that the shortest program to retrieve from is essentially the same program as to retrieve from or from . using , this shows that since is the witness program for , we have . a function from the rational numbers to the real numbers is _ upper semicomputable _ if there is a recursive function such that and . here we interpret the total recursive function as a function from pairs of natural numbers to the rationals : . if is upper semicomputable, then is _ lower semicomputable _ . if is both upper and lower semicomputable, then it is _ computable _ . recursive functions are computable functions over the natural numbers. since is upper semicomputable, , and from we can compute , we have the following:

(i) the function is not computable to any significant precision.

(ii) given an initial segment of length of the halting sequence, we can compute from . that is , .

(i) the fact that is not computable to any significant precision is shown in .

(ii) we can run for all (program, argument) pairs such that . (not , since we are dealing with self-delimiting programs.) if we know the initial segment of , as in the statement of the theorem, then we know which (program, argument) pairs halt, and we can simply compute the minimal value of for these pairs. `` sophistication '' is the algorithmic version of `` minimal sufficient statistic '' for data in the model class of total recursive functions. however, the full stochastic properties of the data can only be understood by considering the kolmogorov structure function (mentioned earlier) that gives the length of the shortest two-part code of as a function of the maximal complexity of the total function supplying the model part of the code. this function has value about for close to 0, is nonincreasing, and drops to the line at complexity , after which it remains constant, for , everything up to a logarithmic additive term. a comprehensive analysis, including many more algorithmic properties than are analyzed here, has been given in for the model class of finite sets containing , but it is shown there that all results extend to the model class of computable probability distributions and the model class of total recursive functions, up to an additive logarithmic term. the author thanks luis antunes, lance fortnow, kolya vereshchagin, and the referees for their comments.

a.r. barron, j. rissanen, and b. yu, the minimum description length principle in coding and modeling, _ ieee trans. inform .
theory _ , it-44:6(1998 ) , 27432760 .cover , kolmogorov complexity , data compression , and inference , pp .2333 in : _ the impact of processing techniques on communications _ , j.k .skwirzynski , ed . ,martinus nijhoff publishers , 1985 .t.m . cover and j.a .thomas , _ elements of information theory _ , wiley , new york , 1991 .r. a. fisher , on the mathematical foundations of theoretical statistics , _ philosophical transactions of the royal society of london , ser .a _ , 222(1922 ) , 309368 .p. gcs , on the symmetry of algorithmic information , _soviet math ._ , 15 ( 1974 ) 14771480 .correction : ibid ., 15 ( 1974 ) 1480 .p. gcs , j. tromp , and p. vitnyi , algorithmic statistics , _ ieee trans .inform . theory _, 47:6(2001 ) , 24432463 .q. gao , m. li and p.m.b .vitnyi , applying mdl to learn best model granularity , _ artificial intelligence _ , 121(2000 ) , 129 .m. gell - mann , _ the quark and the jaguar _ , w. h. freeman and company , new york , 1994 .grnwald and p.m.b .vitnyi , shannon information and kolmogorov complexity , manuscript , cwi , december 2003 .kolmogorov , three approaches to the quantitative definition of information , _ problems inform . transmission _ 1:1 ( 1965 )complexity of algorithms and objective definition of randomness .a talk at moscow math .meeting 4/16/1974 .abstract in _ uspekhi mat .nauk _ 29:4(1974),155 ( russian ) ; english translation in .kolmogorov , on logical foundations of probability theory , pp .15 in : _ probability theory and mathematical statistics _ ,notes math .1021 , k. it and yu.v .prokhorov , eds ., springer - verlag , heidelberg , 1983 .kolmogorov and v.a .uspensky , algorithms and randomness , _ siam theory probab. appl . _ , 32:3(1988 ) , 389412 .m. koppel , complexity , depth , and sophistication , _ complex systems _ , 1(1987 ) , 10871091 m. koppel , structure , _ the universal turing machine : a half - century survey _ , r. herken ( ed . ) , oxford univ . press , 1988 , pp .435452 .m. li and p. vitanyi , _ an introduction to kolmogorov complexity and its applications _ , springer - verlag , new york , 1997 ( 2nd edition ) .the mathematical theory of communication ., 27:379423 , 623656 , 1948 .coding theorems for a discrete source with a fidelity criterion . in _ irenational convention record , part 4 _ , pages 142163 , 1959 .shen , the concept of -stochasticity in the kolmogorov sense , and its properties , _soviet math ._ , 28:1(1983 ) , 295299 .shen , discussion on kolmogorov complexity and statistical analysis , _ the computer journal _ , 42:4(1999 ) , 340342 .vereshchagin and p.m.b .vitnyi , kolmogorov s structure functions and model selection , _ ieee trans ._ , to appear .vereshchagin and p.m.b .vitnyi , rate distortion theory for individual data , draft , cwi , 2004 .vitnyi and m. li , minimum description length induction , bayesianism , and kolmogorov complexity , _ ieee trans .inform . theory _ ,it-46:2(2000 ) , 446464 . v.v .vyugin , on the defect of randomness of a finite object with respect to measures with given complexity bounds , _ siam theory probab . appl ._ , 32:3(1987 ) , 508512 . v.v .vyugin , algorithmic complexity and stochastic properties of finite binary sequences , _ the computer journal _ , 42:4(1999 ) , 294317 .paul m.b . 
vitányi is a fellow of the center for mathematics and computer science (cwi) in amsterdam and is professor of computer science at the university of amsterdam. he serves on the editorial boards of distributed computing (until 2003), information processing letters, theory of computing systems, parallel processing letters, international journal of foundations of computer science, journal of computer and systems sciences (guest editor), and elsewhere. he has worked on cellular automata, computational complexity, distributed and parallel computing, machine learning and prediction, physics of computation, kolmogorov complexity, and quantum computing. together with ming li, he pioneered applications of kolmogorov complexity and co-authored `` an introduction to kolmogorov complexity and its applications, '' springer-verlag, new york, 1993 (2nd edition 1997), parts of which have been translated into chinese, russian and japanese.
the information in an individual finite object (like a binary string) is commonly measured by its kolmogorov complexity. one can divide that information into two parts: the information accounting for the useful regularity present in the object and the information accounting for the remaining accidental information. there can be several ways (model classes) in which the regularity is expressed. kolmogorov has proposed the model class of finite sets, generalized later to computable probability mass functions. the resulting theory, known as algorithmic statistics, analyzes the algorithmic sufficient statistic when the statistic is restricted to the given model class. however, the most general way to proceed is perhaps to express the useful information as a recursive function. the resulting measure has been called the `` sophistication '' of the object. we develop the theory of recursive function statistics: the maximum and minimum value, the existence of absolutely nonstochastic objects (that have maximal sophistication: all the information in them is meaningful and there is no residual randomness), its relation to the more restricted model classes of finite sets and computable probability distributions, in particular with respect to the algorithmic (kolmogorov) minimal sufficient statistic, the relation to the halting problem, and further algorithmic properties. _ index terms _ : constrained best-fit model selection, computability, lossy compression, minimal sufficient statistic, non-probabilistic statistics, kolmogorov complexity, kolmogorov structure function, sufficient statistic, sophistication
extensible markup language (xml) has achieved great success in the internet era. xml documents are similar to html documents, but do not restrict users to a single vocabulary, which offers a great deal of flexibility to represent information. to define the structure of documents within a certain vocabulary, schema languages such as _ document type definition _ (dtd) or _ xml schema _ are used. xml has been adopted as the most common form of encoding information exchanged by web services. this success is due to two reasons. the first one is that the xml specification is accessible to everyone and it is reasonably simple to read and understand. the second one is that several tools for processing xml are readily available. we add to these reasons that, as xml is _ vocabulary-agnostic _, it can be used to represent data in basically any domain. for example, we can find the _ universal business language _ (ubl) in the business domain, or the standards defined by the _ open geospatial consortium _ (ogc) in the geospatial domain. ubl defines a standard way to represent business documents such as electronic invoices or electronic purchase orders. ogc standards define _ web service interfaces _ and _ data encodings _ to exchange geospatial information. all of these standards (ubl and ogc's) have two things in common. the first one is that they use xml schema to define the structure of xml documents. the second one is that the size and complexity of the standards are very high, making their manipulation or implementation very difficult in certain scenarios. the use of such large schemas can be a problem when xml processing code based on the schemas is produced for a resource-constrained device, such as a mobile phone. this code can be produced using a manual approach, which will require the low-level manipulation of xml data, often producing code that is hard to modify and maintain. another option is to use an xml data binding code generator that maps xml data into application-specific concepts. this way developers can focus on the semantics of the data they are manipulating. the problem with generators is that they usually make a straightforward mapping of schema components to programming language constructs, which may result in binary code with a very large size that cannot be easily accommodated in a mobile device. although schemas in a certain domain can be very large, this does not imply that all of the information contained in them is necessary for all of the applications in the domain. for example, a study of the use of xml in a group of 56 servers implementing the _ ogc's sensor observation service (sos) specification _ revealed that only 29.2% of the sos schemas were used in a large collection of xml documents gathered from those servers. based on this information we proposed in an algorithm to simplify large xml schema sets in an application-specific manner by using a set of xml documents conforming to these schemas. the algorithm allowed a 90% reduction of the size of the schemas for a real case study. this reduction translated into a reduction in binary code size ranging from 37% to 84% when using code generators such as jaxb, xmlbeans and xbinder. in this paper we extend the schema simplification algorithm presented in to a more complete _ instance-based xml data binding _ approach. this approach allows the production of very compact application-specific xml processing code for mobile devices.
in order to make the code as small as possiblethe approach will use , similarly to , a set of xml documents conforming to the application schemas . from these documents , in addition to extract the subset of the schemas that is needed , we extract other relevant information about the use of schemas that can be utilised to reduce the size of the final code . a prototype implementation targeted to android and the java programming language has been developed .the remainder of this paper is structured as follows .section 2 presents an introduction to xml schema and xml data binding . in section 3 ,related work is presented .the _ instance - based data binding approach _ is presented in section 4 .section 5 overviews some implementation details and limitations found during the development of the prototype .section 6 presents experiments to measure size an execution times of the code generated by the tool in a real scenario .last , conclusions and future work are presented .in this section we present a brief introduction to the topics of xml schema and xml data binding .xml schema files are used to assess the validity of well - formed element and attribute information items contained in xml instance files .the term xml data binding refers to the idea of taking the information in an xml document and convert it to instances of application objects .an xml schema document contains components in the form of complex and simple type definitions , element declarations , attribute declarations , group definitions , and attribute group definitions .this language allows users to define their own types , in addition to a set of predefined types defined by the language .elements are used to define the content of types and when global , to define which of them are valid as top - level element of an xml document .xml schema provides a derivation mechanism to express subtyping relationships .this mechanism allows types to be defined as subtypes of existing types , either by extending or restricting the content model of base types .apart from type derivation , a second subtyping mechanism is provided through substitution groups .this feature allows global elements to be substituted by other elements in instance files .a global element e , referred to as _ head element _ , can be substituted by any other global element that is defined to belong to the e s substitution group . 
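a small python sketch (python is used here for brevity even though the generator discussed later targets java/android; this is not dbmg code, and the element and type names are invented) of what this dynamic typing means for a binding layer: the declared element is generic, but the instance document names the real type through the xsi:type attribute, so the parser has to dispatch on that value, and the same need arises for substitution groups. the instance-based approach presented later bounds this dispatch table to the substitutions actually observed in the sample documents.

```python
# minimal illustration of xsi:type dispatch in a hand-written binding layer.
import xml.etree.ElementTree as ET

XSI_TYPE = "{http://www.w3.org/2001/XMLSchema-instance}type"

doc = """
<observation xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <result xsi:type="MeasureType" uom="ug/m3">42.0</result>
</observation>
"""

class MeasureResult:            # one binding class per dynamic type actually seen
    def __init__(self, elem):
        self.value = float(elem.text)
        self.uom = elem.get("uom")

DISPATCH = {"MeasureType": MeasureResult}   # bounded by what the instances use

root = ET.fromstring(doc)
result = root.find("result")
binding = DISPATCH[result.get(XSI_TYPE)](result)
print(binding.value, binding.uom)
```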
with _ xml data binding _, an abstraction layer is added over the raw xml processing code , where xml information is mapped to data structures in an application data model .xml data binding code is often produced by using code generators that use a description of the structure of xml documents using some schema language .the use of generators potentially gives benefits such as increased productivity , consistent quality throughout all the generated code , higher levels of abstraction as we usually work with an abstract model of the system ; and the potential to support different programming languages , frameworks and platforms .although most of the generators available nowadays are targeted to desktop or server applications , several tools have been develop for mobile devices such as xbinder and codesysnthesis xsd / e , or for building complete web services communication end - points for resource constrained environments , such as gsoap .all of the tools mentioned before map xml schema structures to programming languages construct in a straightforward way , which is not adequate when large schemas sets are used .problems related with having large and complex schemas have been presented in several articles .for example , deal with problems of large schemas in schema matching in the business domain . in the context of schema and ontology mapping , states that current match systems still struggle to deal with large - scale match tasks to achieve both good effectiveness and good efficiency . , the work extended here , expose the problems related to using xml data binding tools to generate xml processing code for mobile geospatial applications .last , present an algorithm to extract fragments of large conceptual schemas arguing that the largeness of these schemas makes difficult the process of getting the knowledge of interest to users .when considering xml processing in the context of mobile devices , literature is focused in two main competing requirements : _ compactness _ ( of information ) and _ processing efficiency _ . to achieve compactness compression techniquesare used to reduce the size of xml - encoded information . about processing efficiency, not much work has been done in the mobile devices field .a prominent exception in this topic is the work presented in , and .these articles are all related to the implementation of a middleware platform for mobile devices : the _ fuego mobility middleware _ , where xml processing has a large impact . the proposed _xml stack _ provides a general - purpose xml processing api called _ xas _ , an xml binary format called _ xebu _ , and others apis such as _ trees - with - references _ ( reftrees ) and _ random access xml store _( raxs) . regarding the use of instance files to drive the manipulation of schemas, presents a review of different methods that use instance files for ontology matching . in the field of schema inference ,instance files are used as well to generated adequate schema files that can be used to assess their validity ( e.g. 
) ._ instance - based xml data binding _ , is a two - step process .the first step , _ instance - based schema simplification _ , extracts the information about how schema components are used by a specific application , based on the assumption that a representative subset of xml documents that must be manipulated by the application is available .the second step , _ code generation _, consists of using all of the information extracted in the previous step to generate xml processing code as optimised as possible for a target platform .the whole process is shown in figure [ fig : flow - xmldatabinding ] , the inputs to the first step are a set of schemas and a set of xml documents conforming to them .the outputs will be the subset of the schemas used by the xml documents and other information about the use of certain features of the schemas that can be used to optimise the code in the following step .the outputs of the first step are the inputs of the code generation step .the two steps of the process are detailed in the following subsections .+ the _ instance - based schema simplification _step extracts the subset of the schemas used on a set of xml documents .the algorithm used to perform this simplification was first presented in and has been extended here to extracts other information that can be used to produce more compact xml processing code .the idea behind this algorithm , is depicted graphically in figure [ fig : simplificationalg ] .the figure shows to the left the graph of relationships between schemas components .the different planes represent different namespaces .links between schema components represent dependencies between them . to the rightwe have the tree of information items ( xml nodes ) contained in xml documents . for the sake of simplicitywe show in the figure only the tree of nodes corresponding to a single document .an edge between an xml node and a schema component represents that the component describes the structure of the node . to simplify the figure we have shown only a few edges ,although an edge for every xml node must exist .starting from a set of xml documents and the schema files defining their structure , it is possible to calculate which schema components are used and which are not . in doing so , the following information is also recorded : * _ types that are instanced in xml documents _ : for each xml node exists a schema type describing its structure .while xml documents are processed the type of each xml node is recorded .this way we can know which types are instanced and which are not . * _ types and elements substitutions _ : the subtyping mechanisms mentioned in section 2.1 allow the _ real _ or _ dynamic type _ of an element to be different from its _ declared type_. elements declared as having type a , may have any type derived from a in an xml document . in this casethe real type must be specified with the attribute _ xsi : type_. something similar happens with substitution groups , although in this case the attribute _ xsi : type _ is not necessary .the information about xml nodes whose dynamic type is different from its declared type is recorded . * _ wildcards substitutions _ : the elements used to substitute wildcards are recorded . *_ elements occurrence constraints information _ : for all of the elements it is checked that if they allow multiple occurrences there is at least one document where several occurrences of the element are present . 
*_ elements with a single child _ : all of the elements that contain a single child are also recorded .+ a more detailed view of the code generation process is shown in figure [ fig : flow_gen ] .the outputs of the schema simplification step are used as inputs to the _ schema processor _ , the component of the generator in charge of creating the data model that will be used later by the _ template engine_. the _ template engine _ combines pre - existing _ class templates _ with the data model to generate the final source code .the use of a template engine allows the generation of code for other platforms and programming languages by just defining new class templates .+ a summary of the features of the code generation process that contribute to the generation of optimised code is listed next : * _ use of information extracted from xml documents _ : the use of information about schema use allows to apply the following optimisations : * * _ remove unused schema components _ : the schema components that are not used are not considered for code generation . by removing the unused components we can substantially reduce the size of the generated code .the amount of the reduction will depend on how specific applications make use of the original schemas .* * _ efficient handling of subtyping and wildcards _ : the number of possible substitutions of a type by its subtypes , and a head element of a substitution group by the members of the group can be bounded with the information gathered from the instances files . in the general case , where no instance - based information is available , generic code to face any possible type or element substitution must be written .limiting the number of possible substitutions to only a few allows the production of simpler and faster code .the same reasoning is applied to wildcards .* * _ inheritance flattening _ : by flattening subtyping hierarchies for a given type , i.e. , including explicitly in its type definition all of the fields inherited from base types and eliminating the subtype relationship with its parents , we can reduce the number of classes in the generated code .the application of this technique will not necessarily result in smaller generated code , as the fields defined in base types must be replicated in all of their child types , but it will have a positive impact in the work of the class loader because a lower number of classes have to be loaded while the application is executed .let us consider the case of the geospatial schemas introduced in section 1 .these schemas typically present deep subtyping hierarchies with six or more levels , as a consequence when an xml node of a type in the lowest levels of the hierarchy must be processed , all of its parent types must be loaded first .the technique of inheritance flattening has been widely explored and used in different computer science and engineering fields as is proven by the abundant literature found in the topic . * * _ adjust occurrence constraints _ : if an element is declared to have multiple occurrences it must be mapped to a data structure in the target programming language that allows the storage of the multiple instances of the elements , e.g. an array or a linked list . in practiceif the element has at most one occurrence in the xml documents that must be processed by the application it can be mapped to a single object instance .using this optimisation the final code will make a better use of memory because instead of creating a collection ( array , linked list , etc . 
)that will only contain a single object , it creates a single object instance . *_ collapse elements containing single child elements _: information items that will always contain single elements can be replaced directly by its content . by applying this optimisation we can reduce the number of classes in the generated code , which will have a positive impact in the size of the final code , the amount of work that has to be done by the class loader , and the use of memory during execution .this optimization is used by mainstream xml data binding tools such as jibx and the xml schema definition tool . * _ disabling parsing / serialization operations as needed _ : some code generators always includes code for parsing and serialization even when only one of these functions is needed . for example , in the context of geospatial web services , most of the time spent in xml processing by client - side applications is dedicated to parsing , as messages received from the servers are potentially large . on the other hand , most of these services allows request to be sent to the server encoded in an http get request , therefore xml serialisation is not needed at all . * _ ignoring sections of xml documents _ : frequently , we are not interested in all of the information contained in xml files , ignoring the unneeded portions of the file will improve the speed of the parsing process and it may have a significant impact in the amount of memory used by the application .in addition , the following features not related directly with code optimisation are also supported : * _ source code based on simple code patterns _ : the generated source code is straightforward to understand and modify in case it is necessary . * _ tolerate common validation errors : _ occasionally , xml documents that are not valid against their respective schemas must be processed by our applications . in many cases ,the validation errors can be ignored following simple coding rules .a detailed explanation of each of the features presented in this section can be found in .as mentioned before , the approach presented in this paper is based on the assumption that a representative set of xml documents exists . by _ representativewe mean that these documents contain instances of all of the possible xml schema elements and types that will be processed by the application in the future . nevertheless , this subset might not always be available . in this case, we can still take advantage of the approach by building _synthetic _ xml documents containing relevant information .whether xml processing code is produced manually or automatically developers typically have some knowledge of the structure of the documents that must be processed by the applications .therefore , we can use this knowledge to build sample xml documents that can be used as input to the algorithm . in caseit were necessary , the final code can be manually modified later , or the sample files changed and used to regenerate the code . 
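whether the sample documents are real or synthetic, the usage information listed above can be gathered with a single pass over the corpus. the python sketch below is a simplified stand-in for that step (it is not the dbmg implementation and identifies components by qualified tag name only, without resolving the schemas): it records which elements occur at all, which dynamic types appear via xsi:type, which elements are ever repeated under the same parent, and which elements always carry exactly one child.

```python
# simplified collection of schema-usage statistics from a set of instance files.
import xml.etree.ElementTree as ET
from collections import defaultdict

XSI_TYPE = "{http://www.w3.org/2001/XMLSchema-instance}type"

def scan(paths):
    used = set()                          # elements that occur at all
    substitutions = defaultdict(set)      # element tag -> dynamic types seen via xsi:type
    repeated = set()                      # elements seen more than once under one parent
    child_counts = defaultdict(set)       # element tag -> set of child counts observed
    for path in paths:
        for elem in ET.parse(path).iter():
            used.add(elem.tag)
            if elem.get(XSI_TYPE):
                substitutions[elem.tag].add(elem.get(XSI_TYPE))
            per_child = defaultdict(int)
            for child in elem:
                per_child[child.tag] += 1
            repeated.update(tag for tag, n in per_child.items() if n > 1)
            child_counts[elem.tag].add(len(elem))
    always_single = {tag for tag, counts in child_counts.items() if counts == {1}}
    return used, substitutions, repeated, always_single

# elements never added to `used` can be dropped before generation; elements not
# in `repeated` can be mapped to a single field instead of a collection; tags in
# `always_single` are candidates for collapsing into their only child.
```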
if we were using synthetic documents instead of actual documents some of the optimisations related to the information extracted from them should not be applied .the reason for this is that we do not have enough information about how the related schema features are used .for example , we can not apply optimisations such as the efficient handling of subtyping and wildcards , as we might not know all of the possible type substitutions .something similar happens with the adjustment of occurrence constraints .nevertheless , other optimisations such as inheritance flattening or removing unused schema components can be still safely applied ._ dbmobilegen _ ( dbmg for short ) is the current implementation of the _ instance - based xml data binding _approach .it includes components implementing both the simplification algorithm and code generation process .it is implemented in java and relies on existing libraries such as _ _ eclipse xsd _ _ for processing xml schemas , _ _ freemarker _ _ as template engine library , and as well as the generated code , _ _ kxml _ _ for low - level xml processing .this tool produces code targeted to android mobile devices and the java programming language .the current implementation has some limitations . because of the complexity of the xml schema language itself, support for certain features and operations have been only included if it is considered necessary for the case study or applications where the tool has been used .some of these limitations are listed next : * _ serialization is not supported yet _ : the role of parsing for our sample applications and case studies is far more important than serialization .this is mainly because we have preferred to use http get to issue server requests wherever possible . *_ dynamic typing using xsi : type not fully supported _ : the mechanism of dynamic type substitution by using the _ xsi : type _ attribute has not been fully implemented yet , as the xml documents processed in the applications developed so far do not use this feature .in this section we present two experiments .the first one tries to test how much the size of the generated code can be reduced by using dbmg .the second one measures the execution times of generated code in a mobile phone . 
in this experiment we borrow the test case presented in , which implements the communication layer for an sos client. sos is a standard web service interface defined to enhance interoperability between sensor data producers and consumers. the sos schemas are among the most complex geospatial web service schemas, as they comprise more than 80 files and contain more than 700 complex types and global elements. the client must process data retrieved from a server that contains information about air quality for the valencian community. this information is gathered by 57 control stations located in that area. the stations measure the level of different contaminants in the atmosphere. a set of 2492 xml documents was gathered from the server to be used as input, along with the sos schemas, to the instance-based data binding process. the source code generated by dbmg is compiled to the _ compressed jar _ format and compared with the final code generated by other generators: xbinder, jaxb and xmlbeans. the last two are not targeted to mobile devices but are used here as a reference to compare the size of similar code for other types of applications. table [ generatedcode2 ] shows the size of the code produced with the different generators from the full sos schemas (full) and from the subset of the schemas used in the input instance files (reduced). the reduced schemas are calculated by applying the schema simplification algorithm to the full sos schemas. the last row of the table (libs) includes the size of the supporting libraries needed to execute the generated code in each case.

           xbinder    jaxb     xmlbeans   dbmg
  full     3,626      754      2,822      88
  reduced  567        90       972        88
  libs     100        1,056    2,684      30

figure [ fig : codegen_full ] shows the total size of xml processing code when using the full and reduced schemas. in both cases, we can see the enormous difference that exists between the code generated by dbmg and the code generated by other tools. the size for dbmg is the same in both cases because it implicitly performs the simplification of the schemas before generating source code. it must be noted that serialisation is still not implemented in dbmg. we roughly estimate that including serialisation code will increase the final size by about 30%. in any case, the code generated by this tool is about 6 times smaller than the code generated by xbinder from the reduced schemas and about 30 times smaller than the code generated from the full schemas. one of the reasons for this difference in size is the lack of serialisation support in dbmg. another reason is that xbinder generates code to enforce all of the restrictions related to user-defined simple types. this is an advantage if we parse data obtained from a non-trusted source and the application requires the data to be carefully validated, but it is a disadvantage in the opposite case, as unneeded verification increases processor usage and memory footprint.
in the case of dbmg , as it aggressively tries to lower final code size , these simple type restrictions are ignored and these types do not even have a counterpart in the generated code .when compared to jaxb , using the reduced schemas , the main difference in size is in the supporting libraries , as the code generated by jaxb is very simple .still , the code generated by dbmg is slightly smaller because the step of removing elements with single child elements and inheritance flattening eliminates a large number of classes . in all of the cases ,xmlbeans has the largest size .this tool is mostly optimised for speed at the expense of generating a more sophisticated and complex code and the use of bigger supporting libraries . to test the performance of the generated codewe will parse a set of 38 capabilities files obtained from different sos servers .the code needed to parse these files is generated and deployed to a htc desire android smartphone with a 1 ghz qualcomm quaddragon cpu and 576 mb of ram .the 38 files have sizes ranging from less than 4 kb to 3.5 mb , with a mean size of 315 kb and a standard deviation of 26.7 kb . asthe size range is large and with the purpose of simplifying presentation we divide the files in two groups , those with a size below 100 kb , caps - s ( 30 files ) , and those with size equal to or higher than 100 kb , caps - l ( 8 files ) . to obtain accurate measures of the execution time for the code we selected the methodology presented in .this methodology provides a statistically rigorous approach to deal with all of the non - deterministic factors that may affect the measurement process ( multi - threading , background processes , etc . ) .as our goal is only to measure the execution times of xml processing code , we stored the files to be parsed locally to avoid interferences related to network delays . besides , to minimise the interference of data transfer delays from the storage medium all of the files below 500 kb were read into memory before being parsed .it was impossible to do the same for files with sizes above 500 kb because of the device memory restrictions .figures [ mobile_execution_small ] and [ fig : mobile_execution ] shows the execution times of code generated by dbmg .the figures also include the execution times needed by _ kxml _ , the underlying parser used by dbmg , to process the same group of files .the execution times for _ kxml _ were calculated by creating a simple test case where files are processed using this parser , but no action is taken when receiving the events generated by it .+ + when files below 100 kb are processed it can be observed that the overhead added by the generated code is not high ( figure [ mobile_execution_small ] ) .nevertheless , we can see in figure [ fig : mobile_execution ] that when file size is above 1 mb , the overhead starts to be important ( > 1s ) .this happens because the large amount of memory that is required to store the information that is being processed forces the execution of the garbage collector with a high frequency. we have to keep in mind that code produced manually can have similar problems if it were necessary to retain most of the information read from the xml files in memory . 
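the timing methodology referred to above can be approximated with a small harness. the python sketch below is a simplified, single-process version written for illustration (the cited methodology additionally repeats measurements across vm invocations, and the real benchmark drives the generated java code on the device); parse_file stands for whatever parsing routine is being measured.

```python
# simplified timing harness: warm-up runs first, then mean and a 95% confidence
# half-width under a normal approximation over the measured runs.
import time
import statistics

def benchmark(parse_file, path, warmup=10, runs=30):
    for _ in range(warmup):               # let caches, jit, etc. settle
        parse_file(path)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        parse_file(path)
        samples.append(time.perf_counter() - start)
    mean = statistics.mean(samples)
    half = 1.96 * statistics.stdev(samples) / (len(samples) ** 0.5)
    return mean, half
```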
the experiment described abovewas extended in to compare the code generated by dbmg with other data binding tools and to measure also the performance of this code when executed in a windows pc .the experiments showed that the execution times for the mobile devices were around 30 to 90 times slower than those for the personal computer .the experiments also showed that the code generated by dbmg was as fast as code generated by other data binding tools for the android platform .in this paper we have presented an approach to generate compact xml processing code based on large schemas for mobile devices .it utilises information about how xml documents make use of its associated schemas to reduce the size of the generated code as much as possible .the solution proposed here is based on the observation that applications that makes use of xml data based on large schemas do not use all of the information included in these schemas . a code generator implementing the approach that produces code targeted to android mobile devices and the java programming language has been developed .this tool has been tested in a real case study showing a large reduction in the size of the final xml processing code when compared with other similar tools generating code for mobile , desktop and server environments .nevertheless , this result must be looked at with caution as the magnitude of the reduction will depend directly from the use that specific applications make of their schemas .this work has been partially supported by the `` espaa virtual '' project ( ref .cenit 2008 - 1030 ) through the instituto geogrfico nacional ( ign ) ; and project geocloud , spanish ministry of science and innovation ipt-430000 - 2010 - 11 .d. beyer , c. lewerentz , and f. simon .impact of inheritance on metrics for size , coupling , and cohesion in object - oriented systems . in _ proceedings of the 10th international workshop on new approaches in software measurement _ ,iwsm 00 , pages 117 , london , uk , 2000 .springer - verlag .c. bogdan chirila , m. ruzsilla , p. crescenzo , d. pescaru , and e. tundrea . towards a reengineering tool for java based on reverse inheritance .in _ in proceedings of the 3rd romanian - hungarian joint symposium on applied computational intelligence ( saci 2006 ) _ , pages 9637154 , 2006 .bungartz , w. eckhardt , m. mehl , and t. weinzierl . .in _ proceedings of the 8th international conference on computational science , part iii _ , iccs 08 , pages 213222 , berlin , heidelberg , 2008 .springer - verlag .a. cicchetti , d. d. ruscio , r. eramo , and a. pierantonio .automating co - evolution in model - driven engineering . in _ proceedings of the 2008 12th international ieee enterprise distributed object computing conference _ , pages 222231 , washington , dc , usa , 2008 .ieee computer society .a. georges , d. buytaert , and l. eeckhout .statistically rigorous java performance evaluation . in _ proceedings of the 22nd annual acm sigplan conference on object - oriented programming systems and applications _ , oopsla 07 , pages 5776 , new york , ny , usa , 2007 .acm .s. kbisch , d. peintner , j. heuer , and h. kosch . .in _ proceedings of the 24th international conference on advanced information networking and applications workshops , waina 10 _ , volume 0 , pages 508513 , los alamitos , ca , usa , 2010 .ieee computer society .j. kangasharju , s. tarkoma , and t. lindholm .xebu : a binary format with schema - based optimizations for xml data . in a.ngu , m. kitsuregawa , e. neuhold , j .- y .chung , and q. 
sheng , editors , _ web information systems engineering - wise 2005 _ , volume 3806 of _ lecture notes in computer science _ , pages 528535 .springer berlin / heidelberg , 2005 .e. rahm . towards large - scale schema and ontology matching . in z.bellahsene , a. bonifati , and e. rahm , editors , _ schema matching and mapping _ , data - centric systems and applications , pages 327 .springer berlin heidelberg , 2011 .a. tamayo , c. granell , and j. huerta . .in _ proceedings of the 2nd international conference and exhibition on computing for geospatial research and application ( com.geo 2011 ) _ , pages 17:117:9 , new york , ny , usa , 2011 .a. tamayo , c. granell , and j. huerta . .in _ proceedings of the 2nd international conference and exhibition on computing for geospatial research and application ( com.geo 2011 ) _ , pages 16:116:9 , new york , ny , usa , 2011 .a. tamayo , p. viciano , c. granell , and j. huerta . . in s. geertman , w. reinhardt , and f. toppen , editors ,_ advancing geoinformation science for a changing world _ , volume 1 of _ lecture notes in geoinformation and cartography _ , pages 185209 .springer berlin heidelberg , 2011 .r. a. van engelen and k. a. gallivan . .in _ proceedings of the 2nd ieee / acm international symposium on cluster computing and the grid , ccgrid 02 _ , pages 128 , washington , dc , usa , 2002 .ieee computer society .a. villegas and a. oliv . a method for filtering large conceptual schemas . in _ proceedings of the 29th international conference on conceptual modeling _ ,er10 , pages 247260 , berlin , heidelberg , 2010 .springer - verlag .
xml and xml schema are widely used in different domains for the definition of standards that enhance the interoperability between parties exchanging information through the internet . the size and complexity of some standards , and of their associated schemas , have been growing over time as new use - case scenarios and data models are added to them . the common approach to dealing with the complexity of producing xml processing code based on these schemas is the use of xml data binding generators . unfortunately , in the presence of large schemas these tools do not always produce code that fits the limitations of resource - constrained devices , such as mobile phones . in this paper we present _ instance - based xml data binding _ , an approach to produce compact , application - specific xml processing code for mobile devices . the approach utilises information extracted from a set of xml documents about how the application makes use of its schemas .
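the instance - based analysis described above boils down to recording which schema components the application s documents actually exercise , so that a generator can skip the rest of a large schema . the following is a minimal sketch of that first analysis step only , not the dbmg tool itself ; the function name , directory layout and the use of python s standard xml.etree module are illustrative assumptions ( the actual generator targets java / android ) .

```python
# Minimal sketch: collect the set of element tags and attribute names actually
# used by a sample of XML instance documents, so that a binding generator could
# restrict code generation to that subset of a large schema. Illustrative only.
import xml.etree.ElementTree as ET
from pathlib import Path

def used_schema_components(sample_dir):
    """Return the sets of element tags and attribute names seen in the samples."""
    elements, attributes = set(), set()
    for path in Path(sample_dir).glob("*.xml"):
        for _, node in ET.iterparse(path, events=("start",)):
            elements.add(node.tag)            # qualified tag, e.g. '{ns}Point'
            attributes.update(node.attrib)    # attribute names used on this element
    return elements, attributes

if __name__ == "__main__":
    elems, attrs = used_schema_components("sample_documents")  # hypothetical folder
    print(f"{len(elems)} element types and {len(attrs)} attributes in use")
```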
arterial occlusion is one of the leading causes of death worldwide . the partial occlusion of an artery due to a stenotic obstruction not only restricts the regular blood flow but is also accompanied by hardening and thickening of the arterial wall . the main cause of the formation of stenosis is still unknown , but it is well established that fluid dynamical factors play an important role in its further development . therefore , during the past few decades several studies were conducted by young , and by young and tsai , to understand the effects of stenosis on blood flow through arteries . tu and deville investigated pulsatile flow of blood in stenosed arteries . misra and shit studied , in two different situations , blood flow through an arterial stenosis by treating blood as a non - newtonian ( herschel - bulkley ) fluid . it is generally well known that blood , being a suspension of red cells in plasma , behaves like a non - newtonian fluid at low shear rates . misra and chakravarty , and shit and roy , put forward mathematical analyses of the unsteady flow of blood through stenosed arteries in which blood was treated as a newtonian viscous incompressible fluid . the hemodynamics associated with a single stenotic lesion are significantly affected by the presence of a second lesion . in many situations , such as patient angiograms , there is evidence of multiple or overlapping stenoses . misra et al . conducted a theoretical study of the effects of multiple stenoses . an experimental study of blood flow through an arterial segment having multiple stenoses was made by talukder et al . the effects of an overlapping stenosis in an artery have been successfully studied , analytically and numerically , by chakravarty and mandal and by layek et al . , respectively . however , none of these studies considered a magnetic field or a porous medium . since blood is an electrically conducting fluid , its flow characteristics are influenced by an applied magnetic field . if a magnetic field is applied to a moving , electrically conducting fluid , it induces electric as well as magnetic fields . the interaction of these fields produces a body force per unit volume known as the lorentz force , which has a significant impact on the flow characteristics of blood . such an analysis may be useful for the reduction of blood flow during surgery and in magnetic resonance imaging ( mri ) . the effect of a magnetic field on blood flow has been analyzed theoretically and experimentally by many investigators under different situations . shit and his co - investigators explored a variety of flow behaviours of blood in arteries by treating newtonian and non - newtonian models in the presence of a uniform magnetic field . very recently , the study of blood flow through porous media has gained considerable attention from medical practitioners and clinicians because of the pronounced changes it induces in the flow characteristics . the capillary endothelium is , in turn , covered by a thin layer lining the alveoli , which has been treated as a porous medium . dash et al . considered the brinkman equation to model blood flow when there is an accumulation of fatty plaques in the lumen of an arterial segment and artery clogging takes place through blood clots . they considered the clogged region as a porous medium . bhargava et al .
studied the transport of pharmaceutical species in laminar , homogeneous , incompressible , magneto - hydrodynamic , pulsating flow through two - dimensional channel with porous walls containing non - darcian porous materials .misra et al . presented a mathematical model as well as numerical model for studying blood flow through a porous vessel under the action of magnetic field , in which the viscosity varies in the radial direction .hematocrit is the most important determinant of whole blood viscosity .therefore , blood viscosity and vascular resistance ( due to the presence of stenosis ) affect total peripheral resistance to blood flow , which is abnormally high in the primary stage of hypertension .again hematocrit is a blood test that measures the percentage of red blood cells present in the whole blood of the body .the percentage of red cells in adult human body is approximately 40 - 45 % .red cells may affect the viscosity of whole blood and thus the velocity distribution depends on the hematocrit .so blood can not be considered as homogeneous fluid . due to the high shear rate near the arterial wall ,the viscosity of blood is low and the concentration of cells is high in the central core region .therefore , blood may be treated as newtonian fluid with variable viscosity particularly in the case of large blood vessels .the present study is motivated towards a theoretical investigation of blood flow through a tapered and overlapping stenosed artery in the presence of magnetic field .the study pertains to a situation in which the variable viscosity of blood depending upon hematocrit is taken into consideration .the present model is designed in such a way that it could be applicable to both converging/ diverging artery depending on the choice of tapering angle .thus , the study will answers the question of mechanism of further deposition of plaque under various aspects .we consider the laminar , incompressible and newtonian flow of blood through axisymmetric two - dimensional tapered and overlapping stenosed artery .any material point in the fluid is representing by the cylindrical polar coordinate , where measures along the axis of the artery and that of and measure along the radial and circumferential directions respectively .the mathematical expression that corresponds to the geometry of the present problem is given by where the onset of the stenosis is located at a distance from the inlet , the length of the overlapping stenosis and representing the distance between two critical height of the stenoses .the expression for is responsible for the artery to be converging or diverging depending on tapering angle has the form + fig. 1 schematic diagram of the model geometry .+ + we assumed that blood is incompressible , suspension of erythrocytes in plasma and has uniform dense throughout but the viscosity varies in the radial direction . according to einstein s formula for the variable viscosity of blood taken to be ,\end{aligned}\ ] ] where is the coefficient of viscosity of plasma , is a constant ( whose value for blood is equal to 2.5 ) and stands for the hematocrit . 
the analysis will be carried out by using the following empirical formula for hematocrit given by in lih ,\ ] ] + in which represents the radius of a normal arterial segment , is the maximum hematocrit at the center of the artery and a parameter that determines the exact shape of the velocity profile for blood .the shape of the profile given by eq.(4 ) is valid only for very dilute suspensions of erythrocytes , which are considered to be of spherical shape . according to our considerations , the equation that governs the flow of blood under the action of an external magnetic field through porous mediummay be put as + where denotes the ( axial ) velocity component of blood , the blood pressure , the electrical conductivity , the permeability of the porous medium and is the applied magnetic field strength . to solve our problem , we use no - slip boundary condition at the arterial wall , that is , further we consider axi - symmetric boundary condition of axial velocity at the mid line of the artery as order to simplify our problem , let us introduce the following transformation with the use of the transform defined in ( 8) and the equations ( 3 ) and ( 4 ) , the governing equation ( 5 ) reduces to -m^2u-\frac{1}{k}(a_1-a_2\xi^m)u=\frac{r_0 ^ 2}{\mu_0}\frac{\partial p}{\partial z}\ ] ] with , , and .+ similarly the boundary conditions transformed into the equation ( 9 ) can be solved subjected to the boundary conditions ( 10 ) and ( 11 ) using frobenius method .for this , of course , has to be bounded at .then only admissible series solution of the equation ( 9 ) will exists and can put in the form where , and are arbitrary constants . + to find the arbitrary constant , we use the no - slip boundary condition ( 10 ) and obtained as substituting the value of from equation ( 12 ) into equation ( 9 ) and we get +\frac{r_{0}^{2}\frac{dp}{dz}}{4a_{1}\mu_{0}}\big[\sum_{i=0}^{\infty}(i+1)(i+2)(a_{1}-a_{2}\xi^{m})b_{i}\xi^{i}+\sum_{i=0}^{\infty}(i+2)(a_{1}-(m-1)a_{2}\xi^{m})b_{i}\xi^{i}\nonumber\\ -(m^{2}+\frac{a_{1}}{k})\sum_{i=0}^{\infty}b_{i}\xi^{i+2}+\sum_{i=0}^{\infty}\frac{a_{2}}{k}b_{i}\xi^{i+m+2}\big]=\frac{r_{0}^{2}\frac{dp}{dz}}{4a_{1}\mu_{0}}\end{aligned}\ ] ] equating the coefficients of and other part in equation ( 14 ) we have , and hence the constants and are obtained by equating the coefficients of and from both side of equations ( 15 ) and ( 16 ) respectively and can be put in the form with substituting the expression for in the equation ( 12 ) , we have }{\sum\limits_{i=0}^{\infty}{a_i(\frac{r}{r_0})^{i}}}\ ] ] the average velocity has the form where is the pressure gradient of the flow field in the normal artery in the absence of magnetic field . 
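the model equations above were lost in extraction ; as a reading aid , a plausible reconstruction of the main ingredients is sketched below in standard notation ( plasma viscosity , hematocrit shape parameters , applied field strength , permeability ) . this follows the usual form of such einstein - viscosity , mhd , porous - medium models and should be checked against the original paper rather than taken as the authors exact equations .

```latex
% Hedged reconstruction (assumed standard forms, not the authors' exact equations):
% Einstein-type variable viscosity with a hematocrit profile h(r), followed by the
% axial momentum balance including the Lorentz and Darcy (porous-medium) terms.
\begin{align}
  \mu(r)   &= \mu_0\left[1 + \beta\, h(r)\right], \qquad
  h(r)      = h_m\left[1 - \left(\tfrac{r}{R_0}\right)^{m}\right],\\[4pt]
  \frac{1}{r}\frac{\partial}{\partial r}\!\left( r\,\mu(r)\,\frac{\partial u}{\partial r}\right)
  \;-\; \sigma B_0^{2}\, u \;-\; \frac{\mu(r)}{k}\, u
  \;=\; \frac{\partial p}{\partial z}.
\end{align}
```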
the non - dimensional expression for is given by }{\sum\limits_{i=0}^{\infty}{a_i(\frac{r}{r_0})^{i}}}\ ] ] the volumetric flow rate across the arterial segment is given by substituting from eq.(20 ) into eq.(23 ) and then integrating with respect to , we obtain }{\sum\limits_{i=0}^{\infty}{a_i(\frac{r}{r_0})^{i}}}\ ] ] if be the volumetric flow rate in the normal portion of the artery , in the absence of magnetic field and porosity effect , then therefore , the non - dimensional volumetric flow rate has the following form }{\sum\limits_{i=0}^{\infty}{a_i(\frac{r}{r_0})^{i}}}\ ] ] if the flow is steady and no outward / inward flow takes place through the arterial segment , then the mass flux is constant and hence .the expression for pressure gradient from ( 26 ) can be put as }\ ] ] the wall shear stress on the endothelial surface is given by {r = r(z)}\ ] ] substituting from eq.(20 ) into eq.(28 ) , we obtain }{4a_1}\frac{\bigg [ { \sum\limits_{i=0}^{\infty } { b_i(\frac{r}{r_0})^{i+2}}\sum\limits_{i=0}^{\infty}{ia_i { ( \frac{r}{r_0})}^{i-1}}}- \sum \limits_{i=0}^{\infty } { ( i+2)b_i { ( \frac{r}{r_0})}^{i+1}}\sum \limits_{i=0}^{\infty}{a_i(\frac{r}{r_0})^{i}}\bigg]}{\sum\limits_{i=0}^{\infty}{a_i(\frac{r}{r_0})^{i}}}\ ] ] if be the shear stress at the normal portion of the arterial wall , in the absence of magnetic field , the non - dimensional form of the wall shear stress is given by \frac{\frac{dp}{dz}}{\sum\limits_{i=0}^{\infty}{a_i(\frac{r}{r_0})^{i } } } \bigg [ { \sum\limits_{i=0}^{\infty } { b_i(\frac{r}{r_0})^{i+2}}\sum\limits_{i=0}^{\infty}{ia_i { ( \frac{r}{r_0})}^{i-1}}}\nonumber\\ & & \hspace{1.0 cm}-\sum \limits_{i=0}^{\infty } { ( i+2)b_i { ( \frac{r}{r_0})}^{i+1}}\sum \limits_{i=0}^{\infty}{a_i(\frac{r}{r_0})^{i}}\bigg].\end{aligned}\ ] ]in the previous section we have obtained analytical expressions for different flow characteristics of blood through porous medium under the action of an external magnetic field . in this sectionwe are to discuss the flow characteristics graphically with the use of following valid numerical data which is applicable to blood . to continue this section we have use the following standard values of physical parameter : fig .2 indicates the different locations of the stenosis in the axial direction . in fig .2 , and correspond to the onset and outset of the stenosis and and represent the throat of the primary stenosis and secondary stenosis respectively .it is interesting to note from this figure that is the location where further deposition takes place and hence it is known as overlapping stenosis .locations of the points z=0.5 , 1.0 , 2.0 , 3.0 , 3.5 .+ the variation of axial velocity at different axial position along the radial direction are shown in fig .we observe from this figure that the velocity is maximum at the central line of the vessel for all position of . 
among all these positions ,the velocity is high at the throat of the primary stenosis and low at the onset of the overlapping stenosis .however , the central line velocity at the throat of secondary stenosis is about 30 % less than the primary stenosis .but the central line velocity at ( between the throat of two stenosis ) suddenly falls about 55 % than that of secondary one .this observation may leads to the flow circulation zone and may causes further deposition of plaque .4 depicts the variation of axial velocity for different length between the throat of two stenoses .it has been observed that the velocity at the throat of the stenoses significantly increases with the increase of .thus , the effect of the shape of stenosis has important role on the flow characteristics .similar is the observation from fig .5 that the axial velocity decreases as the tapering angle increases . fig .6 illustrates the variation of axial velocity at the throat of the secondary stenosis for different values of the hartmann number .we observe that the axial velocity significantly decreases at the central line of the artery with the increase of the magnetic field strength . while the velocity in the vicinity of the arterial wall increases with the increasing values of in order to maintain constant volumetric flow rate .it is also well known that when a magnetic field is applied in an electrically conducting fluid ( here for blood ) there arises lorentz force , which has a tendency to slow down the motion of the fluid .it has been observed from fig . 7that the axial velocity near the central line of the channel increases with the increase of the permeability parameter , while the trend is reversed in the vicinity of the arterial wall .this phenomena is noticed because of the permeability parameter is depend as the reciprocal of the permeability of the porous medium .8 gives the distribution of axial velocity for different values of the hematocrit .we note from fig . 8 that the axial velocity decreases at the core region of the artery with the increase of hematocrit level , whereas the opposite trend is observed in the peripheral region .this fact lies with in the hematocrit as the blood viscosity is high in the core region due to the aggregation of blood cells rather than low viscosity in the plasma near the arterial wall .9 - 12 illustrate the variation of pressure gradient along the length of the stenosis for different values of the physical parameters of interest . fig .9 shows that the axial pressure gradient increases with the increase of magnetic field strength . we have already observed that the lorentz force has reducing effect of blood velocity , so as more pressure is needed to pass the same amount of fluid under the action of an external magnetic field .however , the opposite trend is observed in the case of porous permeability parameter as shown in fig .it has been seen from fig . 
11 that the pressure gradient increases with the increase of the hematocrit .it indicates from this figure that when the aggregation of blood cells increase at the core region that is hematocrit is high , more pressure gradient is needed to pass the same amount of the fluid through the stenotic region .it is interesting to note from these three figures that the magnitude of the pressure gradient is high enough at the throat of the primary stenosis than that of the secondary one .but from fig .12 , we observed that at the throat of secondary stenosis , the pressure gradient is high in comparison to the throat of the primary stenosis .this happens due to the increasing of the tapering angle .therefore , in the case of diverging artery more pressure is needed as the flow advances in the downstream direction .figs . 13 and 14give the distribution of the wall shear stress for different values of the hematocrit and tapering angle .we observe from fig .13 that the wall shear stress increases as the hematocrit increases .one can note from this figure that the wall shear stress is low at the throat of the secondary stenosis as well as at the downstream of the artery .it is generally well known that at the low shear stress region mass transportation takes place and thereby occurs further deposition .however , it is interesting to note from fig .14 that the wall shear stress decreases significantly with the increasing values of the tapering angle .moreover , the magnitude of the wall shear stress is same at both the throat of the stenosis in the absence of tapering angle .therefore , we may conclude that there is a chance of further deposition at the downstream of the diverging artery .a theoretical study of blood flow through overlapping stenosis in the presence of magnetic field has been carried out . in this study the variable viscosity of blood depending on hematocrit and the has been treated as the porous medium .the problem is solved analytically by using frobenius method .the effects of various key parameters including the tapering angle , percentage of hematocrit , the magnetic field and permeability parameter are examined .the main findings of the present study may be listed as follows : + * the effect of primary stenosis on the secondary one is significant in case of diverging artery * the flow velocity at the central region decreases gradually with the increase of magnetic field strength . *the permeability parameter has an enhancing effect on the flow characteristics of blood .* at the core region , the axial velocity decreases with the increase of the percentage of hematocrit . *the hematocrit and the blood pressure has a linear relationship as reported in .* the lower range of hematocrit may leads to the further deposition of cholesterol at the endothelium of the vascular wall .finally we can conclude that further potential improvement of the model are anticipated .since the hematocrit positively affects blood pressure , further study should examine the other factors such as diet , tobacco , smoking , overweight etc . from a cardiovascular point of view .more over on the basis of the present results , it can be concluded that the flow of blood and pressure can be controlled by the application of an external magnetic field .99 young df , fluid mechanics of arterial stenosis , _ j. biomech .* 101 * ( 1979 ) , 157 - 175 .young df and tsi fy , flow characteristic in models of arterial stenosis - i , ssteady flow , _j. biomech . 
,_ * 6 * , ( 1973 ) 395 - 410 .misra jc , sinha a and shit gc , mathematical modelling of blood flow in a porous vessel having double stenoses in the presence of an external magnetic field , _ int .j. biomath ._ , * 4 ( 2 ) * ( 2011 ) , 207 - 225 .misra jc , shit gc , chandra s and kundu pk , hydromagnetic flow and heat transfer of a second grade viscoelastic fluid in a channel with oscillatory stretching walls : application to the dynamics of blood flow , _j. eng . math ._ , * 69 * ( 2011 ) , 91 - 100 .+ fig . 3 velocity distribution in the radial direction at different axial position ,when , , , and .+ + fig .4 variation of axial velocity along axial direction for different values of , when , , and .+ + fig .5 variation of axial velocity at for different values of , when , , , and .+ + fig .6 velocity distribution at for different values of , when , , and .+ + fig .7 variation of axial velocity for different values of the permeability parameter at , when , , and .+ + fig.8 variation of axial velocity in the radial direction for different values of , when , , and . + + fig .9 variation of pressure gradient ( ) for different values of when , , and .+ + fig .10 variation of pressure gradient ( ) with for different values of when , , , when .+ + fig .11 variation of pressure gradient ( ) for different values of when , , and .+ + fig .12 variation of pressure gradient ( ) with for different values of when , , , and .+ + fig . 13 distribution of wall shear stress along with for different values of when , , , and + + fig .14 distribution of wall shear stress along with for different values of when , , and .
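the quantities plotted in the figures above can also be illustrated numerically : for any axial velocity profile u(r) on a vessel of radius R , the volumetric flow rate is the integral of 2πru(r) over the cross - section and the wall shear stress is minus the viscosity times the velocity gradient at the wall . the sketch below evaluates both for a placeholder profile ; it is not the paper s frobenius series solution , and the parameter values are illustrative assumptions .

```python
# Numerical sketch: volumetric flow rate and wall shear stress for a given
# axial velocity profile u(r). The parabolic-like profile used here is only
# a placeholder for the series solution described in the text.
import numpy as np

R0, mu0, beta, hm, m = 1.0, 3.5e-3, 2.5, 0.45, 2   # assumed illustrative values

def viscosity(r):
    """Einstein-type variable viscosity with a simple hematocrit profile."""
    h = hm * (1.0 - (r / R0) ** m)
    return mu0 * (1.0 + beta * h)

def flow_rate_and_wall_stress(u, R=R0, n=2001):
    r = np.linspace(0.0, R, n)
    Q = 2.0 * np.pi * np.trapz(u(r) * r, r)         # Q = 2*pi * int_0^R u(r) r dr
    dr = 1e-6
    dudr_wall = (u(R) - u(R - dr)) / dr             # one-sided derivative at the wall
    tau_w = -viscosity(R) * dudr_wall
    return Q, tau_w

u_demo = lambda r: 1.0 - (r / R0) ** 2              # placeholder velocity profile
print(flow_rate_and_wall_stress(u_demo))
```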
this paper presents a theoretical study of blood flow through a tapered and overlapping stenosed artery under the action of an externally applied magnetic field . the fluid ( blood ) medium is assumed to be porous in nature . the variable viscosity of blood , depending on hematocrit ( the percentage volume of erythrocytes ) , is taken into account in order to improve resemblance to the real situation . the governing equation for a laminar , incompressible and newtonian fluid subject to the boundary conditions is solved by using the well known frobenius method . analytical expressions for the velocity component , volumetric flow rate , wall shear stress and pressure gradient are obtained . numerical values are extracted from these analytical expressions and are presented graphically . it is observed that hematocrit , the magnetic field and the shape of the artery have an important impact on the velocity profile , pressure gradient and wall shear stress . moreover , a significant effect of the primary stenosis on the secondary one has been observed . * keywords : * _ overlapping stenoses , tapered artery , mhd flow , porous vessel , hematocrit , frobenius method _
the question about how a piece of information ( a virus , a rumor , an opinion , etc ., ) is globally spread over a network , and which ingredients are necessary to achieve such a success , has motivated much research recently .the reason behind this interest is that identifying key aspects of spreading phenomena facilitates the prevention ( e.g. , minimizing the impact of a disease ) or the optimization ( e.g. the enhancement of viral marketing ) of diffusion processes that can reach system wide scales . in the context of political protest or social movements, information diffusion plays a key role to coordinate action and to keep adherents informed and motivated .understanding the dynamics of such diffusion is important to locate who has the capability to transform the emission of a single message into a global information cascade , affecting the whole system .these are the so - called `` privileged or influential spreaders '' . beyond purely sociological aspects, some valuable lessons might be extracted from the study of this problem .for instance , current viral marketing techniques ( which capitalizes on online social networks ) could be improved by encouraging customers to share product information with their acquaintances .since people tend to pay more attention to friends than to advertisers , targeting privileged spreaders at the right time may enhance the efficiency of a given campaign .the prominence ( importance , popularity , authority ) of a node has however many facets . from a static point of view, an authority may be characterized by the number of connections it holds , or the place it occupies in a network .this is the idea put forward in , where the authors seek the design of efficient algorithms to detect particular ( sub)graph structures : hierarchies and tree - like structures . turning to dynamics, a node may become popular because of the attention it receives in short intervals of time but that is a rather volatile way of being important , because it depends on activity patterns that change in the scale of hours or even minutes .a more lasting concept of influence comprises both a topological enduring ingredient and the dynamics it supports ; this is the case of centola s `` reinforcing signals '' or the -core , which we follow here . in this paper, we approach the problem of influential spreaders taking into consideration data from the spanish `` 15 m movement '' .this pacific civil movement is an example of the social mobilizations from the `` arab spring '' to the `` occupy wall - street '' movement that have characterized 2011 .although whether osns have been fundamental instruments for the successful organization and evolution of political movements is not firmly established , it is increasingly evident that at least they have been nurtured mainly in osns ( facebook , twitter , etc . 
) before reaching classic mass media .data from these grassroots movements but also from less conflictive phenomena in the web 2.0 provide a unique opportunity to observe system - wide information cascades .in particular , paying attention to the network structure allows for the characterization of which users have outstanding roles for the success of cascades of information .our results complement some previous findings regarding dynamical influence both at the theoretical and the empirical levels .besides , our analysis of activity cascades reveals distinctive traits in different phases of the protests , which provides important hints for future modeling efforts .the `` 15 m movement '' is a still ongoing civic initiative with no party or union affiliation that emerged as a reaction to perceived political alienation and to demand better channels for democratic representation .the first mass demonstration , held on sunday may 15 ( from now on ) , was conceived as a protest against the management of the economy in the aftermath of the financial crisis .after the demonstrations on day , hundreds of participants decided to continue the protests camping in the main squares of several cities ( puerta del sol in madrid , plaa de catalunya in barcelona ) until may 22 , the following sunday and the date for regional and local elections . from a dynamical point of view , the data used in this study are a set of messages ( tweets ) that were publicly exchanged through _ www . twitter.com_. the whole time - stamped data collected comprises a period of one month ( between april 25th , 2011 at 00:03:26 and may 26th , 2011 at 23:59:55 ) and it was archived by _ cierzo development ltd ._ , a start - up company . to filter out the whole sample and choose only those messages related to the protests , 70 keywords ( _ hashtags _ ) were selected , those which were systematically used by the adherents to the demonstrations and camps .the final sample consists of 535,192 tweets . on its turn ,these tweets were generated by 85,851 unique users ( out of a total of 87,569 users of which 1,718 do not show outgoing activity , i.e. , they are only receivers ) .see for more details .twitter is most frequently used as a broadcasting platform .users subscribe to what other users say building a `` who - listens - to - whom '' network , i.e. , that made up of followers and followings in twitter .this means that any emitted message from a node will be immediately available to anyone following him , which is of utmost importance to understand the concept of activity cascade in the next sections .such relationships offer an almost - static view of the relationships between users , the `` follower network '' for short . to build it , data for all the involved userswere scrapped directly from _ www.twitter.com_. the scrap was successful for the 87,569 identified users , for whom we also obtained their official list of followers restricted to those who had some participation in the protests .the resulting structure is a directed network , direction indicates who follows who in the online social platform . in practice , we take this underlying structure as completely static ( does not change through time ) because its time scale is much slower , i.e. , changes occur probably in the scale of weeks and months . 
in - degree expresses the amount of users a node is following ; whereas out - degree represents the amount of users who follow a node .this network exhibits a high level of reciprocity : a typical user holds many reciprocal relationships ( with other users who the node probably knows personally ) , plus a few unreciprocated nodes which typically point at hubs . .in yellow , the cumulative proportion of emitted messages as a function of time .note that the two lines evolve in almost the same way . according to this evolution ,we have distinguished two sub - periods : one of them characterized as `` slow growth '' due to the low activity level and the other one tagged as `` explosive '' or `` bursty '' due to the intense information traffic within it . ]the main topological features of the follower network fit well in the concept of `` small - world '' , i.e. , low average shortest path length and high clustering coefficient . furthermore ,both in- and out - degree distribute as a power - law , indicating that connectivity is extremely heterogenous .thus , the network supporting users interactions is scale - free with some rare nodes that act as hubs .an activity cascade or simply `` cascade '' , for short , starting at a _ seed _ , occurs whenever a piece of information or replies to it are ( more or less unchanged ) repeatedly forwarded towards other users .if one of those who `` hear '' the piece of information decides to reply to it , he becomes a _ spreader _ , otherwise he remains as a mere _ listener_. the cascade becomes global if the final number of affected users ( including the set of spreaders and listeners , plus the seed ) is comparable to the size of the whole system .intuitively , the success of an activity cascade greatly depends on whether spreaders have a large set of followers or not ( figure [ example ] ) ; remarkably , the seed is not necessarily very well connected .this fact highlights the entanglement between dynamics and the underlying ( static ) structure .note that the previous definition is too general to attain an _ operative _ notion of cascade .one possibility is to leave time aside , and consider only identical pieces of information traveling across the topology ( a _ retweet _ , in the twitter jargon ) .this may lead to inconsistencies , such as the fact that a node decides to forward a piece of information long after receiving it ( perhaps days or weeks ) .it is impossible to know whether his action is motivated by the original sender , or by some exogenous reason , i.e. , invisible to us .one may , alternatively , take into consideration time , thus considering that , regardless of the exact content of a message , two nodes belong to the same cascade as consecutive spreaders if they are connected ( the latter follows the former ) and they show activity within a certain ( short ) time interval , . the probability that exogenous factors are leading activation is in this way minimized . also , this concept of cascade is more inclusive , regarding dialogue - like messages ( which , we emphasize , are typically produced in short time spans ) .this scheme exploits the concept of spike train from neuroscience , i.e. , a series of discrete action potentials from a neuron taken as a time series . 
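the time - window definition just given ( consecutive spreaders must be connected in the follower network and act within a short interval ) translates directly into a simple reconstruction procedure . the sketch below is a minimal , illustrative version of that idea using plain dictionaries and hypothetical variable names ; it is not the authors code , and it tracks the spreader part of a cascade ( listeners would be accumulated as the union of followers of the spreaders ) .

```python
# Minimal sketch of time-window cascade reconstruction: starting from a seed
# message, followers of the current spreader who post within `window` seconds
# become spreaders of the same cascade; each user joins at most one cascade.
from collections import deque

def build_cascade(seed_user, seed_time, followers_of, activity_times,
                  assigned, window=3600):
    """followers_of[u] -> set of users following u;
    activity_times[u] -> sorted list of posting times of user u;
    assigned -> set of users already claimed by some cascade (mutated)."""
    cascade, queue = {seed_user}, deque([(seed_user, seed_time)])
    assigned.add(seed_user)
    while queue:
        user, t = queue.popleft()
        for f in followers_of.get(user, ()):
            if f in assigned:
                continue
            # does this follower show activity within the time window after t?
            hit = next((s for s in activity_times.get(f, [])
                        if t < s <= t + window), None)
            if hit is not None:
                assigned.add(f)
                cascade.add(f)
                queue.append((f, hit))
    return cascade
```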
at a larger scale ,two brain regions are identified as functionally related if their activation happens in the same time window .consequently , message chains are reconstructed assuming that activity is contagious if it takes place in short time windows .we apply the latter definition to explore the occurrence of information cascades in the data . in practice , we take a seed message posted by at time and mark all of s followers as listeners .we then check whether any of these listeners showed some activity at time .this is done recursively until no other follower shows activity , see figure [ example ] . in our scheme, a node can only belong to one cascade ; this constraint introduces a bias in the measurements , namely , two nodes sharing a follower may show activity at the same time , so their follower may be counted in one or another cascade ( with possible important consequences regarding average cascades size and penetration in time ) . to minimize this degeneration, we perform calculations for many possible cascade configurations , randomizing the way we process data .we distinguish information cascades ( or just cascades , for short ) from spreader - cascades . in information cascadeswe count any affected user ( listeners and spreaders ) , whereas in spreader - cascades only spreaders are taken into account .we measure cascades and spreader - cascades size distributions for three different scenarios : one in which the information intensity is low ( slow growth phase , from to ) , one in which activity is bursty ( explosive phase , to ) and one that considers all available data ( which spans a whole month , and includes the two previous scenarios plus the time in - between , to ) .figure [ growth ] illustrates these different periods .the green line represents the cumulative proportion of nodes in the network that had shown some activity , i.e. , had sent at least one message , measured by the hour .we tag the first 10 days of study as `` slow growth '' because , for that period , the amount of active people grew less than 5% of the total of users , indicating that recruitment for the protests was slow at that time .the opposite arguments apply in the case of the bursty or `` explosive '' phase : in only 8 days the amount of active users grew from less than 10% up to over an 80% .the same can be said about global activity ( in terms of the total number of emitted directed messages the activity network ) , which shows an almost exact growth pattern .besides , within the different time periods slow growth , explosive and total , different time windows have been set to assess the robustness of our results .our proposed scheme relies on the contagious effect of activity , thus large time windows , i.e. , hours , are not considered .the -core decomposition of a network consists of identifying particular subsets of the network , called -cores , each obtained by recursively removing all the vertices of degree less than , where indicates the total number of in- and out - going links of a node , until all vertices in the remaining graph have degree at least . 
in the end , each node is assigned a natural number ( its coreness ) , the higher the coreness the closer a node is to the nucleus or core of the network .the main advantage of this centrality measure is , in front of other quantities , its low computational cost that scales as , where is the number of vertices of the graph and is the number of links it contains .this decomposition has been successfully applied in the analysis of the internet and the autonomous systems structure . in the following section, we will use the -core decomposition as a means to identify influence in social media .in particular , we discuss which , degree or coreness , is a better predictor of the extent of an information cascade .the upper panels ( ) of figure [ fig3 ] reflect that a cascade of a size can be reached at any activity level ( slow growth , explosive or both ) . as expected , these large cascades occur rarely as the power - law probability distributions evidence .this result is robust to different temporal windows up to 24h .in contrast , lower ( ) panels show significant differences between periods .specifically , the distribution of involved spreaders in the different scenarios changes radically from the `` slow growth '' phase ( figure [ fig3]d ) to the `` explosive '' period ( figure [ fig3]f ) ; the distribution that considers the whole period of study just reflects that the bursty period ( in which most of the activity takes place ) dominates the statistics .the importance of this difference is that one may conclude that , to attain similar results a proportionally much smaller amount of spreaders is needed in the slow growth period .going to the detail , however , it seems clear ( and coherent with the temporal evolution of the protests , fig .[ growth ] ) that although cascades in the slow period ( panel a ) affect as much as of the population , the system is in a different dynamical regime than in the explosive one : indeed , distributions suggest that there has been a shift from a subcritical to a supercritical phase .the previous conclusions raise further questions : is there a way to identify `` privileged spreaders '' ?are they placed randomly throughout the network s topology ? or do they occupy key spots in the structure ? and , will these influential users be more easily detected in a bursty period ( where large cascades occur more often ) ? inwhat context will influential spreaders single out ? 
to answer these questions , we capitalize on previous work suggesting that centrality ( measured as the -core ) enhances the capacity of a node to be key in disease spreading processes .the authors in discussed whether the degree of a node ( its total number of neighbors , ) or its -core ( a centrality measure ) can better predict the spreading capabilities of such node .note that the -shell decomposition splits a network in a few levels ( over a hundred ) , while node degrees can range from one or two up to several thousands .we have explored the same idea , but in relation to activity cascades which are the object of interest here .the upper left panel of fig .[ coredegree ] shows the spreading capabilities as a function of classes of -cores .specifically , we take the seed of each particular cascade and save its coreness and the final size of the cascade it triggers .having done so for each cascade , we can average the success of cascades for a given core number .remarkably , for every scenario under consideration ( slow , explosive , whole ) , a higher core number yields larger cascades .this result supports the ideas developed in , but it is at odds with those reported in , which shows that the -core of a node is not relevant in rumor dynamics .exactly the same conclusion ( and even more pronounced ) can be drawn when considering degree ( lower left panel ) , which appears to be in contradiction with the mentioned previous evidence . at a first sight ,our findings seem to point out that if privileged spreaders are to be found , one should simply identify the individuals who are highly connected. however , this procedure might not be the best choice .the right panels in figure [ coredegree ] show the -core ( upper ) and degree ( lower ) distributions , indicating the number of nodes which are seeds at one time or another , classified in terms of their coreness or degree .unsurprisingly , many nodes belong to low cores and have low degrees .the interest of these histograms lies however in the tails of the distributions , where one can see that , while there are a few hundred nodes in the high cores ( and even over a thousand in the last core ) , highest degrees account only for a few dozen of nodes . in practice, this means that by looking at the degree of the nodes , we will be able to identify quite a few influential spreaders ( the ones that produce the largest cascades ) .however , the number of such influential individuals are far more than a few . as a matter of fact, high cascading capabilities are distributed over a wider range of cores , which in turn contain a significant number of nodes .focusing on fig .[ coredegree ] , note that triggering cascades affecting over of the network s population demands nodes with . checking the distribution of degrees ( right - hand side ), it is easy to see that an insignificant amount of nodes display such degree range . in the same line, we may wonder what it takes to trigger cascades affecting over of the network s population , from the -core point of view . in this case , nodes with -core around 125 show such capability .a quick look at the core distribution yields that over 1500 nodes accomplish these conditions , i.e. , they belong to the 125th -shell or higher .we may now distinguish between scenarios in figure [ coredegree ] .while any of the analyzed periods shows a growing tendency , i.e. 
, cascades are larger the larger is the considered descriptor , we highlight that it is in the slow growth period ( black circles ) where the tendency is more clear , i.e. , results are less noisy . between the other two periods , the explosive one ( red squares )is distinctly the less robust , in the sense that cascade sizes oscillate very much across -cores , and the final plot shows a smaller slope than the other two .this subtle fact is again of great importance : it means that during `` information storms '' a large cascade can be triggered from anywhere in the network ( and , conversely , small cascades may have begun in important nodes ) .the reason for this is that in periods where bursty activity dominates the system suffers `` information overflow '' , the amount of noise flattens the differences between nodes . for instance , in these periods a node from the periphery ( low coreness ) may balance his unprivileged situation by emitting messages very frequently .this behavior yields a situation in which , from a dynamical point of view , nodes become increasingly indistinguishable .the plot corresponding to the whole period analyzed ( green triangles ) lies consistently between the other two scenarios , but closer to the relaxed period .this is perfectly coherent , the study spans for 30 days and the explosive period represents only 25% of it , whereas the relaxed period stands for over 33% .furthermore , those days between and , and beyond , resemble the relaxed period as far as the flow of information is concerned .online social networks are called to play an ever increasing role in shaping many of our habits ( be them commercial or cultural ) as well as in our position in front of political , economical or social issues not only at a local , country - wide level , but also at the global scale .it is thus of utmost importance to uncover as many aspects as possible about topological and dynamical features of these networks .one particular aspect is whether or not one can identify , in a network of individuals with common interests , those that are influentials to the rest .our results show that the degree of the nodes seems to be the best topological descriptor to locate such influential individuals .however , there is an important caveat : the number of such privileged seeds is very low as there are quite a few of these highly connected subjects . on the contrary , by ranking the nodes according to their -core index , which can be done at a low computational cost , one can safely locate the ( more abundant in number ) individuals that are likely to generate large ( near to ) system - wide cascades .the results here presented also lead to a surprising conclusion : periods characterized by explosive activity are not convenient for the spreading of information throughout the system using influential individuals as seeds .this is because in such periods , the high level of activity mainly coming from users which are badly located in the network introduces noise in the system .consequently , influential individuals lose their unique status as generators of system wide cascades and therefore their messages are diluted . 
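ranking candidate seeds by coreness , as discussed above , is cheap in practice . the sketch below , assuming a networkx digraph of the follower network , computes core numbers on the undirected version ( so that k counts in - plus out - links , as in the text ) and compares the pool of top - core nodes with the much smaller pool of top - degree hubs . it is an illustration of the selection step , not the paper s analysis code .

```python
# Sketch: rank users by k-core index versus degree on the follower network.
# The k-shell decomposition runs in O(E); here we rely on networkx.core_number.
import networkx as nx

def rank_seeds(follower_digraph, top=1500):
    g = follower_digraph.to_undirected()          # k counts in- plus out-links
    g.remove_edges_from(nx.selfloop_edges(g))     # core_number requires no self-loops
    coreness = nx.core_number(g)
    degree = dict(g.degree())
    by_core = sorted(coreness, key=coreness.get, reverse=True)[:top]
    by_degree = sorted(degree, key=degree.get, reverse=True)[:top]
    return by_core, by_degree

# usage: seeds_core, seeds_deg = rank_seeds(G); the core-based list typically
# contains many more nodes of comparable spreading capability than the few hubs.
```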
on more general grounds ,our analysis of real data remarks the importance of empirical results to validate theoretical contributions .in particular , fig .[ coredegree ] , together with the observations in , raises some doubts about rumor dynamics as a good proxy to real information diffusion .we hypothesize that such models approach information diffusion phenomena in a too simplistic way , thus failing to comprise relevant mechanisms such as complex activity patterns . finally , although the underlying topology may be regarded as constant , any modeling effort should also contemplate the time evolution of the dynamics . indeed , fig .[ fig3 ] suggests that the system is in a sub - critical phase when activity level is low , and critical or supercritical during the explosive period .this is related to the rate at which users are increasingly being recruited as active agents , i.e. the speed at which listeners become spreaders .this work has been partially supported by micinn through grants fis2008 - 01240 and fis2011 - 25167 , and by comunidad de aragn ( spain ) through a grant to the group fenol .
social media have provided plentiful evidence of their capacity for information diffusion . fads and rumors , but also social unrest and riots , travel fast and affect large fractions of the population participating in online social networks ( osns ) . this has spurred much research regarding the mechanisms that underlie social contagion , and also regarding who ( if anyone ) can unleash system - wide information dissemination . access to real data , both regarding topology ( the network of friendships ) and dynamics ( the actual way in which osn users interact ) , is crucial to decipher how the former facilitates the latter s success , understood as efficiency in information spreading . with the quantitative analysis that stems from complex network theory , we discuss who has privileged spreading capabilities when it comes to information diffusion , and why . this is done by considering the evolution of an episode of political protest which took place in spain , spanning one month in 2011 .
pablo picasso : `` i paint objects as i think them , not as i see them '' . recently , generative adversarial networks ( gans ) have shown significant promise in synthetically generating natural images using the mnist , cifar-10 , cub-200 and lfw datasets . however , all these datasets share some common characteristics : i ) in most images the background and foreground are clearly distinguishable ; ii ) most images contain only one object per image ; and iii ) most objects have a fairly structured shape , such as digits , vehicles , birds , faces , etc . in this paper , we investigate whether a machine can create ( more challenging ) images that do not exhibit any of the above characteristics , such as the artwork depicted in fig . [ fig : clscom ] . artwork is a mode of creative expression , coming in different forms , including drawing , naturalistic depiction , abstraction , etc . for instance , artwork can be non - figurative and non - representational , e.g . _ abstract _ paintings , so it is very hard to identify background and foreground in such artwork . in addition , some artwork does not follow natural shapes , e.g . _ cubism _ paintings . in the philosophy of art , aesthetic judgement is applied to artwork based on one s sentiment and taste , which reflects one s appreciation of beauty . an art teacher pointed out in an online article that effective learning in the art domain requires one to focus on a particular type of skill at a time ( e.g . practising to draw a particular object or one kind of movement ) . meanwhile , learning in gans only involves unlabeled data that does not necessarily reflect a particular subject . in order to imitate such a learning pattern , we propose to train gans focused on a particular subject by feeding some additional information to the network . a similar approach is the conditional gan ( condgan ) , in which a label vector is fed into both the generator and the discriminator as an additional input layer . however , there is no feedback from the label information to the intermediate layers of the generator . a natural extension is to train the discriminator as a classifier with respect to the labels , in the spirit of the categorical gan ( catgan ) and salimans et al . in the former , the binary output of the discriminator is extended to multiple classes , and the catgan is trained by either minimizing or maximizing the shannon entropy to control the uncertainty of the discriminator . in the latter , a semi - supervised learning framework is proposed that uses the real classes plus an additional fake class . an advantage of such a design is that it can be extended to include more ( adversarial ) classes ; e.g . the introspective adversarial network ( ian ) used a ternary adversarial loss that forces the discriminator to label a sample as reconstructed , in addition to real or fake . however , these works do not use the information from the labels to train the generator . to this end , we propose a novel adversarial network , named agan , which is close to condgan but differs in that we feed the label information to the generator only and back - propagate the classification errors from the discriminator to the generator . this allows the generator to learn better by using the feedback information from the labels . at the same time , agan outputs class probabilities in the discriminator , similarly to the works above , but again we differ in two ways : first , we set a label for each generated image based on the label fed to the generator ; secondly , we use a sigmoid function instead of a softmax function in the discriminator . this generalizes the agan architecture so that it can be extended to other settings , e.g . multi - label problems , open set recognition problems , etc . inspired by larsen et al .
, we also added the l2 pixel - wise reconstruction loss along with the adversarial loss to train in order to improve the quality of the generated images .empirically , we show qualitatively that our model is capable to synthesize descent quality artwork that exhibit for instance famous artist styles such as vincent van vogh ( fig .[ vangogh2 ] ) . at the same time ,our model also able to create samples on cifar-10 that look more natural and contain clear object structures in them , compared to dcgan ( fig .[ fig : cifar ] ) .in this section , we present a novel framework built on gans . we begin with a brief concept of the gans framework .then , we introduce the agan .the gans framework was established with two competitors , the generator and discriminator .the task of is to distinguish the samples from and training data . while , is to confuse by generating samples with distribution close to the training data distribution .the gans objective function is given by : ) \label{eq : gan}\ ] ] where is trained by maximizing the probability of the training data ( first term ) , while minimizing the probability of the samples from ( second term ) .the basic structure of agan is similar to gans : it consists of a discriminator and a generator that are simultaneously trained using the minmax formulation of gans , as described in eq .[ eq : gan ] .the key innovation of our work is to allow feedback from the labels given to each generated image through the loss function in to .that is , we feed additional ( label ) information to the gans network to imitate how human learn to draw .this is almost similar to the condgan which is an extension of the gans in which both and receive an additional vector of information as input .that is , encodes the information of either the attributes or classes of the data to control the modes of the data to be generated .however , it has one limitation as the information of is not fully utilized through the back - propagation process to improve the quality of the generated images .therefore , a natural refinement is to train as a classifier with respect to . to this end, we modify to output probability distribution of the labels , as to catgan except that we set a label to each generated images in based on and use cross entropy to back - propagate the error to .this allows to learn better by using the feedback information from the labels .conceptually , this step not only help in speeding up the training process , but also assists the agan to grasp more abstract concepts , such as artistic styles which are crucial when generating fine art paintings .also , we use sigmoid function instead of softmax function in , and employ an additional l2 pixel - wise reconstruction loss as to larsen et al . along with adversarial loss to improve the training stability .contrast to larsen et al . , in agan architecture , the decoder shares the same network with encoder only . 
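a minimal sketch of the label - feedback idea is given below , under the following assumptions : the discriminator ends in per - class sigmoid logits ( one column per class ) , generated images inherit the label that was fed to the generator , and the generator receives both the adversarial / classification signal back - propagated through the discriminator and an l2 reconstruction term . network definitions , the encoder that pairs generated images with real targets , and training details are omitted ; this is not the authors released implementation .

```python
# Sketch of AGAN-style losses in PyTorch (illustrative, simplified).
import torch
import torch.nn.functional as F

def d_step(D, G, x_real, y_real, z, y_fake, num_classes):
    """y_* are integer class labels; D(x) returns per-class logits (sigmoid heads)."""
    real_t = F.one_hot(y_real, num_classes).float()        # real image: its own class
    logits_real = D(x_real)
    logits_fake = D(G(z, y_fake).detach())
    fake_t = torch.zeros_like(logits_fake)                 # fake image: no class fires
    return (F.binary_cross_entropy_with_logits(logits_real, real_t)
            + F.binary_cross_entropy_with_logits(logits_fake, fake_t))

def g_step(D, G, z, y_fake, x_target, num_classes, lam=1.0):
    x_fake = G(z, y_fake)
    target = F.one_hot(y_fake, num_classes).float()        # generated image keeps its label
    adv_cls = F.binary_cross_entropy_with_logits(D(x_fake), target)
    # L2 pixel-wise reconstruction term: meaningful when (z, y_fake) come from an
    # encoder applied to x_target (VAE/GAN style); drop it otherwise.
    recon = F.mse_loss(x_fake, x_target)
    return adv_cls + lam * recon
```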
fig .[ fig : ccgan ] depicts the overall architecture of the proposed agan .formally , maps an input image to a probability distribution , .generally , can be separated into two parts : an encoder that produces a latent feature followed by a classifier .similarly , is fed with a random vector concatenated with the label information and outputs a generated image , such that \rightarrow \hat{\mathbf{x}} ] and ] randomly set , \hat{k}_i\in\mathbf{k} ] and $ ] , , , this work , we used the publicly available wikiart dataset for our experiments .wikiart is the largest public available dataset that contains around 80,000 annotated artwork in terms of genre , artist and style class .however , not all the artwork are annotated in the 3 respective classes . to be specific ,all artwork are annotated for the _ style _ class .but , there are only 60,000 artwork annotated for the _ genre _ class , and only around 20,000 artwork are annotated for the _ artist _ class .we split the dataset into two parts : for testing and the rest for training . in terms of the agan architectures , we used for all leaky relu . on the other hand , shares the layers deconv3 to deconv6 in ; and shares the layers conv1 to conv4 in .we trained the proposed agan and other models in the experiments for epochs with minibatch size of 128 . for stability, we used the adaptive learning method rmsprop for optimization .we set the decay rate to and initial learning rate to .we found out that reducing the learning rate during the training process will help in improving the image quality .hence , the learning rate is reduced by a factor of at epoch .0.1 [ dore1 ] 0.36 [ dore2 ] 0.1 0.36 0.1 0.36 * genre : * we compare the quality of the generated artwork trained based on the _ genre_. fig .[ fig : clscom ] shows sample of the artwork synthetically generated by our proposed agan , dcgan and gan / vae , respectively .we can visually notice that the generated artwork from the dcgan is relatively poor , with a lot of noises ( artefacts ) in it . in gan / vae, we could notice that the generated artwork are less noisy and look slightly more natural .however , we can observe that they are not as compelling .in contrast , the generated artwork from the proposed agan are a lot more natural visually in overall. + * artist : * fig . [ gogh ] illustrates artwork created by agan based on _ artist _ and interestingly , the agan is able to recognize the artist s preferences .for instance , most of the _ gustave dore s _ masterpieces are completed using engraving , which are usually dull in color as in fig .[ vangogh1]-top .such pattern was captured and led the agan to draw greyish images as depicted in figure [ vangogh2]-(top ) .similarly , most of the vincent van gogh s masterpieces in the wikiart dataset are annotated as _ sketch and study _ genre as illustrated in fig .[ vangogh1]-bottom . in this genre , van gogh s palette consisted mainly of sombre earth tones , particularly dark brown , and showed no sign of the vivid colours that distinguish his later work , e.g the famous _ * the starry night * _ masterpiece . this explains why the artwork synthetically generated by agan is colourless ( fig .[ vangogh2]-bottom ) . + * style : * fig .[ ukiyo ] presents the artwork synthetically generated by agan based on _style_. 
one interesting observation can be made for the _ ukiyo - e _ style paintings . generally , this painting style was produced using woodblock printing for mass production , and a large portion of these paintings appear yellowish , as shown in figure [ unreal11 ] , due to the paper material . such characteristics can be seen in the generated _ ukiyo - e _ style paintings . although the subjects in the paintings are hardly recognizable , it is noticeable that agan is trying to mimic the pattern of the subjects . .comparison between different gan models using log - likelihood measured by the parzen - window estimate . [ cols="^,^",options="header " , ] we trained both the dcgan and agan to generate natural images using the cifar-10 dataset . the generated samples on cifar-10 are presented in fig . [ fig : cifar ] . as aforementioned , the dcgan is able to generate much more recognizable images here , in contrast to its failure in generating artwork . this supports our earlier statement that the objects in cifar-10 have a fairly structured shape , and so they are much easier to learn than artwork , which is abstract . even so , we can still notice that some of the generated shapes are not as compelling , because cifar-10 exhibits huge variability in shapes compared to the cub-200 birds dataset and the lfw face dataset . meanwhile , we can observe that the proposed agan is able to generate much better images ; for instance , we can see automobiles and horses with clear shapes . by using the gan models trained previously , we measure the log - likelihood of the generated artwork . following goodfellow et al . , we measure the log - likelihood using the parzen - window estimate . the results are reported in table [ parzen ] and show that the proposed agan performs the best among the compared models . however , we should note that these measurements might be misleading . in addition , we also find the nearest training examples of the generated artwork by an exhaustive search on the l2 norm in pixel space . the comparisons are visualized in fig . [ fig : nearest ] and show that the proposed agan does not simply memorize the training set . in this work , we proposed a novel agan to synthesize more challenging and complex images . in the empirical experiments , we showed that the feedback from the label information during the back - propagation step improves the quality of the generated artwork . a natural extension to this work is to use a deeper agan to encode more detailed concepts . furthermore , we are also interested in jointly learning these modes , so that agan can create artwork based on a combination of several modes . in figures [ genccgan]-[fig : artist ] , we show more results on the _ genre _ and _ artist _ classes . for instance , _ nicholas roerich _ travelled to many asian countries and finally settled in the indian kullu valley in the himalayan foothills . hence , he has many paintings related to mountains in the * symbolism * style . this can be seen in the generated paintings ( figure [ fig : artist ] , no . 8 from left ) , which look like mountains even though they are unrealistic . in another example , agan also captures _ ivan shishkin _ 's persistence in drawing forest landscape paintings ( figure [ fig : artist ] , no . 6 from left ) . _ ivan shishkin _ is one of the most prominent russian landscape painters .
by his contemporaries , shishkin was given the nicknames " titan of the russian forest " , " forest tsar " , " old pine tree " and " lonely oak " , as there was no one at that time who depicted trees more realistically , honestly and with greater love . in this section , we report more results on the cifar-10 dataset . figure [ cifar10 ] shows the generated images in each of the classes . even though the objects in cifar-10 exhibit huge variability in shapes , we can see that agan is still able to generate object - specific appearances and shapes .
this paper proposes an extension to generative adversarial networks ( gans ) , named agan , to synthetically generate more challenging and complex images such as artwork that has abstract characteristics . this is in contrast to most current solutions , which focus on generating natural images such as room interiors , birds , flowers and faces . the key innovation of our work is to allow back - propagation of the loss function w.r.t . the labels ( randomly assigned to each generated image ) from the discriminator to the generator . with the feedback from the label information , the generator is able to learn faster and achieve better generated image quality . empirically , we show that the proposed agan is capable of creating realistic artwork , as well as generating compelling real - world images that globally look natural with clear shapes on cifar-10 . image synthesis , generative adversarial networks , deep learning
in recent years there have been significant advances in the tomographic characterization of materials . as a result , it is now possible to carry out detailed investigations of the 3d grain structures of polycrystalline materials ; see , e.g. , .a fundamental ingredient in any such investigation is a suitable quantitative description of the grain morphology .such a description contains the key features of the structure , ideally free from noise and imaging artifacts .a good description usually results in significant data compression , describing large 3d voxel data sets using only a small number of parameters .data compression is necessary , for example , when carrying out analysis of sequences of tomographic data sets ( e.g. , the high time resolution in - situ synchrotron images considered in ) .in addition , the description of tomographic data via tessellations provides a basis for the statistical analysis of grain structures and , in some cases , can be used to develop stochastic models of material microstructures ; see , e.g. , .the most commonly used quantitative descriptions of space - filling grain ensembles are based on tessellations , which divide the space into disjoint regions called _ cells_. the cells represent the individual grains .the most widely used tessellation model is the _ voronoi tessellation _ ( see , e.g. , ) , which takes , as parameters , a set of generating points .the space is then divided into convex cells by assigning each point to its nearest generator .laguerre tessellation _( see , e.g. , ) is a generalization of the voronoi tessellation that also partitions the space into convex cells .like the voronoi tessellation , the laguerre tessellation is generated by a set of points ; however , unlike the voronoi tessellation , these points are weighted , with the weights influencing the size of the cells .consequently , the laguerre tessellation is able to describe a wider range of structures than the voronoi tessellation .for this reason , the laguerre tessellation is a popular choice for modeling polycrystalline grain structures and other materials , such as foams . in order to describe a tessellation by a set of generating points, it is necessary to solve an inverse problem : that is , a set of generating points that produce the observed cells must be found . the _ voronoi inverse problem _ ( vip ) is well - studied ; see , for example , . recently , duan et al . proposed an algorithm that finds solutions to the _ laguerre inverse problem _although the examples considered in are restricted to 2d , the methodology is easily applied in higher dimensions .the solutions to the vip and the lip assume that the empirical data constitute perfect descriptions of the observed cells .however , this is not true when working with tomographic data , which is distorted by noise and also contains imprecision arising from discretization during the imaging process .it is also worth noting that real - world materials are not perfectly described by laguerre tessellations ( though the descriptions can be quite good ) .these limitations mean that methods that attempt to invert a tessellation extracted from the tomographic data do not , in general , result in good fits .the lip is solved by iteratively finding the generating points of the given tessellations . 
when applied to imperfect data, this iterative procedure propagates errors , resulting in tessellations that have little correspondence to the tomographic data .thus , when dealing with empirical data , it is not appropriate to attempt to solve the lip . instead , the generating points of a laguerre tessellation that is a good approximation of the material must be found .this is , at its core , an optimization problem .we call this problem the _ laguerre approximation problem _ ( lap ) .the corresponding voronoi approximation problem has been considered in the literature , beginning with .a simple heuristic approach for solving the lap was proposed in .more sophisticated approaches , which formulate and solve an optimization problem , are described in .although these techniques provide good fits in certain settings , they are either limited to small sample sizes or require the considered tessellations to be sufficiently regular . in this paper, we present a fast and robust algorithm for fitting laguerre approximations to large data sets .more precisely , we formulate an optimization problem where we minimize the discrepancy between the grain boundaries observed in the image data and the grain boundaries produced by our laguerre approximation .the cost function is chosen so that it can be evaluated very efficiently and that all necessary information can be easily obtained from image data .we then solve the optimization problem using the cross - entropy ( ce ) method , a stochastic optimization algorithm that is able to escape local minima .we carry out experiments on both real and artificially - generated image data that show our approach is able to produce very good fits .this paper is structured as follows . in section[ sec : laguerre ] , we review some key properties of laguerre tessellations . in section [ sec : optimization ] , we give a more complete description of the lap and formulate our optimization problem .then , in section [ sec : ce - method ] , we introduce the ce method as a robust tool for solving this optimization problem .section [ sec : results ] gives results for both artificial and experimental data that demonstrate the effectiveness of our approach .finally , section [ sec : conclusions ] summarizes our results and suggests directions for further research .in the following section , we define voronoi and laguerre tessellations and give a number of properties that we will use to solve the lap . for notational convenience ,we only consider tessellations in .however , our methods are easily applied in other dimensions .the voronoi tessellation is defined by a locally finite set of generating points , , where denotes the index set consisting of natural numbers .the cell corresponding to the generating point , , is given by where is the euclidean norm on .in the case of the laguerre tessellation , the generating points , , are weighted , where we assume that .the cells of the laguerre tessellation , , are then defined by for all , where the euclidean norm used in is replaced by the so - called power distance given the generator points , the faces of the cells can be computed efficiently ; see , e.g. , . 
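as a concrete illustration of the power distance and the resulting cell assignment , the following numpy sketch ( with made - up generators ) labels a point by the generator minimizing ||x - c||^2 - r^2 .

import numpy as np

def power_distance(x, centers, radii):
    # pow(x, (c, r)) = ||x - c||^2 - r^2 for every generator (c, r)
    return ((x[None, :] - centers) ** 2).sum(axis=1) - radii ** 2

def laguerre_label(x, centers, radii):
    # index of the laguerre cell containing x (smallest power distance)
    return int(np.argmin(power_distance(x, centers, radii)))

rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 1.0, size=(50, 3))   # generator centres (sphere centres)
radii = rng.uniform(0.01, 0.1, size=50)         # weights interpreted as sphere radii

print(laguerre_label(np.array([0.5, 0.5, 0.5]), centers, radii))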
as we take the weights to be positive real numbers , the generating points have a geometric interpretation as spheres .that is , we can represent a generating point , , as a sphere with center and radius .the flexibility of the laguerre tessellation comes at a cost .for example , while each cell of a voronoi tessellation contains its generating point , the generating points of a laguerre tessellation may not be contained in their corresponding cells . in some cases , a generating point of a laguerre tessellation may not even produce a cell .that is , it is possible that there exists a point , , such that ; see , for example , .in addition , while the cells of a voronoi tessellation uniquely determine its generating points , there are uncountably many sets of generating points that can generate a given laguerre tessellation ; see , .thus , while the vip has a unique solution , the lip has uncountably many solutions . under mild conditions, however , it can be shown that the generating points of a laguerre tessellation are uniquely determined given one generating point and the weight ( or one coordinate ) of a generating point in an adjacent cell ; see .we will often find it convenient to consider planes that are equidistant from two generating points ( under their respective power distances ) .that is , given two generating points , and , we consider the separating plane given by where is the unit normal vector of and is the shortest distance from the origin to the plane ; see , e.g. , ( * ? ? ?* section 2.1 ) for more details .the plane defines a half - space which covers the cell .note that the intersection of all such half - spaces , , defines the cell . because the normal vectors are taken to be unit vectors , the distance from an arbitrary point , , to the plane is given by an exact description of a laguerre tessellation ( e.g. , in terms of the half - spaces defined above ) , it is not too difficult to solve the lip .that is , it is straightforward to find a set of weighted points that is able to generate the given tessellation . furthermore , these points can be chosen to satisfy certain constraints ; see .when dealing with tomographic data , however , the description of the tessellation is not exact .this is because , even if the material itself can be perfectly described by a laguerre tessellation , the noise and discretization errors inherent in the imaging process mean that the cell boundaries extracted from the data are subject to error .the lip can be solved as described in .however , the iterative computation of generators is sensitive to imperfect data and errors propagate quickly . as a result, the ensuing tessellations often do not correspond well to the tomographic data . therefore , solving the lip for empirical data is generally ill advised .instead , we wish to find generating points that produce a laguerre tessellation that is as close as possible to the tomographic data ( with respect to some metric ) .this is , at its core , an optimization problem . in order to properly formulate this optimization problem, a discrepancy measure needs to be defined .we then choose the generating points of the approximating tessellation in order to minimize this discrepancy .the choice of discrepancy measure depends on the nature of empirical data . in this paper, we work with tomographic data .we assume that the tomographic image data constitutes a collection of voxels in a convex window .furthermore , we assume that the image has already been segmented e.g. 
, by the watershed transform ; see and that the voxels have been labeled by their corresponding grains .thus , the data is of the form , where denotes the number of grains and is a grid of voxel coordinates .the grain region , , is then given by .note that does not correspond to an actual grain , but is either the empty set or a collection of one voxel thick layers that separate grains .such thin layers often arise in segmentation procedures such as the watershed transformation .a grain region may correspond to the grain itself or a region that contains the grain ( if the space is not completely filled with grains ) , cf .figure [ fig : labeled - image ] .we assume that the grain regions are roughly convex and that the segmentation is of a high quality . the most direct way to measure the discrepancy between 3d image data and a laguerre tessellation generated by the points is by counting the number of incorrectly assigned voxels .that is , we calculate where is a discretized version of the tessellation generated by and denotes the number of elements in some set .many other discrepancy measures considered in the literature are of a similar form . for example , in , the difference between a cell in the approximating tessellation and its equivalent in the empirical data is measured by the intersection of the approximating cell with the corresponding adjacent cells in the empirical data .minimizing the total overlap of the approximating cells is equivalent to minimizing the number of incorrectly assigned voxels .we call discrepancy measures that aim to minimize the volume of the difference between the empirical and approximating tessellations _ volume - based _ measures .although minimizing a volume - based measure gives a very good fit to the data , such measures are poorly suited to most numerical optimization methods .this is because evaluating such a discrepancy measure is computationally expensive .for example , when considering the number of incorrectly assigned voxels , a discretized version of the approximating tessellation needs to be generated . in practice , this limits the number of times such a discrepancy measure can be evaluated . as a result , approaches that aim to minimize volume - based discrepancies are restricted to small data sets with small numbers of grains ( e.g. , 109 grains in ) or are forced to use non - optimal optimization techniques , such as gradient - descent , which do not require too many evaluations of the discrepancy and , as such , limit the time taken by the fitting procedure ( ideally to substantially less than 24 hours ) .as we propose to use stochastic optimization techniques to solve this problem , we need a discrepancy measure that is significantly faster to evaluate .we can achieve this by considering an _ interface - based _ discrepancy measure a measure that considers only the boundaries between cells instead of a volume - based one .if an approximating tessellation can accurately reproduce these interfaces , it will be a good approximation of the data .the primary advantage of interface - based discrepancy measures is that they can be calculated from the generating points of the approximating tessellation without the need to generate the tessellation itself . in order to define such a discrepancy measure, we consider the sets of voxels that separate adjacent cells .we define the interface between two grains , and , in the empirical data by where denotes the 26-neighborhood of i.e. 
, the voxels that have a distance less than or equal to from .this set contains all voxels that touch both grains .note that if the grains are not adjacent . if the tessellation generated by is a good approximation to the empirical data , then the plane separating the generating points of adjacent cells and ( which defines the boundary of the two cells ) should be close to .thus , we measure the distance between and , the plane separating cells and in the approximating tessellation .the discrepancy between the approximating tessellation and the empirical data is then given by the sum of squares of these distances .that is , we define where is given in and is the set of separating planes determined by the generating points .note that this discrepancy measure can be calculated without generating the approximating tessellation .a separating plane can be computed for all pairs of generators ( or generator candidates , when performing the optimization ) even if their cells are empty or not adjacent in the laguerre tessellation .therefore , such ` degenerate ' cases are not a problem for our approach . by matching all separating planes to the corresponding test points at once, laguerre cells computed based on ` good ' configurations of generators will match the desired cell configuration well enough .when aiming to minimize a volume - based discrepancy , the lap reduces to the optimization problem of finding there are a number of significant difficulties which must be overcome in solving . in particular , a. the optimization problem is high - dimensional , b. the optimization problem has many local minima , c. the discrepancy is expensive to evaluate .the approach developed in avoids problems ( ii ) and ( iii ) by finding the exact solution of a linear program ( an optimization problem where a linear cost function is minimized subject to linear constraints ) .however , when the lap is transformed into a linear program , the size of the resulting problem grows very quickly in both the number of voxels and the number of grains .this means that the linear programming approach can not be applied to large 3d data sets containing many grains . in ,gradient - descent methods are used to obtain laguerre approximations ( with a relatively low number of evaluations of the discrepancy ) .unlike the linear programming approach , the dimensionality of the optimization problem grows only linearly in the number of grains ( and does not depend on the number of voxels ) .thus , avoids problem ( iii ) and , to a lesser extent , problem ( i ) .however , the gradient - descent methods used there become stuck in local minima .thus , the approach considered in is reliant on the ability to find initial conditions that are close to global optima . although it is often possible to find good initial conditions e.g. , when fitting tessellations to foams with regular structures this is not always the case , as will be seen in section [ sec : results ] .in addition , even when good initial conditions can be found , the quality of the approximating tessellation can usually be significantly improved when the optimization algorithm is able to escape local minima .stochastic optimization methods are widely used tools for solving high - dimensional optimization problems with many local minima ; see for an overview .such methods have been used to solve problems related to the lap . 
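to make the interface - based discrepancy concrete , the sketch below ( illustrative only , with made - up generators and test points ) computes the separating ( radical ) plane of two weighted generators and the sum of squared distances of interface points to it ; this is the quantity that the optimization has to drive down over all pairs of adjacent grains .

import numpy as np

def separating_plane(ci, ri, cj, rj):
    # radical plane of the spheres (ci, ri) and (cj, rj): the points x with
    # ||x - ci||^2 - ri^2 = ||x - cj||^2 - rj^2, returned as (unit normal n, offset d)
    diff = cj - ci
    norm = np.linalg.norm(diff)
    n = diff / norm
    d = (cj @ cj - ci @ ci + ri ** 2 - rj ** 2) / (2.0 * norm)
    return n, d

def interface_discrepancy(test_points, ci, ri, cj, rj):
    # sum of squared distances of the interface test points to the separating plane
    n, d = separating_plane(ci, ri, cj, rj)
    return float(np.sum((test_points @ n - d) ** 2))

# made-up example: two weighted generators and a few points near their interface
ci, ri = np.array([0.0, 0.0, 0.0]), 0.2
cj, rj = np.array([1.0, 0.0, 0.0]), 0.3
pts = np.array([[0.47, 0.1, 0.0], [0.48, -0.2, 0.1], [0.46, 0.0, -0.1]])
print(interface_discrepancy(pts, ci, ri, cj, rj))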
in ,the ce method is used to find a solution to the lip and , in , simulated annealing is used to fit 3d laguerre tessellations to 2d image data .however , no stochastic approach has yet been proposed in order to solve the lap for large , noisy 3d data sets . in our approach to solving the lap for large tomographic data sets , we use the ce method to minimize an interface - based discrepancy measure .the ce method has been widely applied to high - dimensional multi - extremal optimization problems ; see , .it has a number of advantages over other stochastic optimization methods .for example , simulated annealing ( described in ) can not be easily applied to the lap , as it is very difficult to find an appropriate cooling schedule .in addition , the ce method can be easily parallelized . by using the ce method to minimize an interface - based discrepancy measure, we have a method that can escape local minima and solves a problem whose dimensionality grows linearly in the number of grains .although the discrepancy is expensive to evaluate , we reduce this cost significantly by minimizing an interface - based measure .in addition , we further reduce the cost of calculating the discrepancy by replacing defined in with an approximation , . the interface - based discrepancy , , measures the distance between voxels on the boundaries between cells in the empirical data and the planes separating the generating points in the approximating tessellation . in order to calculate this discrepancy , every voxel in each grain boundaryis considered .although this is much faster to calculate than a volume - based discrepancy , it is still computationally expensive .however , very good approximations of can be obtained by instead considering sets of _ test points _ that describe the interfaces between the grains .we find that test points obtained by fitting approximating planes to the interfaces , using the orthogonal regression approach introduced in , work very well . in order to calculate the test points , we consider only the points separating the two cells being considered ( i.e. , we ignore points of contact between three or more grains ) .thus , instead of , we consider this avoids some numerical issues that could arise when using orthogonal regression .we then approximate the boundary between the and grain using the plane that minimizes the total squared distance to all points in . determining this plane is a least - squares problem , which we solve using singular value decomposition .the approximating plane passes through the centroid , , of the set .we obtain a normal vector , , to the plane by taking the right - singular vector corresponding to the smallest singular value of the matrix containing the voxel coordinates in shifted by . for more details on the singular value decomposition approach ,see .the test points , are chosen to lie on this approximating plane . more precisely , we put a circle with radius around the centroid with the same orientation as the plane .we then place test points equidistantly on this circle .an illustration is given in figure [ fig : test_points_on_plane ] . in cases where the estimation of the plane is not ` stable ' ( i.e., the number of voxels in is too low to determine the location and orientation ) , we simply use the centroid for all 10 test points . the criterion for ` stable ' is simple : the smallest singular value must be smaller than half the second largest one . 
this way we can be confident that the third and shortest axis ( which is perpendicular to the plane )is correctly identified .given the test points for the boundary between adjacent grains and , we define the approximate discrepancy at the boundary by where is given in and is the plane separating the generating points and .the total approximate discrepancy is then given by note that , here , we normalize the discrepancy .this is done in order to make the cost function easier to interpret : it can be thought of as the average squared distance of test points from their corresponding separating plane .the cross - entropy ( ce ) method is a stochastic optimization method that is able to solve many difficult optimization problems , including combinatorial optimization problems and continuous optimization problems with many local minima ; see .the fundamental idea of the method is to describe the location of the global minimum of an -dimensional cost function , , in terms of a degenerate -dimensional probability distribution .that is , a random variable with this distribution takes only a single value , namely .if there is more than one global minimum , each minimum will have a corresponding degenerate distribution .the ce method works by ` learning ' one of these distributions . in the continuoussetting , this is done as follows . a parametric density , , is used to describe the possible locations of a global minimum .a sample of size is drawn from this density and ordered by the corresponding values of . the ( where is the ceiling function ) members of the sample with the lowest values of identified as the ` elite ' sample , where .the elite sample is then used to update the parameter vector , .the updating is done by choosing to minimize the cross - entropy distance ( also called the kullback - leibler divergence ) between and the targeted degenerate probability distribution .the process is continued until a stopping condition is met , e.g. , the probability distribution is nearly degenerate .thus , the ce method uses information about good choices of arguments to find better choices . the ce method and its convergence propertiesare discussed extensively in .usually , the parametric density , , is chosen to be a product of normal densities .that is , we choose where is the density of a normal distribution with mean and standard deviation and .thus , each component of the argument of is drawn independently from a normal distribution .there are two main reasons for using normal distributions .first , as the standard deviation of a normal distribution goes to zero , the distribution converges to a degenerate distribution .the second reason is that the parameter vector , , which minimizes the cross - entropy distance , is simply given by the maximum - likelihood estimates obtained from the elite sample . in other words, we update the parameters by setting the means equal to the sample means of the elite sample and the standard deviations equal to the sample standard deviations of the elite sample . using this approach, we have the following general algorithm for estimating . 1 . * initialization . *identify an initial parameter vector , and set .2 . * sampling . *draw an independent sample from and sort it so that .we denote the component of by .* updating . *calculate the sample means and standard deviations of the elite sample .that is , calculate and set and .set .* iteration . 
*if a predetermined stopping condition is met , terminate .otherwise , set and repeat from step .the parameters of the ce algorithm help to improve the speed of convergence and quality of solutions .for example , a good choice of improves the convergence properties of the algorithm .the means , , should be chosen as close as possible to the optimal solution .the initial standard deviations , , should be chosen large enough that the algorithm is able to escape local minima but not so large that good configurations are quickly abandoned .ideally , the size of the sample in each step , , should be quite large . however , there are often memory and performance constraints that limit this size .the parameter , which controls the size of the elite sample , should be chosen large enough that a representative sample of good solutions are included in the elite sample . however ,if is chosen too large , the algorithm will take too long to converge .in general , the bigger is , the smaller should be .a standard stopping condition for the algorithm is that it terminates when the cost function does not decrease significantly for a given number of steps .when using the ce approach outlined above , the standard deviations of the normal densities may shrink too quickly .in this case , the ce algorithm can converge to a sub - optimal solution . in order to guard against this, we use variance injection .the basic idea is to occasionally increase the variances of the distributions so that the algorithm can easily escape local minima .usually , variance is injected when the cost function does not decrease significantly enough over a given number of iterations .the magnitude of this increase can depend on the current value of the cost function .if variance injection is used a number of times without a significant decrease in the cost function , then the algorithm is terminated .an alternative to variance injection is to use smoothing when updating .the updating step for the means at the iteration is then of the form and the updating step for the standard deviations is given by where ] ) .it is also possible to carry out dynamic smoothing , with both and taken to be functions of the number of steps . in our experience, variance injection is more effective than smoothing . for a detailed discussion of both variance injection and dynamic smoothing ,see .we solve the lap by minimizing the approximate discrepancy described in section [ sec : approx ] .that is , we solve where is given in . putting this in the terminology of section [ sec : ce_description ] , we find the arguments that minimize the cost function , namely the generating points , each of which is described by three coordinates and an associated radius .we associate a normal distribution with each of the values that need to be determined .thus , for , the coordinates and radius of the generating point , , are each described by a normal density .we denote the initial means and standard deviations by and . in order to ensure that the radii are positive , we truncate the corresponding normal densities to the positive real line ( this is possible without changing the updating rules ,see , e.g. 
, ) .we apply variance injection when the cost function does not decrease significantly over a period of iterations .more precisely , at each iteration of the algorithm , we record , the minimal value of the approximate discrepancies calculated from the sample in that step .if , at the step , we perform variance injection .because many generating points may already be close to their optimal positions and sizes , variance injection is carried out locally with a magnitude controlled by a parameter .this is done by calculating the local cost of each cell and increasing the variance of the associated densities accordingly .we calculate the average cost of the cell by the local cost of a cell , , is defined to be the maximum of its own average cost and the average cost of its adjacent cells .that is , the variance injection is performed by setting for .if , at any stage , the benefit of variance injection becomes negligible , we stop performing it . more precisely , if the current minimum cost , divided by the minimum cost immediately prior to the last variance injection is larger than , we no longer carry out variance injection .the algorithm is terminated when where .reasonably good initial tessellations can be obtained directly from the tomographic data .the centroids and equivalent radii of the grains in the tomographic data are used for the coordinates and radii of the generating points .the centroids , , are given by for .the equivalent radii are given by {\frac{3}{4\pi } \# r_i}\ ] ] for .the initial means of the densities are then given by , , and for .the initial standard deviations are chosen proportional to the local cost of the cells in this approximation .the local costs are calculated as in section [ sec : vi_details ] .thus , for .our investigations showed that a parameter choice of , , , , , and works well for a large number of different data sets .the elite set then consists of samples , which results in a sufficiently large sample of good solutions .note that other parameter values may improve the speed of convergence ( or , in the case of the parameters controlling variance injection and termination , improve the quality of the approximation ) . however , the basic performance of the ce algorithm is relatively robust to parameter choice .that is , the convergence behavior and quality of approximations are good for a wide range of parameters .the approximations obtained using the ce algorithm have unbounded cells at the edge of the observation window .this is because the corresponding grains in the tomographic data are not delimited by other grains .the unbounded cells are then intersected with the observation window , .note that , in most cases , edge grains are of little scientific interest and are not considered when analyzing the data . in practice , edge grains may be only partially observed or may not be representative ( e.g. , when stuying the dynamics of grains undergoing grain coarsening ) . 
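combining the sampling , elite - selection and updating steps above with the centroid / equivalent - radius initialization , a stripped - down version of the fitting loop can be sketched as follows ; the cost function , the handling of the positivity constraint on the radii and the parameter values are simplified placeholders , not the implementation used in the paper .

import numpy as np

def ce_fit(cost, mu0, sigma0, n_sample=200, rho=0.02, n_iter=300, seed=0):
    # cross-entropy minimisation with independent normal sampling densities:
    # mu0, sigma0 hold one mean / std per coordinate and per radius of the generators
    rng = np.random.default_rng(seed)
    mu, sigma = mu0.copy(), sigma0.copy()
    n_elite = max(2, int(np.ceil(rho * n_sample)))
    best, best_cost = mu.copy(), np.inf
    for _ in range(n_iter):
        sample = rng.normal(mu, sigma, size=(n_sample,) + mu.shape)
        sample[..., 3] = np.abs(sample[..., 3])      # keep radii positive (crude stand-in for the truncated normal)
        costs = np.array([cost(s) for s in sample])
        elite = sample[np.argsort(costs)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
        if costs.min() < best_cost:
            best_cost, best = float(costs.min()), sample[np.argmin(costs)]
    return best, best_cost

# initial means from grain centroids and volume-equivalent radii (toy data here)
rng = np.random.default_rng(1)
centroids = rng.uniform(0, 100, size=(20, 3))
volumes = rng.uniform(200, 2000, size=20)            # grain volumes in voxels
radii0 = (3.0 * volumes / (4.0 * np.pi)) ** (1.0 / 3.0)
mu0 = np.column_stack([centroids, radii0])           # one row (x, y, z, r) per grain
sigma0 = np.full_like(mu0, 2.0)

def dummy_cost(gen):
    # stand-in for the approximate interface discrepancy described above
    return float(np.sum((gen - mu0) ** 2))

fit, val = ce_fit(dummy_cost, mu0, sigma0, n_sample=100, n_iter=50)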
in this paper , we explicitly ignore grains and their approximating cells if they lie at the edge of the observation window .in this section , we present the results of a number of numerical experiments which we carried out in order to demonstrate the effectiveness of the ce approach .we consider three distinct data sets .two of these data sets are produced using stochastic models .these ` artificial ' data sets allow us to investigate how well our method is able to reconstruct tessellations when we are certain the underlying tessellation is , indeed , laguerre .the first artificial data set is produced using a stochastic model that describes a polycrystalline material undergoing grain coarsening .the second artificial data set is , by design , much more pathological .it exhibits large variation in the size and shape of its grains , as well as the number of neighbors each grain has .this data set allows us to explore the effectiveness of our approach in an extreme setting .in particular , it provides an example of a setting in which the standard choice of initial conditions is far from optimal .finally , we consider an empirical data set : tomographic data obtained from a sample of al-5 wt% cu .we demonstrate that our method is able to produce an excellent approximation to this data set using a laguerre tessellation .the artificial data sets are produced using two distinct stochastic models .the polycrystalline model ( pcm ) , developed in , describes a polycrystalline material undergoing grain coarsening .the second data set is produced using a randomly marked poisson process ; see .the resulting tessellation is known as a poisson - laguerre tessellation ( plt ) ; see . in the first case, there is a strong correlation between the relative positions of the generating points and their weights . in the second example , the weights are independent of the positions of the generating points . as a result , the second data set is much less regular than the first data set .figure [ fig : artificial : cross - sections ] shows 2d cross sections of the data sets , together with their generating points . in the pcm model , a random laguerre tessellation is produced using a set of spheres that may overlap slightly .the centers of the spheres give the locations of the generating points and the radii define the weights .the spheres have hard - cores which may not overlap .this was shown to be a suitable model for polycrystalline materials in . 
both the density of the sphere packing and the radii of the spheres influence the sizes and shapes of the cells in the resulting tessellation .in particular , when highly dense packings are used , very `` spherical '' cells are generated with a narrow coordination number distribution .we generated a sample from this model in a bounded window , ^ 3 ] with intensity ( so that the expected number of points is roughly 2500 ) , where we used parameters and for the mark distribution .the realizations of the pcm and plt models were both discretized on a voxel grid , resulting in two images .these images are of the form described in section [ sec : data ] .the ce method was then used to solve the lap .the results are illustrated in figure [ fig : artificial : cross - sections - overlay ] , where 2d cross - sections of the original tessellations are shown with the approximating tessellations superimposed .using multi - threading on a standard quad - core processor ( intel core i5 - 3570k ) , the computing time was roughly 3 hours for the pcm data and 4 hours for the plt data . for smaller test sets , with 500 grains each ,the time required was roughly 20 to 30 minutes .the memory requirements when fitting the full data sets were minimal ( especially in comparison to approaches where it is necessary to store the complete voxelized data ) . basically , it is necessary to store the test points and all the generators from one iteration of the ce method .the cost of storing additional variables , such as the means and standard deviations of the densities describing the generators and the cost values of the test points are negligible . for the pcm data , the number of test points used was approximately 140000 .approximately 160000 points were used for the plt data .together with generators in one iteration , this sums up to less than 200 mb of ram when using 32-bit floating point coordinates .figure [ fig : artificial : convergence ] illustrates the convergence behavior of the ce algorithm .when approximating the pcm data , roughly 650 iterations of the algorithm were required . in the plt case ,about 900 iterations were required .it is not surprising that the ce algorithm took longer to terminate in the plt case . as will be seen in section [ sec : results : artificial : evaluation ] the initial conditions in this setting are much further from the optimal solution . in order to evaluate the quality of our approximations, we compare them to approximations obtained using the heuristic approach presented in and the orthogonal regression method proposed in .note that , although the orthogonal regression approach is able to achieve quite good approximations of the cells , it does not result in a parametrized tessellation .table [ tab : artificial : evaluation ] shows an evaluation of the approximations with respect to a volume - based discrepancy measure : the number of voxels that are correctly labeled . our method results in an almost perfect approximation of the pcm data .the heuristic approach considered in also works very well .however , it is not able to reproduce the neighbor structure as successfully as our approach , as evidenced by rows 2 through 5 of table [ tab : artificial : evaluation ] .our method is slightly less successful at approximating the plt data but , in this case , considerably outperforms the heuristics of .[ tab : artificial : evaluation ] the parameters used by the heuristic approach of are very close to the initial conditions of the ce algorithm . 
thus , from the difference in performance , it seems that the initial conditions are quite far from the optimal parameters . in order to find better configurations ,the ce algorithm has to be able to escape local minima around these initial conditions .this is made clear by the fact that increasing the number of times variance injection is used ( as well as the number of iterations ) , we are able to further improve the results for the ce algorithm .namely , by manually increasing the number of variance injections to 20 ( instead of the 8 used in the standard approach ) , we obtained an approximation which correctly labeled 97.3% of the voxels ( instead of 96.2% ) .this significant improvement indicates that local minima are present in which the ce algorithm is becoming trapped ( because variance injection works by helping the algorithm escape local minima ) .this , in turn , implies that there are local minima near to the initial conditions of the ce algorithm that are sufficiently deep to trap it .a direct implication of these results is that algorithms that converge to nearby local minima ( such as gradient - descent ) may not lead to good approximations when using a standard choice of initial conditions .furthermore , it is not clear that it is always straightforward to find good initial conditions .for example , the initial generating points are usually chosen to lie inside their cells .but , using the methods presented in and the exact description of the plt tessellation , it was not possible to find a solution to the lip subject to the restriction that generating points were contained in their cells .if a method such as gradient - descent starts with all of the points lying inside their cells , it is unclear that it will be able to obtain a solution where some generating points lie far outside their cells .figure [ fig : artificial : cell - volumes ] shows scatter plots of the original cell sizes vs. the cell sizes in the ce approximations .the quality of the ce approximation of the pcm data is immediately apparent .it is also clear that the difficulties in fitting the plt data lie primarily in the small cells .the experimental data we consider was used in to develop the pcm model discussed in section [ sec : results : artificial ] .an al-5 wt% cu sample ( cylindrical with mm length and mm diameter ) was heated to the semisolid state at a temperature of , at which point coarsening processes were measured _ in situ _ using synchrotron x - ray tomographic imaging . 
for the present paper ,we consider only data obtained after an annealing time of 200 minutes ( which corresponds to the first annealing step in ) .it consists of approximately 2500 grains , which is quite a large data set .a 2d cross - section of the data is shown in figure [ fig : experimental : cross - section ] .the experimental data set was segmented using the watershed transformation , resulting in a labeled image of grain regions , which are separated by a watershed layer of one - voxel thickness .further details regarding sample preparation , imaging and segmentation can be found in .the ce method was applied to solve the lap for the experimental data set .a cross - section of the resulting laguerre approximation is given in figure [ fig : experimental : cross - section - overlay ] .0.5pt fitting the approximation took approximately 70 minutes on the same intel core i5 - 3570k quad - core processor used to fit the artificial data .the number of test points extracted from the image data was roughly 150000 , which is almost the same as for the artificial data sets .the convergence behavior of the ce method is illustrated in figure [ fig : experimental : convergence ] .using the same parameters , only 283 iterations were necessary , which is substantially fewer than the number required for the artificial data .this is mainly due to the fact that only one variance injection was required . as with the artificial data ,we compare our results with the heuristic approach proposed in and the orthogonal regression approach introduced in .as mentioned above , the orthogonal regression approach reconstructs the individual cells quite well but does not result in a tessellation . the results are summarized in table [ tab : experimental : evaluation ] .the ce method correctly assigns % of the voxels .in contrast , the heuristic approach correctly assigns only % of the voxels .the orthogonal regression yields % .[ tab : experimental : evaluation ] note that the ce method significantly outperforms both the heuristic method and orthogonal regression when describing the neighborhood structure of the grains , although it does not seem possible for a tessellation containing only convex cells to accurately capture the full neighborhood structure .this is at least partially due to segmentation issues and contacts in the image data with very small areas .in contrast , the orthogonal regression approach performs surprisingly badly .it seems that the geometric properties of normal laguerre tessellations ( e.g. 
, coinciding faces , edges and vertices ) favor realistic reconstructions .we think this confirms that the laguerre tessellation is a good choice for representing polycrystalline microstructures .it is possible , of course , that the quality of the fit could be improved using non - convex cells .for example , in , the linear programming method was used to obtain a generalized power diagram approximation of similar data ( but with far fewer grains and voxels ) resulting in a fit that correctly assigned % of the voxels with non - convex cells .figure [ fig : experimental : cell - characteristics](a ) shows a scatter plot of the volumes of the original grains against the volumes of the laguerre cells ( with the volumes expressed as radii of volume - equivalent spheres ) .the overall fit of the cell volumes is excellent .figure [ fig : experimental : cell - characteristics](b ) shows that the locations of the grains are also quite accurate .note that the main issue with fitting seems to be small cells .this is not such a problem , however , as small cells ( with equivalent radii of up to 10 voxels ) are subject to image segmentation error and are , thus , not too reliable in the original data .in this paper , we considered the problem of approximating tomographic data by a laguerre tessellation .we expressed this problem as an optimization problem : the generating points of the approximating tessellation need to be chosen in order to minimize the discrepancy between the tessellation and the tomographic data .we considered an interface - based discrepancy measure , instead of the volume - based discrepancies more commonly considered in the literature .this allowed us to use the ce method , a stochastic optimization method that is able to escape local minima , to fit the approximating tessellation .we then carried out numerical experiments on both artificially generated and experimentally obtained tomographic data that demonstrated the broad effectiveness of our approach .our method is robust and easy to implement .thus it can be applied to fit laguerre tessellations to almost any material with a granular or cellular structure .an obvious next step is to extend our approach to tessellations that include non - convex cells . in particular , we believe the approach and philosophy outlined in this paper can be extended to fit generalized power diagrams ( see ) generalizations of laguerre tessellations that can include non - convex cells. this will be the subject of a forthcoming research paper .this work was partially supported by the australian research council under grant dp140101956 and by the deutsche forschungsgemeinschaft under grant kr 1658/4 - 1 .a software package including java code and data sets can be downloaded from https://github.com/stochastics-ulm-university/laguerre-approximation .n. limodin , l. salvo , m. sury , and m. dimichiel . in situ investigation by x - ray tomography of the overall and local microstructural changes occurring during partial remelting of an al15.8wt.% cu alloy . , 55:31773191 , 2007 .w. ludwig , a. king , p. reischig , m. herbig , e.m .lauridsen , s. schmidt , h. proudhon , s. forest , p. cloetens , s. rolland du roscoat , j.y .buffire , t.j .marrow , and h.f .poulsen . ., 524(12):6976 , 2009 . c. lautensack , h. ewe , p. klein , and t. sych .in j. hirsch , b. skrotski , and g. gottstein , editors , _ aluminium alloys their physical and mechanical properties _ , pages 13681374 , weinheim , 2008 .wiley - vch .d. westhoff , j. j. van franeker , t. 
brereton , d. p. kroese , r. a. j. janssen , and v. schmidt .stochastic modeling and predictive simulations for the microstructure of organic semiconductor films processed with different spin coating velocities ., 23(4):045003 , 2015 .botev , d.p .kroese , r.y .rubinstein , and p. lecuyer .the cross - entropy method for optimization . in v.govindaraju and c.r .rao , editors , _ machine learning : theory and applications _ , pages 3559 .north - holland , oxford , 2013 .
the analysis of polycrystalline materials benefits greatly from accurate quantitative descriptions of their grain structures . laguerre tessellations approximate such grain structures very well . however , it is a quite challenging problem to fit a laguerre tessellation to tomographic data , as a high - dimensional optimization problem with many local minima must be solved . in this paper , we formulate a version of this optimization problem that can be solved quickly using the cross - entropy method , a robust stochastic optimization technique that can avoid becoming trapped in local minima . we demonstrate the effectiveness of our approach by applying it to both artificially generated and experimentally produced tomographic data . image processing ; microstructural characterization ; grain boundary structure ; polycrystalline ; power diagram ; inverse problem ; cross - entropy method
when complex systems join to form even more complex systems , the interaction of the constituent subsystems is highly random .the complex stochastic interactions among these subsystems are commonly quantified by calculating the cross - correlations . this method has been applied in systems ranging from nanodevices , atmospheric geophysics , and seismology , to finance . herewe propose a method of estimating the most significant component in explaining long - range cross - correlations . studying cross - correlations in these diverse physical systemsprovides insight into the dynamics of natural systems and enables us to base our prediction of future outcomes on current information . in finance ,we base our risk estimate on cross - correlation matrices derived from asset and investment portfolios . in seismology ,cross - correlation levels are used to predict earthquake probability and intensity . in nanodevices used in quantum information processing ,electronic entanglement necessitates the computation of noise cross - correlations in order to determine whether the sign of the signal will be reversed when compared to standard devices .reference reports that cross - correlations for calculated between pairs of eeg time series are inversely related to dissociative symptoms ( psychometric measures ) in 58 patients with paranoid schizophrenia . in genomics data , reports spatial cross - correlations corresponding to a chromosomal distance of million base pairs . in physiology , reports a statistically significant difference between alcoholic and control subjects .many methods have been used to investigate cross - correlations ( i ) between pairs of simultaneously recorded time series or ( ii ) among a large number of simultaneously - recorded time series .reference uses a power mapping of the elements in the correlation matrix that suppresses noise .reference proposes detrended cross - correlation analysis ( dcca ) , which is an extension of detrended fluctuation analysis ( dfa ) and is based on detrended covariance .reference proposes a method for estimating the cross - correlation function of long - range correlated series and . for fractional brownian motions with hurst exponents and , the asymptotic expression for scales as a power of with exponents and .univariate ( single ) financial time series modeling has long been a popular technique in science .to model the auto - correlation of univariate time series , traditional time series models such as autoregressive moving average ( arma ) models have been proposed .the arma model assumes variances are constant with time .however , empirical studies accomplished on financial time series commonly show that variances change with time . to model time - varying variance , the autoregressive conditional heteroskedasticity ( arch ) modelwas proposed . since then , many extensions of arch has been proposed , including the generalized autoregressive conditional heteroskedasticity ( garch ) model and the fractionally - integrated autoregressive conditional heteroskedasticity ( fiarch ) model . in these models ,long - range auto - correlations in magnitudes exist , so a large price change at one observation is expected to be followed by a large price change at the next observation .long - range auto - correlations in magnitude of signals have been reported in finance , physiology , river flow data , and weather data . 
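as a reminder of how the heteroskedastic models mentioned above produce long - range correlated magnitudes , the following short numpy simulation of a garch(1,1) process ( with illustrative parameter values ) shows that the simulated returns are essentially uncorrelated while their magnitudes remain positively autocorrelated .

import numpy as np

rng = np.random.default_rng(0)
T, omega, alpha, beta = 20000, 1e-5, 0.08, 0.90      # illustrative garch(1,1) parameters
r = np.zeros(T)
sigma2 = np.full(T, omega / (1.0 - alpha - beta))    # start at the unconditional variance
for t in range(1, T):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

def autocorr(x, lag):
    # simple lag-`lag` autocorrelation estimate
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

mag = np.abs(r - r.mean())
print(autocorr(r, 10), autocorr(mag, 10))            # near zero vs. clearly positive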
besides univariate timeseries models , modeling correlations in multiple time series has been an important objective because of its practical importance in finance , especially in portfolio selection and risk management . in order to capture potential cross - correlations among different time series , models for coupled heteroskedastic time serieshave been introduced . however , in practice , when those models are employed , the number of parameters to be estimated can be quite large .a number of researchers have applied multiple time series analysis to world indices , mainly in order to analyze zero time - lag cross - correlations .reference reported that for international stock return of nine highly - developed economies , the cross - correlations between each pair of stock returns fluctuate strongly with time , and increase in periods of high market volatility . by volatilitywe mean time - dependent standard deviation of return .the finding that there is a link between zero time lag cross - correlations and market volatility is `` bad news '' for global money managers who typically reduce their risk by diversifying stocks throughout the world . in order to determine whether financial crises are short - lived or long - lived , ref . recently reported that , for six latin american markets , the effects of a financial crisis are short - range . between two and four months after each crisis ,each latin american market returns to a low - volatility regime . in order to determine whether financial crisis are short - term or long - term at the world level , we study 48 world indices , one for each of 48 different countries . we analyze cross - correlations among returns and magnitudes , for zero and non - zero time lags .we find that cross - correlations between magnitudes last substantially longer than between the returns , similar to the properties of auto - correlations in stock market returns .we propose a general method in order to extract the most important factors controlling cross - correlations in time series . based on random matrix theory and principal component analysis propose how to estimate the global factor and the most significant principal components in explaining the cross - correlations .this new method has a potential to be broadly applied in diverse phenomena where time series are measured , ranging from seismology to atmospheric geophysics .this paper is organized as follows . in sectionii we introduce the data analyzed , and the definition of return and magnitude of return . in section iiiwe introduce a new modified time lag random matrix theory ( tlrmt ) to show the time - lag cross - correlations between the returns and magnitudes of world indices .empirical results show that the cross - correlations between magnitudes decays slower than that between returns . in section ivwe introduce a single global factor model to explain the short- or long - range correlations among returns or magnitudes .the model relates the time - lag cross - correlations among individual indices with the auto - correlation function of the global factor . in sectionv we estimate the global factor by minimizing the variance of residuals using principal component analysis ( pca ) , and we show that the global factor does in fact account for a large percentage of the total variance using rmt . 
in sectionvi we show the applications of the global factor model , including risk forecasting of world economy , and finding countries who have most the independent economies .in order to estimate the level of relationship between individual stock markets either long - range or short - range cross - correlations exist at the world level we analyze world - wide financial indices , , where denotes the financial index and denotes the time .we analyze one index for each of 48 different countries : 25 european indices , 15 asian indices ( including australia and new zealand ) , 2 american indices , and 4 african indices . in studying 48 economies that include both developed and developing markets we significantly extend previous studies in which only developed economies were included e.g . , the seven economies analyzed in refs . , and the 17 countries studied in ref .we use daily stock - index data taken from _ bloomberg _ , as opposed to weekly or monthly data .the data cover the period 4 jan 1999 through 10 july 2009 , 2745 trading days . for each index , we define the relative index change ( return ) as where denotes the time , in the unit of one day . by magnitude of returnwe denote the absolute value of return after removing the mean order to quantify the cross - correlations , random matrix theory ( rmt ) ( see refs . and references therein ) was proposed in order to analyze collective phenomena in nuclear physics . extended rmt to cross - correlation matrices in order to find cross - correlations in collective behavior of financial time series .the largest eigenvalue and smallest eigenvalue of the wishart matrix ( a correlation matrix of uncorrelated time series with finite length ) are where , and is the matrix dimension and the length of each time series .the larger the discrepancy between ( a ) the correlation matrix between empirical time series and ( b ) the wishart matrix obtained between uncorrelated time series , the stronger are the cross - correlations in empirical data .many rmt studies reported equal - time ( zero ) cross - correlations between different empirical time series . recentlytime - lag generalizations of rmt have been proposed . in one of the generalizations of rmt , based on the eigenvalue spectrum called time - lag rmt ( tlrmt ) , ref . found long - range cross - correlations in time series of price fluctuations in absolute values of 1340 members of the new york stock exchange composite , in both healthy and pathological physiological time series , and in the mouse genome .we compute for varying time lags the largest singular values of the cross - correlation matrix of n - variable time series we also compute of a similar matrix , where are replaced by the magnitudes .the squares of the non - zero singular values of are equal to the non - zero eigenvalues of or , where by we denote the transpose of . 
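The random-matrix benchmark can be made concrete with a short numerical check. The sketch below is my own illustration, not code from this paper: it builds a correlation matrix from N uncorrelated Gaussian series of length L and compares its eigenvalue range with the Wishart bounds (1 ± sqrt(N/L))²; the choice N = 48, L = 2745 simply mirrors the data set described above. Eigenvalues of an empirical correlation matrix lying well outside these bounds signal genuine cross-correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 48, 2745
returns = rng.standard_normal((N, L))       # uncorrelated surrogate "returns"

# Standardize each series and form the N x N correlation matrix C = Z Z^T / L.
Z = (returns - returns.mean(axis=1, keepdims=True)) / returns.std(axis=1, keepdims=True)
C = Z @ Z.T / L

eigvals = np.linalg.eigvalsh(C)
q = N / L
lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

print(f"empirical eigenvalue range: [{eigvals.min():.3f}, {eigvals.max():.3f}]")
print(f"Wishart bounds            : [{lam_minus:.3f}, {lam_plus:.3f}]")
```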
in a singular value decomposition ( svd ) the diagonal elements of are equal to singular values of , where the and correspond to the left and right singular vectors of the corresponding singular values .we apply svd to the correlation matrix for each time lag and calculate the singular values , and the dependence of the largest singular value on serves to estimate the functional dependence of the collective behavior of on .we make two modifications of correlation matrices in order to better describe correlations for both zero and non - zero time lags .* the first modification is a correction for correlation between indices that are not frequently traded .since different countries have different holidays , all indices contain a large number of zeros in their returns .these zeros lead us to underestimate the magnitude of the correlations . to correct for this problem ,we define a modified cross - correlation between those time series with extraneous zeros , here is the time period during which both and are non - zero . with this definition , the time periods during which or exhibit zero values have been removed from the calculation of cross - correlations .the relationship between and is * the second modification corrects for auto - correlations .the main diagonal elements in the correlation matrix are ones for zero - lag correlation matrices and auto - correlations for non - zero lag correlation matrices .thus , time - lag correlation matrices allow us to study both auto - correlations and time - lag cross - correlations .if we study the decay of the largest singular value , we see a long - range decay pattern if there are long - range auto - correlations for some indices but no cross - correlation between indices . to remove the influence of auto - correlations and isolate time - lag cross - correlations ,we replace the main diagonals by unity , with this definition the influence of auto - correlations is removed , and the trace is kept the same as the zero time - lag correlation matrix . in fig .1(a ) we show the distribution of cross - correlations between zero and non - zero lags . for the empirical pdf of the cross - correlation coefficients substantially deviates from the corresponding pdf of a wishart matrix , implying the existence of equal - time cross - correlations . in order to determine whether short - range or long - range cross - correlations accurately characterize world financial markets , we next analyze cross - correlations for . we find that with increasing the form of quickly approaches the pdf , which is normally distributed with zero mean and standard deviation . in fig .1(b ) we also show the distribution of cross - correlations between _ magnitudes_. in financial data , returns are generally uncorrelated or short - range auto - correlated , whereas the magnitudes are generally long - range auto - correlated .we thus examine the cross - correlations between for different . in fig .1(b ) we find that with increasing , approaches the pdf of random matrix more slowly than , implying that cross - correlations between index magnitudes persist longer than cross - correlations between index returns . in order to demonstrate the decay of cross - correlations with time lags , we apply modified tlrmt . 
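The two modifications just described can be summarized in code. The sketch below is an illustrative reading of the procedure rather than the authors' implementation; in particular, the normalization of the lagged coefficients and the handling of the non-zero mask are conventions assumed on my part. It computes each pairwise lagged correlation only over days on which both series are non-zero, replaces the main diagonal by unity, and returns the largest singular value of the resulting matrix.

```python
import numpy as np

def lagged_corr_nonzero(x, y, lag):
    """Correlation between x(t+lag) and y(t), restricted to times at which
    both entries are non-zero (a sketch of the holiday correction)."""
    if lag > 0:
        x, y = x[lag:], y[:-lag]
    mask = (x != 0) & (y != 0)
    x, y = x[mask], y[mask]
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.mean(x * y)

def largest_singular_value(R, lag):
    """Largest singular value of the modified lag correlation matrix of the
    rows of R, with the main diagonal replaced by unity (second modification)."""
    N = R.shape[0]
    C = np.ones((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                C[i, j] = lagged_corr_nonzero(R[i], R[j], lag)
    return np.linalg.svd(C, compute_uv=False)[0]

# Toy usage on synthetic data (purely illustrative).
rng = np.random.default_rng(1)
R = rng.standard_normal((10, 1000))
R[rng.random(R.shape) < 0.05] = 0.0         # sprinkle holiday-like zeros
print([round(largest_singular_value(R, lag), 3) for lag in (0, 1, 5)])
```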
fig .2 shows that with increasing the largest singular value calculated for decays more slowly than the largest singular value calculated for .this result implies that among world indices , the cross - correlations between magnitudes last longer than cross - correlations between returns . in fig .2 we find that vs. decays as a power law function with the scaling exponent equal to 0.25. the faster decay of vs. for implies very weak ( or zero ) cross - correlations among world - index returns for larger , which agrees with the empirical finding that world indices are often uncorrelated in returns .our findings of long - range cross - correlations in magnitudes among the world indices is , besides a finding in ref . , another piece of `` bad news '' for international investment managers .world market risk decays very slowly .once the volatility ( risk ) is transmitted across the world , the risk lasts a long time .the arbitrage pricing theory states that asset returns follow a linear combination of various factors .we find that the factor structure can also model time lag pairwise cross - correlations between the returns and between magnitudes .to simplify the structure , we model the time lag cross - correlations with the assumption that each individual index fluctuates in response to one common process , the `` global factor '' , here in the global factor model ( gfm ) , is the average return for index , is the global factor , and is the linear regression residual , which is independent of , with mean zero and standard deviation . here indicates the covariance between and , .this single factor model is similar to the sharpe market model , but instead of using a known financial index as the global factor , we use factor analysis to find , which we introduce in the next section .we also choose as a zero - mean process , so the expected return , and the global factor is only related with market risk .we define a zero - mean process as a second assumption is that the global factor can account for most of the correlations .therefore we can assume that there are no correlations between the residuals of each index , .then the covariance between and is the covariance between magnitudes of returns depends on the return distribution of and , but the covariance between squared magnitudes indicates the properties of the magnitude cross - correlations .the covariance between and is the above results in eqs .( [ cov1])-([cov2 ] ) show that the variance of the global factor and square of the global factor account for all the zero time lag covariance between returns and squared magnitudes . for timelag covariance between , we find here is the autocovariance of .similarly , we find here is the autocovariance of . 
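A quick simulation illustrates these covariance relations. In the sketch below (illustrative only: the AR(1) choice for the global factor, the beta coefficients, and the noise level are my own), two synthetic indices of the form r_i = beta_i * M + eps_i are generated, and the empirical lagged cross-covariance between them is compared with beta_1 * beta_2 times the autocovariance of M.

```python
import numpy as np

rng = np.random.default_rng(2)
T, phi = 100_000, 0.8
M = np.zeros(T)
for t in range(1, T):
    M[t] = phi * M[t - 1] + rng.standard_normal()   # common AR(1) global factor

beta = np.array([0.7, 1.3])
r = beta[:, None] * M + 0.5 * rng.standard_normal((2, T))

def lagged_cov(a, b, lag):
    return np.cov(a[lag:], b[:T - lag])[0, 1]

# Lagged cross-covariance between the two indices should track
# beta_1 * beta_2 times the autocovariance of the global factor.
for lag in (0, 1, 2, 5):
    empirical = lagged_cov(r[0], r[1], lag)
    predicted = beta[0] * beta[1] * lagged_cov(M, M, lag)
    print(lag, round(empirical, 3), round(predicted, 3))
```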
in gfm ,the time lag covariance between each pair of indices is proportional to the autocovariance of the global factor .for example , if there is short - range autocovariance for and long - range autocovariance for , then for individual indices the cross - covariance between returns will be short - range and the cross - covariance between magnitudes will be long - range .therefore , the properties of time - lag cross - correlation in multiple time series can be modeled with a single time series the global factor .the relationship between time lag covariance among two index returns and autocovariance of the global factor also holds for the relationship between time lag cross - correlations among two index returns and auto - correlation function of the global factor , because it only need to normalize the original time series to mean zero and standard deviation one .in contrast to domestic markets , where for a given country we can choose the stock index as an estimator of the `` global '' factor , when we study world markets the global factor is unobservable . at the world level when we study cross - correlations among world markets , we estimate the global factor using principal component analysis ( pca ) . in this sectionwe use bold font for n dimensional vectors or matrix , and underscore for time series .suppose is the multiple time series , each row of which is an individual time series .we standardize each time series to zero mean and standard deviation 1 as the correlation matrix can be calculated as where is the transpose of , and the in the denominator is the length of each time series .then we diagonalize the correlation matrix here and are the eigenvalues in non - increasing order , is an orthonormal matrix , whose -th column is the basis eigenvector of , and is the transpose of , which is equal to because of orthonormality . for each eigenvalue and the corresponding eigenvector ,it holds according to pca , is defined as the -th principal component ( ) , and the eigenvalue indicates the portion of total variance of contributed to , as shown in eq .( [ pca_value ] ) .since the total variance of is the expression indicates the percentage of the total variance of that can be explained by the .according to pca ( a ) the principal components are uncorrelated with each other and ( b ) maximizes the variance of the linear combination of with the orthonormal restriction given the previous principal components . from the orthonormal property of obtain where * i * is the identity matrix .then the multiple time series can be represented as a linear combination of all the the total variance of all time series can be proved to be equal to the total variance of all principal components next we assume that is much larger than each of the rest of eigenvalues which means that the first , , accounts for most of the total variances of all the time series .we express as the sum of the first part of eq .( [ r2pc ] ) corresponding to and the error term combined from all other terms in eq .( [ r2pc ] ) .thus then is a good approximation of the global factor , because it is a linear combination of that accounts for the largest amount of the variance . is a zero - mean process because it is a linear combination of which are also zero - mean processes ( see eq .( [ z ] ) ) . comparing eqs .( [ z ] ) and ( [ r2pc1 ] ) with we find the following estimates : using eq . 
( [ pca_value ] )we find that in the rest of this work , we apply the method of eq .( [ duan ] ) to empirical data .next we apply the method of eq .( [ duan ] ) to estimate the global factor of 48 world index returns .we calculate the auto - correlations of and , which are shown in figs . 3 and 4 .precisely , for the world indices , fig .3(a ) shows the time series of the global factor , and fig .3(b ) shows the auto - correlations in .we find only short - range auto - correlations because , after an interval , most auto - correlations in fall in the range of , which is the 95% confidence interval for zero auto - correlations , here .for the 48 world index returns , fig .4(a ) shows the time series of magnitudes , with few clusters related to market shocks during which the market becomes fluctuates more .4(b ) shows that , in contrast to , the magnitudes exhibit long - range auto - correlations since the values are significant even after .the auto - correlation properties of the global factor are the same as the auto - correlation properties of the individual indices , i.e. , there are short - range auto - correlations in and long - range power - law auto - correlations in .these results are also in agreement with fig .1(b ) where the largest singular value vs. calculated for decays more slowly than the largest singular value calculated for .as found in ref . for , approximately follows the same decay pattern as cross - correlation functions .although a ljung - box test shows that the return auto - correlation is significant for a 95% confidence level , the return auto - correlation is only 0.132 for and becomes insignificant after .therefore , for simplicity , we only consider magnitude cross - correlations in modeling the global factor .we model the long - range market - factor returns * m * with a particular version of the garch process , the gjr garch process , because this garch version explains well the asymmetry in volatilities found in many world indices .the gjr garch model can be written as where is the volatility and is a random process with a gaussian distribution with standard deviation 1 and mean 0 .the coefficients and are determined by a maximum likelihood estimation ( mle ) and if , if .we expect the parameter to be positive , implying that `` bad news '' ( negative increments ) increases volatility more than `` good news '' .for the sake of simplicity , we follow the usual procedure of setting in all numerical simulations . in this case , the gjr - garch(1,1 ) model for the market factor can be written as we estimate the coefficients in the above equations using mle , where the estimated coefficients are shown in table . 1 .next we test the hypothesis that a significant percentage of the world cross - correlations can be explained by the global factor . by using pcawe find that the global factor can account for 30.75% of the total variance .note that , according to rmt , only the eigenvalues larger than the largest eigenvalue of a wishart matrix calculated by eq .( [ rmt ] ) ( and the corresponding ) are significant . 
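The PCA estimate of the global factor and the significance check against the random-matrix threshold can be sketched as follows. This is my own illustration under the conventions stated above (rows standardized to zero mean and unit variance, correlation matrix Z Zᵀ/L, eigenvalues in non-increasing order, first principal component taken as the global factor); the synthetic data and variable names are not from the paper.

```python
import numpy as np

def estimate_global_factor(R):
    """First principal component of the standardized series in the rows of R,
    used as the estimate of the global factor."""
    Z = (R - R.mean(axis=1, keepdims=True)) / R.std(axis=1, keepdims=True)
    N, L = Z.shape
    C = Z @ Z.T / L                            # N x N correlation matrix
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]          # non-increasing order
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    global_factor = eigvecs[:, 0] @ Z          # first principal component
    return global_factor, eigvals

# Toy data: 48 synthetic "indices" sharing one common component.
rng = np.random.default_rng(3)
common = rng.standard_normal(2745)
R = 0.6 * common + rng.standard_normal((48, 2745))

M_hat, eigvals = estimate_global_factor(R)
N, L = R.shape
lam_plus = (1 + np.sqrt(N / L)) ** 2           # RMT significance threshold
print("variance fraction explained by the global factor:", round(eigvals[0] / N, 3))
print("number of significant eigenvalues               :", int(np.sum(eigvals > lam_plus)))
```

The fraction of the total variance carried by the first component is simply the leading eigenvalue divided by the number of series, which is the quantity quoted above for the empirical data.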
to calculate the percentage of variance the significant account for, we employ the rmt approach proposed in ref .the largest eigenvalue for a wishart matrix is for and we found in the empirical data .from all the 48 eigenvalues , only the first three are significant : , , and .this result implies that among the significant factors , the global factor accounts for of the variance , confirming our hypothesis that the global factor accounts for most variance of all individual index returns .pca is defined to estimate the percentage of variance the global factor can account for zero time lag correlations .next we study the time lag cross - correlations after removing the global trend , and apply the svd to the correlation matrix of regression residuals of each index [ see eq .( [ duan1 ] ) ] .our results show that for both returns and magnitudes , the remaining cross - correlations are very small for all time lags compared to cross - correlations obtained for the original time series .this result additionally confirms that a large fraction of the world cross - correlations for both returns and magnitudes can be explained by the global factor .the asymptotic ( unconditional ) variance for the gjr - garch model is . for the market factorthe conditional volatility can be estimated by recursion using the historical conditional volatilities and fitted coefficients in eq .( [ pr2d ] ) .for example , the largest cluster at the end of the graph shows the 2008 financial crisis . in fig .5(a ) we show the time series of the conditional volatility of eq .( [ pr2d ] ) of the global factor .the clusters in the conditional volatilities may serve to predict market crashes . in each cluster ,the height is a measure of the size of the market crash , and the width indicates its duration . in fig .5(b ) we show the forecasting of the conditional volatility of the global factor , which asymptotically converges to the unconditional volatility .next , in fig .[ cormarind ] we show the cross - correlations between the global factor and each individual index using eq .( [ pc12cor ] ) .there are indices for which cross - correlations with the global factor are very small compared to the other indices ; 10 of 48 indices have cross - correlations coefficients with the global factor smaller than 0.1 .these indices correspond to iceland , malta , nigeria , kenya , israel , oman , qatar , pakistan , sri lanka , and mongolia .the financial market of each of these countries is weakly bond with financial markets of other countries .this is useful information for investment managers because one can reduce the risk by investing in these countries during world market crashes which , seems , do not severely influence these countries .we have developed a modified time lag random matrix theory ( tlrmt ) in order to quantify the time - lag cross - correlations among multiple time series . applying the modified tlrmt to the daily data for 48 world - wide financial indices , we find short - range cross - correlations between the returns , and long - range cross - correlations between their magnitudes .the magnitude cross - correlations show a power law decay with time lag , and the scaling exponent is 0.25 . 
the result we obtain , that at the world level the cross - correlations between the magnitudes are long - range , is potentially significant because it implies that strong market crashes introduced at one place have an extended duration elsewhere which is `` bad news '' for international investment managers who imagine that diversification across countries reduces risk .we model long - range world - index cross - correlations by introducing a global factor model in which the time lag cross - correlations between returns ( magnitudes ) can be explained by the auto - correlations of the returns ( magnitudes ) of the global factor .we estimate the global factor as the first component by using principal component analysis . using random matrix theory , we find that only three principal components are significant in explaining the cross - correlations .the global factor accounts for 30.75% of the total variance of all index returns , and 75.34% of the variance of the three significant principle components .therefore , in most cases , a single global factor is sufficient .we also show the applications of the gfm , including locating and forecasting world risk , and finding individual indices that are weakly correlated to the world economy . locating and forecasting world risk can be realized by fitting the global factor using a gjr - garch(1,1 ) model , which explains both the volatility correlations and the asymmetry in the volatility response to both `` good news '' and `` bad news . ''the conditional volatilities calculated after fitting the gjr - garch(1,1 ) model indicates the global risk , and the risk can be forecasted by recursion using the historical conditional volatilities and the fitted coefficients . to find the indices that are weakly correlated to the world economy, we calculate the correlation between the global factor and each individual index .we find 10 indices which have a correlation smaller than 0.1 , while most indices are strongly correlated to the global factor with the correlations larger than 0.3 .to reduce risk , investment managers can increase the proportion of investment in these countries during world market crashes , which do not severely influence these countries .based on principal component analysis , we propose a general method which helps extract the most significant components in explaining long - range cross - correlations .this makes the method suitable for broad range of phenomena where time series are measured , ranging from seismology and physiology to atmospheric geophysics .we expect that the cross - correlations in eeg signals are dominated by the small number of most significant components controlling the cross - correlations .we speculate that cross - correlations in earthquake data are also controlled by some major components .thus the method may have significant predictive and diagnostic power that could prove useful in a wide range of scientific fields . c. k. peng _ et al _ phys .e * 49 * , 1685 ( 1994 ) ; a. carbone , g. castelli , and h. e. stanley , physica a * 344 * , 267 ( 2004 ) ; l. xu _et al _ , phys .e * 71 * , 051101 ( 2005 ) ; a. carbone , g. castelli , and h. e. 
stanley, phys. rev. e *69*, 026105 (2004).

indices analyzed: ftse 100, dax, cac 40, ibex 35, swiss market, ftse mib, psi 20, irish overall, omx iceland 15, aex, bel 20, luxembourg luxx, omx copenhagen 20, omx helsinki, obx stock, omx stockholm 30, austrian traded atx, athex composite share price, wse wig, prague stock exch, budapest stock exch indx, bucharest bet index, sbi20 slovenian total mt, omx tallin omxt, malta stock exchange ind, ftse/jse africa top40 ix, ise national 100, tel aviv 25, msm30, dsm 20, mauritius stock exchange, nikkei 225, hang seng, shanghai se b share, all ordinaries, nzx all, karachi 100, sri lanka colombo all sh, stock exch of thai, jakarta composite, ftse bursa malaysia klci, psei - philippine se, mse top 20.

[table 1 caption] gjr-garch(1,1) coefficients of the global factor. the p-values and t-values confirm that all of these parameters are significant at the 95% confidence level. the positive asymmetry coefficient means that "bad news" has a larger impact on the global market than "good news". the estimated coefficients imply a persistence very close to 1, indicating long-range volatility auto-correlations.

[figure 1 caption] 48 world financial index returns, each of length 2745. (a) with increasing time lag, the empirical pdf of the coefficients of the cross-correlation matrix calculated between index returns quickly converges to the gaussian form expected for pairwise cross-correlations of finite-length uncorrelated time series (mean zero, standard deviation set by the series length). (b) the empirical pdf of the coefficients of the matrix calculated between index volatilities approaches the pdf of the random matrix more slowly than in (a).

[figure 2 caption] largest singular value obtained from the spectrum of the lagged correlation matrices versus time lag. with increasing lag, the largest singular value obtained for returns decays more quickly than the one calculated for absolute values of returns; the magnitude cross-correlations decay as a power law with scaling exponent 0.25.

[figure 3 caption] auto-correlations of the global factor returns become insignificant after a short time lag, with no more than one significant auto-correlation for every 20 time lags; only short-range auto-correlations are found in the global factor.

[figure 4 caption] auto-correlations of the magnitudes of the global factor remain above 0.2 up to large lags and stay significant at every time lag considered; long-range auto-correlations exist in the magnitudes of the global factor.

[figure caption] cross-correlations between the global factor and each individual index. the global factor has a large correlation with most of the indices, but 10 of the 48 indices have a correlation with the global factor smaller than 0.1, corresponding to iceland, malta, nigeria, kenya, israel, oman, qatar, pakistan, sri lanka, and mongolia. hence, unlike most countries, the economies of these 10 countries are relatively independent of the world economy.
we propose a modified time lag random matrix theory in order to study time lag cross - correlations in multiple time series . we apply the method to 48 world indices , one for each of 48 different countries . we find long - range power - law cross - correlations in the absolute values of returns that quantify risk , and find that they decay much more slowly than cross - correlations between the returns . the magnitude of the cross - correlations constitute `` bad news '' for international investment managers who may believe that risk is reduced by diversifying across countries . we find that when a market shock is transmitted around the world , the risk decays very slowly . we explain these time lag cross - correlations by introducing a global factor model ( gfm ) in which all index returns fluctuate in response to a single global factor . for each pair of individual time series of returns , the cross - correlations between returns ( or magnitudes ) can be modeled with the auto - correlations of the global factor returns ( or magnitudes ) . we estimate the global factor using principal component analysis , which minimizes the variance of the residuals after removing the global trend . using random matrix theory , a significant fraction of the world index cross - correlations can be explained by the global factor , which supports the utility of the gfm . we demonstrate applications of the gfm in forecasting risks at the world level , and in finding uncorrelated individual indices . we find 10 indices are practically uncorrelated with the global factor and with the remainder of the world indices , which is relevant information for world managers in reducing their portfolio risk . finally , we argue that this general method can be applied to a wide range of phenomena in which time series are measured , ranging from seismology and physiology to atmospheric geophysics .
rules express general knowledge about actions or conclusions in given circumstances and also principles in given domains . in the if - then format , rules are an easy way to represent cognitive processes in psychology and a useful means to encode expert knowledge . in another perspective , rules are important because they can help scientists understand problems and engineers solve problems .these observations would account for the fact that rule learning or discovery has become a major topic in both machine learning and data mining research .the former discipline concerns the construction of computer programs which learn knowledge or skill while the latter is about the discovery of patterns or rules hidden in the data .the fundamental concepts of rule learning are discussed in [ 16 ] .methods for learning sets of rules include symbolic heuristic search [ 3 , 5 ] , decision trees [ 17 - 18 ] , inductive logic programming [ 13 ] , neural networks [ 2 , 7 , 20 ] , and genetic algorithms [ 10 ] . a methodology comparison can be found in our previous work [ 9 ] . despite the differences in their computational frameworks , these methods perform a certain kind of search in the rule space ( i.e. , the space of possible rules ) in conjunction with some optimization criterion .complete search is difficult unless the domain is small , and a computer scientist is not interested in exhaustive search due to its exponential computational complexity .it is clear that significant issues have limited the effectiveness of all the approaches described .in particular , we should point out that all the algorithms except exhaustive search guarantee only local but not global optimization .for example , a sequential covering algorithm such as cn2 [ 5 ] performs a greedy search for a single rule at each sequential stage without backtracking and could make a suboptimal choice at any stage ; a simultaneous covering algorithm such as id3 [ 18 ] learns the entire set of rules simultaneously but it searches incompletely through the hypothesis space because of attribute ordering ; a neural network algorithm which adopts gradient - descent search is prone to local minima . 
in this paper , we introduce a new machine learning theory based on multi - channel parallel adaptation that shows great promise in learning the target rules from data by parallel global convergence .this theory is distinct from the familiar parallel - distributed adaptation theory of neural networks in terms of channel - based convergence to the target rules .we describe a system named cfrule which implements this theory .cfrule bases its computational characteristics on the certain factor ( cf ) model [ 4 , 22 ] it adopts .the cf model is a calculus of uncertainty mangement and has been used to approximate standard probability theory [ 1 ] in artificial intelligence .it has been found that certainty factors associated with rules can be revised by a neural network [ 6 , 12 , 15 ] .our research has further indicated that the cf model used as the neuron activation function ( for combining inputs ) can improve the neural - network performance [ 8 ] .the rest of the paper is organized as follows .section [ sec : mcrlm ] describes the multi - channel rule learning model .section [ sec : mrr ] examines the formal properties of rule encoding .section [ sec : mac ] derives the model parameter adaptation rule , presents a novel optimization strategy to deal with the local minimum problem due to gradient descent , and proves a property related to asynchronous parallel convergence , which is a critical element of the main theory .section [ sec : re ] formulates a rule extraction algorithm .section [ sec : app ] demonstrates practical applications. then we draw conclusions in the final section .cfrule is a rule - learning system based on multi - level parameter optimization .the kernel of cfrule is a multi - channel rule learning model .cfrule can be embodied as an artificial neural network , but the neural network structure is not essential .we start with formal definitions about the model .[ def : mcrl ] the multi - channel rule learning model is defined by ( ) channels ( s ) , an input vector ( ) , and an output ( ) as follows : where and such that is the input dimensionality and for all .the model has only a single output because here we assume the problem is a single - class , multi - rule learning problem .the framework can be easily extended to the multi - class case .[ def : channel ] each channel ( ) is defined by an output weight ( ) , a set of input weights ( s ) , activation ( ) , and influence ( ) as follows : where is the bias , , and for all .the input weight vector defines the channel s pattern .[ def : chact ] each channel s activation is defined by where is the cf - combining function [ 4 , 22 ] , as defined below .[ def : cf ] the cf - combining function is given by where s are nonnegative numbers and s are negative numbers .as we will see , the cf - combining function contributes to several important computational properties instrumental to rule discovery .[ def : chinf ] each channel s influence on the output is defined by [ def : output ] the model output is defined by we call the class whose rules to be learned the _ target class _ , and define rules inferring ( or explaining ) that class to be the _ target rules_. 
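Since the CF-combining function is central to everything that follows, a concrete sketch may help. The code below implements the classical MYCIN-style combination of Shortliffe and Buchanan [4, 22]: nonnegative values combine as x + y - xy, negative values as x + y + xy, and mixed evidence as (x + y)/(1 - min(|x|, |y|)). Whether CFRule's definition matches this form in every detail is an assumption on my part, and the value returned for the fully contradictory case (+1 combined with -1) is a convention adopted here.

```python
def combine_cf(values):
    """MYCIN-style certainty-factor combination of CF values in [-1, 1]."""
    pos, neg = 0.0, 0.0
    for v in values:
        if v > 0:
            pos = pos + v - pos * v      # combine positive evidence
        elif v < 0:
            neg = neg + v + neg * v      # combine negative evidence
    if pos == 0.0 or neg == 0.0:
        return pos + neg
    denom = 1.0 - min(abs(pos), abs(neg))
    if denom == 0.0:
        # +1 combined with -1 is undefined in the classical calculus; returning
        # 0 here is an assumed convention (it also matches the rule-encoding
        # behaviour described in the next section).
        return 0.0
    return (pos + neg) / denom           # mixed evidence

print(combine_cf([0.6, 0.4]))      # 0.6 + 0.4 - 0.24 = 0.76
print(combine_cf([0.6, -0.4]))     # 0.2 / (1 - 0.4) = 0.333...
print(combine_cf([0.3, 0.5, -0.2]))  # 0.45 / 0.8 = 0.5625
```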
for instance , if the disease diabetes is the target class , then the diagnostic rules for diabetes would be the target rules .each target rule defines a condition under which the given class can be inferred .note that we do not consider rules which deny the target class , though such rules can be defined by reversing the class concept .the task of rule learning is to learn or discover a set of target rules from given instances called training instances ( data ) .it is important that rules learned should be generally applicable to the entire domain , not just the training data .how well the target rules learned from the training data can be applied to unseen data determines the generalization performance .instances which belong to the target class are called positive instances , else , called negative instances .ideally , a positive training instance should match at least one target rule learned and vice versa , whereas a negative training instance should match none .so , if there is only a single target rule learned , then it must be matched by all ( or most ) positive training instances .but if multiple target rules are learned , then each rule is matched by some ( rather than all ) positive training instances .since the number of possible rule sets is far greater than the number of possible rules , the problem of learning multiple rules is naturally much more complex than that of learning single rules . in the multi - channel rulelearning theory , the model learns to sort out instances so that instances belonging to different rules flow through different channels , and at the same time , channels are adapted to accommodate their pertinent instances and learn corresponding rules .notice that this is a mutual process and it can not occur all at once . in the beginning , the rules are not learned and the channels are not properly shaped , both information flow and adaptation are more or less random , but through self - adaptation , the cfrule model will gradually converge to the correct rules , each encoded by a channel .the essence of this paper is to prove this property . in the model design ,a legitimate question is what the optimal number of channels is .this is just like the question raised for a neural network of how many hidden ( internal computing ) units should be used .it is true that too many hidden units cause data overfitting and make generalization worse [ 7 ] .thus , a general principle is to use a minimal number of hidden units .the same principle can be equally well applied to the cfrule model . however , there is a difference . in ordinary neural networks ,the number of hidden units is determined by the sample size , while in the cfrule model , the number of channels should match the number of rules embedded in the data . since , however , we do not know how many rules are present in the data , our strategy is to use a minimal number of channels that admits convergence on the training data .the model s behavior is characterized by three aspects : * information processing : compute the model output for a given input vector .* learning or training : adjust channels parameters ( output and input weights ) so that the input vector is mapped into the output for every instance in the training data . * rule extraction :extract rules from a trained model .the first aspect has been described already .the if - then rule ( i.e. , if the premise , then the action ) is a major knowledge representation paradigm in artificial intelligence . 
herewe make analysis of how such rules can be represented with proper semantics in the cfrule model .[ def : rule ] cfrule learns rules in the form of if , ... , , ... , , . ., , . ., then the target class with a certainty factor . where is a positive antecedent ( in the positive form ) , a negated antecedent ( in the negative form ) , and reads `` not . ''each antecedent can be a discrete or discretized attribute ( feature ) , variable , or a logic proposition .the if part must not be empty . the attached certainty factor in the then part , called the rule cf , is a positive real .the rule s premise is restricted to a conjunction , and no disjunction is allowed .the collection of rules for a certain class can be formulated as a dnf ( disjunctive normal form ) logic expression , namely , the disjunction of conjunctions , which implies the class .however , rules defined here are not traditional logic rules because of the attached rule cfs meant to capture uncertainty .we interpret a rule by saying when its premise holds ( that is , all positive antecedents mentioned are true and all negated antecedents mentioned are false ) , the target concept holds at the given confidence level .cfrule can also learn rules with weighted antecedents ( a kind of fuzzy rules ) , but we will not consider this case here . there is increasing evidence to indicate that good rule encoding capability actually facilitates rule discovery in the data . in the theorems that follow ,we show how the cfrule model can explicitly and precisely encode any given rule set .we note that the ordinary sigmoid - function neural network can only implicitly and approximately does this . also , we note although the threshold function of the perceptron model enables it to learn conjunctions or disjunctions , the non - differentiability of this function prohibits the use of an adaptive procedure in a multilayer construct .[ thm : rule - rep ] for any rule represented by definition [ def : rule ] , there exists a channel in the cfrule model to encode the rule so that if an instance matches the rule , the channel s activation is 1 , else 0 .( proof ) : this can be proven by construction .suppose we implement channel by setting the bias weight to 1 , the input weights associated with all positive attributes in the rule s premise to 1 , the input weights associated with all negated attributes in the rule s premise to , the rest of the input weights to 0 , and finally the output weight to the rule cf .assume that each instance is encoded by a bipolar vector in which for each attribute , 1 means true and false .when an instance matches the rule , the following conditions hold : if is part of the rule s premise , if is part of the rule s premise , and otherwise can be of any value .for such an instance , given the above construction , it is true that 1 or 0 for all .thus , the channel s activation ( by definition [ def : chact ] ) , must be 1 according to .on the other hand , if an instance does not match the rule , then there exists such that . since ( the bias weight ) = 1 ,the channel s activation is 0 due to . [ thm : ruleset - rep ] assume that rule cf s ( ) .for any set of rules represented by definition [ def : rule ] , there exists a cfrule model to encode the rule set so that if an instance matches any of the given rules , the model output is , else 0 .( proof ) : suppose there are rules in the set . 
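The construction used in the proof can be made concrete. The sketch below is my own illustration; it reuses the CF-combination sketch from the previous section, including the assumed convention that +1 combined with -1 yields 0, which is what makes the non-matching case evaluate to zero activation here. A channel is built for a rule with one positive and one negated antecedent and evaluated on a matching and a non-matching bipolar instance.

```python
def combine_cf(values):
    """Compact version of the CF combination from the earlier sketch."""
    pos, neg = 0.0, 0.0
    for v in values:
        if v > 0:
            pos = pos + v - pos * v
        elif v < 0:
            neg = neg + v + neg * v
    if pos == 0.0 or neg == 0.0:
        return pos + neg
    denom = 1.0 - min(abs(pos), abs(neg))
    return 0.0 if denom == 0.0 else (pos + neg) / denom

def build_channel(n_inputs, positive_idx, negated_idx, rule_cf):
    """Channel weights suggested by the theorem's proof: bias weight 1,
    +1 for positive antecedents, -1 for negated ones, 0 elsewhere,
    output weight equal to the rule CF."""
    w = [0.0] * n_inputs
    for j in positive_idx:
        w[j] = 1.0
    for j in negated_idx:
        w[j] = -1.0
    return {"bias": 1.0, "w": w, "u": rule_cf}

def activation(channel, x):
    """CF-combine the bias with the weighted inputs of a bipolar instance x
    (entries +1 for true, -1 for false)."""
    contributions = [channel["bias"]] + [wj * xj for wj, xj in zip(channel["w"], x)]
    return combine_cf([c for c in contributions if c != 0.0])

# Rule: IF a0 AND NOT a2 THEN target class (CF = 0.8), over 4 attributes.
ch = build_channel(4, positive_idx=[0], negated_idx=[2], rule_cf=0.8)
print(activation(ch, [+1, -1, -1, +1]))              # matches the rule  -> 1.0
print(activation(ch, [+1, +1, +1, -1]))              # violates NOT a2   -> 0.0
print(ch["u"] * activation(ch, [+1, -1, -1, +1]))    # influence = rule CF when the rule fires
```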
as suggested in the proof of theorem [ thm : rule - rep ] , we construct channels , each encoding a different rule in the given rule set so that if an instance matches , say rule , then the activation ( ) of channel is 1 . in this case , since the channel s influence is given by ( where is set to the rule cf ) and the rule cf , it follows that .it is then clear that the model output must be since it combines influences from all channels that but at least one . on the other hand , if an instance fails to match any of the rules , all the channels activations are zero , so is the model output . neural computing , the backpropagation algorithm [ 19 ] can be viewed as a multilayer , parallel optimization strategy that enables the network to converge to a local optimum solution .the black - box nature of the neural network solution is reflected by the fact that the pattern ( the input weight vector ) learned by each neuron does not bear meaningful knowledge .the cfrule model departs from traditional neural computing in that its internal knowledge is comprehensible .furthermore , when the model converges upon training , each channel converges to a target rule .how to achieve this objective and what is the mathematical theory are the main issues to be addressed .the cfrule model learns to map a set of input vectors ( e.g. , extracted features ) into a set of outputs ( e.g. , class information ) by training .an input vector along with its target output constitute a training instance .the input vector is encoded as a bipolar vector .the target output is 1 for a positive instance and 0 for a negative instance . starting with a random or estimated weight setting ,the model is trained to adapt itself to the characteristics of the training instances by changing weights ( both output and input weights ) for every channel in the model .typically , instances are presented to the model one at a time .when all instances are examined ( called an epoch ) , the network will start over with the first instance and repeat .iterations continue until the system performance has reached a satisfactory level .the learning rule of the cfrule model is derived in the same way as the backpropagation algorithm [ 19 ] .the training objective is to minimize the sum of squared errors in the data . 
in each learning cycle , a training instance is given and the weights of channel ( for all ) are updated by where : the output weight , : an input weight , the argument denotes iteration , and the adjustment .the weight adjustment on the current instance is based on gradient descent .consider channel .for the output weight ( ) , ( : the learning rate ) where ( : the target output , : the model output ) .let the partial derivative in eq .( [ eq : gr - u ] ) can be rewritten with the calculus chain rule to yield then we apply this result to eq .( [ eq : gr - u ] ) and obtain the following definition .the learning rule for output weight of channel is given by for the input weights ( s ) , again based on gradient descent , the partial derivative in eq .( [ eq : gr - w ] ) is equivalent to since is not directly related to , the first partial derivative on the right hand side of the above equation is expanded by the chain rule again to obtain substituting these results into eq .( [ eq : gr - w ] ) leads to the following definition .the learning rule for input weight of channel is given by where assume that suppose and .the partial derivative can be computed as follows .* case ( a ) if , * case ( b ) if , it is easy to show that if in case ( a ) or in case ( b ) , .it is known that gradient descent can only find a local - minimum .when the error surface is flat or very convoluted , such an algorithm often ends up with a bad local minimum .moreover , the learning performance is measured by the error on unseen data independent of the training set .such error is referred to as generalization error .we note that minimization of the training error by the backpropagation algorithm does not guarantee simultaneous minimization of generalization error .what is worse , generalization error may instead rise after some point along the training curve due to an undesired phenomenon known as overfitting [ 7 ] .thus , global optimization techniques for network training ( e.g. , [ 21 ] ) do not necessarily offer help as far as generalization is concerned . to address this issue, cfrule uses a novel optimization strategy called multi - channel regression - based optimization ( mcro ) . in definition [ def : cf ] , and can also be expressed as when the arguments ( s and s ) are small , the cf function behaves somewhat like a linear function .it can be seen that if the magnitude of every argument is , the first order approximation of the cf function is within an error of 10% or so . sincewhen learning starts , all the weights take on small values , this analysis has motivated the mcro strategy for improving the gradient descent solution .the basic idea behind mcro is to choose a starting point based on the linear regression analysis , in contrast to gradient descent which uses a random starting point .if we can use regression analysis to estimate the initial influence of each input variable on the model output , how can we know how to distribute this estimate over multiple channels ?in fact , this is the most intricate part of the whole idea since each channel s structure and parameters are yet to be learned .the answer will soon be clear . 
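The small-argument linearity claim that motivates MCRO is easy to check numerically. The sketch below assumes that the combination of nonnegative certainty factors is the probabilistic sum x + y - xy, and compares the CF combination of a few small arguments with their plain sum.

```python
import numpy as np

def cf_pos(values):
    """Combine nonnegative certainty factors: x (+) y = x + y - x*y."""
    out = 0.0
    for v in values:
        out = out + v - out * v
    return out

# For small arguments the CF combination is close to a plain sum, which is
# what lets linear regression supply a sensible starting point.
rng = np.random.default_rng(0)
for scale in (0.05, 0.1, 0.2):
    args = rng.uniform(0, scale, size=5)
    exact, linear = cf_pos(args), args.sum()
    print(f"max arg {scale:.2f}: cf = {exact:.4f}, sum = {linear:.4f}, "
          f"relative error = {(linear - exact) / exact:.1%}")
```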
in cfrule , each channel s activation is defined by suppose we separate the linear component from the nonlinear component ( ) in to obtain we apply the same treatment to the model output ( definition [ def : output ] ) so that then we substitute eq.([eq : phi - app ] ) into eq.([eq : sys - app ] ) to obtain in which the right hand side is equivalent to + r_{acc}\ ] ] note that suppose linear regression analysis produces the following estimation equation for the model output : ( all the input variables and the output transformed to the range from 0 to 1 ) .the mcro strategy is defined by for each that is , at iteration when learning starts , the initial weights are randomized but subject to these constraints ..the target rules in the simulation experiment . [ cols= " < , <if global optimization is a main issue for automated rule discovery from data , then current machine learning theories do not seem adequate . for instance , the decision - tree and neural - network based algorithms , which dodge the complexity of exhaustive search , guarantee only local but not global optimization . in this paper, we introduce a new machine learning theory based on multi - channel parallel adaptation that shows great promise in learning the target rules from data by parallel global convergence .the basic idea is that when a model consisting of multiple parallel channels is optimized according to a certain global error criterion , each of its channels converges to a target rule .while the theory sounds attractive , the main question is how to implement it . in this paper , we show how to realize this theory in a learning system named cfrule .cfrule is a parallel weight - based model , which can be optimized by weight adaptation .the parameter adaptation rule follows the gradient - descent idea which is generalized in a multi - level parallel context . however , the central idea of the multi - channel rule - learning theory is not about how the parameters are adapted but rather , how each channel can converge to a target rule .we have noticed that cfrule exhibits the necessary conditions to ensure such convergence behavior .we have further found that the cfrule s behavior can be attributed to the use of the cf ( certainty factor ) model for combining the inputs and the channels .since the gradient descent technique seeks only a local minimum , the learning model may well be settled in a solution where each rule is optimal in a local sense . a strategy called multi - channel regression - based optimization ( mcro ) has been developed to address this issue .this strategy has proven effective by statistical validation .we have formally proven two important properties that account for the parallel rule - learning behavior of cfrule .first , we show that any given rule set can be explicitly and precisely encoded by the cfrule model .secondly , we show that once a channel is settled in a target rule , it barely moves .these two conditions encourage the model to move toward the target rules .an empirical weight convergence graph clearly showed how each channel converged to a target rule in an asynchronous manner .notice , however , we have not been able to prove or demonstrate this rule - oriented convergence behavior in other neural networks .we have then examined the application of this methodology to dna promoter recognition and hepatitis prognosis prediction . 
in both domains , cfrule is superior to c4.5 ( a rule - learning method based on the decision tree ) based on cross - validation .rules learned are also consistent with knowledge in the literature . in conclusion , the multi - channel parallel adaptive rule - learning theory is not just theoretically sound and supported by computer simulation but also practically useful . in light of its significance , this theory would hopefully point out a new direction for machine learning and data mining .this work is supported by national science foundation under the grant ecs-9615909 .the author is deeply indebted to edward shortliffe who contributed his expertise and time in the discussion of this paper .1 . j.b .adams , `` probabilistic reasoning and certainty factors '' , in _ rule - based expert systems _ , addison - wesley , reading , ma , 1984 . 2 .alexander and m.c .mozer , `` template - based algorithms for connectionist rule extraction '' , in _ advances in neural information processing systems _ , mit press , cambridge , ma , 1995 . 3 .b.g . buchanan and t.m .mitchell , `` model - directed learning of production rules '' , in _ pattern - directed inference systems _ , academic press , new york , 1978 .b.g . buchanan and e.h .shortliffe ( eds . ) , _ rule - based expert systems _ , addison - wesley , reading , ma , 1984 . 5 .p. clark and r. niblett , `` the cn2 induction algorithm '' , _ machine learning _ , 3 , pp .261 - 284 , 1989 . 6 .fu , `` knowledge - based connectionism for revising domain theories '' , _ ieee transactions on systems , man , and cybernetics _ , 23(1 ) , pp . 173182 , 1993 .fu , _ neural networks in computer intelligence _ , mcgraw hill , inc . , new york , ny , 1994fu , `` learning in certainty factor based multilayer neural networks for classification '' , _ ieee transactions on neural networks_. 9(1 ) , pp .151 - 158 , 1998 . 9 .fu and e.h .shortliffe , `` the application of certainty factors to neural computing for rule discovery '' , _ ieee transactions on neural networks _ , 11(3 ) , pp .647 - 657 , 2000 . 10 .janikow , `` a knowledge - intensive genetic algorithm for supervised learning '' , _ machine learning _ , 13 , pp .189 - 228 , 1993 . 11 .koudelka , s.c .harrison , and m. ptashne , `` effect of non - contacted bases on the affinity of 434 operator for 434 repressor and cro '' , _ nature _ , 326 , pp .886 - 888 , 1987 . 12 .lacher , s.i .hruska , and d.c .kuncicky , `` back - propagation learning in expert networks '' , _ ieee transactions on neural networks _ , 3(1 ) , pp . 6272 , 1992 .n. lavrac and s. dzeroski , _ inductive logic programming : techniques and applications _ , ellis horwood , new york , 1994 .lawrence , _ the gene _ , plenum press , new york , ny , 1987 . 15 .mahoney and r. mooney , `` combining connectionist and symbolic learning to refine certainty - factor rule bases '' , _ connection science _, 5 , pp . 339 - 364 , 1993 . 16 .t. mitchell , _ machine learning _, mcgraw hill , inc ., new york , ny . , 1997 .quinlan , `` rule induction with statistical data a comparison with multiple regression '' , _ journal of the operational research society _ , 38 , pp .347 - 352 , 1987 . 18 .quinlan , _c4.5 : programs for machine learning _ ,morgan kaufmann , san mateo , ca . , 1993 .rumelhart , g.e .hinton , and r.j .williams , `` learning internal representation by error propagation '' , in _ parallel distributed processing : explorations in the microstructures of cognition _ , vol .1 . mit press , cambridge , ma , 1986 . 20 .r. setiono and h. 
liu , `` symbolic representation of neural networks '' , _ computer _ , 29(3 ) , pp .71 - 77 , 1996 . 21 .y. shang and b.w .wah , `` global optimization for neural network training '' , _ computer _ , 29(3 ) , pp .45 - 54 , 1996 . 22 .shortliffe and b.g .buchanan , `` a model of inexact reasoning in medicine '' , _ mathematical biosciences _, 23 , pp .351 - 379 , 1975 . 23 .towell and j.w .shavlik , `` knowledge - based artificial neural networks '' , _ artificial intelligence_. 70(1 - 2 ) , pp. 119 - 165 , 1994 .
in this paper, we introduce a new machine learning theory based on multi-channel parallel adaptation for rule discovery. this theory is distinguished from the familiar parallel-distributed adaptation theory of neural networks in terms of channel-based convergence to the target rules. we show how to realize this theory in a learning system named cfrule. cfrule is a parallel weight-based model, but it departs from traditional neural computing in that its internal knowledge is comprehensible. furthermore, when the model converges upon training, each channel converges to a target rule. the model adaptation rule is derived by multi-level parallel weight optimization based on gradient descent. since, however, gradient descent guarantees only local optimization, a multi-channel regression-based optimization strategy is developed to deal with this problem effectively. formally, we prove that the cfrule model can explicitly and precisely encode any given rule set. we also prove a property related to asynchronous parallel convergence, which is a critical element of the multi-channel parallel adaptation theory for rule learning. thanks to the quantizable nature of the cfrule model, rules can be extracted completely and soundly via a threshold-based mechanism. finally, the practical application of the theory is demonstrated in dna promoter recognition and hepatitis prognosis prediction.

* multi-channel parallel adaptation theory for rule discovery *

* li min fu *, department of computer and information sciences, university of florida, gainesville, florida 32611

* keywords: * rule discovery, adaptation, optimization, regression, certainty factor, neural network, machine learning, uncertainty management, artificial intelligence.
langevin dynamics is a modeling technique , in which the motion of a set of massive bodies in the presence of a bath of smaller solvent particles is directly integrated , while the dynamics of the solvent are `` averaged out '' .this approximation may lead to a dramatic reduction in computational cost compared to `` explicit solvent '' methods , since dynamics of the solvent particles no longer need to be fully resolved .with langevin dynamics , the effect of the solvent is reduced to an instantaneous drag force and a random delta - correlated force felt by the massive bodies .this framework has been applied in a variety of scenarios , including implicit solvents, brownian dynamics, dynamic thermostats, and the stabilization of time integrators. in spite of the numerous successes of conventional langevin dynamics it has long been recognized that there are physically compelling scenarios in which the underlying assumptions break down , necessitating a more general treatment. to this end , generalized langevin dynamics ( gld ) permits the modeling of systems in which the inertial gap separating the massive bodies from the smaller solvent particles is reduced . herethe assumptions of an instantaneous drag force and a delta - correlated random force become insufficient , leading to the introduction of a temporally non - local drag and a random force with non - trivial correlations .gld has historically been applied to numerous problems over the years, with a number of new applications inspiring a resurgence of interest , including microrheology, biological systems, nuclear quantum effects, and other situations in which anomalous diffusion arises. to facilitate computational exploration of some of these applications , a number of authors have developed numerical integration schemes for the generalized langevin equation ( gle ) , either in isolation or in conjunction with extra terms accounting for external or pairwise forces as would be required in a molecular dynamics ( md ) simulation .these schemes must deal with a number of complications if they are to remain computationally efficient and accurate .* the retarded drag force takes the form of a convolution of the velocity with a memory kernel .this requires the storage of the velocity history , and the numerical evaluation of a convolution at each time step , which can become computationally expensive .* the generation of a random force with non - trivial correlations may also require the storage of a sequence of random numbers , and some additional computational expense incurred at each time step .numerous methods exist that circumvent either one or both of these difficulties. each has a different computational cost , implementation complexity , order of convergence , and specific features , e.g. , some are restricted to single exponential memory kernels , require linear algebra , etc . to this end, it is difficult to distinguish any individual method as being optimal , especially given the broad range of applications to which gld may be applied .motivated by the aforementioned applications and previous work in numerical integrators , we have developed a new family of time integration schemes for the gle in the presence of conservative forces , and implemented it in a public domain md code , lammps. 
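To make the first of those difficulties concrete before turning to the extended-variable formulation, the sketch below evaluates the memory-drag convolution directly for a single degree of freedom: the full velocity history must be kept, and step n costs O(n) work, so a trajectory of nsteps costs O(nsteps²) overall. The rectangle-rule discretization, the regularized power-law kernel, and the crude velocity update are illustrative choices of mine, not any of the schemes discussed in this paper.

```python
import numpy as np

# Naive discretization of the retarded drag term in the GLE,
#   F_drag(t_n) ~ -dt * sum_{m < n} K(t_n - t_m) v(t_m),
# illustrating why direct evaluation requires history storage and O(n) work
# per step.  Kernel and step size are arbitrary example values.
dt, nsteps, mass = 0.01, 2000, 1.0
K = lambda t: (t + dt) ** -0.5              # regularized power-law kernel (example)

v = np.zeros(nsteps)                        # the full velocity history is retained
v[0] = 1.0
for n in range(1, nsteps):
    lags = dt * np.arange(n, 0, -1)         # t_n - t_m for m = 0, ..., n-1
    drag = -dt * np.sum(K(lags) * v[:n])    # O(n) work at step n
    v[n] = v[n - 1] + (dt / mass) * drag    # crude explicit update; noise and
                                            # conservative forces omitted

print("velocity after memory-damped decay:", round(v[-1], 4))
```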
our primary impetus was to enable the development of reduced order models for nanocolloidal suspensions , among a variety of other applications outlined above .previous computational studies of these systems using explicit solvents have demonstrated and resolved a number of associated computational challenges . our method enables a complementary framework for the modeling of these types of systems using implicit solvents that can include memory effects . otherwise , to date the gle has only been solved in the presence of a number of canonically tractable external potentials and memory kernels. integration into the lammps framework , provides a number of capabilities .lammps includes a broad array of external , bonded , and non - bonded potentials , yielding the possibility for the numerical exploration of more complex systems than have been previously studied .finally , lammps provides a highly scalable parallel platform for studying n - body dynamics .consequently , extremely large sample statistics are readily accessible even in the case of interacting particles , for which parallelism is an otherwise non - trivial problem . in this paper , details germane to both the development of our time integration scheme , as well as specifics of its implementation are presented .the time integration scheme to be discussed is based upon a two - parameter family of methods specialized to an extended variable formulation of the gle .some of the salient advantages of our formulation and the final time integration scheme are : * generalizability to a wide array of memory kernels representable by a positive prony series , such as power laws . * efficiency afforded by an extended variable formulation that obviates the explicit evaluation of time convolutions . * inexpensive treatment of correlated random forces , free of linear algebra , requiring only one random force evaluation per extended variable per timestep .* exact conservation of the first and second moments of either the integrated velocity distribution or position distribution , for harmonically confined and free particles .* numerically stable approach to the langevin limit of the gle .* simplicity of implementation into an existing verlet md framework .the specialization to prony - representable memory kernels is worth noting , as there is a growing body of literature concerning this form of the gle. a number of results have been presented that establish the mathematical well - posedness of this extended variable gle including a term accounting for smooth conservative forces , as may arise in md. these results include proofs of ergodicity and exponential convergence to a measure , as well as a discussion of the langevin limit of the gle in which the parameters of the extended system generate conventional langevin dynamics .one somewhat unique feature of our framework is that we are analyzing the gle using methods from stochastic calculus . in particular , we focus on weak convergence in the construction our method , i.e. , error in representing the moments of the stationary distribution or the distribution itself .the optimal parametrization of our two - parameter family of methods will be defined in terms of achieving accuracy with respect to this type of convergence . 
in particular , the optimal method that has been implemented achieves exactness in the first and second moments of the integrated velocity distribution for harmonically confined and free particles .few authors have considered this type of analysis for even conventional langevin integrators , with a notable exception being wang and skeel, who have carried out weak error analysis for a number of integrators used in conventional langevin dynamics . to the best of our knowledge ,this is the first time that such a weak analysis has been carried out for a gle integrator . we hope that these considerations will contribute to a better understanding of existing and future methods .the remainder of this paper is structured as follows : * * section [ problem_statement ] * introduces the mathematical details of the gle . * * section [ extended_variable_formalism ] * presents the extended variable formulation and its benefits . ** section [ numerical_integration_of_the_gle ] * develops the theory associated with integrating the extended variable gle in terms of a two - parameter family of methods . * * section [ error_analysis ] * provides details of the error analysis that establishes the ` optimal ' method among this family . * * section [ multistage_splitting ] * discusses the extension of our method to a multi - stage splitting framework .* * section [ implementation_details ] * summarizes details of the implementation in lammps . * * section [ results ] * presents a number of results that establish accuracy in numerous limits / scenarios , including demonstration of utility in constructing reduced order models .the gle for particles moving in -dimensions can be written as [ cont_gle ] with initial conditions and . here, is a conservative force , is a random force , is a diagonal mass matrix , and is a memory kernel .the solution to this stochastic integro - differential equation is a trajectory , which describes the positions and velocities of the particles as a function of time , .the second term on the right - hand side of equation [ vel_gle ] accounts for the temporally non - local drag force , and the third term accounts for the correlated random force .the nature of both forces are characterized by the memory kernel , , consistent with the fluctuation - dissipation theorem ( fdt). the fdt states that equilibration to a temperature , , requires that the two - time correlation of and be related as follows : here , is the kronecker delta , and is boltzmann s constant . in the context of an md simulation ,we are interested in solving equation [ cont_gle ] for both and at a set of uniformly spaced discrete points in time . to this end, we seek to construct a solution scheme that is mindful of the following complications : * calculation of the temporally non - local drag force requires a convolution with , and thusly the storage of some subset of the time history of .* numerical evaluation of requires the generation of a sequence of correlated random numbers , as specified in equation [ fdt ] . 
* as equation [ cont_gle ]is a stochastic differential equation ( sde ) , we are not concerned with issues of local or global error , but rather that the integrated solution converges in distribution .to circumvent the first two complications , we work with an extended variable formalism in which we assume that is representable as a prony series : , \quad t\geq 0.\ ] ] as will be demonstrated in section [ extended_variable_formalism ] , this form of the memory kernel will allow us to map the non - markovian form of the gle in equation [ cont_gle ] onto a higher - dimensional markovian problem with extended variables per particle .the third complication is resolved in sections [ numerical_integration_of_the_gle ] and [ error_analysis ] , in which a family of integrators is derived , and then ` optimal ' parameters are selected based upon an error analysis of the moments of the integrated velocity .we introduce the extended variable formalism in two stages .first , we define a set of extended variables that allow for an effectively convolution - free reformulation of equation [ cont_gle ] .then , we demonstrate that the non - trivial temporal correlations required of can be effected through coupling to an auxiliary set of ornstein - uhlenbeck ( ou ) processes .we begin by defining the extended variable , , associated with the prony mode s action on the component of and : v_i(s ) ds\ ] ] component - wise , equation [ cont_gle ] can now be rewritten as : [ ext_gle_1 ] rather than relying upon the integral form of equation [ ext_var_def ] to update the value of , we consider the total differential of to generate an equation of motion that takes the form of a simple sde : now , the system of equations [ ext_gle_1 ] and [ ext_var_eom ] can be resolved for , , and without requiring the explicit evaluation of a convolution integral .next , we seek a means of constructing random forces that obey the fdt , as in equation [ fdt ] . to this end , we consider the following sde : if is a standard wiener process , this sde defines an ornstein - uhlenbeck ( ou ) process , .using established properties of the ou process, we can see that has mean zero and two - time correlation : , \quad s\geq 0\ ] ] it is then clear , that the random force in equation [ cont_gle ] can be rewritten as : here each individual contribution is generated by a standard ou process , the discrete - time version of which is the ar(1 ) process . while we are still essentially forced to generate a sequence of correlated random numbers , mapping onto a set of ar(1 ) processes has the advantage of requiring the retention of but a single prior value in generating each subsequent value .further , standard gaussian random number generators can be employed . combining both results ,the final extended variable gle can be expressed in terms of the composite variable , : [ ext_gle_2 ] it is for this system of equations that we will construct a numerical integration scheme in section [ numerical_integration_of_the_gle ] .it is worth noting that other authors have rigorously shown that this extended variable form of the gle converges to the langevin equation in the limit of small . 
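as a concrete illustration of the ar(1 ) remark above , the following short python sketch generates a discretely sampled ou process using the exact one - step update , so that only a single prior value needs to be retained ; the correlation time and amplitude are illustrative parameters of my choosing , not quantities from the text .

```python
import numpy as np

def ou_samples(n_steps, dt, tau, sigma, rng=np.random.default_rng(0)):
    """Sample an Ornstein-Uhlenbeck process with zero mean, stationary
    variance sigma**2 and correlation exp(-|t-t'|/tau), using the exact
    AR(1) update (only one prior value is retained)."""
    decay = np.exp(-dt / tau)                # deterministic memory factor
    kick = sigma * np.sqrt(1.0 - decay**2)   # noise amplitude per step
    x = np.empty(n_steps)
    x[0] = sigma * rng.standard_normal()     # draw from the stationary distribution
    for n in range(1, n_steps):
        x[n] = decay * x[n - 1] + kick * rng.standard_normal()
    return x

# quick check: the lag-k autocorrelation should decay roughly as exp(-k*dt/tau)
x = ou_samples(200000, dt=0.01, tau=0.5, sigma=1.0)
lag = 50
print(np.mean(x[:-lag] * x[lag:]), np.exp(-lag * 0.01 / 0.5))
```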
informally , this can be seen by multplying the equation by , and taking the limit as goes to zero , which results in inserting this expression into the equation for , we obtain which is a conventional langevin equation .we have been careful to preserve this limit in our numerical integration scheme , and will explicitly demonstrate this theoretically and numerically .we consider a family of numerical integration schemes for the system in equation [ ext_gle_2 ] assuming a uniform timestep , .notation is adopted such that for . given the values of , , and , we update to the time step using the following splitting method : 1 ._ advance by a half step : _ 2 ._ advance by a full step : _. _ advance by a full step : _ 4 ._ advance by a half step : _ here , each is drawn from an independent gaussian distribution of mean zero and variance unity .the real - valued and can be varied to obtain different methods .for consistency , we require that for the remainder of this article , we restrict our attention three different methods , each of which corresponds to a different choice for and . ** method 1 : * using the euler - maruyama scheme to update is equivalent to using * * method 2 : * if is held constant , the equation for can be solved exactly .using this approach is equivalent to setting * * method 3 : * both methods 1 and 2 are unstable as goes to zero . to improve the stability when is small, we consider the following modified version of method 2 : note that all three methods satisfy the consistency condition , and are equivalent to the strmer - verlet - leapfrog method when and . to help guide our choice of method , we compute the moments of the stationary distribution for a one - dimensional harmonic potential ( natural frequency ) and a single mode memory kernel ( weight and time scale ) .a similar approach has been used for the classical langevin equation. the extended variable gle for this system converges to a distribution of the form \ ] ] where is the usual normalization constant . from this, we can derive the analytic first and second moments next , we consider the discrete - time process generated by our numerical integrators , and show that the moments of its stationary distribution converge to the analytic ones in . stationary distribution of this process is defined by the time independence of its first moments enforcing these identities , it can be shown that hence , the first moments are correctly computed by the numerical method for any choice of and . computing the second moments , we obtain from this analysis, we conclude that we obtain the correct second moment for for any method with .now , applying the particular values of and , and expanding in powers of , we obtain the following . ** method 1 : * * * method 2 : * * * method 3 : * for methods 1 and 3 , we obtain the exact variance for , independent of , since they both satisfy . for method 2 ,the error in the variance of is second - order in .all three methods overestimate the variance of , with an error which is second - order in .the error in the variance of is first - order for method 1 , and second - order for methods 2 and 3 .it is possible to choose and to obtain the exact variance for , but this would require using a different value for and for each value of .this is not useful in our framework , since the method is applied to problems with general nonlinear interaction forces .we would like our numerical method to be stable for a wide range of values for . 
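since the explicit parameter choices for methods 1 - 3 are elided above , the following is only a minimal sketch of the general construction : a verlet - style splitting for one particle with a single - mode kernel \(k(t)=(c/\tau)e^{-t/\tau}\) , in which the extended variable carrying the drag and the colored random force is advanced with its exact one - step solution while the velocity is held fixed . it is not the authors ' two - parameter family or their method 3 ; all numerical values are placeholders .

```python
import numpy as np

def gle_single_mode(force, m, c, tau, kT, dt, n_steps, x0=0.0, v0=0.0,
                    rng=np.random.default_rng(1)):
    """Integrate m*dv/dt = force(x) + z for one particle, where the extended
    variable z carries both the memory drag and the colored random force of a
    single-mode kernel K(t) = (c/tau)*exp(-t/tau):
        dz = -(z + c*v)/tau dt + sqrt(2*kT*c)/tau dW .
    Sketch only: a Strang-type splitting, not the scheme from the text."""
    x, v = x0, v0
    z = np.sqrt(kT * c / tau) * rng.standard_normal()   # stationary initial condition
    decay = np.exp(-dt / tau)
    kick = np.sqrt(kT * c / tau * (1.0 - decay**2))
    xs, vs = np.empty(n_steps), np.empty(n_steps)
    for n in range(n_steps):
        v += 0.5 * dt * (force(x) + z) / m      # half kick
        x += dt * v                              # drift
        # exact update of z over dt with v frozen (drag + OU noise)
        z = decay * z - c * v * (1.0 - decay) + kick * rng.standard_normal()
        v += 0.5 * dt * (force(x) + z) / m      # half kick
        xs[n], vs[n] = x, v
    return xs, vs

# free particle: the long-time velocity variance should approach kT/m
xs, vs = gle_single_mode(lambda x: 0.0, m=1.0, c=1.0, tau=0.5, kT=1.0,
                         dt=0.01, n_steps=200000)
print(np.var(vs[50000:]))   # roughly 1.0
```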
as we mentioned in section [ extended_variable_formalism ], the gle converges to the conventional langevin equation as goes to zero , and we would like our numerical method to have a similar property . for fixed , both methods 1 and 2 are unstable ( is unbounded ) as goes to zero .however , method 3 does not suffer from the same problem , with converging to zero and bounded .from this analysis , we conclude that method 3 is the best choice for implementation .prior to providing implementation details , however , we briefly consider a simple multistage extension that can capture the exact first and second moments of position and velocity simultaneously at the expense of introducing a numerical correlation between them . inspired by the work of leimkuhler and matthews, we consider a generalization of the splitting method considered in section [ numerical_integration_of_the_gle ] in which the position and velocity updates are further split , in the special case that , we have , the two updates of can be combined , and we recover our original splitting method . repeating the analysis in section [ error_analysis ] for the harmonic oscillator with a single memory term , we find in the special case that , we can simplify these expressions to obtain from this analysis , we conclude that we obtain the correct second moment for for any method with and .to guarantee the correct second moment for , the choice of and becomes dependent . as was discussed in the previous section ,the parameters prescribed by methods 1 and 3 using the original splitting have a similar behavior but with the roles of and reversed .it is then tempting to formulate a method that exactly preserves the second moments of both and at the same time .it turns out that this is possible by simply shifting where we observe , using either or in the multistage splitting method above .for example , consider the following asymmetric method , with , for this method we obtain , if we use a method with , we find that we obtain the exact moments for and , but we have introduced an correlation between and .this is in contrast to the symmetric methods where this correlation is identically zero .method 3 , as detailed in section [ error_analysis ] has been implemented in the lammps software package .it can be applied in conjunction with all conservative force fields supported by lammps .there are a number of details of our implementation worth remarking on concerning random number generation , initial conditions on the extended variables , and the conservation of total linear momentum .the numerical integration scheme requires the generation of gaussian random numbers , by way of in equation [ s_update ] . by default ,all random numbers are drawn from a uniform distribution with the same mean and variance as the formally required gaussian distribution .this distribution is chosen to avoid the generation of numbers that are arbitrarily large , or more accurately , arbitrarily close to the floating point limit .the generation of such large numbers may lead to rare motions that result in the loss of atoms from a periodic simulation box , even at low temperatures .atom loss occurs if , within a single time step , the change in one or more of an atom s position coordinates is updated to a value that results in it being placed outside of the simulation box after periodic boundary conditions are applied .a uniform distribution can be used to guarantee that this will not happen for a given temperature and time step . 
however , for the sake of mathematical rigor , the option remains at compile - time to enable the use of the proper gaussian distribution with the caveat that such spurious motions may occur . should the use of this random number generator produce a trajectory in which atom loss occurs , a simple practical correction may be to use a different seed and/or a different time step in a subsequent simulation . it is worth noting that the choice of a uniform random number distribution has been rigorously justified by dünweg and paul for a number of canonical random processes , including one described by a conventional langevin equation . we anticipate that a similar result may hold for the extended variable gle presented in this manuscript . with respect to the initialization of the extended variables , it is frequently the case in md that initial conditions are drawn from the equilibrium distribution at some initial temperature . details of the equilibrium distribution for the extended system are presented in section 2 of an article by ottobre and pavliotis . in our implementation , we provide the option to initialize the extended variables either based upon this distribution , or with zero initial conditions ( i.e. , the extended system at zero temperature ) . as it is typically more relevant for md simulations , the former is enabled by default and used in the generation of the results in this paper . conservation of the total linear momentum of a system is frequently a desirable feature for md trajectories . for deterministic forces , this can be guaranteed to high precision through the subtraction of the velocity of the center of mass from all particles at a single time step . in the presence of random forces such as those arising in gld , a similar adjustment must be made at each time step to prevent the center of mass from undergoing a random acceleration . while it is not enabled by default , our implementation provides such a mechanism that can be activated . when active , the average of the forces acting on all extended variables is subtracted from each individual extended variable at each time step . while this is a computationally inexpensive adjustment , it may not be essential for all simulations . throughout this section , results will be presented , primarily in terms of the integrated velocity autocorrelation function ( vaf ) . this quantity is calculated using `` block averaging '' and `` subsampling '' of the integrated trajectories for computational convenience . error bars are derived from the standard deviation associated with a set of independently generated trajectories . [ figure single_mode_fig : integrated vaf for a single prony mode in the i . ) , ii . ) critically damped , and iii . ) overdamped limits ; a time step of is used for all runs , and error bars are drawn based upon a sample of 10,000 walkers over 10 independent runs . ]
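the block - averaging and subsampling conventions are not spelled out here , so the following is only a bare - bones estimator of the normalized vaf from a sampled velocity trajectory , meant to make the plotted quantity concrete rather than to reproduce the authors ' post - processing .

```python
import numpy as np

def normalized_vaf(v, max_lag):
    """Direct estimator of C(k*dt) = <v(t) v(t+k*dt)> / <v(t)^2>
    from a 1-d array of sampled velocities."""
    c0 = np.mean(v * v)
    return np.array([np.mean(v[: len(v) - k] * v[k:])
                     for k in range(max_lag)]) / c0

# example: uncorrelated velocities give a VAF that drops to ~0 after lag 0
rng = np.random.default_rng(2)
v = rng.standard_normal(100000)
print(normalized_vaf(v, 5))
```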
to validate our integration scheme in the presence of a conservative force , we next consider gld with an external potential . the analytic solution of the gle with a power law memory kernel and a harmonic confining potential has been derived by other authors . in the cited work , the gle is solved in the laplace domain , yielding correlation functions given in terms of a series of mittag - leffler functions and their derivatives . here , we apply our integration scheme to a prony series fit of the power law kernel and demonstrate that our results are in good agreement with the exact result over a finite time interval . the gle that we intend to model has the form : we begin by constructing a prony series representation of the memory kernel . as it exhibits a power law decay in the laplace domain as well , a prony series representation will have strong contributions from modes decaying over a continuum of time scales . to this end , rather than relying on a non - linear fitting procedure to choose values for , we assume logarithmically spaced values from to . by assuming the form of each exponential , the prony series fit reduces to a simple linear least squares problem that we solve using uniformly spaced data over an interval that is two decades longer than the actual simulation . in figure [ kernel_fit_fig ] , the prony series fit of the memory kernel for is compared to its exact value . [ figure kernel_fit_fig : prony series fit of the power law memory kernel for , with an increasing number of modes . ] here , the maximum relative error is , while it is for . in figure [ harmonic_soln_fig ] , the normalized vaf computed via numerical integration of the extended gle with a variable number of modes is shown compared to the exact result for some of the parameters utilized in an article by despósito and viñales . it seems evident that the accuracy of the integrated velocity distribution improves relative to the exact velocity distribution as the number of terms in the prony series fit increases . this is quantified in figure [ harmonic_error_fig ] , in which the pointwise absolute error in the integrated vaf is illustrated . [ figure harmonic_soln_fig : normalized vaf for , , and ; a time step of is used , and error bars are drawn based upon a sample of 10,000 walkers over 10 independent runs . ] [ figure harmonic_error_fig : pointwise absolute error in the integrated vaf ; error is computed with respect to the mean of vafs computed from 10 independent runs . ] next , we demonstrate that our implementation is robust in certain limits of the gle , particularly the langevin and zero coefficient limits . as our implementation is available in a public domain code with a large user base , developing a numerical method that is robust to a wide array of inputs is essential . for the zero coefficient limit we consider a harmonically confined particle experiencing a single mode prony series memory kernel of the following form : the initial conditions on and are drawn from a thermal distribution at .
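the fitting step described above can be prototyped as a small least squares problem . in the sketch below the power - law exponent , the fitting interval , and the number of modes are illustrative choices , and a non - negative least squares solver is used so that the resulting prony series is positive , which is slightly stronger than the plain linear least squares mentioned in the text .

```python
import numpy as np
from scipy.optimize import nnls

# target kernel: a power law K(t) = t**(-0.5) on an assumed fitting interval
t = np.linspace(0.01, 100.0, 5000)
k_exact = t ** -0.5

# logarithmically spaced relaxation times, as in the text
taus = np.logspace(-2, 3, 16)

# design matrix: one exponential column per mode, coefficients c_k >= 0
a = np.exp(-t[:, None] / taus[None, :])
coeffs, residual = nnls(a, k_exact)

k_fit = a @ coeffs
print("max relative error:", np.max(np.abs(k_fit - k_exact) / k_exact))
```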
in the limit that , we expect the integrated normalized vaf to approach that of a set of deterministic harmonic oscillators . the initial conditions on and are drawn from a thermal distribution at , giving the integrated result some variance , even in the newtonian / deterministic limit . figure [ zero_coeff_fig ] illustrates that this limit is smoothly and stably approached . [ figure zero_coeff_fig : approach to the ( newtonian ) limit for a single prony mode ; the particle is harmonically confined , and the expected period of oscillation is restored in the limit . ] in the exact limit , oscillations in the vaf occur at the natural frequency of the confining potential , , whereas for non - zero , these oscillations are damped in proportion to , as one may expect on the basis of intuition . the langevin limit is illustrated next . to this end , we utilize the same single mode prony series memory kernel , but remove the confining potential ( i.e. , ) . we have done so to ensure that the langevin limit yields an ornstein - uhlenbeck process . in figure [ langevin_lim_fig ] , we illustrate that this limit is also smoothly and stably approached . [ figure langevin_lim_fig : approach to the ( langevin ) limit for a single prony mode ; the particle is subject to no conservative forces , so the resultant dynamics correspond to an ornstein - uhlenbeck process . ] the result for the langevin limit itself was integrated using the existing _ fix langevin _ command in lammps . for , the gle yields results that differ from the langevin limit as one might expect on the basis of the previous results . however , even for , the gle yields results that are close to that of the langevin limit , as the resultant numerical method retains some gle - like behavior . however , for and , the parameter [ ... ] highlighting the utility of our method s reproduction of the langevin limit . this prony series representation of was used in equation [ force_free_gle ] , which was numerically integrated to yield the integrated vaf result in figure [ harmonic_kernel_fit ] . this result is especially interesting as it demonstrates capability for effecting a confining potential , on a finite timescale , exclusively through the gle drag / random forces . this same fitting procedure could be used to remove inter - particle interactions , albeit with some loss of dynamical information , and will be discussed in more detail in future work . it is important to note that while the dynamics of a confined particle are reproduced in the absence of an explicit confining force , this is only the case over the interval for which the memory kernel is reconstructed . outside of this interval , the asymptotic behavior of the prony series memory kernel will give rise to unbounded diffusive motion . while the expected discrepancies in the dynamics are difficult to notice in the vaf , as both the analytic and reconstructed quantities decay to zero , it is very evident in the mean - squared displacement ( msd ) . here the asymptotic exponential behavior of the prony series memory kernel necessarily leads to asymptotic diffusive motion signaled by a linear msd . in contrast , the analytic memory kernel with the confining force will generate an msd that remains bounded . this is illustrated in figure [ harmonic_msd_comparison ] .
[ figure harmonic_msd_comparison : the fit vaf is integrated to yield the msd for the prony series model , and compared with the exact msd over both the interval of the fit and beyond ; this example illustrates the importance of choosing an appropriate interval for fitting memory kernels to achieve the appropriate limiting behavior in one s dynamics . ] a family of numerical integration schemes for gld has been presented . these schemes are based upon an extended variable formulation of the gle in which the memory kernel is rendered as a positive prony series . in certain limits , it can be shown that a specific instance of this family of integrators exactly conserves the first and second moments of the integrated velocity distribution , and stably approaches the langevin limit . accordingly , we identify this parametrization as optimal , and have implemented it in the md code , lammps . numerical experiments indicate that this implementation is robust for a number of canonical problems , as well as certain pathologies in the memory kernel . an exemplary application to reduced order modeling illustrates potential uses of this module for md practitioners . future work will further develop the vaf fitting procedure in the context of statistical inference methods , and present extensions of the numerical integrator to mixed sign and complex memory kernels . the authors would like to acknowledge jason bernstein , paul crozier , john fricks , jeremy lechman , rich lehoucq , scott mckinley , and steve plimpton for numerous fruitful discussions and feedback . sandia national laboratories is a multi - program laboratory managed and operated by sandia corporation , a wholly owned subsidiary of lockheed martin corporation , for the u.s . department of energy s national nuclear security administration under contract de - ac04 - 94al85000 .
generalized langevin dynamics ( gld ) arise in the modeling of a number of systems , ranging from structured fluids that exhibit a viscoelastic mechanical response , to biological systems , and other media that exhibit anomalous diffusive phenomena . molecular dynamics ( md ) simulations that include gld in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient , stable , and have known convergence properties . in this article , we derive a family of extended variable integrators for the generalized langevin equation ( gle ) with a positive prony series memory kernel . using stability and error analysis , we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the lammps md software package . salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases , stable behavior in the limit of conventional langevin dynamics , and the use of a convolution - free formalism that obviates the need for explicit storage of the time history of particle velocities . capability is demonstrated with respect to accuracy in numerous canonical examples , stability in certain limits , and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel .
most solid tumors eventually establish colonies in distant anatomical locations ; when these colonies become clinically detectable , they are called macrometastasis . while often there is a large burden from primary tumors , it is in fact metastatic disease that is responsible for most cancer fatalities .the creation of macrometastasis requires the successful completion of a sequence of difficult steps .first , cancer cells must gain access to the general circulation system via the process of intravasation .next , the cells must survive in the inhospitable environment of the circulatory system . following this, the tumor cells must exit the circulatory system ( extravasation ) at a distant site and initiate micrometastsis ( clinically undetectable population of tumor cells at a distant anatomical site ) .lastly , the micrometastsis must develop the ability to successfully proliferate in the distant site and grow into clinically identifiable macrometastasis .the completion of these steps is very difficult and only a small fraction of tumor cells are able to achieve this .however , due to the vast number of cells in most primary tumors , metastasis commonly occurs in later stage solid tumors . there has been significant mathematical research in the design of optimal anti - cancer therapies .this has included studies on optimal chemotherapy , radiotherapy , and more recently targeted therapies and immunotherapy ( ) .since we are interested in radiotherapy we will focus on previous work in this field .the vast majority of modeling of radiotherapy response is based on the linear - quadratic model ( lq ) which says that tissue response is governed by the parameters and ( see e.g. , ) . specifically , following a single exposure to gray of radiation , the surviving fraction of viable cellsis given by .an important question in this field is to decide on the optimal temporal distribution of a given amount of radiation , i.e. , how to kill the most tumor cells while inflicting the least amount of normal tissue damage .this is commonly referred to as the ` optimal fractionation problem . 'two possible solutions to this problem are hyper - fractionated and hypo - fractionated schedules . in hyper - fractionated schedules ,small fraction sizes are delivered over a large number of treatment days , while in hypo - fractionated schedules , large fraction sizes are delivered over a small number of treatment days .if we minimize primary tumor cell population at the conclusion of treatment , it has been seen ( and ) that whether hyper or hypo - fractionation is preferable depends on the radiation sensitivity parameters of the normal and cancerous tissue .however we will observe in section 4 of this manuscript that when designing optimal treatments with the goal of minimizing metastatic production , hypo - fractionation is preferable for many parameter choices , and hyper - fractionation is only preferable sometimes when the value of the tumor is large .there have been a substantial number of works looking at optimal fractionation . the work considers dynamic design of fractionation schedules with incomplete repair , repopulation and reoxygenation . a more recent work considers the optimization problem associated with finding fractionation schedules under an lq model with incomplete repair and exponential repopulation .the authors theoretically establish the benefits of hypo - fractionation in the setting of a low value of the tumor . 
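for reference , the surviving - fraction expression left blank above has the standard lq form ( this is the textbook convention rather than a formula copied from the papers cited here ) : \[ s(d)=e^{-\alpha d-\beta d^{2}} , \] so that n equal fractions of size d give a surviving fraction \(e^{-n(\alpha d+\beta d^{2})}\) when repopulation and incomplete repair are neglected .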
brenner and hall utilized the lq model in combination with the lea - catcheside function ( a generalization of the lq model that is useful at higher doses or prolonged doses ) to conclude that due to its slow response to radiation , prostate cancer can be treated equally effectively by either uniform radiation scheduling or hypo - fractionation ( which has fewer side effects ) .unkelbach et al . studied the interdependence between optimal spatial dose distribution and creation of fractionation schedules .another work utilized a dynamic programming approach to study the problem of optimal fractionation schedules in the presence of various repopulation curves .an important property common to all of these works is that they utilize an objective function that seeks to minimize final primary tumor population size in some sense . while this can be an important objective , in most cancers , it is ultimately metastatic disease that proves fatal. therefore , in this work , we study optimal fractionation schedules when using an objective function that seeks to minimize the total production of metastatic cells .the understanding of the metastatic process and how to respond to it has been greatly aided by the mathematical modeling community ( for an overview of this contribution see the recent review paper ) . in an interesting work , iwata et al .developed a set of differential equations governing the population dynamics of the metastatic population .a compelling work is the paper by thames et al . where they developed a mathematical model of the metastatic process to calculate risk from metastatic disease due to delay in surgery . hanin and korosteleva used a stochastic model to address questions such as : ( 1 ) how early do metastasis events occur , ( 2 ) how does extirpation of the primary affect evolution of the metastasis , and ( 3 ) how long are metastasis latent ?haeno and michor developed a multitype branching process model to study metastasis and in particular the probability of metastasis being present at diagnosis . in a follow up work , they used a mathematical model to study metastasis data in recently deceased pancreatic cancer patients . in a recent work , diego et al .used an ode model to study the relationship between primary and metastatic cancer sites , and in particular , makes predictions about the clinical course of the disease based on the parameter space of their ode model .the remainder of the paper is organized as follows . in section [ sec : metsprod ] , we discuss a model for metastasis production and how it can be used to develop a function that reflects metastatic risk .next , in section [ sec : opt ] , we describe the optimization model and solution approach .finally , in section [ sec : num ] , we present numerical results in the setting of breast cancer .we start by assuming that the total population of primary tumor cells at time is given by the function .note we will assume throughout this work that the population of cells is large enough that we can treat the population as a deterministic function .we then assume that each tumor cell initiates a successful macrometastasis at rate per unit of time .this is similar to the modeling approach taken in where they were able to fit their model to metastasis data from patients .if we are interested in the time horizon ] , is a poisson random variable with mean and thus the probability of metastasis occurring is .therefore , in order to minimize , it suffices to minimize . 
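written out , the poisson statement at the end of this passage takes the following standard form , with \(x(t)\) the primary tumor cell population , \(\mu\) the per - cell rate of successful metastasis , and \(T\) the time horizon ; the original display is elided , so this is a reconstruction consistent with the verbal description : \[ N(T)\sim\mathrm{poisson}\!\left(\mu\int_{0}^{T}x(s)\,{\rm d}s\right) , \qquad \mathbb{P}\big(N(T)\ge 1\big)=1-\exp\!\left(-\mu\int_{0}^{T}x(s)\,{\rm d}s\right) , \] so minimizing the probability that any metastasis occurs is equivalent to minimizing \(\int_{0}^{T}x(s)\,{\rm d}s\) .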
in the rate , we assume that every cell is capable of metastasis . in the geometry of the actual tumor , it might be the case that only those cells on the surface of the tumor are capable of metastasis , or only those cells in close proximity to a blood vessel are capable .therefore , we consider the generalization for .this is similar to the model of metastasis creation in ; there they refer to the parameter as the fractal dimension of the blood vessels infiltrating the tumor , if we assume that the tumor is three dimensional with a two dimensional surface and that all cells on the surface are equally capable of metastasis then we can take .however if we assume that only a small fraction of cells on the surface are capable of metastasis we could take e.g. , .notice that in order to minimize , we do not need to know the parameter , which is difficult to measure .note that we are using a rather simplistic model for the metastasis production in that we assume at most only two rates of metastasis for the primary tumor cells . in reality , it is likely that the rate of metastasis for a given cell will be a complex function of its position , migratory potential , and its oxidative state .however , given the lack of data available , we found it preferable to work with this relatively simplistic model that does not require the knowledge of any intricate parameters .in addition , since the primary goal of this work is to introduce a novel objective function , we feel that adding further biological details can be saved for further exploration .lastly and importantly , variants of this relatively simplistic model have been matched to clinical metastasis data .our goal is to determine an optimal radiotherapy fractionation scheme that minimizes the probability that the primary tumor volume metastasizes . given the discussion from the previous section , this equates to minimizing ; however , for simplicity , we will use an approximate objective that uses a summation rather than an integral during therapy .let the time horizon consists of treatment days and a potentially long period of no treatment at the end of which metastatic risk is evaluated .suppose a radiation dose is delivered at time instant , for , then we choose to minimize where is the number of cells immediately before the delivery of dose .during the course of the treatment , we assume exponential tumor growth with a time lag . beyond that, we use the gomp - ex model for tumor growth because we are evaluating long - term metastatic risk and the exponential model will give unreasonably large values over such a long time period . the gomp - ex law of growthassumes that the cellular population expands through the exponential law initially , when there is no competition for resources .however , there is a critical size threshold such that for the growth follows the gompertz law .thus , we have for and for ] is an oar tissue sensitivity parameter .the bed can be derived from the lq model and is used to quantify fractionation effects in a clinical setting .note that we can also have multiple bed constraints on a single oar , e.g. 
, to model early and late effects . the optimization problem of interest is to minimize the metastatic - risk objective described above subject to the constraints \[ \mathrm{bed}_i(d_1,\dots , d_N)\leq c_i , \quad i=1,\dots , m , \] where \(c_i\) is a constant that specifies an upper bound to the bed in the oar and \(m\) is the total number of oars in the vicinity of the tumor . note that we do not work directly with the quantity during therapy but instead with its approximation . here , is an upper bound for and is a good approximation for if the impact of the exponential growth term in ( [ eq : linearquadraticexp ] ) is relatively small compared to the dose fraction terms , which is typically the case for most disease sites . the formulation is a nonconvex quadratically constrained problem . such problems are computationally difficult to solve in general . however , we can use a dynamic programming ( dp ) approach with only two states to solve this deterministic problem , similar to the work in . the states of the system are \(u_t\) , the cumulative dose , and \(v_t\) , the cumulative dose squared , delivered to the tumor immediately after time \(t\) , so that \(u_t = u_{t-1}+d_t\) and \(v_t = v_{t-1}+d_t^2\) . now we can write the dp algorithm ( forward recursion ) as \[ j_t(u_t , v_t)=\begin{cases} \min_{d_{t}\ge0}\left[x_0^\xi e^{-\xi\left ( \alpha_t ( u_{t-1}+d_{t } ) + \beta_t ( v_{t-1}+d_{t}^2)-\frac{\ln 2}{\tau_d}(t - t_k)^+\right)}+j_{t-1}(u_{t-1},v_{t-1})\right ] , & t\le n_0 - 1\\ \min_{d_{t}\ge0}\left[x_0^\xi e^{-\xi\left ( \alpha_t ( u_{t-1}+d_{t } ) + \beta_t ( v_{t-1}+d_{t}^2)-\frac{\ln 2}{\tau_d}(t - t_k)^+\right)}+j_{t-1}(u_{t-1},v_{t-1})\right]+\int_{n_0}^{t}(x_t)^\xi , & t = n_0\end{cases} \] with \(u_0 = v_0 = 0\) and \(j_0\equiv 0\) . we set the function \(j_t\) to be \(+\infty\) if the bed constraint of any oar , written in terms of \(u_t\) and \(v_t\) , exceeds its bound \(c_i\) . since there are only two state variables , we can solve our optimization problem by discretizing the states and using this dp algorithm . we solve the optimization problem based on the radiobiological parameters for breast cancer . we consider two different normal tissues , heart and lung tissue . for each normal tissue , we define the maximal toxicity \(c_i\) as the bed of a reference schedule delivering the maximum total dose in \(n_i\) equal fractions , so that its quadratic term is proportional to \(d_i^2/n_i\) , where \(d_i\) and \(n_i\) are tissue specific parameters and define the maximum total dose delivered in fractions for each oar . a standard fractionated treatment is to deliver gy to the tumor with gy fractions . the tolerance bed values ( ) for various normal tissues were computed based on the standard scheme . hence all bed in oar associated with optimal schedules obtained in this section are less than or equal to their corresponding bed in the standard schedule , i.e. gy and for . all radiobiological parameter values used are listed in table [ tabledata ] along with their sources . [ table tabledata : breast tumor and normal tissues parameters ( four columns , header row ; values and sources as in the original table ) . ] in this work , we have considered the classic problem of optimal fractionation schedules in the delivery of radiation . we have however done this with the non - traditional goal of minimizing the production of metastasis . this is motivated by the fact that the majority of cancer fatalities are driven by metastasis , and that this disseminated disease can be very difficult to treat . we addressed this goal by considering the optimal fractionation problem with a novel objective function based on minimizing the total rate of metastasis production , which we argue is equivalent to minimizing the time integrated tumor cell population . we were able to numerically solve this optimization problem with a dynamic programming approach . we computed radiotherapy fractionation schedules that minimized metastatic risk for a variety of parameter settings . the resulting optimal schedules had an interesting structure that was quite different from what is observed from the traditional optimal fractionation problem where one is interested in minimizing local tumor population at the end of treatment .
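as an aside to the dp formulation described earlier in this section , the two - state recursion can be prototyped in a few dozen lines . the sketch below discretizes the cumulative dose pair ( u , v ) by rounding , enforces a single oar bed cap , and omits both the repopulation term and the post - treatment integral ; every numerical value in it ( lq parameters , sparing factor , number of fractions , dose grid ) is a placeholder rather than a value taken from the paper .

```python
import numpy as np

# placeholder parameters -- illustrative only, not values from the paper
alpha_T, beta_T = 0.3, 0.03          # tumor LQ parameters (assumed [alpha/beta] = 10 Gy)
xi = 1.0                             # metastasis exponent
N = 5                                # number of treatment days
doses = np.linspace(0.0, 10.0, 11)   # candidate fraction sizes in Gy
gamma, ab_oar, c_bed = 0.7, 3.0, 60.0   # single OAR: sparing factor, [alpha/beta]_i, BED cap
x0 = 1.0                             # normalized initial tumor cell count

def key(u, v):
    # discretize the two states (cumulative dose, cumulative squared dose)
    return (round(u, 3), round(v, 3))

# value[state] = (best accumulated sum_t x_t**xi, schedule achieving it)
value = {key(0.0, 0.0): (0.0, [])}
for t in range(N):
    nxt = {}
    for (u, v), (obj, sched) in value.items():
        for d in doses:
            u2, v2 = u + d, v + d * d
            if gamma * u2 + (gamma ** 2 / ab_oar) * v2 > c_bed:
                continue                                      # OAR BED cap violated
            x = x0 * np.exp(-(alpha_T * u2 + beta_T * v2))    # surviving tumor cells
            cand = (obj + x ** xi, sched + [d])
            k = key(u2, v2)
            if k not in nxt or cand[0] < nxt[k][0]:
                nxt[k] = cand
    value = nxt

best = min(value.values(), key=lambda p: p[0])
print("schedule (Gy):", best[1], " objective:", best[0])
```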
in the traditional optimal fractionation problem it was observed in that if \([\alpha/\beta]\le\min_i\{[\alpha/\beta]_i/\gamma_i\}\) then a hypo - fractionated schedule is optimal , while if \([\alpha/\beta]>\min_i\{[\alpha/\beta]_i/\gamma_i\}\) then a hyper - fractionated schedule is optimal . in contrast , in the current work we observed that if we are evaluating metastatic risk at the conclusion of therapy then , independent of the relationship between \([\alpha/\beta]\) and \(\min_i\{[\alpha/\beta]_i/\gamma_i\}\) , the resulting fractionation schedule for minimizing metastatic risk has a hypo - fractionated structure with large initial doses that taper off quickly . this is due to the structure of the objective function . in order to minimize the time integrated tumor cell population immediately after treatment , it is necessary to quickly reduce the tumor cell population , since this is the high point of the tumor cell population over the course of the treatment . if we think of the tumor cell population as quite dangerous due to its metastasis potential , then it is natural to want to reduce its population as quickly as possible . we observed that the structure of the optimal schedule depends on the length of time for which we evaluated metastatic risk . in particular , if we evaluate metastasis risk in a long time frame ( several years ) after therapy , and it holds that \([\alpha/\beta]>\min_i\{[\alpha/\beta]_i/\gamma_i\}\) [ ... ] value in prostate and breast tumors . our results indicate that even when the \([\alpha/\beta]\) ratio of the tumor is very large , it may still be better to deliver a hypo - fractionated schedule when taking metastatic risk into account . the current work provides a possible new motivation for considering hypo - fractionated schedules , i.e. , metastasis risk reduction . in this paper , we have assumed the delivery of single daily fractions and have not considered alternate fractionation schemes such as chart that deliver multiple fractions a day . some of our results indicate optimal hypo - fractionation schedules ( as obtained for the t=0.07 case ) . one way to deliver a large amount of dose in a very short period of time is to use schemes such as chart . however , additional modeling is needed to incorporate incomplete sublethal damage repair due to short inter - fraction time periods . another possible interpretation of this work is to view the output as the risk of the tumor developing resistance to a chemotherapeutic treatment . this can be achieved by simply viewing the parameter as the rate at which tumor cells develop drug resistance . due to the severe consequences of drug resistance , this is also an interesting direction for further exploration . we feel that this type of work minimizing metastatic production opens the potential for a new line of research in the radiation optimization community as well as cancer biology . in particular , there are several important biological phenomena that we have not included . these include the oxygenation status ( and history ) of cells as well as the vascular structure of the tumor of interest . lastly , a potentially interesting extension of this work could be to validate our predictions in animal models of metastatic cancer . t. bortfeld , j. ramakrishnan , j. n. tsitsiklis , and j. unkelbach . optimization of radiation therapy fractionation schedules in the presence of tumor repopulation . forthcoming in informs journal on computing , 2013 . m. mizuta , s. takao , h. date , n. kishimoto , k. sutherland , r. onimaru , and h. shirato . a mathematical study to select fractionation regimen based on physical dose distribution and the linear - quadratic model . , 84:829 - 833 , 2012 . g.
tortorelli et al . standard or hypofractionated radiotherapy in the postoperative treatment of breast cancer : a retrospective analysis of acute skin toxicity and dose inhomogeneities . , 13(1):230 , 2013 . j. owen , a. ashton , j. bliss , j. homewood , et al . effect of radiotherapy fraction size on tumour control in patients with early - stage breast cancer after local tumour excision : long - term results of a randomised trial . , 7:467 - 471 , 2006 . d. dearnaley , i. syndikus , g. sumo , m. bidmead , d. bloomfield , c. clark , ... & e. hall . conventional versus hypofractionated high - dose intensity - modulated radiotherapy for prostate cancer : preliminary safety results from the chhip randomised controlled trial . , 13:43 - 54 , 2012 .
metastasis is the process by which cells from a primary tumor disperse and form new tumors at distant anatomical locations . the treatment and prevention of metastatic cancer remains an extremely challenging problem . this work introduces a novel biologically motivated objective function to the radiation optimization community that takes into account metastatic risk instead of the status of the primary tumor . in this work , we consider the problem of developing fractionated irradiation schedules that minimize production of metastatic cancer cells while keeping normal tissue damage below an acceptable level . a dynamic programming framework is utilized to determine the optimal fractionation scheme . we evaluated our approach on a breast cancer case using the heart and the lung as organs - at - risk ( oar ) . for small tumor values , hypo - fractionated schedules were optimal , which is consistent with standard models . however , for relatively larger values , we found the type of schedule depended on various parameters such as the time when metastatic risk was evaluated , the values of the oars , and the normal tissue sparing factors . interestingly , in contrast to standard models , hypo - fractionated and semi - hypo - fractionated schedules ( large initial doses with doses tapering off with time ) were suggested even with large tumor / values . numerical results indicate potential for significant reduction in metastatic risk .
in spite of the growing recognition that physics skills `` scholastic rigor , analytical thinking , quantitative assessment , and the analysis of complex systems '' are important for biology and pre - medical students , these students often arrive in physics classes skeptical about the relevance of physics to their academic and professional goals . to engage these students , in the 2010 - 2011 academic year ,the yale physics department debuted a new introductory physics sequence , that , in addition to covering the basics kinematics , force , energy , momentum , hooke s law , ohm s law , maxwell s equations _ etc . _ also covers a number of more biologically - relevant topics , including , in particular , probability , random walks , and the boltzmann factor .the point of view of the class is that the essential aspect of physics is that it constitutes a mathematical description of the natural world , irrespective of whether the topic is planetary motion or cellular motion .the enrollment in the new sequence was approximately 100 students .the class is evenly split between sophomores and juniors with a few seniors .the majority ( 80% ) are biology majors , with 80% identifying themselves as premedical students , and they possess considerable biological sophistication . in many cases ,they are involved in biomedical research at yale or at the yale school of medicine . in many cases too , they are involved in medically - related volunteer work .the major time commitment required to do justice to a rigorous physics class has to compete with these other obligations .therefore , an important aspect of our teaching strategy is to convince these students that physics is indeed relevant to their goals . to this end, we determined to cover a number of biologically - relevant topics , with which the majority of the students would have some familiarity from their earlier biology and chemistry classes .this paper presents three such topics , that are interrelated and can be treated as random walks , in the hope that these may be useful to others .first is dna melting , which we place in the context of polymerase chain reaction ( pcr ) .this provides a way to illustrate the role of the boltzmann factor in a venue well - known to the students .this treatment builds on earlier sections of the course , concerned with random walks and chemical reaction rates , which are not described here .the second topic is the activity of helicase motor proteins in unzipping double - stranded nucleic acid ( dna or rna , although we will write in terms of dna ) .our discussion is based on ref . .helicase activity constitutes an elegant example of a brownian ratchet and builds on the earlier discussion of dna melting .third , we present a discussion of force generation by actin polymerization , which provides the physical basis of cell motility in many cases , and which is another brownian ratchet . in this case , based on ref . , we can determine how the velocity of actin polymerization depends on actin concentration and on load .in each of these examples , biology and pre - medical students in an introductory physics class see that a physics - based approach permits a new , deeper understanding of a familiar molecular - biological phenomenon .`` the laws of thermodynamics may easily be obtained from the principles of statistical mechanics , of which they are an incomplete expression . ''gibbs . 
instead of introducing thermal phenomena via thermodynamics and heat engines ,as might occur in a traditional introductory sequence , following the suggestion of garcia _ et al . _ , we chose to assert the boltzmann factor as the fundamental axiom of thermal physics .building upon earlier sections of the course on probability and random walks , this approach permits us to rapidly progress to physics - based treatments of dna melting , unzipping of double - stranded dna at the replication fork by helicase motor proteins , and force - generation by actin - polymerization . specifically , we assert that , for microstates and of a system , the probability ( ) of realizing a microstate and the probability ( ) of realizing a microstate are related via where is the energy of microstate , is the energy of microstate , jk is boltzmann s constant , and is the absolute temperature . `` this fundamental law is the summit of statistical mechanics , and the entire subject is either the slide - down from this summit , as the principle is applied to various cases , or the climb up to where the fundamental law is derived and the concepts of thermal equilibrium and temperature clarified .'' r. p. feynman on the boltzmann factor . to illustrate the boltzmann factor in a simple example , we consider protein folding / unfolding .protein / unfolding is an example of an isomerization reaction , in which one chemical species alternates between different molecular configurations . in this case , it is important to realize that the folded state corresponds to a single microstate , but that the unfolded state corresponds to microstates .this is because there is just one molecular configuration associated with the folded state .by contrast , the unfolded state can be viewed as a random walk in space , and therefore corresponds to different molecular configurations , one for each different random walk .if there are a total of proteins , of which are unfolded , and if there are possible unfolded microstates , then the probability of realizing a particular unfolded microstate ( ) is equal to the probability that a protein molecule is unfolded multipled by the probability that an unfolded protein is in the particular unfolded microstate of interest , which is one of equally - likely microstates : there is a unique folded microstate , so in terms of and the number of folded proteins , , the probability of realizing the folded microstate is simply combining eq . [ bf ] , eq .[ eq2 ] , and eq . [ eq3 ] , we find where is the energy of any of the unfolded states and is the energy of the folded state .next , we examine dna melting , according to the model of ref . , in which dna melting is equivalent to dna unzipping .we treat dna zipping and unzipping as a set of isomerization reactions . to this end , we consider a population of identical dna strands each of which contains a junction between dsdna and ssdna .[ fig1 ] illustrates the reactions involving the dna strand with paired base pairs .this is the chemical species in the center .the species on the left and right are dna strands with and paired base pairs , respectively .the relevant reaction rates are , which is the zipping rate , and , which is the unzipping rate .when , the dna zips up .when , the dna unzips . 
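the elided relations of the protein folding example above can be written out as follows ( a reconstruction consistent with the verbal definitions ; the equation labels [ bf ] , [ eq2 ] and [ eq3 ] presumably refer to these ) : \[ \frac{P_i}{P_j}=e^{-(E_i-E_j)/k_BT} , \qquad P_{\mathrm{unfolded\ microstate}}=\frac{N_U}{N}\cdot\frac{1}{\Omega} , \qquad P_{\mathrm{folded\ microstate}}=\frac{N_F}{N} , \] which combine to give \[ \frac{N_U}{N_F}=\Omega\,e^{-(E_U-E_F)/k_BT} . \]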
as suggested in fig . [ fig1 ] , is the mean number of dna strands with paired base pairs , _ etc . _ [ figure fig1 : a dna strand with zipped base pairs is illustrated , either undergoing isomerization to a dna strand with base pairs or isomerization to a dna strand with base pairs . ] we have previously discussed in class how the concentration of chemical species changes in time can be described by chemical rate equations . with the help of fig . [ fig1 ] , we are thus led to an equation for the rate of change of in terms of , , , , and : at equilibrium , at a temperature , on - average nothing changes as a function of time , so . thus , the factor , which is the ratio of the mean number of dna strands with zipped base pairs to the mean number with zipped base pairs , is equal to the ratio of the probability that a particular dna strand has zipped base pairs to the probability that it has zipped base pairs . thus , this factor is given by a boltzmann factor ( _ cf . _ eq . [ xxx ] ) : where is the energy required to unzip one additional base pair ( so is positive ) and specifies that the two unzipped ssdna bases have a factor times as many microstates as the single dsdna base pair they replace . the last equality in eq . [ zip1 ] defines the free energy required to unzip one base pair : students are familiar with from their chemistry classes . similarly , we have . substituting eq . [ zip1 ] and eq . [ zip2 ] into eq . [ steadystate2 ] , we have . it follows from eq . [ steadystate3 ] that . eq . [ dg ] informs us that the dna unzips , _ i.e. _ , only if , _ i.e. _ only if . in order for this condition ( ) to be satisfied , it is necessary that . if we define the dna `` melting temperature '' to be , we see that the dna unzips for , while it zips up for . this phenomenon is an essential ingredient in dna multiplication by polymerase chain reaction ( pcr ) , which is well - known to the students , and for which kary mullis won the 1993 nobel prize in chemistry . the first step in pcr is to raise the temperature above , so that each dsdna strand unzips to become two ssdna strands . when the temperature is subsequently reduced in the presence of oligonucleotide primers , nucleotides and dna polymerase , each previously - unzipped ssdna strand templates its own conversion to dsdna . this doubles the original number of dsdna strands because a new dsdna strand is created for each ssdna . pcr involves repeating this temperature cycling process multiple ( ) times , with the result that the initial number of dsdna molecules is multiplied by a factor of . thus , initially tiny quantities of dsdna can be hugely amplified , and subsequently sequenced . it is also instructive to view dna zipping / unzipping as a biased random walk , which students have previously studied in the class . in this context , if we consider a dsdna - ssdna junction , the probability of zipping up one base pair in a time is , and the probability of unzipping one base pair in a time is .
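one consistent way to fill in the elided steps of the melting argument , writing \(k_z\) and \(k_u\) for the zipping and unzipping rates , \(\Delta E\) for the pairing energy and \(\Omega\) for the microstate ratio ( the symbols are mine ; the original labels [ zip1 ] , [ zip2 ] and [ dg ] presumably correspond to these relations ) , is \[ \frac{\bar N_{n-1}}{\bar N_{n}}=\frac{k_u}{k_z}=\Omega\,e^{-\Delta E/k_BT}=e^{-\Delta G/k_BT} , \qquad \Delta G=\Delta E-k_BT\ln\Omega , \] so the dna unzips ( \(k_u>k_z\) ) only when \(\Delta G<0\) , which requires \(\Omega>1\) and gives a melting temperature \(T_m=\Delta E/(k_B\ln\Omega)\) .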
for small enough it is reasonable to assume that the only three possibilities are ( 1 ) to zip up one base pair or ( 2 ) to unzip one base pair or ( 3 ) to not do anythingtherefore , since probabilities sum to unity , we must have that the probability to do nothing is .given these probabilities , and the length of a base pair , , we may readily calculate the mean displacement of the ssdna - dsdna junction in a time : where zipping corresponds to a negative displacement of the ss - to - ds junction since the mean of the sum of identically - distributed , statistically - independent random variables is times the mean of one of them ( which students learned earlier in the course ) , then in a time the mean displacement of the ssdna - dsdna junction is the corresponding drift velocity of the ssdna - dsdna junction is this is the drift velocity of a dsdna - ssdna junction in terms of the zipping up rate ( ) and the unzipping rate ( ) , or the zipping rate ( ) and the unzipping free energy ( ) .we will come back to this result below , but we note now that eq .[ vj ] is appropriate only when the junction is far from a helicase . as defined in eq .[ eq4 ] , is the change in free energy that occurs when one additional base pair is unzipped .thus , as far as this expression for is concerned , the final , `` product '' state is the unzipped state , and the initial , `` reactant '' state is the zipped state .thus , unzipping corresponds to the forward direction of the reaction .we may make contact with what students have learned in chemistry classes , namely that a reaction proceeds forward if is negative , by pointing out that eq .[ vj ] informs us that the unzipping reaction proceeds forwards ( _ i.e. _ that ) only for , exactly as we are told in chemistry classes . here, though , this result is derived from a more basic principle , namely the boltzmann factor .helicases are a class of motor proteins ( a.k.a . molecular motors ) , which perform myriad tasks in the cell by catalyzing atp - to - adp hydrolysis and using the free energy released in this reaction to do work .the importance of helicases may be judged from the fact that 4% of the yeast genome codes for some kind of helicase .one of their roles is to unzip dsdna and/or dsrna .thus , helicases play an indispensible role in dna replication , for example . to engage students in this topic , we start by showing a number of online movies illustrating the dna - unzipping activity of helicase motor proteins at the replication fork . these movies also present an opportunity for active learning in which we ask students to discuss with their neighbors what is misleading about the videos .the essential point is that , wonderful as they are , the videos suggest that everything proceeds deterministically .by contrast , as we will discuss , all of the processes depicted are actually random walks , but with a drift velocity that corresponds to their progress .we also point out that , on the medical side , werner syndrome , which involves accelerated aging , is caused by a mutation in the _ wrn _ gene which codes for the helicase wrn .one proposed mechanism for how helicase unzips dna is as follows .the helicase steps unimpeded on ssdna towards a ss - to - ds junction , until it encounters the junction , which then blocks its further progress , because the helicase translocates only on ssdna . 
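the drift - velocity result of eq . [ vj ] can be checked with a few lines of monte carlo that implement exactly the three possibilities listed above ( zip , unzip , or do nothing in a small time dt ) . the rates , the step length and dt below are placeholder values of ours ; the sign convention follows the text , with zipping counted as a negative displacement of the junction .

```python
import random

# biased random walk of the ss-to-ds junction (no helicase nearby):
# in a small time dt the junction unzips one base pair (+a) with probability k_u*dt,
# zips one base pair (-a) with probability k_z*dt, and otherwise does nothing.
# predicted drift velocity: v = a * (k_u - k_z). all values are illustrative placeholders.
a, k_u, k_z = 0.34e-9, 40.0, 25.0      # step length (m), unzip/zip rates (1/s)
dt, n_steps = 1e-3, 200_000

x = 0.0
for _ in range(n_steps):
    r = random.random()
    if r < k_u * dt:
        x += a                          # unzip: one more base pair opens
    elif r < (k_u + k_z) * dt:
        x -= a                          # zip: one base pair closes

v_sim = x / (n_steps * dt)
print(f"simulated drift {v_sim:.3e} m/s vs analytic {a*(k_u - k_z):.3e} m/s")
```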
however , at the junction , there is a non - zero probability per unit time for the junction to thermally unzip one base pair , because of the boltzmann factor .it is then possible for the helicase to step into the just - unzipped position .if the helicase does this , the dna is prevented from subsequently zipping back up again . in this way ,the junction is unzipped one step .repeating this process many times leads to the complete unzipping of the dna . because this mechanism relies on random brownian motions to both unzip the dna and to move the helicase into the just - unzipped position ,the helicase is said to be a brownian ratchet , analogous to feynman s thermal ratchet . just as the motion of the ss - to - ds junction may be conceived as a random walk , so may be the translocation of the helicase on ssdna . in this case , the probability of the helicase stepping one base pair towards the junction ( ) in a time is , and the probability stepping one base pair away from the junction ( ) in a time is , where and are the rate of stepping towards the junction and the rate of stepping away from the junction , respectively . since probabilities sum to unity , and we assume that the only three possibilities in a small time are to step towards the junction one base pair or to step away from the junction one base pair or to not do anything , we must have that the probability to do nothing is . given these probabilities , and the length of a base pair , , we may calculate the mean displacement of the helicase in a time : since the mean of the sum of identically - distributed , statistically - independent random variables is times the mean of one of them , then in a time the mean displacement of the helicase is the corresponding drift velocity of the helicase is this is the drift velocity of a helicase in terms of the stepping - towards - the - junction rate ( ) and the stepping - away - from - the - junction rate ( ) .just like eq. [ vj ] , eq .[ vh ] is appropriate only when the helicase is far from the junction .-direction to increase towards the right , so that and correspond to motion in the negative -direction , and and correspond to motion in the positive -direction.,title="fig:",scaledwidth=45.0% ] an important additional point , concerning helicase translocation on ssdna , is that , as we saw in eq .[ dg ] , the ratio of forward and backward rates is given by a change in free energy .thus , for helicase stepping we must expect , in analogy with eq .[ dg ] , that the ratio of stepping rates is given by where is a free energy change .but what free energy change ?the answer can be gleaned from the observation that helicases , and motor proteins generally , can be thought of as enzymes , which catalyze atp - to - adp hydrolysis , which is coupled to the helicase s translocation .it follows that in eq .[ dgprime ] corresponds to the free energy difference between atp and adp .( note that , as specified in eq .[ dgprime ] , must be positive , in order to ensure that so that the helicase translocates on ssdna preferentially towards the ssdna - to - dsdna junction . )so far , we have considered the situtation when the ds - to - ss junction and the helicase are far apart . to determine how helicase unzips dsdna , it is necessary to determine what happens when these two objects come into close proximity , given that they can not cross each other . 
to elucidate what happens in this case , we show to the class a simple mathematica demonstration that simulates these two non - crossing random walks . the simulation treats both the location of the helicase and the location of the ds - to - ss junction as random walks . at each time step within the simulation , the helicase ordinarily steps in the positive -direction , towards the junction , with probability and in the negative -direction , away from the junction , with probability , while the junction ordinarily steps in the positive -direction , zipping up one step , with probability and in the negative -direction , unzipping one step , with probability . however , in the simulation , if the helicase and the junction are neighbors , neither one is permitted to step to where it would overlap with the other . thus , the helicase and the junction can not cross . an example of the simulational results is shown in fig . [ titans ] , where the orange trace represents the helicase location as a function of time and the green trace represents the location of the ssdna - to - dsdna junction as a function of time . evidently , the helicase and the junction track together , implying that they have the same drift velocity . for the parameters of this simulation , the helicase translocates in the same direction as it would in the absence of the junction . by contrast , the junction s direction is opposite its direction without the helicase . thus , for these parameters , the helicase indeed unzips dsdna . using sliders within the mathematica demonstration , which is readily accessed via any web browser , students can explore for themselves the effects of varying , , , and . [ caption of fig . [ titans ] : the green random walk represents the position of a ssdna - to - dsdna junction ; the blue line is the predicted common drift ; the two random walks start at 0 in the case of the junction , and at -2 in the case of the helicase . ] to incorporate analytically the fact that the helicase and the junction can not cross , we introduce the probability , , that the helicase and the junction are not next to each other . the dsdna - to - ssdna junction can only zip up if the helicase and the junction are not next to each other . therefore , we reason that eq . [ dxj ] should be modified to read similarly , eq . [ dxh ] should be modified to read it follows that the drift velocities are modified to read and . however , from the simulation , it is also clear that , while the helicase is unzipping dna , the helicase and the junction must have the same drift velocity , _ i.e. _ or we can solve this equation to determine : furthermore , we can use this expression for to determine the drift velocity at which the helicase unzips the dsdna by substituting into eq . [ vh2 ] . setting , we find eq . [ vvv ] . eq . [ vvv ] represents the velocity at which the helicase unzips dsdna according to the brownian ratchet mechanism . the numerator in eq . [ vvv ] is the difference of two rate ratios . it follows , using eq . [ dg ] and eq . [ dgprime ] in eq . [ vvv ] , that eq . [ vvvv ] informs us that whether or not the helicase unzips dsdna depends solely on whether or not .
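a possible python version of the non - crossing two - walker simulation is sketched below , in the spirit of the mathematica demonstration ; the rate names k_p , k_m , k_u , k_z , the lattice convention ( helicase to the left of the junction , unzipping counted as a positive step ) and the numerical values are our own assumptions . the measured common drift is compared with the analytic result obtained by equating the two modified drift velocities , which under these names reads v = a ( k_p k_u - k_m k_z ) / ( k_p + k_z ) .

```python
import random

# two non-crossing random walks: helicase (x_h) and ss-to-ds junction (x_j), with x_h < x_j.
# assumed convention: +1 = one base pair toward the dsDNA. the helicase steps +1 at rate
# k_p and -1 at rate k_m; the junction unzips (+1) at rate k_u and zips (-1) at rate k_z.
# if the two are adjacent, the helicase's +1 step and the junction's -1 step are blocked.
# all rate values are illustrative placeholders.
k_p, k_m, k_u, k_z = 50.0, 5.0, 20.0, 60.0
a, dt, n_steps = 1.0, 1e-3, 400_000

x_h, x_j = -2, 0                            # same starting positions as in the figure
x_h0, x_j0 = x_h, x_j
for _ in range(n_steps):
    adjacent = (x_j - x_h == 1)
    r = random.random()
    if r < k_p * dt:                        # helicase steps toward the junction
        if not adjacent:
            x_h += 1
    elif r < (k_p + k_m) * dt:              # helicase steps away from the junction
        x_h -= 1
    elif r < (k_p + k_m + k_u) * dt:        # junction unzips (always allowed)
        x_j += 1
    elif r < (k_p + k_m + k_u + k_z) * dt:  # junction zips up (blocked if adjacent)
        if not adjacent:
            x_j -= 1

T = n_steps * dt
v_analytic = a * (k_p * k_u - k_m * k_z) / (k_p + k_z)
print(f"helicase drift {(x_h - x_h0) * a / T:.2f}, junction drift {(x_j - x_j0) * a / T:.2f}, "
      f"analytic {v_analytic:.2f} (base pairs per unit time)")
```

with these placeholder rates the isolated junction would zip up ( k_z > k_u ) , yet the coupled pair drifts in the unzipping direction , reproducing the behaviour described for the simulation in fig . [ titans ] .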
for , the drift velocity of the helicase - plus - junction is positive , corresponding to the helicase unzipping the dsdna . for one base pair , we have , while for the hydrolysis of one atp molecule , we have , so indeed the helicase has plenty of free energy to do its work . in fact , energetically , one atp hydrolysis cycle could unzip up to about 5 base pairs . at the same time , beautiful single - helicase experiments suggest that the simple brownian ratchet mechanism of helicase dna - unzipping activity , presented here , should be refined by incorporating both a softer repulsive potential between the helicase and the ds - to - ss junction than the hard - wall potential implicit in our discussion , and suitable free energy barriers between different microstates of the helicase and junction . appropriate choices of the potential and the barriers permit the helicase to unzip dsdna faster than would occur in the case of a hard - wall potential . a force ( ) , that tends to unzip the dna , can be incorporated by replacing with and with . the mechanism by which actin or tubulin polymerization exerts a force is also a brownian ratchet and is schematically illustrated in fig . [ actin ] . in the case of a load , , applied to a cell membrane , the cell membrane is in turn pushed against the tip of an actin filament , which usually prevents the addition of another actin monomer ( g - actin ) of length to the tip of the actin filament ( f - actin ) . however , with probability specified by a boltzmann factor , the membrane s position relative to the tip , , occasionally fluctuates far enough away from the filament tip ( ) to allow a monomer to fit into the gap . if a monomer does indeed insert and add to the end of the filament , the result is that the filament and therefore the membrane move one step forward , doing work against the load force . repeating this many times for many such filaments gives rise to cell motility against viscous forces . in class we also show movies of cells moving as a result of actin polymerization , and of _ listeria monocytogenes _ actin `` rockets '' . [ caption of fig . [ actin ] : a load , applied to a membrane against which the polymerizing actin filament ( f - actin ) abuts . only if the gap between the tip of the actin filament and the membrane exceeds the length of a g - actin monomer is it possible for the filament to grow . ] similarly to eq . [ dxj ] and eq . [ dxh ] , we can write down an expression for the mean displacement of the filament tip in a time in the absence of a nearby membrane : where is the concentration of g - actin , is the length of an actin monomer , is the actin on - rate , and is the actin off - rate . however , if the membrane is nearby , it is only possible to add an actin monomer if the distance between the filament and the membrane is greater than . assuming that the time - scale for membrane fluctuations is much faster than that for adding actin monomers , if the probability , that the membrane - filament tip distance is greater than , is , then eq . [ actinx ] is modified to read and the drift velocity of the tip is but application of the boltzmann factor informs us that , when the force on the membrane is , the probability that the gap is greater than is so that this is the force - velocity relationship for an actin or tubulin filament .
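the force - velocity relationship just obtained , which in the notation assumed here reads v ( F ) = delta ( k_on c e^{-F delta / k_B T} - k_off ) , is easy to explore numerically ; it also gives the stall force at which v = 0 . the parameter values in the sketch below are illustrative placeholders of ours , not values from the article .

```python
import numpy as np

# brownian-ratchet force-velocity relation for a polymerizing filament, as in the text:
#   v(F) = delta * ( k_on * c * exp(-F*delta/(k_B*T)) - k_off )
# the filament stalls (v = 0) at F_stall = (k_B*T/delta) * ln(k_on*c/k_off).
# parameter values below are illustrative placeholders.
k_B_T = 4.1e-21          # J, thermal energy near room temperature
delta = 2.7e-9           # m, assumed monomer size
k_on_c = 100.0           # 1/s, assumed on-rate times g-actin concentration
k_off = 1.0              # 1/s, assumed off-rate

def v(F):
    return delta * (k_on_c * np.exp(-F * delta / k_B_T) - k_off)

F_stall = (k_B_T / delta) * np.log(k_on_c / k_off)
for F in np.linspace(0.0, F_stall, 5):
    print(f"F = {F*1e12:5.2f} pN, v = {v(F)*1e9:7.2f} nm/s")
print(f"stall force = {F_stall*1e12:.2f} pN")
```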
although the load , , is applied to the membrane , the velocity is constant . therefore , since the acceleration is zero , newton s second law tells us , as the students know , that there can be no net force on the membrane . we may deduce that the load is balanced by an equal and opposite force , generated by the polymerization ratchet . three interrelated , biologically - relevant examples of biased random walks were presented . first , we presented a model for dna melting , modelled as dna unzipping , which provides a way to illustrate the role of the boltzmann factor in a venue well - known to the students . second , we discussed the activity of helicase motor proteins in unzipping double - stranded dna , for example , at the replication fork , which is an example of a brownian ratchet . finally , we treated force generation by actin polymerization , which is another brownian ratchet , and for which we can determine how the velocity of actin polymerization depends on actin concentration and on load . in each of these examples , building on an earlier coverage of biased random walks , biology and pre - medical students in an introductory physics sequence at yale were led to the realization that a physics - based approach permits a deeper understanding of a familiar biological phenomenon .
three interrelated , biologically - relevant examples of biased random walks are presented : ( 1 ) a model for dna melting , modelled as dna unzipping , which provides a way to illustrate the role of the boltzmann factor in a venue well - known to biology and pre - medical students ; ( 2 ) the activity of helicase motor proteins in unzipping double - stranded dna , for example , at the replication fork , which is an example of a brownian ratchet ; ( 3 ) force generation by actin polymerization , which is another brownian ratchet , and for which the force and actin - concentration dependence of the velocity of actin polymerization is determined .
data compression is performed in all types of data requiring storage and transmission .it preserves space , energy and bandwidth , while representing the data in most efficient way [ 1 - 4 ] .there are numerous coding algorithms used for compression in various applications [ 1 - 3,5 - 18 ] .some of them are optimal [ 4,19 ] in all cases , whereas others are optimal for a specific probability distribution of the source symbols .all of these algorithms are mostly applied on -ary data source .however , most of the universal compression algorithms substantially increase their coding complexity and memory requirements when the data changes from binary to -ary source .for instance , in arithmetic coding , the computational complexity difference between the encoder and decoder increases with the number of source symbols [ 2 ] .therefore , it would be beneficial if binarization is perfomed on -ary data source before compression algorithms are applied on it .the process where binarization is followed by compression is most notably found in context - based adaptive binary arithmetic coding ( cabac ) [ 20 ] which is used in h.264/avc video coding standard [ 21 ] , high efficiency video coding ( hevc ) standard [ 22 ] , dynamic 3d mesh compression [ 23 ] , audio video coding standard ( avs ) [ 24 ] , motion compensated - embedded zeroblock coding ( mc - ezc ) in scalable video coder [ 25 ] , multiview video coding [ 26 ] , motion vector encoding [ 27 - 28 ] , and 4d lossless medical image compression [ 29 ] .there are many binary conversion techniques which are , or can be used for the binarization process .the most common among all is binary search tree [ 30 - 32 ] . in this, huffman codeword is used to design an optimal tree [ 18 ] . however , there are two limitations to it .first , the probability of all the symbols should be known prior to encoding that may not be possible in all the applications .although there are methods to overcome the above problem , they come at an additional cost of complexity . 
for example , binary search tree is updated with the change in incoming symbol probabilities .second , as with the huffman coding , the optimality is achieved only when the probability distribution of symbols are in the powers of two .apart from binary search tree , there are other binarization schemes like unary binarization scheme [ 20 ] , truncated unary binarization scheme [ 20 ] , fixed length binarization scheme [ 20 ] , golomb binarization scheme [ 9,20,33 - 34 ] , among many [ 5 - 13,20,30 - 34 ] .all of them are optimal for only certain type of symbol probability distributions , and hence , can only conserve the entropy of the data for that probability distribution of the source symbols .currently , there is no binarization scheme that is optimal for all probability distributions of the source symbols which would result in achieving overall optimal data compression .this paper presents a generalized optimal binarization algorithm .the novel binarization scheme conserves the entropy of the data while converting the -ary source data into binary strings .moreover , the binarization technique is independent of the data type and can be used in any field for storing and compressing data .furthermore , it can efficiently represent data in the fields which require data to be easily written and read in binary form .the paper is organized as follows .section 2 describes the binarization and de - binarization process that will be carried out at the encoder and decoder , respectively .the optimality proof of the binarization scheme is provided in section 3 . in section 4 , the complexity associated with the binarization process is discussed .lastly , section 5 concludes by stating the advantages of the presented binarization scheme over others , and its applications .the binarization of the source symbols is carried out at the encoder using the following two steps : 1 .a symbol is chosen , and a binary data stream is created by assigning 1 where the chosen symbol is present and 0 otherwise , in the uncompressed data .the uncompressed data is rearranged by removing the symbol chosen in step 1 .the two steps are iteratively applied for symbols .it needs to be explicitly emphasized that the binarization of symbol occurs on the previously rearranged uncompressed data and not on the original uncompressed data . here , the algorithm reduces the uncompressed data size with the removal of binarized symbols from the data , leading to the conservation of entropy .after binarizing every symbol , there are binary data streams corresponding to source symbols .it is because the binary string would represent and symbols as 1 and 0 , respectively .the binarization scheme demostrated here is optimal i.e. , the overall entropy of binarized data streams is equal to the entropy of original data containing -ary source .the proof of optimality is provided in section 3 .after binarization , the binarized data streams can be optimally compressed using any universal compression algorithm , including the algorithms that optimally compress only binary data ( for example : binary arithmetic coding ) . [ cols="^,^,^,^ " , ] table 1 shows the binarization process through an example . a sample input data aabcbacbbaccabacb is considered for the process and as can be seen , it contains three source symbols a, b , and c. in table 1 , binarization order states the sequence in which the symbols are binarized .for instance , in abc binarization order , a is binarized first , followed by b , and then finally by c. 
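a minimal python sketch of the two - step encoder described above is given below ; the function name binarize and the list - based representation are ours . run on the sample input aabcbacbbaccabacb in abc order , it produces the binary streams that the algorithm prescribes for the example of table 1 , with the redundant all - 1 s stream for the last symbol omitted .

```python
def binarize(data, order):
    """split an m-ary sequence into m-1 binary streams; the final stream
    would be all 1s and is therefore omitted, as noted in the text."""
    streams, remaining = [], list(data)
    for sym in order[:-1]:
        # step 1: mark the chosen symbol with 1 and every other symbol with 0
        streams.append([1 if s == sym else 0 for s in remaining])
        # step 2: remove the chosen symbol before binarizing the next one
        remaining = [s for s in remaining if s != sym]
    return streams

# the worked example, binarized in "abc" order
for sym, bits in zip("abc", binarize("aabcbacbbaccabacb", "abc")):
    print(sym, "".join(map(str, bits)))
```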
the row data shows the uncompressed data available to be binarized after each iteration . below the data row is the binarized value of each symbol . as can be seen , in each iteration the symbol being binarized is marked 1 , while the others are marked 0 . in the next iteration , the symbol that was binarized in the current step is removed from the uncompressed data . although shown in table 1 , the binarization process does not require binarizing the last symbol , because the resulting string contains all 1 s , provides no additional information and is therefore redundant . it also needs to be noted that each binarization order results in a different set of binary strings . at the decoder , the decoding of the compressed data is followed by de - binarization of the -ary source symbols . the order of decoding follows the order of encoding for perfect reconstruction at minimum complexity . with the encoding order information , the data can be perfectly reconstructed in orders other than the encoding order , but the reordering of the sequence after every de - binarization step increases the time as well as the decoder complexity . the de - binarization of the source symbols is also carried out in the two steps shown below , and these steps are recursively applied to all the binary data streams representing the -ary source symbols : 1 . replace 1 with the source symbol in the reconstructed data stream . 2 . assign the values of the next binary data stream in sequence to the 0 s in the reconstructed data stream . an example of the de - binarization process is shown in table 2 . the de - binarization order follows the same order as the binarization process . in table 2 , the row data represents the reconstructed data at each iteration . the value 1 is replaced by the symbol to be de - binarized in the respective iteration , while 0 s are replaced by the binary string of the next symbol to be de - binarized . finally , after the last iteration , the original input data aabcbacbbaccabacb is losslessly recovered for every binarization and de - binarization order . let the data source be y , taking one of m symbols y_1 , \ldots , y_m , and let x_i be the binary source associated with source symbol y_i . the entropy of the m - ary source is defined as h(y) = -\sum_{i=1}^{m} p(y_i) \log p(y_i) , where p(y_i) is the probability of source symbol y_i . subsequently , the entropy of the m - ary data source with length n is n\,h(y) . similarly , the entropy of a binary source x_i with data length n_i is n_i\,h(x_i) . as explained in the binarization algorithm , the uncompressed data is rearranged after the binarization of the previous symbol / s to data length n_i = n ( 1-\sum_{j=1}^{i-1} p(y_j) ) , i.e. , the length of the original data minus the length of all the previously binarized source symbols . hence , the overall entropy of the binarized strings is \sum_{i=1}^{m} n ( 1-\sum_{j=1}^{i-1} p(y_j) ) h(x_i) . here , m binary strings are considered for mathematical convenience . to achieve the optimal binarization of the m - ary source , the entropy of the m - ary source data must equal the total entropy of the binary strings . therefore ,
\begin{eqnarray}
h(y^n) & = & \sum_{i=1}^{m} h\!\left( x_i^{\,n ( 1-\sum_{j=1}^{i-1} p(y_j) )} \right) \nonumber \\
n\,h(y) & = & \sum_{i=1}^{m} n \left( 1-\sum_{j=1}^{i-1} p(y_j) \right) h(x_i) . \nonumber
\end{eqnarray}
the probability distribution of the binary source x_i is the probability distribution of the m - ary source when the first i-1 source symbols have already been binarized , i.e.
, removed from the original data . thus , h(x_i) can be rewritten in terms of the probabilities p(y_j) of the m - ary source in the following way :
\begin{eqnarray}
h(y) & = & \sum_{i=1}^{m}\left(1-\sum_{j=1}^{i-1}p(y_j)\right) h(x_i) \nonumber\\
& = & -\sum_{i=1}^{m}\left(1-\sum_{j=1}^{i-1}p(y_j)\right)\left[\frac{p(y_i)}{1-\sum_{j=1}^{i-1}p(y_j)}\log\frac{p(y_i)}{1-\sum_{j=1}^{i-1}p(y_j)} + \frac{\sum_{j=i+1}^{m}p(y_j)}{1-\sum_{j=1}^{i-1}p(y_j)}\log\frac{\sum_{j=i+1}^{m}p(y_j)}{1-\sum_{j=1}^{i-1}p(y_j)}\right] \nonumber\\
& = & -\sum_{i=1}^{m}p(y_i)\log\frac{p(y_i)}{\sum_{j=i}^{m}p(y_j)} - \sum_{i=1}^{m}\left(\sum_{j=i+1}^{m}p(y_j)\right)\log\frac{\sum_{j=i+1}^{m}p(y_j)}{\sum_{j=i}^{m}p(y_j)} \nonumber\\
& = & -\sum_{i=1}^{m}p(y_i)\log p(y_i) - \sum_{i=1}^{m}\left(\sum_{j=i+1}^{m}p(y_j)\right)\log\left(\sum_{j=i+1}^{m}p(y_j)\right) + \sum_{i=1}^{m}\left(\sum_{j=i}^{m}p(y_j)\right)\log\left(\sum_{j=i}^{m}p(y_j)\right) \nonumber\\
& = & -\sum_{i=1}^{m}p(y_i)\log p(y_i) + \left(\sum_{j=1}^{m}p(y_j)\right)\log\left(\sum_{j=1}^{m}p(y_j)\right) \nonumber\\
& = & -\sum_{i=1}^{m}p(y_i)\log p(y_i) + 1\cdot\log 1 \;=\; -\sum_{i=1}^{m}p(y_i)\log p(y_i) , \nonumber
\end{eqnarray}
where we have used 1-\sum_{j=1}^{i-1}p(y_j)=\sum_{j=i}^{m}p(y_j) , and where the last two sums in the fourth line telescope , leaving only the i=1 term , ( \sum_{j=1}^{m}p(y_j) ) \log ( \sum_{j=1}^{m}p(y_j) ) = 1\cdot\log 1 = 0 . the reduction of the total entropy of the binary strings to h(y) = -\sum_{i=1}^{m}p(y_i)\log p(y_i) , the entropy of the original m - ary source , proves that the binarization scheme preserves entropy for any m - ary data source . the computational complexity of the presented method is a linear function of the input data length . the binarization and de - binarization processes only act as a filter , assigning or replacing 0 s and 1 s , respectively , for each occurrence of a source symbol , without any of the additional tables or calculations that are created or performed for other binarization techniques . suppose the length of the input data is n , m is the number of source symbols , and y is the source . for the first symbol , the length of the binary string would be n . the length of the binary string for the second symbol would be the length of all the symbols except the first symbol ( see table 1 ) . likewise , the length of the i - th binary string would be the length of all symbols yet to be binarized . mathematically , this length can be written as n ( 1-\sum_{j=1}^{i-1} p(y_j) ) , where p(y_j) is the probability of symbol y_j . the total number of binary assignments would be \sum_{i=1}^{m} n ( 1-\sum_{j=1}^{i-1} p(y_j) ) , which is at most m n . as can be seen , the computational complexity of the binarization and de - binarization process is linear in terms of the input data length .
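the de - binarization and the entropy - conservation claim can both be checked in a few lines . the sketch below is ours ; it inverts the streams back - to - front , which is equivalent to the forward substitution of 1 s and 0 s described in the text , round - trips the sample data and verifies numerically that the total entropy of the binary streams equals n h(y) for the empirical symbol frequencies .

```python
from collections import Counter
from math import log2

def binarize(data, order):
    """encoder from the earlier sketch: m-1 binary streams, last one omitted."""
    streams, remaining = [], list(data)
    for sym in order[:-1]:
        streams.append([1 if s == sym else 0 for s in remaining])
        remaining = [s for s in remaining if s != sym]
    return streams

def debinarize(streams, order):
    """invert binarize(): rebuild the m-ary sequence from its binary streams."""
    rec = [order[-1]] * streams[-1].count(0)   # zeros of the last stream are the final symbol
    for sym, bits in zip(reversed(order[:-1]), reversed(streams)):
        fill = iter(rec)
        rec = [sym if b else next(fill) for b in bits]
    return "".join(rec)

def entropy(seq):
    n = len(seq)
    return -sum(c / n * log2(c / n) for c in Counter(seq).values())

data = "aabcbacbbaccabacb"                      # the worked example of tables 1 and 2
streams = binarize(data, "abc")
assert debinarize(streams, "abc") == data       # lossless round trip

h_mary = len(data) * entropy(data)              # n * h(y)
h_binary = sum(len(s) * entropy(s) for s in streams)
print(f"n*h(y) = {h_mary:.6f} bits, total binary entropy = {h_binary:.6f} bits")
```

the two printed values agree , as the derivation above requires , and the same check passes for any binarization order .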
however , even without the knowledge of probability distribution , the presented method is optimal .lastly , it has low complexity that is feasible for practical data compression .one of the immediate usage of the presented binarization technique is in cabac used in video and image compression .in addition , cabac with the proposed binarization scheme can potentially replace context - based adaptive arithmetic coding used in various image compression standards [ 35 ] , including jpeg2000 [ 36 ] .furthermore , the binarization scheme can be applied to all the universal compression algorithms that have less complexity and resource requirements for binary data , than -ary data .v. sanchez , p. nasiopoulos , r. abugharbieh , efficient 4d motion compensated lossless compression of dynamic volumetric medical image data , in : proc .conf . on acoustics , speech and signal process . ,las vegas , nevada , usa , 2008 , pp .549552 .k. ong , w. chang , y. tseng , y. lee , c. lee , a high throughput low cost context - based adaptive arithmetic codec for multiple standards , in : proc .symp . on circuits and syst ., vol . 1 , 2002 , pp .
the paper presents a binarization scheme that converts non - binary data into a set of binary strings . at present , there are many binarization algorithms , but they are optimal for only specific probability distributions of the data source . overcoming the problem , it is shown in this paper that the presented binarization scheme conserves the entropy of the original data having any probability distribution of -ary source . the major advantages of this scheme are that it conserves entropy without the knowledge of the source and the probability distribution of the source symbols . the scheme has linear complexity in terms of the length of the input data . the binarization scheme can be implemented in context - based adaptive binary arithmetic coding ( cabac ) for video and image compression . it can also be utilized by various universal data compression algorithms that have high complexity in compressing non - binary data , and by binary data compression algorithms to optimally compress non - binary data . binarization , source coding , data compression , image compression , video compression , binary arithmetic coding , context - based adaptive binary arithmetic coding ( cabac ) .
for physicists physics is a permanent inspiration for new discoveries .however , non - physicists often consider physics as a boring and old discipline , detached from everyday life .public often fails to realize the consequences of research in everyday applications , so it often considers the academic research as a waste of financial resources .but research is tightly connected to the development even if it is not strongly focused toward applications .this can be best illustrated by the well known statement that the light bulb was not discovered by optimizing a candle .the apparent non - relevance of physics for the everyday life is often caused by the choice of topics taught during the lectures , which are usually old from the point of young students , since even the most recent topics - fundamentals of modern physics - are more than a hundred years old .in addition , traditional teaching very often considers idealized examples and , worst of all , present experiments as a prooffor theoretical explanations .the physics education research has pointed out several of these problems and the physics education in general has advanced tremendously in the last twenty years . but topics that introduce a part of the frontier research into the classroom , showing the students that the physics is not a dead subject yet , are still extremely rare . in this paperwe present a topic , liquid crystals , which is one of rare examples , where such a transfer is possible .the community occupied by the research on liquid crystals counts several thousands of researchers .we all experience the consequences of research on liquid crystals every day ; every mobile phone , every portable computer and almost every television screen is based on the technology using liquid crystals .the physics of liquid crystals is not very simple but there are several concepts that can be understood by non - physics students as well , especially if the teaching approach is based on gaining practical experiences with liquid crystals . in addition , for advanced levels of physics students , liquid crystals may serve as a clear illustration of several concepts especially in thermodynamics and optics .a serious interest of researchers for an introduction of liquid crystals into various levels of education was first demonstrated at the international liquid crystal conference ( ilcc ) in krakow , poland , in 2010 .ilcc is a biennial event gathering more than 800 researchers studying liquid crystals from a variety of aspects . in krakow ,one of four sections running in parallel was called _liquid crystals in education_. the audience unexpectedly filled the auditory to the last corner and after lectures lengthy discussions developed . a similar story repeated at the education section at the european conference on liquid crystals in maribor , slovenia , in 2011 , and at ilcc in mainz , germany , in 2012 . at present , some of the physics of liquid crystals is usually briefly mentioned at various courses at the university level , but there is no systematic consideration from the education perspective about the importance of various concepts and teaching methods . to our best knowledge ,there exist no example of a model teaching unit . 
in this contributionwe report on a teaching module on liquid crystals , which is appropriate for the undergraduate level for non - physicists .the module can be extended to the lab work at more advanced levels .most of the module can also be used in courses related to thermodynamics and optics as demonstration experiments or lab work accompanied by more rigorous measurements and calculations , which are not considered in detail in this contribution .the paper is organized as follows : in section 2 we consider the prerequisites for the introduction of new modern topic into education . before designing a module we had to consider several points , not necessary in the same order as quoted here :what outcomes do we expect of the teaching module ? which are the concepts that students should understand and be able to apply after the module ? where in the curriculum should the topic be placed , or equivalently , what is the knowledge students need to be able to construct new knowledge about liquid crystals ? which teaching methods are most appropriate for the teaching module ? andfinally , do we have all the materials like experiments , pictures , equipment and facilities to support the teaching module ?in section 3 we report the pilot evaluation study of the teaching module , which was performed in 2011 . in section 4we conclude and discuss several opportunities that the new teaching module offers to the physics education research in addition to the new knowledge itself .when we consider a new topic which is a part of contemporary research with applications met every day , and we want to adapt it for teaching purposes , the literature search is not much of a help .a thorough literature search did not show any theoretical frameworks on this topic .one can find theoretical frameworks for various approaches to teaching and discussions about students motivation and understanding of various concepts .we have found few examples of introduction of new topics like an introduction of semiconductors into the secondary school or introduction of more advanced concepts with respect to friction only .there are also examples of introduction of concepts of quantum mechanics into high school .all authors reported similar problems with respect to the existing theories and results in physics and science education research ; they had to build the units mostly from the personal knowledge , experience and considerations .on the other hand , several approaches for analytical derivation of already treated concepts , several suggestions for demonstrations and lab experiments for teaching purposes are published in every issue of the american and european journal of physics .this simply means that the physics community is highly interested in the improvement of the teaching itself , but the motivation of the researchers , being also lecturers , lies more in the area of developing new experiments than in thorough studies of their impact .therefore , a lot of material for teaching purposes for any topic , old , modern or new is available , but one further step is usually needed towards the coherent teaching module . 
with the above mentioned problems in mind , we begin by brief discussion on what liquid crystals are and then give a short overview of the existing literature regarding the introduction of liquid crystals into teaching .then we focus on the teaching module : we define our goals , consider the pre - knowledge on which the module should be built and finally describe details of the module .liquid crystals are materials which have at least one additional phase between the liquid and the solid phase .this phase is called the liquid crystalline phase and it has properties of both the liquid and the crystalline phase : a ) it flows like a liquid , or more fundamental , there is no long range order in at least one of directions , and b ) it is anisotropic , which is a property of crystals , or , again , more fundamental , there exists a long range order in at least one of directions .the name liquid crystal is the name for the material , which exhibits at least one liquid crystalline phase .liquid crystalline phases differ by the way of long range ordering . in this contributionwe will discuss only the simplest type of ordering that is typical for the nematic phase .its properties are applied in a liquid crystalline screen . the molecular order in the crystalline phaseis shown schematically in figure [ fig : fig_1 ] ( a ) , in the nematic liquid crystalline phase in figure [ fig : fig_1 ] ( b ) and in the isotropic liquid phase in figure [ fig : fig_1 ] ( c ) .one can see that , in the liquid crystalline phase , there exists some orientational order of long molecular axes .such a material is anisotropic , the physical properties along the average long molecular axis obviously being different from the properties in the direction perpendicular to it .there are several liquid crystal phases made of molecules having rather extravagant shapes .however , we shall limit our discussion to the simplest case of nematic liquid crystal made of rod - like molecules without any loss of generality of the phenomena studied within the teaching module . when liquid crystal molecules are close to surfaces , surfaces in general tend to prefer some orientation of long molecular axes .using a special surface treatment one can achieve a well - defined orientation of long molecular axes at the surface , for example , in the direction parallel to the surface or perpendicular to it . in liquid crystal cells , which are used in liquid crystal displays ( lcds ) ,surfaces are usually such that they anchor the molecules by their long molecular axes in the direction parallel to the surface ; however orientations of molecules at the top and bottom surface are perpendicular . molecules between the surfaces tend to arrange with their long axis being parallel , however due to the surface anchoring their orientation rotates through the cell ( figure [ fig : fig_2 ] ( a ) ) . because in the anisotropic materials the speed of light depends on the direction of light propagation and on the direction of light polarization , liquid crystals organized in such a special way rotate direction of light polarization .if such a cell is put between two crossed polarizers whose transmission directions coincide with the direction of surface anchoring , the cell transmits light .= 0 ) the cell transmits light ; ( b ) the dark state : when voltage is applied to the two glass plates molecules rearrange and the cell does not transmit light . 
]liquid crystals are also extremely useful in discussing different competing effects that determine the molecular arrangement .application of an external magnetic or electric field changes the structure in the liquid crystal cell described above , because the electric or magnetic torque tends to arrange molecules in the direction parallel or perpendicular to the external field , depending on the molecular properties .rotation of molecules in the external field changes the optical transmission properties of the cell .this is a basis of how lcds work : with the field on ( figure fig : fig_2 ( b ) ) , light is not transmitted through a cell and the cell is seen to be dark ; with the field off ( figure [ fig : fig_2 ] ( a ) ) , light is transmitted through the cell and the cell is seen to be bright .because of their unique physical properties several authors have already considered the introduction of liquid crystals into the undergraduate university studies . a mechanical model of a three dimensional presentation of liquid crystals phasesis presented in .the historical development of the liquid crystals research and their application is given in .in the same authors point out that liquid crystals are an excellent material to connect some elementary physics with technology and other scientific disciplines . the procedures to synthesize cholesteric liquid crystals ( nematic liquid crystals in which the average orientation of long molecular axes spirals in space ) in a school lab at the undergraduate university level are given in together with the methods to test the elementary physical properties of liquid crystals .in the appendix to hecke_2005 an experiment to determine refractive indices of a liquid crystal is discussed .the anisotropic absorption of polarized light in liquid crystals is presented .liquid crystals that are appropriate for the education purposes are thermotropic ; their properties change with temperature .the colour of cholesteric liquid crystals changes if temperature increases or decreases , so they can be used as thermometers .they are also sensitive to pressure .reference contains worksheets for an experiment in which students discover the sensitivity of cholesteric liquid crystals mixtures on pressure and temperature . in a simple experimental setup is presented by which students can detect and record the light spectra , study and test the concept of bragg reflection , and measure the anisotropy of a refractive index in a cholesteric liquid crystal .a series of simple experiments that can be shown during lectures and can bring the science of liquid crystals closer to students is described in ciferno_1995 .the experiments are used to introduce the concepts of optics , such as light propagation , polarization of light , scattering of light and optical anisotropy .liquid crystal can also be used to describe light transmission through polarizers . 
when an external field is applied to a cell, a threshold value is required to rotate molecules in the direction preferred by the field .this effect is called the freedericksz transition and an experiment for the advanced physics lab at the undergraduate level is presented in .the procedure to prepare a surface - oriented liquid crystal cell is given where the procedure to synthesize a nematic liquid crystal 4-methoxybenzylidene-4-n - butylaniline ( mbba ) is provided as well .several experiments that illustrate optical properties of liquid crystals are shown .one learns how to design a cell in which molecules are uniformly oriented and what is observed , if this cell is studied under the polarizing microscope . a detailed description of phase transitions between the liquid, liquid crystal and crystal phases is given .exercises are interesting for undergraduate students because they synthesize the substance which they use for other experiments , e.g. measurements of the refractive indices .there are several advanced articles which give advice on the inclusion of liquid crystals into the study process at the university , both undergraduate or graduate , level .an experiment for the advanced undergraduate laboratory on magnetic birefringence in liquid crystals is presented in moses_2000 , measurements of order and biaxiality are addressed in low_2002 . in defects in nematic liquid crystals are studied by using physics applets .liquid crystals are not used only in displays but also in switches . in a liquid crystal spatial light modulator is built and used for a dynamic manipulation of a laser beam .when discussing an introduction of a new topic into education , the team should be aware of the goals as well as of limitations .there are no general criteria established about the concepts that student should learn and comprehend , when a new topic is introduced .therefore , for liquid crystals , we had to rely on our understanding of the topic ; we had to neglect our personal bias toward the matter as two of the authors are also active researchers in the theoretical modeling of liquid crystals .being an expert in the research of liquid crystals may also over or under estimate the concepts that are important for students .we hope that we managed to avoid these personaltraps .the aim of the teaching module is to explain how lcds work and what is the role of liquid crystals in its operation .we also believe that some basic understanding of lcds is a part of a general public knowledge , because it is an important example of the link between the academic research and the applications , which follow from it .if non - physics students meet such an example at least once during their studies , they might not consider the academic research that has no immediate specific application as obsolete . 
according to our opinionstudents should obtain and understand the following specific concepts : * they should be able to recognize and identify the object of interest - the pixel - on an enlarged screen ; * they should be aware of the fact that liquid crystals are a special phase of matter having very special properties ; * they should become familiar with the following concepts : anisotropy , double refraction and birefringence ; * they should be aware of the fact that liquid crystals must be ordered if we want to exploit their special properties ; * they should know that liquid crystals are easily manipulated by external stimuli like an electric field ; and finally * they should link the concepts mentioned above in a consistent picture of pixel operation . before constructing the teaching module we have thus set the goals stated above .the next steps are : a ) to consider the required knowledge before the students start the module , b ) to position the topic in the curriculum , otherwise teachers will probably not adopt it and c ) to choose the methods , which will be the most successful for constructing the new knowledge .as liquid crystals are materials which form a special liquid crystalline phase , students should be familiar with a concept of phases .they should be aware of a phase transition that appears at an exact temperature .students very often believe that mixture of ice and water can have any temperature as they often drink mixtures of ice and water , which are not in a thermodynamically stable state yet .the next important concept is the speed of light in a transparent medium , its relation to the index of refraction and snell s law .if students are familiar with the concept of polarized light and methods of polarizing the light , it is an advantage .however , polarization of light can also be nicely taught when teaching the properties of liquid crystals .students should also know , at least conceptually , that materials electrically polarize in an external electric field and that the material polarization ( not to be confused with polarization of light ! ) is a consequence of structural changes of a material in the electric field. considerations of the required preliminary students knowledge also give some hints about the placement of the topic on liquid crystals in the curriculum : * when teaching thermodynamics or , more specifically , phases and phase transitions , additional phase can be visually shown by liquid crystals , since the appearance of liquid crystals in their liquid crystalline phase is significantly different from their solid or liquid state .this is not the case for other materials , for example , a ferromagnetic material or a superconductor looks exactly the same when the phase transition to the ferromagnetic or superconductive phase appears at the transition temperature .the phase transition has to be deduced from other properties .* when snell s law is introduced and prisms are discussed , a rainbow is often added as an interesting phenomenon that is observed because the speed of light depends on its wavelength .similar phenomenon , i.e. 
a dependence of the speed of light on light polarization can be discussed by a phenomenon of a double refraction .* one picture element ( pixel ) is formed by confining lc between two conducting plates .one pixel is thus a capacitor with a dielectric material ( liquid crystal ) between the plates .when voltage is applied to the cell surfaces ( capacitor plates ) , the material between the plates polarizes .electric polarization leads to changes in structural properties of the materials , in this case to the reorientation of molecules , which affects the transmission of light through the cell .thus a lc pixel can be used to consider electric polarization of other materials that can structurally change due to the reorientation of molecules . from the abovewe clearly see that liquid crystals can provide a motivational _ file rouge _ through several topics . on the other hand , by showing several phenomena related to liquid crystals , teachers can motivate students to remain interested in various topics in physics and link them together in the explanation of how one pixel in a liquid crystal display works .teachers can choose to teach about liquid crystals as a separate topic aiming to establish a link between a current fundamental research topic and consequences of the research applied and used every day by everybody .the last question that remains to be answered is a choice of methods for the teaching intervention .liquid crystals are a new topic for students .they are mostly only slightly familiar with the name and have practically no associations connected to the name except a loose connection to displays , as will be shown later .therefore the topic should be introduced from the very beginning . due to several concepts that must be introduced and the structure of understanding that students have to build without any pre - knowledge , traditional lecturing seems the most natural choice . however , from the literature and our experiences we are aware that the transfer of knowledge to relatively passive students is not as successful as one would wish for .therefore we decided to use a combination of a traditional lecture accompanied by several demonstration experiments , where most of the fundamental concepts and properties are introduced , a chemistry lab , where students synthesize a liquid crystal , and a physics lab , where they use their own product from the chemistry lab to study its various physical properties by using an active learning approach .the lab work allows students to construct and to comprehend several new ideas that are all linked together in the application , a liquid crystalline display .the teaching module gives the basic knowledge about liquid crystals , which we assessed as necessary for the understanding of liquid crystals and liquid crystal display technology for a general citizen having at least a slight interest in science and technology .the teaching module has three parts : lecture ( 1st week ) , lab work at chemistry ( 2nd week ) and lab work at physics ( 3rd week ) .the estimated time for each part is 90 minutes . 
within the teaching module we wanted the students to assimilate the following concepts : a ) a synthesis of a liquid crystals mbba , b ) an existence of an additional phase and phase transitions , c ) polarization of light and d ) optical properties of liquid crystals related to anisotropy .below we present the module , its aims , its structure and a short description of activities in each part of the teaching module .the lecture in duration of 90 minutes provides the fundamental information about liquid crystals , about their properties and how they are used in applications .the method used is a traditional lecture accompanied by several demonstration experiments that are used for motivation , as a starting point for discussion and as an illustration of phenomena discussed .after the lecture students should be able to * list some products based on the liquid crystal technology ; * recognize the additional liquid crystalline phase and phase transition ; * describe and illustrate the structure of liquid crystals on a microscopic level ; * recognize the properties of liquid crystals , which are important for applications : birefringence , resulting from the orientational ordering of molecules , and the effect of an electric field on molecular orientation ; * describe how a lcds work ; * know that liquid crystals are also found in nature and that they are present in living organisms .in addition , a short part of the lecture introduces polarizers and their properties , since most of the students have not heard of the concept of polarization and polarizers during their previous education .the lecture starts with a magnification of a lc screen as a motivation ; it is explained that at the end of the module students will be able to understand how the display works .the lecture continues with a description and a demonstration of the new , liquid crystalline , phase , the macroscopic appearance of which is similar to an opaque liquid .all three phases ( crystalline , liquid crystalline and isotropic liquid ) are shown while heating the sample .the microscopic structures of all three phases are presented by cartoons and the orientational order is introduced . the molecular shape which allows for the orientational ordering is discussed .the concept of light propagation in an anisotropic material is introduced and double refraction is shown by using a wedge liquid crystalline cell .colours of an anisotropic material ( scotch tape ) between crossed polarizers are demonstrated and explained .when polarized light propagates through an anisotropic material , the polarization state of light , in general , changes from the linearly polarized to elliptically polarized .the state of elliptical polarization is defined by the wavelength of light , the birefringence of the anisotropic material ( i.e. the difference between the refractive indices ) and the thickness of the material .the understanding of how polarized light propagates through a birefringent ( optically anisotropic ) material is crucial for understanding the pixel operation . 
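the statement above , that the emerging polarization state is set by the wavelength , the birefringence and the thickness , can be made quantitative with the standard textbook transmission formula for a uniform ( untwisted ) birefringent slab between crossed polarizers , t = sin^2 ( 2 theta ) sin^2 ( pi dn d / lambda ) . the short sketch below is a supplement of ours , not part of the module , and the values of dn and d are illustrative placeholders .

```python
import numpy as np

# transmission of a uniform birefringent slab between crossed polarizers
# (standard retarder formula): T = sin^2(2*theta) * sin^2(pi * dn * d / lam).
# dn and d are illustrative placeholders.
dn = 0.2                        # assumed birefringence of the liquid crystal
d = 5e-6                        # assumed cell thickness (m)
theta = np.pi / 4               # optic axis at 45 degrees to the polarizer

for lam_nm in (450, 500, 550, 600, 650, 700):
    lam = lam_nm * 1e-9
    T = np.sin(2 * theta) ** 2 * np.sin(np.pi * dn * d / lam) ** 2
    print(f"{lam_nm} nm: T = {T:.2f}")

# the strong wavelength dependence of T is what produces the colours seen
# between crossed polarizers; heating into the isotropic phase sets dn = 0,
# so T = 0 and the field of view goes dark.
```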
in the lcdsthe electric properties of molecules are very important so they have to be introduced in the lecture .the effect of the electric field on the molecular orientation is discussed as well .molecules are described as induced electric dipoles that are rotated by the external electric field .because the anisotropic properties depend on the structure of the liquid crystal in the cell , the transmission depends on the applied electric field .this leads to the structure of a pixel and to how liquid crystal displays work . at the end of the lecturesome interesting facts are mentioned , such as liquid crystals being a part of spider threads and cell membranes in living organisms .the aims of the lab work in chemistry are the following : * students are able to synthesize the liquid crystal mbba .* students realize that the product of the synthesis is useful for the experiments showing the basic properties of liquid crystals .students synthesize the liquid crystal mbba in a school lab from 4-n - butylaniline and of 4-methoxybenzaldehyde .due to the safety reasons , the synthesis has to be carried out in the fume hood .this part of the teaching module can be left out if the laboratory is not available and the lab work in physics could extend to two meetings of the duration of 90 minutes , which allows for more detailed studies of phenomena .four experiments that are carried out during the lab work in physics provide students with personal experiences and allow them to investigate the most important liquid crystalline properties .* experiment 1 : an additional phase and phase transition * aims : * students know that the liquid crystalline state is one of the states of mater .* students are able to describe the difference between the melting temperature and the clearing temperature . *students are able to measure these two temperatures and use them as a measure of the success of the synthesis . if both temperatures are close to the temperatures given in the published data , the synthesis was successful .students use a water bath to heat the test tube with a frozen liquid crystal mbba .they measure the temperature of water assuming that the small sample of liquid crystal has the same temperature as the bath .they observe how the appearance of the substance changes ( figure [ fig : fig_3 ] ) while heating the water bath .they measure the temperature at which the sample begins to melt .this temperature is called the melting temperature and it is the temperature of the phase transition from the crystalline to the liquid crystalline phase .students heat the water bath further and measure the temperature at which the milky appearance of the sample starts to disappear .this temperature is called the clearing temperature and it is the temperature of the phase transition from the liquid crystalline phase to the isotropic liquid . *experiment 2 : polarization * aims : * students know what polarizers are and how they affect the unpolarized light . * students know how light propagates through the system of two polarizers . 
*students are able to test , if the light is polarized and in which direction it is polarized by using the polarizer with a known polarizing direction .* students are able to test if the substance is optically anisotropic by using two polarizers .students use two polarizers and investigate the conditions under which light propagates through two polarizers or is absorbed by them .they compare the transmitted light intensity as a function of the angle between the polarizing directions of polarizers . by this part of the activity they learn how to use a polarizer as an analyzer .they also verify that the reflected light is partially polarized .students observe various transparent materials placed between crossed polarizers and find that light can not be transmitted when ( isotropic ) materials like water or glass are placed between the crossed polarizers . when some other material like a scotch tape , cellophane or cd box is placed between the crossed polarizers ,light is transmitted .colours are also often observed ( figure [ fig : fig_4 ] ) .such materials are anisotropic .students can investigate how various properties of anisotropic materials ( thickness , type of material ) influence the colour of the transmitted light .the activity provides an experience that is later used for observations of liquid crystals in a cell .* experiment 3 : double refraction * aims : * students know that birefringence is an important property of matter in the liquid crystalline state . * students are able to make a planar wedge cell and find an area where liquid crystal is ordered enough that the laser beam splits into two separate beams .the beams are observed as two light spots on a remote screen .* students know how to check light polarization in a beam by a polarizer .students manufacture a wedge cell from a microscope slide , a cover glass , a foil for food wrapping or a tape and the liquid crystal mbba pavlin_2011 .a special attention is given to the rubbing of the microscope and cover glass , which enables anchoring of the liquid crystal molecules .the rubbing also prevents the disorder of clusters of molecules with the same orientation of long molecular axis ; such clustering results in scattering of light and opaqueness .students direct the light on the wedge cell ( they use a laser pointer as the light source ) and find the area of the cell where the laser beam splits into two beams . by rotating the polarizer between the cell and the screen they verify light polarization in the beams ( figure [ fig : fig_5 ] ) . then students heat the wedge cell with the hair - dryer and observe the collapse of the two bright spots into one at the phase transition to the isotropic liquid . *experiment 4 : colours * aims : * students are able to fabricate a planar cell , i.e. 
a cell with parallel glass surfaces .* students know that liquid crystals are optically anisotropic and that light is transmitted if a cell filled with liquid crystal in its liquid crystalline phase is placed between two crossed polarizers.they know that under such circumstances colours may also appear when the sample is illuminated by white light .* students know that the colours observed under perpendicular and under parallel polarizers are complementary .* students are able to mechanically order molecules in a planar cell .students manufacture a planar cell filled with the liquid crystal mbba from a microscope slide , a cover glass and a foil for food wrapping or a tape .they observe the planar cell under a polarizing microscope ( a school microscope with or and two crossed polarizing foils ) . at this pointstudents should remember the descriptive definition of optically anisotropic materials and find out that liquid crystals are their representatives .then they rotate the polarizing foils and observe how colours change ( see figure [ fig : fig_6 ] ) .this experiment also illustrates a concept of complementary colours .afterwards students heat the cell and observe colour changes .an experiment where molecules are ordered is done next .students make micro notches on a microscope slide by rubbing it by a velvet soaked in alcohol .molecules orient with their long axes parallel to the surface and the rubbing direction .a similar process is used in the fabrication of liquid crystal displays .they observe the cell with the ordered liquid crystal under the polarizing microscope .finally , the cell is heated by a hair - dryer and students observe the phase transition that appears as a dark front moving through the sample ending in a dark image when the liquid crystal between crossed polarizers is in the isotropic phase .the lab work is concluded by a discussion of how one pixel in the lcd works , relating the changes of the liquid crystalline structure due to the electric field to the transmission rate of the pixel . at this pointthe fact that colour filters are responsible for colours of each part of the pixel is also emphasized .the aim of our study was development of a teaching module for non - physics students , which could also be implemented for the high school students .our goal was to give future primary school teachers basic knowledge about liquid crystals , so that they will be able to answer potential questions of younger students when they will be teachers themselves .therefore , the teaching module described in the previous section was preliminary tested by a group of 90 first year students enrolled in a four - year university program for primary school teachers at the faculty of education ( university of ljubljana , slovenia ) in the school year 2010/11 . in this section we present the evaluation of the module as regards the efficiency of teaching intervention : which concepts do students assimilate and comprehend and to what extent ?first - year pre - service teachers ( future primary school teachers ) were chosen for testing the teaching module .they were chosen , because their pre - knowledge on liquid crystals is just as negligible as the pre - knowledge of students from other faculties and study programs ( see section 4 ) .in addition , the pre - service teachers do not have any special interest in natural sciences , but they have to be as scientifically literate as everyone else who has finished high school . 
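Experiments 2 and 4 above rest on two quantitative facts that the worksheets treat only qualitatively. The display below is a standard-optics restatement (not taken from the module's materials) that can support the discussion of the observations.

```latex
% Malus's law (Experiment 2): intensity after two polarizers at relative angle \theta
I(\theta) \;=\; I_0 \cos^2\theta .

% Birefringent cell between polarizers (Experiment 4), optic axis at 45 degrees,
% retardation \delta = 2\pi\,\Delta n\, d / \lambda :
T_{\perp}(\lambda) \;=\; \sin^2\!\frac{\delta}{2}, \qquad
T_{\parallel}(\lambda) \;=\; \cos^2\!\frac{\delta}{2}, \qquad
T_{\perp}(\lambda) + T_{\parallel}(\lambda) \;=\; 1 .
```

The last identity (valid for a lossless cell) is the reason the colours observed under crossed and under parallel polarizers are complementary: whatever part of the white-light spectrum is blocked in one geometry is transmitted in the other.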
and most important ,the pre - service primary school teachers form the only homogeneous group that has physics included in the study program and is , at least approximately , large enough to allow for a quantitative study . in the group of , were male and female students .they were on average years old ( years ) . on average, they achieved points out of ( ) on the final exam at the end of the secondary school .the average achievement on the final exam in slovenia was points out of and a total of candidates attended the final exams in spring 2010 .the studied group consisted of predominantly rural population with mixed socio - economics status .the data collection took place by a pre - test , classroom observations of the group work , worksheets and tests .the pre - test had 28 short questions .the first part ( questions ) was related to a general data about a student : gender , age , secondary school , final exam , residence stratum and motivation for science subjects . the second part ( questions )was related to liquid crystals , their existence , properties and microscopic structure .the pre - test was applied at the beginning of lecture related to liquid crystals .those students who did not attend the lecture filled in the pre - test before the beginning of the compulsory lab work in chemistry .the worksheet for the lab work in chemistry includes a procedure to synthesise liquid crystal mbba , a reaction scheme , observations and conclusions regarding the synthesis and questions from chemistry related to liquid crystals and the lab work .the worksheet for the lab work in physics presents properties of polarizing foils and optically anisotropic materials and experiments with the liquid crystal mbba .test includes short questions related to the knowledge obtained during the lecture and lab work .test was held immediately after the end of the physics lab ( in may 2011 ) .test 2 was a part of an exam held weeks later ( june 2011 ) .it has questions that , again , cover the contents of the lecture and lab work .questions on test were similar to questions given on the pre - test and test .the study provided an extensive set of data but in this paper we will focus only on students comprehension of new concepts .results of the pre - test show that of students have already heard of liquid crystals .the percentage is so high , because we were testing students informally obtained knowledge about liquid crystals as a part of another study held at the beginning of the academic year .one student said : when we got the questionnaire at the beginning of the academic year i was ashamed because i did not know anything about liquid crystals .when i came home i asked my father and checked on the web what they are .these experiments definitely bring them closer to me .such an interest is a rare exception , however , most of the students remembered the term liquid crystals , which was a central point of the questionnaire that they filled in at the beginning of the academic year 2010/11 . 
since lectures are not compulsory only students attended the lecture .150 students attended compulsory laboratories .they worked on the synthesis a week after the lectures and another week later on experiments with liquid crystals at the physics lab .students worked in groups of or in the chemistry lab and in pairs in the physics lab .however , the whole data ( tests and worksheets ) was collected only for 90 students , therefore we present only their achievements .all the groups made the synthesis successfully according to the procedure written in the worksheets . syntheses out of were successfully carried out which was confirmed by measuring the melting and clearing temperature of the synthesized liquid crystal mbba . on average of worksheetswere correctly filled in ( ) .all the experiments described in section 2.4.3 were successfully carried out in the physics lab .the only difference was that students did not prepare the wedge cell by themselves . due to the lack of time cellswere prepared in advance . onaverage 84 % of worksheets included correct answers to questions and observations ( ) . on the pre - test students on average achieved of all points .their achievements show that their prior knowledge about liquid crystals was limited , as expected . on test 1 that was held immediately after the physics lab, students on average achieved 68.1 % ( see table [ table : tablei ] ) .test was a part of a regular exam in physics .on test students on average achieved only points . the reduced performance on test 2can be explained by the research on memory and retention , which suggests that many standard educational practices , such as exams and a great emphasis on the final exam , which encourages studying by cramming , are likely to lead to the enhanced short - term performance at the expense of a poor long - term retention . .students achievements on tests [ cols= " < , < , < , < , < " , ] to conclude , the results from the testing of informally gained knowledge at the end of the secondary school of the pre - service teachers and the students from different faculties on average do not differ : in both samples the lack of knowledge about liquid crystals was detected .based on the research of prior knowledge it can be said that students that are interested in natural sciences and technology would assimilate at least as much knowledge about liquid crystals from the teaching module as the pre - service teachers did . since the results of prior knowledge show statistically significant differences between the knowledge of male and female students one can even dare to conclude that the male students would achieve better results on testing of the module as female students .one can therefore safely conclude that a general audience would achieve at least as good results as the group of students involved in this study .it must be stressed that we have designed the module in which students acquire new knowledge relevant to liquid crystals and their applications , as confirmed by the implementation and the evaluation of the module .module can also be used as a teaching module at more specialized physics courses .of course there are still opened issues that we intend to explore . 
since we know that the module is appropriate for students that are not motivated in science we will work on the adaptation of module for students motivated in natural sciences .the module presented in this paper can be readily used in the introductory physics courses , and with appropriate modifications it can also be used at lower levels of education .evaluation of the model raised several questions that need to be addressed in the future : how do practical experiences influence the knowledge about liquid crystals ?, how does a new learning environment influence the knowledge ?, how do the chosen teaching methods influence the study process ? , etc . however , the whole study and its evaluation show that it is worth an effort to develop new modules on topics related to the current scientific research and everyday technology .99 ghidaglia j m june 2009 high - performance computing is going to save research _la recherche _ 3 butterworthj 27 july 2010 sarkozy shares his enlightenment vision with high - energy physicists _ the guardian _ ( see : http://www.guardian.co.uk/science/blog/2010/jul/27/sarkozy-high-energy-physicists-ichep ) repnik r , cvetko m and gerli i 2011 development of some natural science competences in undergraduate study by training visualization skills on subject liquid crystal phases and structures _ mol .cryst . _ * 547 * 24954
The paper presents a teaching module about liquid crystals. Since liquid crystals are linked to everyday student experiences and are also a topic of current scientific research, they are an excellent candidate for a modern topic to be introduced into education. We show that liquid crystals can provide a _fil rouge_ through several fields of physics, such as thermodynamics, optics and electromagnetism. We discuss what students should learn about liquid crystals and what physical concepts they should know before studying them. In the presentation of the teaching module, which consists of a lecture and experimental work in a chemistry and a physics lab, we focus on experiments on phase transitions, polarization of light, double refraction and colours. A pilot evaluation of the module was performed among pre-service primary school teachers, who have no special preference for the natural sciences. The evaluation shows that the module is very efficient in transferring knowledge. A prior study showed that the informally obtained pre-knowledge on liquid crystals of first-year students in several different study fields is negligible. Since social-science students are the least interested in the natural sciences, it can be expected that students in any study programme will on average achieve at least as good a conceptual understanding of phenomena related to liquid crystals as the group involved in the pilot study.
in recent years , the theoretical and experimental use of quantum systems to store , transmit and process information has spurred the study of how much of classical information theory can be extended to the new territory of quantum information and , vice versa , how much novel strategies and concepts are needed that have no classical counterpart .we shall compare the relations between the rate at which entropy is produced by classical , respectively quantum , ergodic sources , and the complexity of the emitted strings of bits , respectively qubits . according to kolmogorov , the complexity of a bit string is the minimal length of a program for a turing machine ( _ tm _ ) that produces the string .more in detail , the algorithmic complexity of a string is the length ( counted in the number of bits ) of the shortest program that fed into a universal _ tm _ ( _ utm _ ) yields the string as output , i.e. . for infinite sequences , in analogy with the entropy rate , one defines the _ complexity rate _ as , where is the string consisting of the first bits of , .the universality of implies that changing the _ utm _ , the difference in the complexity of a given string is bounded by a constant independent of the string ; it follows that the complexity rate is _ utm_-independent .different ways to quantify the complexity of qubit strings have been put forward ; in this paper , we shall be concerned with some which directly generalize the classical definition by relating the complexity of qubit strings with their algorithmic description by means of quantum turing machines ( _ qtm _ ) . for classical ergodic sources , an important theorem , proved by brudno and conjectured before by zvonkin and levin ,establishes that the entropy rate equals the algorithmic complexity per symbol of almost all emitted bit strings .we shall show that this essentially also holds in quantum information theory .for stationary classical information sources , the most important parameter is the _ entropy rate _ , where is the shannon entropy of the ensembles of strings of length that are emitted according to the probability distribution . according to the shannon - mcmillan - breiman theorem , represents the optimal compression rate at which the information provided by classical ergodic sources can be compressed and then retrieved with negligible probability of error ( in the limit of longer and longer strings ) .essentially , is the number of bits that are needed for reliable compression of bit strings of length .intuitively , the less amount of patterns the emitted strings contain , the harder will be their compression , which is based on the presence of regularities and on the elimination of redundancies . from this point of view , the entropy rate measures the randomness of a classical source by means of its compressibility on the average , but does not address the randomness of single strings in the first instance .this latter problem was approached by kolmogorov , ( and independently and almost at the same time by chaitin , and solomonoff ) , in terms of the difficulty of their description by means of algorithms executed by universal turing machines ( _ utm _ ) , see also . 
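Several of the displayed formulas in the preceding paragraphs were lost in extraction. For reference, the standard definitions they point to can be restated as follows; the symbols K, k, h and omega are our own notation for quantities whose original symbols did not survive.

```latex
K_{\mathcal U}(x) \;=\; \min\{\,\ell(p) \,:\, \mathcal U(p)=x\,\}            % algorithmic complexity of a finite bit string x
\qquad
k(\omega) \;=\; \limsup_{n\to\infty}\tfrac1n\,K\!\big(\omega^{(n)}\big)      % complexity rate of an infinite sequence
\qquad
h \;=\; \lim_{n\to\infty}\tfrac1n\,H\!\big(P^{(n)}\big)                      % entropy rate of a stationary source
```

Brudno's theorem, recalled below, then states that for an ergodic source k(omega) = h for P-almost every infinite sequence omega.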
on the whole ,structureless strings offer no catch for writing down short programs that fed into a computer produce the given strings as outputs .the intuitive notion of random strings is thus mathematically characterized by kolmogorov by the fact that , for large , the shortest programs that reproduce them can not do better than literal transcription . intuitively , one expects a connection between the randomness of single strings and the average randomness of ensembles of strings . in the classical case , this is exactly the content of a theorem of brudno which states that for ergodic sources , the complexity rate of -almost all infinite sequences coincides with the entropy rate , i.e. . quantum sources can be thought as black boxes emitting strings of qubits .the ensembles of emitted strings of length are described by a density operator on the hilbert spaces , which replaces the probability distribution from the classical case .the simplest quantum sources are of bernoulli type : they amount to infinite quantum spin chains described by shift - invariant states characterized by local density matrices over sites with a tensor product structure , where is a density operator on .however , typical ergodic states of quantum spin - chains have richer structures that could be used as quantum sources : the local states , not anymore tensor products , would describe emitted strings which are correlated density matrices .similarly to classical information sources , quantum stationary sources ( shift - invariant chains ) are characterized by their entropy rate , where denotes the von neumann entropy of the density matrix .the quantum extension of the shannon - mcmillan theorem was first obtained in for bernoulli sources , then a partial assertion was obtained for the restricted class of completely ergodic sources in , and finally in , a complete quantum extension was shown for general ergodic sources .the latter result is based on the construction of subspaces of dimension close to , being typical for the source , in the sense that for sufficiently large block length , their corresponding orthogonal projectors have an expectation value arbitrarily close to with respect to the state of the quantum source .these typical subspaces have subsequently been used to construct compression protocols .the concept of a universal quantum turing machine ( _ uqtm _ ) as a precise mathematical model for quantum computation was first proposed by deutsch .the detailed construction of _ uqtms _ can be found in : these machines work analogously to classical _ tm_s , that is they consist of a read / write head , a set of internal control states and input / output tapes .however , the local transition functions among the machine s configurations ( the programs or quantum algorithms ) are given in terms of probability amplitudes , implying the possibility of linear superpositions of the machine s configurations . the quantum algorithms work reversibly .they correspond to unitary actions of the _ uqtm _ as a whole .an element of irreversibility appears only when the output tape information is extracted by tracing away the other degrees of freedom of the _uqtm_. this provides linear superpositions as well as mixtures of the output tape configurations consisting of the local states and blanks , which are elements of the so - called _computational basis_. 
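As a small numerical companion to the entropy rate recalled above: for a Bernoulli (i.i.d.) quantum source with one-site density matrix rho one has S(rho tensored n times) = n S(rho), so the rate is simply S(rho). The sketch below evaluates S(rho) from its eigenvalues; the particular rho is an arbitrary illustrative choice, not a state taken from the paper.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), evaluated via the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                 # discard numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

# An illustrative single-qubit state (any valid density matrix would do).
rho = np.array([[0.75, 0.25],
                [0.25, 0.25]])

s = von_neumann_entropy(rho)                     # entropy rate of the i.i.d. source built from rho
print(f"S(rho) = {s:.4f} bits per qubit")
```

For correlated ergodic sources the rate is still the limit of S(rho^(n))/n, but it can no longer be read off a single site.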
the reversibility of the _ uqtm _s time evolution is to be contrasted with recent models of quantum computation that are based on measurements on large entangled states , that is on irreversible processes , subsequently performed in accordance to the outcomes of the previous ones . in this paperwe shall be concerned with bernstein - vazirani - type _ uqtms _ whose inputs and outputs may be bit or qubit strings .given the theoretical possibility of universal computing machines working in agreement with the quantum rules , it was a natural step to extend the problem of algorithmic descriptions as a complexity measure to the quantum case .contrary to the classical case , where different formulations are equivalent , several inequivalent possibilities are available in the quantum setting . in the following, we shall use the definitions in which , roughly speaking , say that the algorithmic complexity of a qubit string is the logarithm in base of the dimension of the smallest hilbert space ( spanned by computational basis vectors ) containing a quantum state that , once fed into a _ uqtm _ , makes the _ uqtm _ compute the output and halt . in general , quantum statescan not be perfectly distinguished .thus , it makes sense to allow some tolerance in the accuracy of the machine s output . as explained below , there are two natural ways to deal with this , leading to two ( closely related ) different complexity notions and , which correspond to asymptotically vanishing , respectively small but fixed tolerance . both quantum algorithmic complexities and are thus measured in terms of the length of _ quantum _ descriptions of qubit strings , in contrast to another definition which defines the complexity of a qubit string as the length of its shortest _ classical _ description .a third definition is instead based on an extension of the classical notion of universal probability to that of universal density matrices .the study of the relations among these proposals is still in a very preliminary stage . for an approach to quantum complexity based on the amount of resources ( quantum gates ) needed to implement a quantum circuit reproducing a given qubit stringsee .the main result of this work is the proof of a weaker form of brudno s theorem , connecting the quantum entropy rate and the quantum algorithmic complexities and of pure states emitted by quantum ergodic sources .it will be proved that there are sequences of typical subspaces of , such that the complexity rates and of any of their pure - state projectors can be made as close to the entropy rate as one wants by choosing large enough , and there are no such sequences with a smaller expected complexity rate .the paper is divided as follows . in section 2 , a short review of the -algebraic approach to quantum sourcesis given , while section 3 states as our main result a quantum version of brudno s theorem . in section 4 ,a detailed survey of _ qtm_s and of the notion of _ quantum kolmogorov complexity _ is presented . 
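In display form, and using the notation of Definition [defqk] below, the two complexity notions just sketched read roughly as follows. This is a hedged reconstruction, since the original displays were lost: ell(sigma) denotes the length of the qubit program sigma and U the reference universal QTM.

```latex
QC^{\delta}_{\mathcal U}(\rho) \;=\; \min\big\{\,\ell(\sigma) \,:\,
      \tfrac12\operatorname{Tr}\big|\,\mathcal U(\sigma)-\rho\,\big| < \delta \big\},
\qquad
QC_{\mathcal U}(\rho) \;=\; \min\big\{\,\ell(\sigma) \,:\,
      \tfrac12\operatorname{Tr}\big|\,\mathcal U(\sigma,k)-\rho\,\big| < \tfrac1k
      \ \text{ for every } k\in\mathbb N \big\}.
```

The finite-accuracy version fixes a tolerance delta once and for all, while the approximation-scheme version has to meet every accuracy 1/k with one and the same program, supplied with k as a classical side input.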
in section 5 ,based on a quantum counting argument , a lower bound is given for the quantum kolmogorov complexity per qubit , while an upper bound is obtained in section 5 by explicit construction of a short quantum algorithm able to reproduce any pure state projector belonging to a particular sequence of high probability subspaces .in order to formulate our main result rigorously , we start with a brief introduction to the relevant concepts of the formalism of quasi - local -algebras which is the most suited one for dealing with quantum spin chains . at the same time, we shall fix the notations .we shall consider the lattice and assign to each site a -algebra being a copy of a fixed finite - dimensional algebra , in the sense that there exists a -isomorphism . to simplify notations ,we write for and .the algebra of observables associated to a finite is defined by .observe that for we have and there is a canonical embedding of into given by , where and denotes the identity of .the infinite - dimensional quasi - local -algebra is the norm completion of the normed algebra , where the union is taken over all finite subsets . in the present paper, we mainly deal with qubits , which are the quantum counterpart of classical bits .thus , in the following , we restrict our considerations to the case where is the algebra of observables of a qubit , i.e. the algebra of matrices acting on . since every finite - dimensional unital -algebra is -isomorphic to a subalgebra of for some ,our results contain the general case of arbitrary .moreover , the case of classical bits is covered by being the subalgebra of consisting of diagonal matrices only .similarly , we think of as the algebra of observables of qubit strings of length , namely the algebra of matrices acting on the hilbert space . the quasi - local algebra corresponds to the doubly - infinite qubit strings .the ( right ) shift is a -automorphism on uniquely defined by its action on local observables }\mapsto a\in\mathcal{a}_{[m+1,n+1]}\end{aligned}\ ] ] where \subset \mathbb{z} ] , for each .as motivated in the introduction , in the information - theoretical context , we interpret the tuple describing the quantum spin chain as a stationary quantum source .the von neumann entropy of a density matrix is . by the subadditivity of for a shift - invariant state on , the following limit , the quantum entropy rate , exists the set of shift - invariant states on is convex and compact in the weak-topology .the extremal points of this set are called ergodic states : they are those states which can not be decomposed into linear convex combinations of other shift - invariant states . notice that in particular the shift - invariant product states defined by a sequence of density matrices , , where is a fixed density matrix , are ergodic .they are the quantum counterparts of bernoulli ( i.i.d . )most of the results in quantum information theory concern such sources , but , as mentioned in the introduction , more general ergodic quantum sources allowing correlations can be considered . more concretely , the typical quantum source that has first been considered was a finite - dimensional quantum system emitting vector states with probabilities .the state of such a source is the density matrix being an element of the full matrix algebra ; furthermore , the most natural source of qubit strings of length is the one that emits vectors independently one after the other at each stroke of time . is a vector in a hilbert space and a ket is its dual vector . 
]the corresponding state after emissions is thus the tensor product in the following , we shall deal with the more general case of _ ergodic _ sources defined above , which naturally appear e.g. in statistical mechanics ( compare 1d spin chains with finite - range interaction ) . when restricted to act only on successive chain sites , namely on the local algebra , these states correspond to density matrices acting on which are not simply tensor products , but may contain classical correlations and entanglement .the qubit strings of length emitted by these sources are generic density matrices acting on , which are compatible with the state of the source in the sense that , where denotes the support projector of the operator , that is the orthogonal projection onto the subspace where can not vanish .more concretely , can be decomposed in uncountably many different ways into convex decompositions in terms of other density matrices on the local algebra each one of which describes a possible qubit string of length emitted by the source .it turns out that the rates of the complexities ( approximation - scheme complexity ) and ( finite - accuracy complexity ) of the typical pure states of qubit strings generated by an ergodic quantum source are asymptotically equal to the entropy rate of the source .a precise formulation of this result is the content of the following theorem .it can be seen as a quantum extension of brudno s theorem as a convergence in probability statement , while the original formulation of brudno s result is an almost sure statement .we remark that a proper introduction to the concept of quantum kolmogorov complexity needs some further considerations .we postpone this task to the next section .+ in the remainder of this paper , we call a sequence of projectors , , satisfying a _ sequence of -typical projectors_. [ theqbrudno ] ' '' '' let be an ergodic quantum source with entropy rate . for every , there exists a sequence of -typical projectors , , i.e. , such that for large enough every one - dimensional projector satisfies moreover , is the optimal expected asymptotic complexity rate , in the sense that every sequence of projectors , , that for large may be represented as a sum of mutually orthogonal one - dimensional projectors that all violate the lower bounds in ( [ eq1 ] ) and ( [ eq2 ] ) for some , has an asymptotically vanishing expectation value with respect to .algorithmic complexity measures the degree of randomness of a single object .it is defined as the minimal description length of the object , relative to a certain machine ( classically a _ utm _ ) . in order to properly introduce a quantum counterpart of kolmogorov complexity, we thus have to specify what kind of objects we want to describe ( outputs ) , what the descriptions ( inputs ) are made of , and what kind of machines run the algorithms . in accordance to the introduction , we stipulate that inputs and outputs are so - called ( pure or mixed ) _ variable - length qubit strings _ , while the reference machines will be _ qtm_s as defined by bernstein and vazirani , in particular universal _ qtm_s . let be the hilbert space of qubits ( ) .we write for to indicate that we fix two orthonormal _ computational basis vectors _ and .since we want to allow superpositions of different lengths , we consider the hilbert space defined as the classical finite binary strings are identified with the computational basis vectors in , i.e. 
, where denotes the empty string .we also use the notation and treat it as a subspace of . a ( variable - length ) _ qubit string _ is a density operator on .we define the _ length _ of a qubit string as or as if this set is empty ( this will never occur in the following ) .there are two reasons for considering variable - length and also mixed qubit strings .first , we want our result to be as general as possible .second , a _ qtm _ will naturally produce superpositions of qubit strings of different lengths ; mixed outputs appear naturally while tracing out the other parts of the _ qtm _ ( input tape , control , head ) after halting .in contrast to the classical situation , there are uncountably many qubit strings that can not be perfectly distinguished by means of any quantum measurement .if are two qubit strings with finite length , then we can quantify their distance in terms of the trace distance where the are the eigenvalues of the hermitian operator . in subsection [ subqac ], we will define quantum kolmogorov complexity for qubit strings . due to the considerations above, it can not be expected that the qubit strings are reproduced exactly , but it rather makes sense to demand the strings to be generated within some trace distance .another possibility is to consider approximation schemes ,i.e. to have some parameter , and to demand the machine to approximate the desired state better and better the larger gets. we will pursue both approaches , corresponding to equations ( [ berthdelta ] ) and ( [ berth ] ) below .note that we can identify every density operator on the local -block algebra with its corresponding qubit string such that .similarly , we identify qubit strings of finite length with the state of the input or output tape of a _ qtm _ ( see subsection [ basicdefqtmsec ] ) containing the state in the cell interval ] , the empty state is written on the remaining cells of the input track and on the whole output track , the control is in the initial state and the head is in position .then , the state of on input at time is given by .the state of the control at time is thus given by partial trace over all the other parts of the machine , that is . in accordance with , def .3.5.1 , we say that the _ qtm _ _ halts at time on input _ , if and only if where is the special state of the control ( specified in the definition of ) signalling the halting of the computation .denote by the set of vector inputs with equal halting time .observe that the above definition implies that is equal to the linear span of , i.e. is a linear subspace of .moreover for the corresponding subspaces and are mutually orthogonal , because otherwise one could perfectly distinguish non - orthogonal vectors by means of the halting time .it follows that the subset of on which a _ qtm m _halts is a union . for our purpose ,it is useful to consider a special class of _ qtms _ with the property that their tape consists of two different tracks , an _ input track _ and an _ output track _ .this can be achieved by having an alphabet which is a cartesian product of two alphabets , in our case .then , the tape hilbert space can be written as . 
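The trace distance just introduced is easy to evaluate numerically for finite-length qubit strings; the sketch below implements the eigenvalue formula quoted above. The two states used in the example are arbitrary illustrative choices.

```python
import numpy as np

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) Tr |rho - sigma| = (1/2) sum_i |lambda_i|,
    where the lambda_i are the eigenvalues of the Hermitian operator rho - sigma."""
    eigenvalues = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * float(np.sum(np.abs(eigenvalues)))

# Two illustrative qubit states: a pure state and the maximally mixed state.
ket0 = np.array([[1.0], [0.0]])
pure = ket0 @ ket0.conj().T                      # |0><0|
mixed = np.eye(2) / 2.0                          # identity / 2
print(trace_distance(pure, mixed))               # prints 0.5
```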
' '' '' a partial map will be called a _ qtm _ , if there is a bernstein - vazirani two - track qtm ( see , def .3.5.5 ) with the following properties : * , * the corresponding time evolution operator is unitary , *if halts on input with a variable - length qubit string on the output track starting in cell such that the -th cell is empty for every $ ] , then ; otherwise , is undefined . in general , different inputs have different halting times and the corresponding outputs are essentially results of different unitary transformations given by .however , as the subset of on which is defined is of the form , the action of the partial map on this subset may be extended to a valid quantum operation on the system hilbert space , see . ] on : [ lemmaqtmsareoperations ] ' '' '' for every _ qtm _ there is a quantum operation , such that for every * proof .* let and be an orthonormal basis of , , and the orthogonal complement of within , respectively .we add an ancilla hilbert space to the _ qtm _ , and define a linear operator by specifying its action on the orthonormal basis vectors : since the right hand side of ( [ eqonb ] ) is a set of orthonormal vectors in , the map is a partial isometry .thus , the map is trace - preserving , completely positive ( ) .its composition with the partial trace , given by , is a quantum operation .the typical case we want to study is the ( approximate ) reproduction of a density matrix by a qtm .this means that there is a quantum program , such that in a sense explained below .we are particularly interested in the case that the program is shorter than itself , i.e. that . on the whole ,the minimum possible length for will be defined as the _ quantum algorithmic complexity _ of .as already mentioned , there are at least two natural possible definitions .the first one is to demand only approximate reproduction of within some trace distance .the second one is based on the notion of an approximation scheme . to define the latter, we have to specify what we mean by supplying a _ qtm _ with _ two _ inputs , the qubit string and a parameter : [ defencoding ] ' '' '' let and .we define an encoding of a pair into a single string by where denotes the ( classical ) string consisting of s , followed by one , followed by the binary digits of , and is the corresponding projector in the computational basis and . ] . for every _qtm _ , we set note that the _ qtm _ has to be constructed in such a way that it is able to decode both and from , which is an easy classical task .[ defqk ] ' '' '' let be a _ qtm _ and a qubit string .for every , we define the _ finite - accuracy quantum complexity _ as the minimal length of any quantum program such that the corresponding output has trace distance from smaller than , similarly , we define an _ approximation - scheme quantum complexity _ by the minimal length of any density operator , such that when given as input together with any integer , the output has trace distance from smaller than : some points are worth stressing in connection with the previous definition : * this definition is essentially equivalent to the definition given by berthiaume et .al . in .the only technical difference is that we found it convenient to use the trace distance rather than the fidelity . * the _ same _ qubit program is accompanied by a classical specification of an integer , which tells the program towhat accuracy the computation of the output state must be accomplished . 
*if does not have too restricted functionality ( for example , if is universal , which is discussed below ) , a noiseless transmission channel ( implementing the identity transformation ) between the input and output tracks can always be realized : this corresponds to classical literal transcription , so that automatically for some constant .of course , the key point in classical as well as quantum algorithmic complexity is that there are sometimes much shorter qubit programs than literal transcription . *the exact choice of the accuracy specification is not important ; we can choose any computable function that tends to zero for , and we will always get an equivalent definition ( in the sense of being equal up to some constant ) . + the same is true for the choice of the encoding : as long as and can both be computably decoded from and as long as there is no way to extract additional information on the desired output from the -description part of , the results will be equivalent up to some constant . both quantum algorithmic complexities and are related to each other in a useful way : [ lemrelation ] for every _ qtm _ and every , we have the relation * proof .* suppose that , so there is a density matrix with , such that for every .then , where is given in definition [ defencoding ] , is an input for such that .thus , where the second inequality is by ( [ encoding_length ] ) .the term in ( [ eqrelation ] ) depends on our encoding given in definition [ defencoding ] , but if is assumed to be universal ( which will be discussed below ) , then ( [ eqrelation ] ) will hold for _ every _ encoding , if we replace the term by , where denotes the classical ( self - delimiting ) algorithmic complexity of the integer , and is some constant depending only on . for more detailswe refer the reader to . in , it is proved that there is a universal _ qtm _ ( _ uqtm _ ) that can simulate with arbitrary accuracy every other machine in the sense that for every such there is a classical bit string such that where .as it is implicit in this definition of universality , we will demand that is able to perfectly simulate every classical computation , and that it can apply a given unitary transformation within any desired accuracy ( it is shown in that such machines exist ) .we choose an arbitrary _ uqtm _ which is constructed such that it decodes our encoding given in definition [ defencoding ] into and at the beginning of the computation .like in the classical case , we fix for the rest of the paper and simplify notation by already mentioned at the beginning of section [ secergodicquantumsources ] , without loss of generality , we give the proofs for the case that is the algebra of the observables of a qubit , i.e. the complex -matrices . 
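Definition [defencoding] packs the accuracy parameter into the input through a simple self-delimiting classical header. Since part of that description was lost in extraction, the sketch below uses an Elias-gamma-style variant as an assumed stand-in (one leading 1 per binary digit of the parameter, then a 0, then the digits); it is only meant to illustrate why the overhead is logarithmic in the parameter.

```python
def encode_param(k: int) -> str:
    """Self-delimiting encoding of a non-negative integer k: unary header, '0', binary digits."""
    digits = format(k, "b")
    return "1" * len(digits) + "0" + digits

def decode_param(s: str) -> tuple[int, str]:
    """Recover (k, remainder of the input); the header length tells us where bin(k) ends."""
    n = s.index("0")                     # number of leading 1s = number of binary digits
    k = int(s[n + 1 : n + 1 + n], 2)
    return k, s[n + 1 + n :]

code = encode_param(13)                  # '1111' + '0' + '1101'
print(code, decode_param(code + "<qubit program sigma>"))
```

This logarithmic overhead is also the substance of Lemma [lemrelation]: an approximation-scheme program can be turned into a finite-accuracy program for tolerance delta by prepending the encoding of k = ceil(1/delta), so that, roughly, QC^delta(rho) <= QC(rho) + O(log(1/delta)).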
for classical _ tm_s , there are no more than different programs of length .this can be used as a counting argument for proving the lower bound of brudno s theorem in the classical case ( ) .we are now going to prove a similar statement for _ qtm_s .our first step is to elaborate on an argument due to which states that there can not be more than mutually orthogonal one - dimensional projectors with quantum complexity .the argument is based on holevo s -quantity associated to any ensemble consisting of weights , , and of density matrices acting on a hilbert space .setting , the -quantity is defined as follows where , in the second line , the relative entropy appears if is finite , ( [ eqholevologdim1 ] ) is bounded by the maximal von neumann entropy : in the following , denotes an arbitrary ( possibly infinite - dimensional ) hilbert space , while the rest of the notation is adopted from subsection [ basicdefqtmsec ] .[ countingargument ] let , such that , an orthogonal projector onto a linear subspace of an arbitrary hilbert space , and a quantum operation .let be a subset of one - dimensional mutually orthogonal projections from the set that is , the set of all pure quantum states which are reproduced within by the operation on some input of length .then it holds that * proof .* let , , be a set of mutually orthogonal projectors and . by the definition of , for every , there are density matrices with consider the equidistributed ensemble , where also acts on . using that , inequality ( [ eqholevologdim3 ] ) yields we define a quantum operation on by .applying twice the monotonicity of the relative entropy under quantum operations , we obtain moreover , for every , the density operator is close to the corresponding one - dimensional projector . indeed , by the contractivity of the trace distance under quantum operations ( compare thm .9.2 in ) and by assumption ( [ delta_distance ] ) , it holds let .the trace - distance is convex ( , ( 9.51 ) ) , thus whence , since , fannes inequality ( compare thm .11.6 in ) gives where .combining the two estimates above with ( [ dim_estim ] ) and ( [ chi_estimate ] ) , we obtain assume now that . then it follows ( [ constr ] ) that .so if is larger than this expression , the maximum number of mutually orthogonal projectors in must be bounded by .the second step uses the previous lemma together with the following theorem ( * ? ? ?it is closely related to the quantum shannon - mcmillan theorem and concerns the minimal dimension of the subspaces .[ qaep ] let be an ergodic quantum source with entropy rate .then , for every , where . notice that the limit ( [ eqboltzmann ] ) is valid for all . by means of this property, we will first prove the lower bound for the finite - accuracy complexity , and then use lemma [ lemrelation ] to extend it to .[ cor1 ] ' '' '' let be an ergodic quantum source with entropy rate . moreover , let , and let be a sequence of -typical projectors .then , there is another sequence of -typical projectors , such that for large enough is true for every one - dimensional projector .* proof . *the case is trivial , so let . 
fix and some , andconsider the set from the definition of , to all s there exist associated density matrices with such that , where denotes the quantum operation of the corresponding _ uqtm _ , as explained in lemma [ lemmaqtmsareoperations ] .using the notation of lemma [ countingargument ] , it thus follows that let be a sum of a maximal number of mutually orthogonal projectors from .if was chosen large enough such that is satisfied , lemma [ countingargument ] implies that and there are no one - dimensional projectors such that , namely , one - dimensional projectors must satisfy . since inequality ( [ eqcountargused ] ) is valid for every large enough , we conclude using theorem [ qaep ] , we obtain that .finally , set .the claim follows .[ cor2 ] ' '' '' let be an ergodic quantum source with entropy rate .let with be an arbitrary sequence of -typical projectors .then , for every , there is a sequence of -typical projectors such that for large enough is satisfied for every one - dimensional projector .* according to corollary [ cor1 ] , for every , there exists a sequence of -typical projectors with for every one - dimensional projector if is large enough .we have where the first estimate is by lemma [ lemrelation ] , and the second one is true for one - dimensional projectors and large enough . fix a large satisfying .the result follows by setting . in the previous section, we have shown that with high probability and for large , the finite - accuracy complexity rate is bounded from below by , and the approximation - scheme quantum complexity rate by .we are now going to establish the upper bounds .[ propupperbound ] ' '' '' let be an ergodic quantum source with entropy rate . then , for every , there is a sequence of -typical projectors such that for every one - dimensional projector and large enough we prove the above proposition by explicitly providing a quantum algorithm ( with program length increasing like ) that computes within arbitrary accuracy .this will be done by means of quantum universal typical subspaces constructed by kaltchenko and yang in .[ kaltc ] ' '' '' let and .there exists a sequence of projectors , , such that for large enough and for every ergodic quantum state with entropy rate it holds that we call the orthogonal projectors in the above theorem universal typical projectors at level .suited for designing an appropriate quantum algorithm , we slightly modify the proof given by kaltchenko and yang in .* let and .we consider an abelian quasi - local subalgebra constructed from a maximal abelian subalgebra .the results in imply that there exists a universal sequence of projectors with such that for any ergodic state on the abelian algebra with entropy rate .notice that ergodicity and entropy rate of are defined with respect to the shift on , which corresponds to the -shift on .the first step in is to apply unitary operators of the form , unitary , to the and to introduce the projectors let be a spectral decomposition of ( with some index set ) , and let denote the orthogonal projector onto a given subspace .then , can also be written as it will be more convenient for the construction of our algorithm in [ subsubconstr ] to consider the projector it holds that . for integers with and we introduce the projectors in we now use an argument of to estimate the trace of . the dimension of the symmetric subspace is upper bounded by , thus now we consider a stationary ergodic state on the quasi - local algebra with entropy rate . let . 
if is chosen large enough then the projectors , where , are for , i.e. , for sufficiently large .this can be seen as follows .due to the result in ( * ? ? ?3.1 ) the ergodic state convexly decomposes into states each being ergodic with respect to the on and having an entropy rate ( with respect to the ) equal to .we define for the set of integers then , according to a density lemma proven in ( * ? ? ?* lemma 3.1 ) it holds let be the maximal abelian subalgebra of generated by the one - dimensional eigenprojectors of .the restriction of a component to the abelian quasi - local algebra is again an ergodic state .it holds in general for , where we set , we additionally have the upper bound .let be a unitary operator such that .for every , it holds that we fix an large enough to fulfill and use the ergodic decomposition ( [ erg_decomp ] ) to obtain the lower bound from ( [ erg_comp ] ) we conclude that for large enough we proceed by following the lines of by introducing the sequence , , where each is a power of fulfilling the inequality let the integer sequence and the real - valued sequence be defined by and .then we set observe that where the second inequality is by estimate ( [ dim_estimate ] ) and the last one by the bounds on thus , for large , it holds by the special choice ( [ l_m ] ) of it is ensured that the sequence of projectors is indeed typical for any quantum state with entropy rate , compare .this means that is a sequence of universal typical projectors at level .we proceed by applying the latter result to universal typical subspaces for our proof of the upper bound .let be an arbitrary real number such that is rational , and let be the universal projector sequence of theorem [ kaltc ] .recall that the projector sequence is _ independent _ of the choice of the ergodic state , as long as . because of ( [ eqtraceissmall ] ) , for large enough , there exists some unitary transformation that transforms the projector into a projector belonging to , thus transforming every one - dimensional projector into a qubit string of length . as shown in , a _ uqtm_ can implement every classical algorithm , and it can apply every unitary transformation ( when given an algorithm for the computation of ) on its tapes within any desired accuracy. we can thus feed ( plus some classical instructions including a subprogram for the computation of ) as input into the _ uqtm _ . this _ uqtm _starts by computing a classical description of the transformation , and subsequently applies to , recovering the original projector on the output tape .since depends on only through its entropy rate , the subprogram that computes does not have to be supplied with additional information on and will thus have fixed length .we give a precise definition of a quantum decompression algorithm , which is , formally , a mapping ( is rational ) we require that is a short algorithm in the sense of short in description , _ not _ short ( fast ) in running time or resource consumption .indeed , the algorithm is very slow and memory consuming , but this does not matter , since kolmogorov complexity only cares about the description length of the program .the instructions defining the quantum algorithm are : * read the value of , and find a solution for the inequality such that is a power of two .( there is only one such . ) * compute . *read the value of .compute . *compute a list of codewords , belonging to a classical universal block code sequence of rate .( for the construction of an appropriate algorithm , see ( * ? ? ?* thm . 
2 and 1 ) .) since can be stored as a list of binary strings .every string has length .( note that the exact value of the cardinality depends on the choice of . ) during the following steps , the quantum algorithm will have to deal with * rational numbers , * square roots of rational numbers , * binary - digit - approximations ( up to some specified accuracy ) of real numbers , * ( large ) vectors and matrices containing such numbers .a classical _ tm _ can of course deal with all such objects ( and so can _ qtm _ ) : for example , rational numbers can be stored as a list of two integers ( containing numerator and denominator ) , square roots can be stored as such a list and an additional bit denoting the square root , and binary - digit - approximations can be stored as binary strings . vectors and matricesare arrays containing those objects .they are always assumed to be given in the computational basis .operations on those objects , like addition or multiplication , are easily implemented .the quantum algorithm continues as follows : * compute a basis of the symmetric subspace this can be done as follows : for every -tuple , where , there is one basis element , given by the formula where the summation runs over all -permutations , and with a system of matrix units in .+ there is a number of different matrices which we can label by .it follows from ( [ eqwillberational ] ) that these matrices have integer entries .+ they are stored as a list of -tables of integers .thus , this step of the computation is exact , that is without approximations . * for every and , let where denotes the computational basis vector which is a tensor product of s and s according to the bits of the string .compute the vectors one after the other .for every vector that has been computed , check if it can be written as a linear combination of already computed vectors .( the corresponding system of linear equations can be solved exactly , since every vector is given as an array of integers . )if yes , then discard the new vector , otherwise store it and give it a number . +this way , a set of vectors is computed .these vectors linearly span the support of the projector given in ( [ eqjoin2 ] ) . *denote by the computational basis vectors of .if , then let , and let .otherwise , compute for every and .the resulting set of vectors has cardinality .+ in both cases , the resulting vectors will span the support of the projector . * the set is completed to linearly span the whole space .this will be accomplished as follows : + consider the sequence of vectors where denotes the computational basis vectors of .find the smallest such that can be written as a linear combination of , and discard it ( this can still be decided exactly , since all the vectors are given as tables of integers ) .repeat this step times until there remain only linearly independent vectors , namely all the and of the . *apply the gram - schmidt orthonormalization procedure to the resulting vectors , to get an orthonormal basis of , such that the first vectors are a basis for the support of .+ since every vector and has only integer entries , all the resulting vectors will have only entries that are ( plus or minus ) the square root of some rational number . 
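Step 5 of the algorithm enumerates a basis of the symmetric subspace, and the trace estimate earlier in this section relies on the dimension of that subspace growing only polynomially in the number of blocks. The following sanity check of the standard dimension formula uses arbitrary illustrative values (in the construction above D would be a power of two, D = 2^l).

```python
from math import comb

def symmetric_subspace_dim(m: int, D: int) -> int:
    """dim Sym^m(C^D) = C(m + D - 1, D - 1): number of multisets of size m drawn from D basis vectors."""
    return comb(m + D - 1, D - 1)

# Polynomial upper bound used in the trace estimate: dim Sym^m(C^D) <= (m + 1)**(D - 1).
D = 4                                    # illustrative local dimension
for m in (2, 5, 10, 50):
    dim = symmetric_subspace_dim(m, D)
    assert dim <= (m + 1) ** (D - 1)
    print(m, dim, (m + 1) ** (D - 1))
```

Because this dimension is only polynomial in m, symmetrizing over blocks does not change the exponential order 2^{n(s+epsilon)} of the trace bound, which is what allows the projector sequence to be universal at the given entropy level.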
up to this point, every calculation was _ exact _ without any numerical error , comparable to the way that well - known computer algebra systems work .the goal of the next steps is to compute an approximate description of the desired unitary decompression map and subsequently apply it to the quantum state . according to section 6 in , a _uqtm _ is able to apply a unitary transformation on some segment of its tape within an accuracy of , if it is supplied with a complex matrix as input which is within operator norm distance of ( here , denotes the size of the matrix ) .thus , the next task is to compute the number of digits that are necessary to guarantee that the output will be within trace distance of . *read the value of ( which denotes an approximation parameter ; the larger , the more accurate the output of the algorithm will be ) .due to the considerations above and the calculations below , the necessary number of digits turns out to be .compute this number .+ afterwards , compute the components of all the vectors up to binary digits of accuracy .( this involves only calculation of the square root of rational numbers , which can easily be done to any desired accuracy . ) + call the resulting numerically approximated vectors .write them as columns into an array ( a matrix ) .+ let denote the unitary matrix with the exact vectors as columns .since binary digits give an accuracy of , it follows that if two -matrices and are -close in their entries , they also must be -close in norm , so we get so far , every step was purely classical and could have been done on a classical computer .now , the quantum part begins : will be touched for the first time . *compute , which gives the length .afterwards , move to some free space on the input tape , and append zeroes , i.e. create the state on some segment of cells on the input tape . *approximately apply the unitary transformation on the tape segment that contains the state .+ the machine can not apply exactly ( since it only knows an approximation ) , and it also can not apply directly ( since is only approximately unitary , and the machine can only do unitary transformations ) .instead , it will effectively apply another unitary transformation which is close to and thus close to , such that + let be the output that we want to have , and let be the approximation that is really computed by the machine . then, a simple calculation proves that the trace distance must then also be small : * move to the output tape and halt .we have to give a precise definition how the parameters are encoded into a single qubit string .( according to the definition of , the parameter is not a part of , but is given as a second parameter .see definitions [ defencoding ] and [ defqk ] for details . )we choose to encode by giving 1 s , followed by one 0 , followed by the binary digits of .let denote the corresponding projector in the computational basis .the parameter can be encoded in any way , since it does not depend on .the only constraint is that the description must be self - delimiting , i.e. 
it must be clear and decidable at what position the description for starts and ends .the descriptions will also be given by a computational basis vector ( or rather the corresponding projector ) .the descriptions are then stuck together , and the input is given by if is large enough such that ( [ eqlogtrqm ] ) is fulfilled , it follows that , where is some constant which depends on , but not on .it is clear that this qubit string can be fed into the reference _ uqtm _ together with a description of the algorithm of fixed length which depends on , but not on .this will give a qubit string of length where is again a constant which depends on , but not on .recall the matrix constructed in step 11 of our algorithm , which rotates ( decompresses ) a compressed ( short ) qubit string back into the typical subspace .conversely , for every one - dimensional projector , where was defined in ( [ eqtypicalprojector ] ) , let be the projector given by .then , since has been constructed such that it follows from ( [ eqlength ] ) that if is large enough , equation ( [ upperzero ] ) follows .now we continue by proving equation ( [ upperdelta ] ) .let .then , we have for every one - dimensional projector and large enough where the first inequality follows from the obvious monotonicity property , the second one is by lemma [ lemrelation ] , and the third estimate is due to equation ( [ upperzero ] ) . _proof of the main theorem [ theqbrudno ] ._ let be the -typical projector sequence given in proposition [ propupperbound ] , i.e. the complexities and of every one - dimensional projector are upper bounded by .due to corollary [ cor1 ] , there exists another sequence of -typical projectors such that additionally , is satisfied for . from corollary [ cor2 ] , we can further deduce that there is another sequence of -typical projectors such that also holds .finally , the optimality assertion is a direct consequence of the quantum counting argument , lemma [ countingargument ] , combined with theorem [ qaep ] .classical algorithmic complexity theory as initiated by kolmogorov , chaitin and solomonoff aimed at giving firm mathematical ground to the intuitive notion of randomness .the idea is that random objects can not have short descriptions .such an approach is on the one hand equivalent to martin - lf s which is based on the notion of _ typicalness _ , and is on the other hand intimately connected with the notion of entropy . the latter relation is best exemplified in the case of longer and longer strings : by taking the ratio of the complexity with respect to the number of bits , one gets a _ complexity per symbol _ which a theorem of brudno shows to be equal to the _ entropy per symbol _ of almost all sequences emitted by ergodic sources .the fast development of quantum information and computation , with the formalization of the concept of _ uqtms _ , quite naturally brought with itself the need of extending the notion of algorithmic complexity to the quantum setting . within such a broader context , the ultimate goal is again a mathematical theory of the randomness of quantum objects .there are two possible algorithmic descriptions of qubit strings : either by means of bit - programs or of qubit - programs . 
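a minimal sketch of the self-delimiting prefix encoding described above for the length parameter (a run of 1s, a single 0 as separator, then the binary digits): the exact counts are elided in the extracted text, so the python below only shows the usual construction and why the decoder can always tell where the description ends.

def encode_self_delimiting(n):
    # unary length header (1s), a 0 as separator, then the binary digits of n;
    # the counts used in the paper may differ slightly, this is the generic construction
    bits = format(n, 'b')
    return '1' * len(bits) + '0' + bits

def decode_self_delimiting(s):
    header = s.index('0')                      # the first 0 closes the unary header
    value = int(s[header + 1: 2 * header + 1], 2)
    return value, s[2 * header + 1:]           # decoded value and the untouched remainder

# round trip: the decoder recovers n and leaves any appended description intact
assert decode_self_delimiting(encode_self_delimiting(37) + 'xyz') == (37, 'xyz')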
in this work ,we have considered a qubit - based _ quantum algorithmic complexity _ , namely constructed in terms of quantum descriptions of quantum objects .the main result of this paper is an extension of brudno s theorem to the quantum setting , though in a slightly weaker form which is due to the absence of a natural concatenation of qubits .the quantum brudno s relation proved in this paper is not a pointwise relation as in the classical case , rather a kind of convergence in probability which connects the _ quantum complexity per qubit _ with the von neumann entropy rate of quantum ergodic sources .possible strengthening of this relation following the strategy which permits the formulation of a quantum breiman theorem starting from the quantum shannon - mcmillan noiseless coding theorem will be the matter of future investigations . in order to assert that this choice of quantum complexity as a formalization of quantum randomness " is as good as its classical counterpart in relation to classical randomness , one ought to compare it with the other proposals that have been put forward : not only with the quantum complexity based on classical descriptions of quantum objects , but also with the one based on the notion of _ universal density matrices _ . in relation to vitanyi sapproach , the comparison essentially boils down to understanding whether a classical description of qubit strings requires more classical bits than qubits per hilbert space dimension .an indication that this is likely to be the case may be related to the existence of entangled states . in relation to gacsapproach , the clue is provided by the possible formulation of quantum martin - lf tests in terms of measurement processes projecting onto low - probability subspaces , the quantum counterparts of classical untypical sets .one can not however expect classical - like equivalences among the various definitions .it is indeed a likely consequence of the very structure of quantum theory that a same classical notion may be extended in different inequivalent ways , all of them reflecting a specific aspect of that structure .this fact is most clearly seen in the case of quantum dynamical entropies ( compare for instance ) where one definition can capture dynamical features which are precluded to another .therefore , it is possible that there may exist different , equally suitable notions of `` quantum randomness '' , each one of them reflecting a different facet of it .this work was supported by the dfg via the project `` entropie , geometrie und kodierung groer quanten - informationssysteme '' and the dfg - forschergruppe `` stochastische analysis und groe abweichungen '' at the university of bielefeld . a. k. zvonkin , l. a. levin , `` the complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms '' , _ russian mathematical surveys _ * 25 no .6 * 83 - 124 ( 1970 )
in classical information theory , entropy rate and algorithmic complexity per symbol are related by a theorem of brudno . in this paper , we prove a quantum version of this theorem , connecting the von neumann entropy rate and two notions of quantum kolmogorov complexity , both based on the shortest qubit descriptions of qubit strings that , run by a universal quantum turing machine , reproduce them as outputs .
long before the development of cryptography and encryption , one of humankind s best techniques of secretly delivering a message was to hide the fact that a message was even delivered at all . a well - known example of this is from ancient times where one person would shave another s head and write a message on it .it was only after waiting for their hair to grow back that they would then be sent to the intended recipient , who would again shave the scalp to reveal the message .this is historically one of the first - known examples of covert communication , i.e. , the ability to communicate without being detected . today, even though advanced encryption techniques are available , there is still a need to hide the fact that communication is taking place from malicious eavesdroppers , e.g. , secret government operations .modern approaches of such covert communication exist from digital steganography to spread spectrum communication and more .another approach consists of hiding information in the naturally occurring thermal noise of optical channels , such as additive white gaussian noise ( awgn ) channels .a natural quantum generalization of the awgn channel is the lossy bosonic channel ; a beam splitter mixing the signal with a vacuum state ( pure - loss channel ) or a thermal state .the lossy bosonic channel is a good model of losses for quantum - optical links used in quantum communication .covert transmission in this case has been considered both from the point of view of a classical channel with a powerful quantum hacker , as well as recently , an all - quantum channel and hacker .understandably , refs . concluded that a pure - loss channel can not be used because there is no way to stealthy hide the information , i.e. , when the hiding medium ( thermal noise ) is absent .surprisingly , the situation does not get much better if the signal is mixed with a thermal state . in refs . it was shown that the asymptotic rate of covert communication is zero and , worse , the probability of a decoding error does not vanish for realistic photon detectors . in this paper, we show that in fact no concealing medium or object is needed at all in order to hide quantum information transfer at large distances .this counterintuitive result relies on the machinery of relativistic quantum mechanics quantum field theory .specifically , we show that covert quantum communication is possible using the intrinsic resource of time - like entanglement present in minkowski vacuum .this is similar to the entanglement of minkowski vacuum that can be mined at space - like distances as pioneered by following early insights from .cosmological and other consequences of space - like entanglement were investigated in for observers equipped with an unruh - dewitt ( udw ) detector and following various trajectories . in our casethe local measurements of two temporally and spatially separated parties induce a quantum channel whose asymptotic rate of quantum communication supported by the two - way classical communication is in fact nonzero . we have calculated an upper bound on the communication rate and interpreted it as the ultimate covert communication rate . 
our results offers significant differences from previous work relating to both relativistic quantum information protocols and non - relativistic quantum protocols .it was shown in ( also following ) that quantum key distribution could be achieved without actually sending a quantum signal by exploiting the concept of time - like entanglement .however , the authors of did not consider the application of covert quantum communication and also considered the use of continuous variables as the substrate .our approach based on discrete variables allows us to do away with the shared coherent beam needed to synchronize the local oscillators ( needed for homodyne detection ) that could eventually blow the cover .another problem with the oscillator mode may be its delivery especially for participants below the horizon or at large distances .our construction avoids this problem while reaching similarly large distances for quantum communication . for our protocol as well as for the parties have to independently solve the problem of how to covertly exchange classical messages . in general , we can not do away with classical communication at large spatial distances but we did find instances where no classical communication is needed at all . this paper is structured as follows .[ sec : entgen ] is the technical part of the paper where we introduce two inertial observers both equipped with an unruh - dewitt detector .we analyze the response of the detectors to the fourth perturbative order and extract the quantum state of the two detectors after the scalar field has been traced out as the main object of our study . in sec .[ subsec:4thorderdm ] we present the quantum state for zero spatial distance of the two observers subsequently generalized to a nonzero spatial distance in sec .[ subsec:4thorderlnonzero ] .this is our main object of study and we interpret it from the quantum information theory point of view in sec .[ sec : keyrate ] .this enables us to calculate and present our main results in sec .[ sec : results ] .we conclude by mentioning open problems in sec .[ sec : discussion ] .alice and bob are in possession of two - state detectors whose energy levels can be made time - dependent .tunability of energy levels can be achieved , e.g. , by using a superconducting system .thus bob s detector obeys the schrdinger equation , where has a constant spectrum .it can be turned to a standard schrdinger equation in conformal time , defined by .this coordinate change is equivalent to a change of the coordinate system of an inertial observer to the system of an observer undergoing uniform acceleration .thus , bob s detector effectively couples to rindler modes in the future ( f ) wedge .similarly , we setup alice s detector so that it effectively couples to the rindler modes in the past ( p ) wedge , by tuning the energy levels so that they correspond to the hamiltonian in conformal time , where .both detectors couple to ( distinct ) vacuum modes which we model by a real massless scalar field . for an introduction to such concepts see . moreover , we are interested in the case where alice and bob are separated by a distance . for definiteness ,let for alice , and for bob .if alice s ( bob s ) detector clicks at time ( ) , then the proper time between the two clicks is we expect the maximal correlations in the vicinity of the null line connecting the two events . 
in terms of their respective conformal times , this condition can be written as .for definiteness , we choose the rhs to be ( similar results are obtained for any other choice ) .thus , if alice s detector performs a measurement at , then optimally , bob s detector will perform its measurement at bob s conformal time , where }.\ ] ] the standard way of evaluating the interaction between a field and udw detectors is perturbatively in the interaction picture .a realistic udw detector is described by the following term where is a real scalar field where is a four - momentum , is a window function , a smearing function we set to be a delta function in this work , are the atom ladder operators and stands for the energy gap .the evolution operator for observer reads }\big\ } } \nn\\ & = 1-i\int_{t_0}^t{\mathrm{\,d } t_1}h_a(t_1)- \frac{1}{2}\mathsf{t } \bigg ( \int_{t_0}^t{\mathrm{\,d } t_1 } h_a(t_1 ) \bigg)^2 + \frac{(-i)^3}{3!}\mathsf{t } \bigg ( \int_{t_0}^t{\mathrm{\,d } t_1 }h_a(t_1 ) \bigg)^3 + \dots .\end{aligned}\ ] ] considering the initial state of the field and detectors to be we calculate from ( we label the time coordinate in the future / past wedge by ) the following density matrix of the two atoms after we trace over the field degrees of freedom where the bar denotes complex conjugation .we further denote , where stands for the lower ( upper ) atom level .the state is a two - qubit density matrix . indeed , in the unruh - dewitt model the atoms are two qubits as the ladder operators commute : . using the notation ( time ordering and tensor product implied ) where and .we write for the detectors density matrix and we obtain therefore [ eq : upto4thorder ] { |0_m\rangle } + \dots\\ b_2 & = { \langle0_m| } |a_+|^2 \big [ 1 - \frac{1}{3 } ( a_+a_-+ 3b_+b_- ) \big ] { |0_m\rangle } + \dots\\ c_2 & = { \langle0_m| } a_+^\dg b_+ \big [ 1 - \frac{1}{6 } ( a_+a_-+ 3b_+b_-)^\dg - \frac{1}{6 } ( b_+b_- + 3a_+a_-)\big ] { |0_m\rangle } + \dots,\end{aligned}\ ] ] where e.g. means . we will need the following standard results where stands for the feynman propagator and denotes a four vector .following , the ff ( and for pp ) propagator reads }}\ ] ] and the fp propagator takes the following form }}.\ ] ] propagators eqs .( [ eq : propagffandpp ] ) and ( [ eq : propagfp ] ) can be further simplified using the identities we obtain and ( note that we can set here right away ) the matrix entries needed for up to the second order read [ eq : mtrxelmns2ndorder ] for the fourth order in the next section we will also need to evaluate the integrals we adopt the results from where one of the investigated window functions is a gaussian profile },\ ] ] where tunes the coupling strength .the matrix elements ( [ eq : mtrxelmns2ndorder ] ) are thus given by integrals of the following four types : }\int_{-\infty}^\infty{\mathrm{\,d } \tau'}\,\exp{\big[{-{\tau'^2\over\s^2}}\big]}e^{i\d(\tau'\pm\tau ) } \sum_{n=-\infty}^\infty{1\over(\tau-\tau'-2i\ve+2\pi in / a)^2}\end{aligned}\ ] ] and }\int_{-\infty}^\infty{\mathrm{\,d } \eta}\,\exp{\big[{-{\eta^2\over\s^2}}\big]}e^{i\d(\eta\pm\tau ) } \sum_{n=-\infty}^\infty{1\over(\tau-\eta-2i\ve+\pi i / a ( 2n+1))^2}.\end{aligned}\ ] ] integrals and are integral ( 50 ) in ( for ) . conveniently , this integral has sort of a closed form . 
following , the inner integral splits in two expressions according to whether the poles are positive ( ) or not ( ) leading to }.\ ] ] then , we get the following expressions }\ups_{+}(n)\nn\\ & = { \la^2\over8\pi}e^{-{(\s\d)^2\over2}}\sum_{n=-\infty}^0\big[2+\sqrt{2\pi}\exp{\big[{(2n\pi/(a\s)+\s\d)^2\over2}\big]}(2n\pi/(a\s)+\s\d ) \big(1+\mathrm{erf\,}\big[{2n\pi/(a\s)+\s\d\over\sqrt{2}}\big]\big)\big]\end{aligned}\ ] ] and }\ups_{-}(n)\nn\\ & = -{\la^2\over8\pi}e^{-{(\s\d)^2\over2}}\sum_{n=1}^\infty\big[{-2}+\sqrt{2\pi}\exp{\big[{(2n\pi/(a\s)+\s\d)^2\over2}\big]}(2n\pi/(a\s)+\s\d ) \big(1-\mathrm{erf\,}\big[{2n\pi/(a\s)+\s\d\over\sqrt{2}}\big]\big)\big].\end{aligned}\ ] ] note that ( [ eq : int1minus]),([eq : int1plus ] ) and all other matrix elements we calculate will be expressed in terms of certain dimensionless parameters such as and . to get we swap for in ( [ eq : innerint ] ) but we must also shift the sums limits because the poles in ( [ eq : integral23 ] ) are vertically shifted by one : [ eq : int2plusminus ] }((2n+1)\pi / a+\s^2\d ) \big(1+\mathrm{erf\,}\big[{(2n+1)\pi / a+\s^2\d\over\s\sqrt{2}}\big]\big)\big]\nn\\ & = -{\la^2\over8\pi}e^{-{(\s\d)^2\over2}}\nn\\ & \times\sum_{n=-\infty}^{-1}\big[2+\sqrt{2\pi}\exp{\big[{((2n+1)\pi/(a\s)+\s\d)^2\over2}\big]}((2n+1)\pi/(a\s)+\s\d ) \big(1+\mathrm{erf\,}\big[{(2n+1)\pi/(a\s)+\s\d\over\sqrt{2}}\big]\big)\big].\\ \eui_{2}^-&=\la^2{\s^2\over8\pi^2}{\pi\over\s^3}e^{-{(\s\d)^2\over2}}\nn\\ & \times\sum_{n=0}^\infty\big[{-2\s+\sqrt{2\pi}\exp{\big[{((2n+1)\pi / a+\s^2\d)^2\over2\s^2}\big]}((2n+1)\pi / a+\s^2\d ) } \big({1-\mathrm{erf\,}\big[{{(2n+1)\pi / a+\s^2\d\over\s\sqrt{2}}}\big]}\big)\big]\nn\\ & = { \la^2\over8\pi}e^{-{(\s\d)^2\over2}}\nn\\ & \times\sum_{n=0}^{\infty}\big[{-2}+\sqrt{2\pi}\exp{\big[{((2n+1)\pi/(a\s)+\s\d)^2\over2}\big]}((2n+1)\pi/(a\s)+\s\d ) \big(1-\mathrm{erf\,}\big[{(2n+1)\pi/(a\s)+\s\d\over\sqrt{2}}\big]\big)\big].\end{aligned}\ ] ] next , the plus version of eq .( [ eq : integral23 ] ) is called : }\int_{-\infty}^\infty{\mathrm{\,d } x}\exp{\big[{-{x^2\over2\s^2}}\big ] } \sum_{n=-\infty}^\infty{1\over(x-2i\ve+\pi i / a ( 2n+1))^2}.\end{aligned}\ ] ] it can be written as where [ eq : int3plusminus ] }\big(1-\mathrm{erf\,}\big[{(2n+1)\pi\over\sqrt{2}a\s}\big]\big)}\big]\nn\\ & = { \la^2\over8\pi}e^{-{(\s\d)^2\over2}}\sum_{n=0}^\infty \big[{{-2}+{(2n+1)\pi\sqrt{2\pi}\over\s{a}}\exp{\big[{(2n+1)^2\pi^2\over2(\s{a})^2}\big]}\big(1-\mathrm{erf\,}\big[{(2n+1)\pi\over\sqrt{2}a\s}\big]\big)}\big],\\ \eui_{3}^-&={\la^2\over8\pi } e^{-{(\s\d)^2\over2}}\sum_{n=-\infty}^{-1 } \big[{-2-{(2n+1)\pi\sqrt{2\pi}\over\s{a}}\exp{\big[{(2n+1)^2\pi^2\over2(\s{a})^2}\big]}\big(1+\mathrm{erf\,}\big[{(2n+1)\pi\over\sqrt{2}a\s}\big]\big)}\big]=\eui_{3}^+ . \ ] ] by taking only the zeroth and second order contributions from eqs .( [ eq : upto4thorder ] ) the expressions reveal the density matrix up to the second perturbative order : [ eq:2ndorderentries ] the result coincides with and has been rederived in the timelike scenario as well .the second order is insufficient for our purposes since the ` outer ' density matrix is negative unless . 
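the matrix elements above reduce to infinite sums of gaussian-times-erf terms; a sketch of how such a sum can be evaluated numerically is given below. the dimensionless combinations sigma*delta and a*sigma are the ones mentioned in the text, but the overall prefactor (the lambda^2/(8 pi) factor and the gaussian envelope) is left out, so this only illustrates the truncation and the use of the scaled complementary error function to avoid overflow — it is not the paper's exact expression.

import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z**2) * erfc(z), numerically stable for large z

def bracket_sum(sigma_delta, a_sigma, n_max=5000):
    # truncated version of a sum of the form
    #   sum_n [ -2 + sqrt(2*pi) * exp(x_n**2 / 2) * x_n * (1 - erf(x_n / sqrt(2))) ]
    # with x_n = 2*n*pi/(a*sigma) + sigma*delta ; the terms decay like 1/n**2,
    # so a few thousand terms are enough for the parameter ranges of interest
    n = np.arange(1, n_max + 1)
    x = 2.0 * np.pi * n / a_sigma + sigma_delta
    terms = -2.0 + np.sqrt(2.0 * np.pi) * x * erfcx(x / np.sqrt(2.0))
    return terms.sum()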
in order to evaluate the coefficients from eqs .( [ eq : upto4thorder ] ) we first calculate the plus version of eq .( [ eq : integral14 ] ) }\int_{-\infty}^\infty{\mathrm{\,d } x}\exp{\big[{-{x^2\over2\s^2}}\big ] } \sum_{n=-\infty}^\infty{1\over(x-2i\ve+2n\pi i / a)^2}\end{aligned}\ ] ] where we split it into two parts calculated as [ eq : int4plusminus ] }\big(1-\mathrm{erf\,}\big[{n\pi\sqrt{2}\over\s{a}}\big]\big)}\big]\nn\\ & = -{\la^2\over8\pi}e^{-{(\d\s)^2\over2}}\sum_{n=0}^\infty\big[{{-2}+{2n\pi\sqrt{2\pi}\over{a\s}}\exp{\big[{2n^2\pi^2\over\s^2a^2}\big]}\big(1-\mathrm{erf\,}\big[{n\pi\sqrt{2}\over\s{a}}\big]\big)}\big],\\ \eui_{4}^-&=-{\la^2\over8\pi^2}\sqrt{2\pi}\s e^{-{(\s\d)^2\over2}}\sum_{n=-\infty}^{-1 } \big[{-{\sqrt{2\pi}\over\s}-{2n\pi^2\over\s^2a}\exp{\big[{2n^2\pi^2\over\s^2a^2}\big ] } \big(1+\mathrm{erf\,}\big[{n\pi\sqrt{2}\over\s a}\big]\big)}\big]\nn\\ & = -{\la^2\over8\pi}e^{-{(\d\s)^2\over2}}\sum_{n=-\infty}^{-1}\big[{{-2}-{2n\pi\sqrt{2\pi}\over{a\s}}\exp{\big[{2n^2\pi^2\over\s^2a^2}\big ] } \big(1+\mathrm{erf\,}\big[{n\pi\sqrt{2}\over\s{a}}\big]\big ) } \big].\end{aligned}\ ] ] we notice at once that equals without the term . to proceed we use the following four - point correlation function identity as the simplest case of the wick theorem : as a result , from eqs .( [ eq : upto4thorder ] ) we get [ eq : higherorderomab ] note that similarly to the second perturbative order in eq .( [ eq:2ndorderentries ] ) the trace of assembled from ( [ eq : higherorderomab ] ) equals one without a renormalization. the timelike entanglement would be hardly useful for covert quantum communication if the stationary sender and receiver were forced to be stationed at the same spatial coordinate .fortunately , the timelike entanglement of the scalar massless particles persists over nonzero distances . in order to quantify how much entanglement surviveswe take the two - point correlation function eq .( [ eq:2pointcorr ] ) and by fixing we have for the receiver ( see fig . [fig : timelike ] ) by setting propagator ( [ eq : propagfp ] ) then becomes }-{a^2l^2\over4}\exp{[-a(\tau+\eta)]}}.\ ] ] we removed the prescription as no pole lies on the real axis and after rescaling and and plugging eq .( [ eq : propagfpdistance ] ) the integrals }-{a^2l^2\over4}\exp{[-(\tau+\eta)]}}\ ] ] converge without the need to regularize .we have } ] and for we get a familiar expression for all four eigenvalues : .let s see how useful this representation is . given a quantum noisy channel , the question of the highest achievable rate at which two legitimate parties can asymptotically establish secret correlations using one - way ( ) classical ( public ) communication was answered in .it is given by the private channel capacity which is equivalent to the one - way channel secret key rate .the secret key rate is hard to calculate , though .fortunately , a number of lower bounds are known .one such a bound is the coherent information , where and $ ] is the von neumann entropy ( is to the base two throughout the paper and the symbol stands for defined ) .hence , the coherent information is optimized over an input state to the channel but evaluated on the output states and .we are led to the following chain of inequalities : where and are the single- and multi - letter quantum capacity formulas , respectively . 
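the coherent information lower bound quoted above is straightforward to evaluate once the two-detector output state is in hand; a minimal sketch (with the base-2 logarithm convention used in the text) follows. the density matrix itself would be assembled from the fourth-order matrix elements, which we do not rebuild here.

import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -tr(rho log2 rho), evaluated on the eigenvalues
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def coherent_information(rho_ab):
    # I(A>B) = S(rho_B) - S(rho_AB) for a two-qubit state rho_ab (4x4, ordered as A tensor B)
    r = rho_ab.reshape(2, 2, 2, 2)
    rho_b = np.einsum('ijik->jk', r)        # partial trace over A
    return von_neumann_entropy(rho_b) - von_neumann_entropy(rho_ab)

# sanity check: a maximally entangled two-qubit state has coherent information 1
phi = np.zeros((4, 4)); phi[0, 0] = phi[0, 3] = phi[3, 0] = phi[3, 3] = 0.5
assert abs(coherent_information(phi) - 1.0) < 1e-9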
in short, quantum capacity can be operationally interpreted as the rate at which maximally entangled states can be distilled using one - way classical communication and local operations ( one - way locc ) .hence , it provides a natural lower bound on the one - way secret key rate since any quantum or classical correlation ( namely a classical key a secret string of bits ) can be teleported once maximally entangled states are available . unfortunately, the regularization makes and practically incalculable for plenty of interesting channels .a noteworthy peculiarity that we will encounter for other measures as well , is the inability to optimize over the channel input state to get .this is because there is no accessible input state in the first place . in quantum shannon theory ,the state ( or its purification ) is a quantum code chosen such that the entanglement generation rate ( or , equivalently , quantum communication rate ) is the best possible .here we have only an image of such a state and we can assume that the input purified state is maximally entangled .then the output state is a choi state as we interpreted it .we could take a different point of view and decide that is an image of any input state in which case it is formally not a choi state .but by this we would deprive ourselves of being able to use the secret key rate estimates introduced further in this section .there , the choi matrix plays a crucial role .so let s adopt the following convention : in the expressions to come , c the state or means a generic bipartite state . if we need to stress that we interpret as a choi - jamiokowski matrix of a channel we will write instead . one - way classical communication is a strictly weaker resource for quantum communication than two - way ( ) classical communication . as a matter of fact , the problem of calculating the two - way secret key rate ( or quantum / private capacity ) recently received a lot of deserved attention drawing on early results from .the two - way quantities of a general quantum channel are equally hard to calculate unless the channel possesses certain symmetries .the secret key rate is lower bounded by the two - way distillable entanglement . in order to decide whether the two - way key rate is nonzero it is thus sufficient to check whether is entangled ( see , sections xii.d , xii.f and xiv.a for an all - in - one overview )this can be decided via the ppt criterion which for is a sufficient and necessary condition .currently , there also exists only a few upper bounds .the upper bounds are harder to evaluate : to the authors knowledge , none of them are tight and as we will see , they often require a non - trivial optimization step .the first quantity we mention is the squashed entanglement introduced in where is a ` squashing ' channel and is the conditional mutual information defined as squashed entanglement was adapted in to define the squashed entanglement of a quantum channel where is the channel s input state ( recall our convention of being the choi matrix of ) and the following inequalities hold the last inequality ( cf . 
) is computationally the most relevant .the rhs is called the entanglement of formation ( eof ) and was introduced in where the optimization goes over the decomposition .the eof is normally difficult to calculate except in our case where a closed formula is known due to wootters .another notable bound is the regularized relative entropy of entanglement where is a separable state and \ ] ] is the quantum relative entropy .again , the regularization is an obstacle for its direct evaluation .recently , the authors of , influenced by seminal , introduced the regularized relative entropy of entanglement of a channel where is its choi matrix .they showed that for a large class of channels ( so - called ` teleportation covariant ' ) the following inequality holds where are all adaptive strategies and is a separable state .it turns out that the following inequalities hold : note that holds as well ( see ) . finally , the authors of defined the max - relative entropy of entanglement of a quantum channel as where is the channel s input state and in general we have the states are chosen from the set of separable states and the max - relative entropy is defined as where is the greatest eigenvalue of .the max - relative entropy of entanglement was shown to satisfy looking at the above definitions it does not come as a surprise that the upper bound could be difficult to find as well and so a simplified upper bound ( on the upper bound ) was devised : where is a separable state representing a choi matrix of an entanglement - breaking channel . comparing ( [ eq : maxreechnl ] ) and ( [ eq : maxree ] ) with ( [ eq : choieb ] )we see that we removed one optimization step that we would not be able to perform anyway as discussed earlier in this section .ultimately , our goal is to maximize the secret key rate and distance for the physical parameters of the unruh - dewitt detectors such that they are reasonably realistic .this is a hard problem as the integrals from eqs .( [ eq : j23forl ] ) must be solved numerically which itself is a rather time - consuming process if we want to make sure that the values we get can be trusted .hence we choose the following strategy : for realistically chosen atomic parameters we made an educated guess and showcase our results by calculating the best rates we could find at the expense of a relatively small ( but still respectable ) distance and the best distance for a smaller but positive secret key rate . especially herewe got to an impressive distance in the order of thousands of kilometers so let s start with this case .we set the parameters introduced in eqs .( [ eq : udwhint ] ) and ( [ eq : gaussianprofile ] ) to be and .the eof of the corresponding state is depicted in fig .[ fig : ratelada ] .we can see that it quickly decreases for but remains positive for more than a thousand kilometers indicating the presence of entanglement at large distances that can be used for two - way quantum communication . as discussed in the previous section ,as long as is entangled the two - way secret key rate is strictly positive .but we do not know the actual rate of covert communication except with one truly remarkable exception . for the immediate vicinity of the one - way quantum capacity is nonzero : for . 
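the two quantities leaned on in this section — the ppt test that certifies a nonzero two-way rate and wootters' closed formula for the entanglement of formation — are both simple to compute for a two-qubit state; a generic sketch is given below, not tied to the specific detector parameters.

import numpy as np

def is_entangled(rho):
    # peres-horodecki (ppt) test: for two qubits the partial transpose has a
    # negative eigenvalue if and only if the state is entangled
    r = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)   # transpose subsystem B
    return np.min(np.linalg.eigvalsh(r)) < -1e-12

def entanglement_of_formation(rho):
    # wootters' formula: concurrence from the square roots of the eigenvalues of
    # rho * (sy x sy) * conj(rho) * (sy x sy), then the binary entropy of (1+sqrt(1-C^2))/2
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    yy = np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy))))[::-1]
    c = max(0.0, lam[0] - lam[1] - lam[2] - lam[3])
    if c <= 0.0:
        return 0.0
    x = 0.5 * (1.0 + np.sqrt(1.0 - c * c))
    return float(-x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x))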
this result is useless for covert communication for large distances but nonetheless conceptually important provides not only an achievable lower bound on the one - way secret key rate as indicated in eq .( [ eq : ineqchain1 ] ) but mainly we can exploit .that is , the participants do not have to classically communicate at all .they have to initially agree on a decoding strategy without the need to talk again in order to quantum communicate .one could wonder why the channel is not an identity for .this is because does not mean a measurement at the same time and especially on the same system .it rather corresponds to the situation depicted in the left panel of fig .[ fig : timelike ] .the information is sent to the future where it is received at the same spatial position but by a completely different observer ( in his own lab that could have been built in the meantime ) . by tuning the detectors parameters we can increase the eof ( and most likely the rate as well ) .the eof is plotted in fig .[ fig : ratel ] for the following set of parameters : and . for get a better result compared to fig . [fig : ratelada ] but it drops to zero .this is the area marked by an arrow in fig .[ fig : ratel ] .the zero point coincides with becoming separable . as in the previous case ,the dimension of the detectors state makes it easy to find an upper bound via the eof but it is also a good opportunity to test how tight bounds we would have gotten if we could not rely on the calculable eof .first , we plotted the unoptimized squashed entanglement eq .( [ eq : squashedent ] ) where was set ( see the blue dashed curve ) .it is clearly an inappropriate bound which is not surprising given the trivial squashing channel . to get a better upper bound we numerically optimized over the system such that using the tools from provides a better estimate and surprisingly , sometimes ( for ) even better than the one given by the eof .another plotted estimate is the simplified upper bound on the max - relative entropy of entanglement eq .( [ eq : choieb ] ) we calculated numerically ( green curve ) .overall , it grossly overestimates the eof but provides a better estimate when is becoming separable .the third of the mentioned estimates , eq . ( [ eq : reestretch ] ) , is not known to be applicable to our channel .we can also study how the coupling strength affects the rate .for the parameters , and ( we set ) we plot the eof against in fig .[ fig : ratelam ] . as expected , with a decreasing coupling constant rate goes down as well until the state becomes so weakly coupled that it is separable and the rate is zero .we will comment on an interesting problem of what eavesdropping actually means in our situation .no quantum signal is exchanged between the communicating parties and so an eavesdropper can not take direct advantage of the qkd paradigm of how the security is analyzed : normally , any kind of channel noise is attributed to the eavesdropper and the value of the secret key rate reflects the fact whether under this assumption information - theoretic security can be achieved . 
in our opinionthere are two possibilities of how eve can interact with the system held by the legitimate parties and none of them affects the obtained results .eve could gain access to the labs and tamper with the detectors .but this is really an out - of - the - scope attack , something more similar to a side - channel attack in qkd where an eavesdropper manipulates the detector s efficiency by sending laser pulses .a more relevant attack would be eve setting up her own detectors at various spacetime points . in principleshe could be in possession of a multipartite entangled state or even a purification of the state shared by the sender and the receiver .but this does not help her at all since she can not manipulate the marginal state . as a matter of fact, this is precisely the assumption in the security of qkd an eavesdropper holds the purification ( or the complementary output of the channel ) and the secret key rate is calculated or estimated under these most advantageous conditions for the eavesdropper .we developed a truly fundamental covert quantum communication protocol by exploiting the properties of minkowski vacuum .contrary to the previously introduced schemes , the covert communication does not need to be hidden in the environmental ( thermal ) noise and is not limited to a particular model of an optical quantum channel .what makes our results fundamental is that the rate of covert communication in the asymptotic limit of quantum communication is in fact strictly positive ; where previous results tended towards zero in the same limit . in our schemeno quantum signal is exchanged at all which is possible due to intrinsically quantum properties of minkowski vacuum essentially offering preshared entangled states at large spatial distances .we tapped into this resource by equipping two inertial observers by specially crafted unruh - dewitt detectors ( following ) and calculating their response up to the fourth perturbative order .we found that the detectors are entangled and we analyzed the meaning of this entanglement from the quantum communication point of view .we found that for suitable parameters and using two - way quantum communication , that a positive secret key rates can be achieved over thousands of kilometers .the main open problem is whether we can avoid classical communication altogether for greater distances .we have shown that quantum information can be covertly sent to the same place in the future without the need to classically communicate whatsoever .this is possible because of the positive value of the coherent information for the corresponding quantum channel . to answer this question we would need to go well beyond the fourth perturbative order a strategy that would bring the answer to another open problem : what is the actual non - perturbative channel ? if is interpreted as a choi - jamiokowski matrixas was advocated here , then its form strongly resembles a qubit pauli channel .this hardly can be true for an arbitrary value of the detector parameters since in order for it to be a pauli channel the first and last diagonal elements must be equal .but perhaps in some regime or limit this could indeed be the case .the authors thank nathan walk and timothy ralph for comments .
we present truly ultimate limits on covert quantum communication by exploiting quantum - mechanical properties of the minkowski vacuum in the quantum field theory framework . our main results are the following : we show how two parties equipped with unruh - dewitt detectors can covertly communicate at large distances without the need of hiding in a thermal background or relying on various technological tricks . we reinstate the information - theoretic security standards for reliability of asymptotic quantum communication and show that the rate of covert communication is strictly positive . therefore , contrary to the previous conclusions , covert and reliable quantum communication is possible .
the imaging atmospheric cherenkov technique ( iact ) was developed at the fred lawrence whipple observatory ( flwo ) resulting in the first very high energy ( vhe ; e 100 gev ) detection of the crab nebula in 1989 . in the twenty years since that first publicationthere have been vhe detections of over 100 objects including pulsars , blazars , pulsar wind nebula , supernova remnants and starburst galaxies . since vhe photonsdo not penetrate the atmosphere , iact telescopes measure the cherenkov light generated by particle showers initiated by the primary photons interacting with our atmosphere .this cherenkov light appears as a two dimensional ellipse when imaged by an iact telescope camera .the shape and orientation of the ellipse in the camera indicate whether the shower was initiated by a gamma ray or by a cosmic ray which can also cause a particle shower .the current generation of iact instruments involve arrays of telescopes .the addition of multiple telescopes allows for a more accurate determination of the shower parameters .one of the most powerful aspects of this technique is that the light pool of the shower defines the collection area ( ) which is more than adequate to compensate for the low flux of vhe gamma rays .currently there are four major experiments in operation , hess , an array of four iact telescopes located in namibia , magic , an array of two telescopes located in the canary islands , veritas in southern arizona and cangaroo in australia .magic just completed a major upgrade by adding a single telescope and stereo trigger and hess is in the process of building an additional very large telescope .this contribution details part of the ongoing upgrade program being undertaken by the veritas collaboration .veritas is an array of four 12 m diameter iact telescopes located in southern arizona at the flwo at an altitude of 1268 m. veritas detects photons from astrophysical sources at energies between 100 gev and 30 tev .the veritas telescopes consist of four identical alt - az mounted davies - cotton reflectors with an f number of 1.0 .the mirror area is approximately 106 m . mounted in the focal planeis a camera made up of 499 pixels consisting of 28 mm photonis phototubes .veritas has a three level trigger , the first at the pixel level , the second is a pattern trigger which triggers when any three adjacent pixels trigger . finally , an array trigger fires if any 2 or more telescopes trigger within a set time frame . for more details on the veritas hardware ,see . for historical reasons , telescopes 1 and 4were erected in close ( m ) proximity . even though veritas met all of its original design specifications , this resulted in a significant collection area overlap and increased background due to cosmic rays and local muons .in fact , all of the published veritas analysis included a cut that rejected events that only triggered telescopes 1 and 4 .simulations performed in the summer of 2008 suggested up to a 15% improvement in sensitivity if telescope 1 was moved m eastward from its initial position . 
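as a toy illustration of the third (array-level) trigger described earlier in this section, the following sketch fires when at least two telescopes report a local trigger within a common coincidence window; the 50 ns window is a placeholder, the actual value is not given in the text.

def array_trigger(trigger_times_ns, window_ns=50.0, min_telescopes=2):
    # trigger_times_ns: local trigger times (ns) of the telescopes that fired;
    # returns True if min_telescopes of them fall inside one coincidence window
    times = sorted(trigger_times_ns)
    for i, t0 in enumerate(times):
        in_window = sum(1 for t in times[i:] if t - t0 <= window_ns)
        if in_window >= min_telescopes:
            return True
    return False

# example: three telescopes trigger within 30 ns of each other, a fourth much later -> fires
print(array_trigger([100.0, 118.0, 130.0, 900.0]))   # True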
assuming that telescopes 1 and 4 are redundant and can be considered a single telescope , a 1/3 improvement is expected by adding an additional telescope .based on these data , it was decided to relocate telescope 1 to a more ideal location providing a more symmetrical layout to the veritas array ( see figures [ fig : layout ] and [ fig : layout - schematic ] ) .it was decided to relocate telescope 1 instead of telescope 4 to allow for the refurbishment of the oldest telescope in the array which was originally installed at the flwo as a prototype in 2002 .the relocation of telescope 1 is part of an ongoing upgrade program which recently included an improvement in the optical point spread function ( psf ) .the improvement in the optical psf was accomplished using a novel mirror alignment system which resulted in a 25 - 30% improvement in the psf .this optical psf improvement also contributes to the enhancement in sensitivity discussed here and can not be disentangled from the overall results .the move of telescope 1 combined with the improvement in the optical psf has resulted in making veritas the most sensitive vhe telescope array in the world capable of detected a 1% crab nebula signal in less than 30 hours .since veritas does not operate during the summer months ( approximately july through august ) , the move of telescope 1 was scheduled to take place during this time to minimize the impact on the observing program .telescope 1 was shutdown 6 weeks early ( may 4 , 2009 ) so that it would be operational by the first of october .the move was completed on september 4 , 2009 and is estimated to have taken 2600 person hours of labor .ten days later on the scheduled operations began with the full array , over two weeks earlier than expected . by september operations had resumed . in total , veritas only lost 6 weeks of full four telescope operations and these were with the old array layout .the final array layout , while not entirely symmetric , is a much better layout for a vhe instrument .figure [ fig : layout ] shows an aerial view of the veritas array with the old layout shown in blue and the new layout in red . while the old layout had inter - telescope distances ranging from 35 m to 127 m , the new layout distances range from 81 m to 127 m. figure [ fig : layout - schematic ] shows a schematic representation of the array viewed from directly above . also shown as a black arrow is the relocation of telescope 1 .veritas data are calibrated and cleaned initially as described in . after calibrationseveral noise - reducing cuts are made .the veritas standard analysis consists of parametrization using a moment analysis and following this , the calculation of scaled parameters are used for event selection .this selection consists of different sets of gamma - ray cuts , determined _ a priori_. depending on the strength and expected spectral index of a source , different cuts are chosen .for example , a source with the strength of the crab would use a looser set of cuts than a week source at the 1% crab flux level .these two sets of cuts are called loose and hard cuts .additionally , soft and standard versions of these cuts are used for soft ( approximately spectral indices of 3 and above ) or standard ( crab - like 2.5 spectral index sources ) .the choice of which cuts to use are determined prior to the analysis .figure [ fig : time ] shows the observation time needed to detect an object at the 5 standard deviation ( ) level . 
before the relocation of telescope 1 , a 1% crab flux source could be detected in 48 hours while after the move it only takes 28 hours loose cuts .similarly , it takes 72 seconds to detect the crab nebula after the move with hard cuts , as opposed to 108 seconds before the move . level vs. that source s flux in units of the crab nebula s flux .this is shown for the original array layout and for the new array layout with two different sets of event selection cuts .note that it would take 48 hours to detect a 1% crab source with the original array layout and only 28 hours with the new array layout.,width=245 ] another way of looking at the sensitivity of vhe instruments is to calculate the integral flux sensitivity above a energy threshold .shown in figure [ fig : sensitivity ] are the integral flux sensitivity vs. energy threshold for several different instruments . in redis the original veritas layout while the new veritas layout is shown in black ( based on crab observations ; the dashed sections are under evaluation ) .the integral flux sensitivity of veritas above 300 gev is better after the move .this corresponds directly to a 60% reduction in the time needed to detect a source ( for example , a 50 hour observation before the move is equivalent to a 30 hour post move observation ) . for comparisonare shown the initial hess sensitivity shown as the blue dashed line ( note that this is the original hess sensitivity curve before any mirror reflectivity degradations ) .the integral sensitivity of a single magic - i telescope is shown as the green dashed line .another thing to note is that the sensitivity of veritas has slightly degraded at the lower end due to the loss of sensitivity to the lowest energy showers .is shown as the green dashed line .note that the integral flux sensitivity improvement of veritas above 300 gev is .,width=245 ] figure [ fig : resolution ] shows the energy resolution of the veritas array as well as the angular resolution .these two features are similar to the numbers calculated for the original veritas array layout .both plots are for observations at 70 degrees elevation . 
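the relation between the quoted sensitivity gain and the shorter detection times follows from the usual scaling of the detection significance with sensitivity times the square root of the observation time; a one-line check, consistent with the 48 h versus 28 h and 50 h versus 30 h figures above (the exact numbers depend on the cuts and the background, so this is only the leading-order scaling):

sensitivity_gain = 1.30                 # ~30% better sensitivity after the move
time_ratio = 1.0 / sensitivity_gain**2  # significance ~ sensitivity * sqrt(t)  =>  t ~ 1/sensitivity^2
print(round(time_ratio, 2))             # 0.59: an exposure after the move needs ~60% of the time it did before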
the angular resolution and energy resolution change for observations undertaken at different elevations .the veritas collaboration relocated telescope 1 and dramatically improved the optical psf during the summer of 2009 as part of an ongoing upgrade program .these studies indicate that the upgrades have improved the sensitivity of veritas by 30% resulting in a 60% change in the time needed to detect a source .the higher sensitivity achieved with veritas allows the detection of more objects in a shorter amount of time , effectively doubling the observation time .the ability to detect marginal sources such as m82 and to do deep observations of known objects has drastically improved .in addition to the telescope relocation and optical psf improvements , there are several other upgrade plans being discussed which are described in and are planned to be implemented in the next few years .these upgrade plans include the installation of higher efficiency photon detectors which would result in a 17% improvement in the sensitivty of the array and/or the installation of a topological trigger which would consist of transmitting image parameters from the camera directly to the array trigger allowing for real - time event classification for gamma / hadron separation .in addition to these baseline upgrades , the expansion of the array by adding more telescopes or an active mirror alignment system is also possible .this ongoing upgrade program , beginning with the optical psf improvement and the relocation of telescope 1 will continue to make veritas competitive in the coming decade .this research is supported by grants from the u.s .department of energy , the u.s . national science foundation , and the smithsonian institution , by nserc in canada , by pparc in the uk and by science foundation ireland .the veritas collaboration acknowledges the hard work and dedication of the flwo support staff in making the relocation of telescope 1 possible .j. holder , v. a. acciari , e. aliu , t. arlen , m. beilicke , w. benbow , s. m. bradbury , j. h. buckley , v. bugaev , y. butt , et al ., in american institute of physics conference series , edited by f. a. aharonian , w. hofmann , and f. rieger ( 2008 ) , vol .1085 of american institute of physics conference series , pp .657 - 660 .m. k. daniel , in international cosmic ray conference , edited by r. caballero , j. c. dolivo , g. medina - tanco , l. nellen , f. a. snchez , and j. f. valds - galicia ( 2008 ) , vol . 3 of international cosmic ray conference , pp .1325 - 1328 .
the first veritas telescope was installed in 2002 - 2003 at the fred lawrence whipple observatory and was originally operated as a prototype instrument . subsequently the decision was made to locate the full array at the same site , resulting in an asymmetric array layout . as anticipated , this resulted in less than optimal sensitivity due to the collection area overlap and the increase in background due to local muon initiated triggers . in the summer of 2009 , the veritas collaboration relocated telescope 1 to improve the overall array layout . this has provided a 30% improvement in sensitivity , reducing the observation time needed to detect a given source to roughly 60% of its previous value .
_ iscsi _ is a protocol designed to transport scsi commands over a tcp / ip network .+ _ iscsi _ can be used as a building block for network storage using existing ip infrastructure in a lan / wan environment .it can connect different types of block - oriented storage devices to servers .+ _ iscsi _ was initially standardized by ansi t10 and further developed by the ip storage working group of the ietf , which will publish soon an rfc .many vendors in the storage industry as well as research projects are currently working on the implementation of the iscsi protocol ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ " the small computer systems interface ( scsi ) is a popular family of protocols for communicating with i / o devices , especially storage devices .scsi is a client - server architecture .clients of a scsi interface are called " initiators " .initiators issue scsi " commands " to request services from components , logical units , of a server known as a " target " . a " scsi transport " maps the client - server scsi protocol to a specific interconnect .initiators are one endpoint of a scsi transport and targets are the other endpoint .the iscsi protocol describes a means of transporting of the scsi packets over tcp / ip , providing for an interoperable solution which can take advantage of existing internet infrastructure , internet management facilities and address distance limitations . 
" draft - ietf - ips - iscsi-20 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ hyperscsi _ is a protocol that sends scsi commands using raw ethernet packets instead of the tcp / ip packets used for _iscsi_. thus , it bypasses the tcp / ip stack of the os and does not suffer from the shortcomings of tcp / ip ._ hyperscsi _ focuses on turning ethernet into a usable storage infrastructure by adding missing components such as flow control , segmentation , reassembly , encryption , access control lists and security .it can be used to connect different type of storage , such as scsi , ide and usb devices ._ hyperscsi _ is developed by the _ modular connected storage architecture _ group in the network storage technology division of the data storage institute from the agency for science , technology and research of singapore .enbd is a linux kernel module coupled with a user space daemon that sends block requests from a linux client to a linux server using a tcp / ip connection .it uses multichannel communications and implements internal failover and automatic balancing between the channels .it supports encryption and authentication . 
+ this block access technology is only useful with a linux kernel because of the linux specific block request format .+ it is developed by the linux community under a gpl license .the following hardware was used to perform the tests : * _ test2 _ : + dual pentium 3 - 1 ghz + 3com gigabit ethernet card based on broadcom bcm 5700 chipset + 1 western digital wd1800jb 180 gbytes + 3ware raid controller 7000-series * _ test11 _ : + dual pentium 4 - 2.4 ghz ( hyperthreading enabled ) + 6 western digital wd1800jb 180 gbytes + 3ware raid controllers 7000-series or promise ultra133 ide controllers + 3com gigabit ethernet card based on broadcom bcm 5700 chipset * _ test13 _ : + dual amd mp 2200 + + 6 western digital wd1800jb 180 gbytes + 3ware raid controllers 7000-series or promise ultra133 ide controllers + 3com gigabit ethernet card based on broadcom bcm 5700 chipset * iscsi server : eurologic elantraics2100 ip - san storage appliance - v1.0 + 3 scsi drives all the machines have a redhat 7.3 based distribution , with kernel 2.4.19 or 2.4.20 .+ the following optimizations were made to improve the performance : sysctl -w vm.min-readahead=127 sysctl -w vm.max-readahead=256 sysctl -w vm.bdflush = 2 500 0 0 500 1000 60 20 0 elvtune -r 512 -w 1024/dev / hd\{a , c , e , g , i , k } two benchmarks were used to measure the io bandwidth and the cpu load on the machines : * _ bonnie++ _ : v 1.03 + this benchmark measures the performance of harddrives and filesystems .it aims at simulating a database like access pattern .+ we are interested in two results : _sequential output block_ and _sequential input block_. + bonnie++ uses a filesize of 9gbytes with a chunksize of 8kbytes .bonnie++ reports the cpu load for each test .however , we found that the reported cpu load war incorrect .so we used a standard monitoring tool ( vmstat ) to measure the cpu load during bonnie++ runs instead . * _ seqent_random_io64 : _ + in this benchmark , we were interested in three results : * * _ write _ performance : bandwidth measured for writing a file of 5 gbytes , with a blocksize of 1.5 mbytes .* * _ sequential reading _ performance : bandwidth measured for sequential reading of a file of 5 gbytes with a blocksize of 1.5 mbytes . * * _ random reading _ performance : bandwidth measured for random reads within a file of 5 gbytes with a blocksize of 1.5 mbytes . +this benchmark is a custom program used at cern to evaluate the performance of disk servers .it simulates an access pattern used by cern applications ._ vmstat _ has been used to monitor the cpu load on each machine .the server was the eurologic ics2100 ip - san storage appliance . 
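for readability, the kernel and elevator tuning commands quoted above, written one per line (the multi-value bdflush setting is quoted so the shell passes it as a single argument, and the spacing inside the device path is removed; the brace expansion covers the six ide drives):

sysctl -w vm.min-readahead=127
sysctl -w vm.max-readahead=256
sysctl -w vm.bdflush="2 500 0 0 500 1000 60 20 0"
elvtune -r 512 -w 1024 /dev/hd{a,c,e,g,i,k}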
the client was _ test13 _ , with kernel 2.4.19smp .two software initiators were used to connect to the iscsi server : ibmiscsi and linux - iscsi .we used two versions of linux - iscsi : 2.1.2.9 , implementing version 0.8 of the iscsi draft , and 3.1.0.6 , implementing version 0.16 of the iscsi draft .the results are given in the table below : [ cols="^,^,^,^,^,^ " , ] _ comments : _ softwareraid delivers more bandwidth than hardwareraid , but at a higher cpu cost .i would like to thank all the people from the it / adc group at cern for helping me in this study and particularly markus schulz , arie van praag , remi tordeux , oscar ponce cruz , jan iven , peter kelemen and emanuele leonardi for their support , comments and ideas .1 ietf ip storage working group + http://www.ietf.org/html.charters/ips-charter.html mcsa hyperscsi + http://nst.dsi.a-star.edu.sg/mcsa/hyperscsi/index.html enhanced network block device + http://www.it.uc3m.es/~ptb/nbd/ eurologic elantra ics2100 + http://www.eurologic.com/products_elantra.htm bonnie++ + http://www.coker.com.au/bonnie++/ ibmiscsi + http://www-124.ibm.com/developerworks/projects/naslib/ linux - iscsi + http://linux-iscsi.sourceforge.net/
we report on our investigations of some technologies that can be used to build disk servers and networks of disk servers using commodity hardware and software solutions . this report focuses on the performance that can be achieved by these systems and gives measured figures for different configurations . it is divided into two parts : iscsi and other block access technologies , and hardware and software raid solutions . the first part studies different technologies that can be used by clients to access disk servers over a gigabit ethernet network . it covers block access technologies ( iscsi , hyperscsi , enbd ) . experimental figures are given for different numbers of clients and servers . the second part compares a system based on 3ware hardware raid controllers , a system using linux software raid and ide cards , and a system mixing both hardware raid and software raid . performance measurements for reading and writing are given for different raid levels .
in wireless networks , radio signals can be overheard by unauthorized users due to the broadcast nature of wireless medium , which makes the wireless communication systems vulnerable to eavesdropping attack .secret key encryption techniques have been widely used to prevent eavesdropping and ensure the confidentiality of signal transmissions .however , the cryptographic techniques rely on secret keys and introduce additional complexities due to the dynamic distribution and management of secret keys . to this end , physical - layer security is emerging as an alternative paradigm to prevent the eavesdropper attack and assure the secure communication by exploiting the physical characteristics of wireless channels .the physical - layer security work was pioneered by wyner [ 1 ] and further extended in [ 2 ] , where an information - theoretic framework has been established by developing achievable secrecy rates .it has been proven in [ 2 ] that in the presence of an eavesdropper , a so - called _ secrecy capacity _ is shown as the difference between the channel capacity from source to destination ( called main link ) and that from source to eavesdropper ( called wiretap link ) . if the secrecy capacity is negative , the eavesdropper can intercept the transmission from source to destination and an intercept event occurs in this case . due to the wireless fading effect , the secrecy capacity is severely limited , which results in an increase in the intercept probability . to alleviate this problem , some existing work is proposed to improve the secrecy capacity by taking advantage of multiple antennas [ 3 ] and [ 4 ] .however , it may be difficult to implement multiple antennas in some cases ( e.g. , handheld terminals , sensor nodes , etc . ) due to the limitation in physical size and power consumption . as an alternative ,user cooperation is proposed as an effective means to combat wireless fading , which also has great potential to improve the secrecy capacity of wireless transmissions in the presence of eavesdropping attack . in [ 5 ] , the authors studied the secrecy capacity of wireless transmissions in the presence of an eavesdropper with a relay node , where the amplify - and - forward ( af ) , decode - and - forward ( df ) , and compress - and - forward ( cf ) relaying protocols are examined and compared with each other .the cooperative jamming was proposed in [ 6 ] by allowing multiple users to cooperate with each other in preventing eavesdropping and analyzed in terms of the achievable secrecy rate . in [ 7 ] ,the cooperation strategy was further examined to enhance the physical - layer security and a so - called noise - forwarding scheme was proposed , where the relay node attempts to send codewords independent of the source message to confuse the eavesdropper . in addition , in [ 8 ] and [ 9 ] , the authors explored the cooperative relays for physical - layer security improvement and developed the corresponding secrecy capacity performance , showing that the cooperative relays can significantly increase the secrecy capacity . in this paper , we consider a cooperative wireless network with multiple df relays in the presence of an eavesdropper and examine the best relay selection to improve wireless security against eavesdropping attack .differing from the traditional max - min relay selection criterion in [ 10 ] where only the channel state information ( csi ) of two - hop relay links ( i.e. 
, source - relay and relay - destination ) are considered , we here have to take into account additional csi of the eavesdropper s links , in addition to the two - hop relay links csi .the main contributions of this paper are summarized as follows .first , we propose the best relay selection scheme in a cooperative wireless networks with multiple df relays in the presence of eavesdropping attack .we also examine the direct transmission without relay and traditional max - min relay selection as benchmark schemes .secondly , we derive closed - form expressions of intercept probability for the direct transmission , traditional max - min relay selection , and proposed best relay selection schemes in rayleigh fading channels . the remainder of this paper is organized as follows .section ii presents the system model and describes the direct transmission , traditional max - min relay selection , and proposed best relay selection schemes . in section iii, we derive closed - form intercept probability expressions of the direct transmission , traditional max - min relay selection , and proposed best relay selection schemes over rayleigh fading channels . in sectioniv , we conduct numerical intercept probability evaluation to show the advantage of proposed best relay selection over traditional max - min relay selection . finally , we make some concluding remarks in section v.consider a cooperative wireless network consisting of one source , one destination , and df relays in the presence of an eavesdropper as shown in fig .1 , where all nodes are equipped with single antenna and the solid and dash lines represent the main and wiretap links , respectively .the main and wiretap links both are modeled as rayleigh fading channels and the thermal noise received at any node is modeled as a complex gaussian random variable with zero mean and variance , i.e. , .following [ 8 ] , we consider that relays are exploited to assist the transmission from source to destination and the direct links from source to destination and eavesdropper are not available , e.g. , the destination and eavesdropper both are out of the coverage area . for notational convenience , relays are denoted by .differing from the existing work [ 8 ] in which all relays participate in forwarding the source messages to destination , we here consider the use of the best relay only to forward the message transmission from source to destination .more specifically , the source node first broadcasts the message to cooperative relays among which only the best relay will be selected to forward its received signal to destination .meanwhile , the eavesdropper monitors the transmission from the best relay to destination and attempts to interpret the source message .following [ 8 ] , we assume that the eavesdropper knows everything about the signal transmission from source via relay to destination , including the encoding scheme at source , forwarding protocol at relays , and decoding method at destination , except that the source signal is confidential .it needs to be pointed out that in order to effectively prevent the eavesdropper from interception , not only the csi of main links , but also the wiretap links csi should be considered in the best relay selection .this differs from the traditional relay selection in [ 10 ] where only the two - hop relay links csi is considered in performing relay selection . 
similarlyto [ 8 ] , we here assume that the global csi of both main and wiretap links are available , which is a common assumption in the physical - layer security literature . notice that the wiretaps link s csi can be estimated and obtained by monitoring the eavesdropper s transmissions as discussed in [ 8 ] . in the following ,we first describe the conventional direct transmission without relay and then present the traditional max - min relay selection and proposed best relay selection schemes . for comparison purpose, this subsection describes the conventional direct transmission without relay .consider that the source transmits a signal ( ) with power .thus , the received signal at destination is expressed as where represents a fading coefficient of the channel from source to destination and represents additive white gaussian noise ( awgn ) at destination . meanwhile , due to the broadcast nature of wireless transmission , the eavesdropper also receives a copy of the source signal and the corresponding received signal is written as where represents a fading coefficient of the channel from source to eavesdropper and represents awgn at eavesdropper . assuming the optimal gaussian codebook used at source , the maximal achievable rate ( also known as channel capacity ) of the direct transmission from source to destination is obtained from eq .( 1 ) as where is the noise variance .similarly , from eq .( 2 ) , the capacity of wiretap link from source to eavesdropper is easily given by it has been proven in [ 2 ] that the secrecy capacity is shown as the difference between the capacity of main link and that of wiretap link .hence , the secrecy capacity of direct transmission is given by where and are given in eqs .( 3 ) and ( 4 ) , respectively . as discussed in [ 2 ] , when the secrecy capacity is negative ( i.e. , the capacity of main link falls below the wiretap link s capacity ) , the eavesdropper can intercept the source signal and an intercept event occurs .thus , the probability that the eavesdropper successfully intercepts source signal , called _ intercept probability _ , is a key metric in evaluating the performance of physical - layer security . in this paper, we mainly focus on how to improve the intercept probability by exploiting the best relay selection to enhance the wireless security against eavesdropping .the following subsection describes the relay selection for physical - layer security in the presence of eavesdropper attack .considering the df protocol , the relay first decodes its received signal from source and then re - encodes and transmits its decoded outcome to the destination .more specifically , the source node first broadcasts the signal to relays that attempt to decode their received signals .then , only the best relay is selected to re - encode and transmit its decoded outcome to the destination .notice that in the df relaying transmission , the source signal is transmitted twice from the source and relay , respectively . in order to make a fair comparison with the direct transmission, the total amount of transmit power at source and relay shall be limited to . by using the equal - power allocation for simplicity ,the transmit power at source and relay is given by .in df relaying transmission , either source - relay or relay - destination links in failure will result in the two - hop df transmission in failure , implying that the capacity of df transmission is the minimum of the capacity from source to relay and that from relay to destination . 
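As a numerical companion to the capacity expressions above, here is a minimal sketch that uses the standard log2(1 + SNR) form. The variable names (h_sd, h_se, h_sr, h_rd, h_re) are assumptions of the sketch; the equal split of the total power between source and relay follows the text, while the omission of any pre-log factor for the two-slot relay transmission is also an assumption, since the exact equations are not reproduced above.

```python
# Sketch of the capacity and secrecy-capacity quantities described above.
import numpy as np

def capacity(h, power, n0):
    """Channel capacity log2(1 + |h|^2 * power / n0) for fading coefficient h."""
    return np.log2(1.0 + (abs(h) ** 2) * power / n0)

def secrecy_capacity_direct(h_sd, h_se, power, n0):
    """Direct transmission: main-link capacity minus wiretap-link capacity."""
    return capacity(h_sd, power, n0) - capacity(h_se, power, n0)

def df_capacity(h_sr, h_rd, power, n0):
    """Two-hop DF capacity: minimum of the source-relay and relay-destination
    capacities, with the total power shared equally between source and relay."""
    return min(capacity(h_sr, power / 2, n0), capacity(h_rd, power / 2, n0))

def secrecy_capacity_df(h_sr, h_rd, h_re, power, n0):
    """DF relaying via one relay: DF capacity minus the relay-eavesdropper capacity."""
    return df_capacity(h_sr, h_rd, power, n0) - capacity(h_re, power / 2, n0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Rayleigh fading: complex Gaussian coefficients with unit average gain.
    h = (rng.normal(size=5) + 1j * rng.normal(size=5)) / np.sqrt(2.0)
    h_sd, h_se, h_sr, h_rd, h_re = h
    p, n0 = 1.0, 0.1
    cs_direct = secrecy_capacity_direct(h_sd, h_se, p, n0)
    cs_df = secrecy_capacity_df(h_sr, h_rd, h_re, p, n0)
    print("direct: secrecy capacity %.3f (intercept: %s)" % (cs_direct, cs_direct < 0))
    print("DF:     secrecy capacity %.3f (intercept: %s)" % (cs_df, cs_df < 0))
```

An intercept event corresponds to a negative secrecy capacity, which is what the boolean flags above report.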
hence ,considering as the best relay , we can obtain the capacity of df relaying transmission from source via to destination as where and , respectively , represent the channel capacity from source to and that from to destination , which are given by and meanwhile , the eavesdropper can overhear the transmission from to destination .hence , the channel capacity from to eavesdropper can be easily obtained as combining eqs .( 6 ) and ( 9 ) , the secrecy capacity of df relaying transmission with is given by the following presents the traditional max - min relay selection and proposed best relay selection schemes .let us first present the traditional max - min relay selection scheme for the comparison purpose . in the traditional relay selection scheme , the relay that maximizes the capacity of df relaying transmission is viewed as the best relay .thus , the traditional relay selection criterion is obtained from eq .( 6 ) as which is the traditional max - min relay selection criterion as given by eq .( 1 ) in [ 10 ] . as shown in eq . ( 11 ) , only the main links csi and is considered in the max - min relay selection scheme without considering the eavesdropper s csi .we now propose the best relay selection criterion considering the csi of both main and wiretap links , in which the relay that maximizes the secrecy capacity of df relaying transmission is selected as the best relay .thus , the best relay selection criterion is obtained from eq . ( 10 ) as one can observe from eq .( 12 ) that the proposed best relay selection scheme takes into account not only the main links csi and , but also the wiretap link s csi .this differs from the traditional max - min relay selection criterion in eq . ( 11 ) where only the main links csi is considered .notice that the transmit power in eq .( 12 ) is a known parameter and the noise variance is shown as [ 12 ] , where is boltzmann constant ( i.e. , ) , is room temperature , and is system bandwidth . since the room temperature and system bandwidth both are predetermined , the noise variance can be easily obtained .it is pointed out that using the proposed best relay selection criterion in eq .( 12 ) , we can further develop a centralized or distributed relay selection algorithm . to be specific , for a centralized relay selection, the source node needs to maintain a table that consists of relays and related csi ( i.e. , , and ) . in this way ,the best relay can be easily determined by looking up the table using the proposed criterion in eq .( 12 ) , which is referred to as centralized relay selection strategy . for a distributed relay selection ,each relay maintains a timer and sets an initial value of the timer in inverse proportional to in eq .( 12 ) , resulting in the best relay with the smallest initial value for its timer . as a consequence ,the best relay exhausts its timer earliest compared with the other relays , and then broadcasts a control packet to notify the source node and other relays [ 11 ] .in this section , we derive closed - form intercept probability expressions of the direct transmission , traditional max - min relay selection and proposed best relay selection schemes over rayleigh fading channels .let us first analyze the intercept probability of direct transmission as a baseline for the comparison purpose .as is known , an intercept event occurs when the secrecy capacity becomes negative .thus , the intercept probability of direct transmission is obtained from eq .( 5 ) as where the second equation is obtained by using eqs . 
( 3 ) and ( 4 ) .considering that and are independent exponentially distributed , we obtain a closed - form intercept probability expression of direct transmission as where , , and is the ratio of average channel gain from source to destination to that from source to eavesdropper , which is referred to as the main - to - eavesdropper ratio ( mer ) throughout this paper .it is observed from eq . ( 14 ) that the intercept probability of direct transmission is independent of the transmit power , which implies that the wireless security performance can not be improved by increasing the transmit power .this motivates us to exploit cooperative relays to decrease the intercept probability and improve the physical - layer security .this subsection presents the intercept probability analysis of traditional max - min scheme in rayleigh fading channels . from eq . ( 11 ) , we obtain an intercept probability of the traditional max - min scheme as where denotes the channel capacity from the best relay to eavesdropper with df relaying protocol .similarly , assuming that and are identically and independently distributed and using the law of total probability , the intercept probability of traditional max - min scheme is given by notice that , and follow exponential distributions with means , and , respectively . letting , we obtain eq .( 17 ) at the top of the following page , } \frac{1}{{\sigma _ { me}^2}}\exp ( - \frac{x}{{\sigma _ { me}^2}})dx } } \\ & = \sum\limits_{m = 1}^m { \frac{1}{m}\int_0^\infty { \left ( { 1 + \sum\limits_{k = 1}^{{2^m } - 1 } { { { ( - 1)}^{|{{\mathcal{a}}_k}|}}\exp [ - \sum\limits_{i \in { { \mathcal{a}}_k } } { ( \frac{x}{{\sigma _ { si}^2 } } + \frac{x}{{\sigma _ { id}^2 } } ) } ] } } \right)\frac{1}{{\sigma _ { me}^2}}\exp ( - \frac{x}{{\sigma _ { me}^2}})dx } } \\ & = \sum\limits_{m = 1}^m { \frac{1}{m}\left ( { 1 + \sum\limits_{k = 1}^{{2^m } - 1 } { { { ( - 1)}^{|{{\mathcal{a}}_k}|}}{[1 + \sum\limits_{i \in { { \mathcal{a}}_k } } { ( \frac{{\sigma _ { me}^2}}{{\sigma _ { si}^2 } } + \frac{{\sigma _ { me}^2}}{{\sigma _ { id}^2 } } ) } ] ^{- 1 } } } } \right ) } \end{split}\ ] ] where the second equation is obtained by using the binomial expansion , represents the -th non - empty sub - collection of relays , and represents the number of elements in set .this subsection derives a closed - form intercept probability expression of the proposed best relay selection scheme . according to the definition of intercept event , an intercept probability of proposed schemeis obtained from eq .( 12 ) as where the second equation is obtained by using eq .notice that random variables , and follow exponential distributions with means , and , respectively . denoting , we can easily obtain the cumulative density function ( cdf ) of as wherein . using eq .( 19 ) , we have which completes the closed - form intercept probability analysis of oas scheme in rayleigh fading channels .2 shows the numerical intercept probability results of direct transmission , traditional max - min relay selection , and proposed best - relay selection schemes by plotting eqs .( 14 ) , ( 17 ) and ( 20 ) as a function of mer .one can see from fig .2 that for both cases of and , the intercept probability of proposed best relay selection scheme is always smaller than that of traditional max - min relay selection , showing the advantage of proposed best relay selection over traditional max - min scheme . 
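Since the closed-form expressions are only partially reproduced above, the following Monte-Carlo sketch checks the qualitative comparison only: it estimates the intercept probabilities of direct transmission, traditional max-min selection and the proposed secrecy-capacity-based selection under Rayleigh fading. The exponential channel-gain model follows the text; the particular average gains (a common main-link gain set by the MER, unit eavesdropper gain), the power and noise values, and the number of relays are assumptions of this sketch.

```python
# Monte-Carlo sketch of the intercept probabilities of the three schemes.
import numpy as np

def intercept_probabilities(n_relays=4, mer_db=5.0, trials=200000,
                            power=1.0, n0=1.0, seed=1):
    rng = np.random.default_rng(seed)
    mer = 10.0 ** (mer_db / 10.0)
    sigma2_main, sigma2_eav = mer, 1.0      # main-to-eavesdropper ratio (MER)

    # exponential channel gains |h|^2 for Rayleigh fading
    g_sd = rng.exponential(sigma2_main, trials)
    g_se = rng.exponential(sigma2_eav, trials)
    g_sr = rng.exponential(sigma2_main, (trials, n_relays))
    g_rd = rng.exponential(sigma2_main, (trials, n_relays))
    g_re = rng.exponential(sigma2_eav, (trials, n_relays))

    c = lambda g, p: np.log2(1.0 + g * p / n0)

    # direct transmission: intercept when C_sd - C_se < 0
    p_direct = np.mean(c(g_sd, power) < c(g_se, power))

    # two-hop DF capacities with equal power split, and the wiretap capacities
    c_df = np.minimum(c(g_sr, power / 2), c(g_rd, power / 2))
    c_eav = c(g_re, power / 2)
    rows = np.arange(trials)

    # max-min selection ignores the eavesdropper's channels
    best_maxmin = np.argmax(c_df, axis=1)
    p_maxmin = np.mean(c_df[rows, best_maxmin] < c_eav[rows, best_maxmin])

    # proposed selection maximises the secrecy capacity C_df - C_eav
    best_prop = np.argmax(c_df - c_eav, axis=1)
    p_prop = np.mean(c_df[rows, best_prop] < c_eav[rows, best_prop])

    return p_direct, p_maxmin, p_prop

if __name__ == "__main__":
    p_d, p_m, p_p = intercept_probabilities()
    print("direct   %.4f" % p_d)
    print("max-min  %.4f" % p_m)
    print("proposed %.4f" % p_p)
```

With these settings the proposed rule should never do worse than max-min, because it maximises the same secrecy difference whose sign defines an intercept; this matches the ordering reported for fig. 2.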
fig .3 depicts the intercept probability versus the number of relays of the traditional max - min and proposed best relay selection schemes .it is observed from fig .3 that the proposed best relay selection scheme strictly performs better than the traditional max - min scheme in terms of the intercept probability .3 also shows that as the number of relays increases , the intercept probabilities of both the traditional and proposed relay selection schemes significantly decrease .this means that increasing the number of relays can greatly improve the physical - layer security against eavesdropping attack .in this paper , we investigated the physical - layer security in cooperative wireless networks with multiple df relays and proposed the best relay selection scheme to improve wireless security against eavesdropping attack .we also examined the direct transmission and traditional max - min relay selection as benchmark schemes .we derived closed - form intercept probability expressions of the direct transmission , traditional max - min relay selection , and proposed best relay selection schemes .numerical intercept probability results showed that the proposed best relay selection scheme always performs better than the direct transmission and max - min relay selection schemes .moreover , as the number of relays increases , the max - min relay selection and proposed best relay selection schemes both significantly improve , which shows the advantage of exploiting multiple relays for the physical - layer security improvement .the work presented in this paper is partially supported by the auto21 network of centre of excellence , canada .e. tekin and a. yener , the general gaussian multiple access and two - way wire - tap channels : achievable rates and cooperative jamming , " _ ieee trans .inf . theory _ ,54 , no . 6 , pp . 2735 - 2751 , jun .2008 .y. zou , j. zhu , b. zheng , and y .- d .yao , an adaptive cooperation diversity scheme with best - relay selection in cognitive radio networks , " _ ieee trans .signal process .5438 - 5445 , oct .
due to the broadcast nature of wireless medium , wireless communication is extremely vulnerable to eavesdropping attack . physical - layer security is emerging as a new paradigm to prevent the eavesdropper from interception by exploiting the physical characteristics of wireless channels , which has recently attracted a lot of research attentions . in this paper , we consider the physical - layer security in cooperative wireless networks with multiple decode - and - forward ( df ) relays and investigate the best relay selection in the presence of eavesdropping attack . for the comparison purpose , we also examine the conventional direct transmission without relay and traditional max - min relay selection . we derive closed - form intercept probability expressions of the direct transmission , traditional max - min relay selection , and proposed best relay selection schemes in rayleigh fading channels . numerical results show that the proposed best relay selection scheme strictly outperforms the traditional direct transmission and max - min relay selection schemes in terms of intercept probability . in addition , as the number of relays increases , the intercept probabilities of both traditional max - min relay selection and proposed best relay selection schemes decrease significantly , showing the advantage of exploiting multiple relays against eavesdropping attack . intercept probability , best relay selection , eavesdropping attack , physical - layer security , cooperative wireless networks .
bayesian inference using markov chain monte carlo simulation methods is used extensively in statistical applications . in this approach ,the parameters are generated from a proposal distribution , or several such proposal distributions , with the generated proposals accepted or rejected using the metropolis - hastings method ; see for example . in adaptive samplingthe parameters of the proposal distribution are tuned by using previous draws .our article deals with diminishing adaptation schemes , which means that the difference between successive proposals converges to zero . in practice, this usually means that the proposals themselves eventually do not change .important theoretical and practical contributions to diminishing adaptation sampling were made by , , , , , , and .the adaptive random walk metropolis method was proposed by with further contributions by , and . an adaptive independent metropolis - hastings method with a mixture of normals proposal which is estimated using a clustering algorithm .although there is now a body of theory justifying the use of adaptive sampling , the construction of interesting adaptive samplers and their empirical performance on real examples has received less attention .our article aims to fill this gap by introducing a -copula based proposal density .an antithetic version of this proposal is also studied and is shown to increase efficiency when the acceptance rate is above 70% . we also refine the adaptive metropolis - hastings proposal in by adding a heavy tailed component to allow the sampling scheme to traverse multiple modes more easily . as well as being of interest in its own right , in some of the examples we have also used this refined sampler to initialize the adaptive independent metropolis - hastings schemes . we study the performance of the above adaptive proposals , as well as the adaptive mixture of normals proposal of , for a number of models and priors using real data .the models and priors produce challenging but realistic posterior target distributions . is a longer version of our article that considers some alternative versions of our algorithms and includes more details and examples .suppose that is the target density from which we wish to generate a sample of observations , but that it is computationally difficult to do so directly .one way of generating the sample is to use the metropolis - hastings method , which is now described .suppose that given some initial we have generated the iterates .we generate from the proposal density which may also depend on some other value of which we call .let be the proposed value of generated from .then we take with probability and take otherwise .if does not depend on , then under appropriate regularity conditions we can show that the sequence of iterates converges to draws from the target density .see for details . in adaptive samplingthe parameters of are estimated from the iterates . under appropriate regularity conditions the sequence of iterates , converges to draws from the target distribution .see , and .we now describe the adaptive sampling schemes studied in the paper .the adaptive random walk metropolis proposal of is where is the dimension of and is a multivariate dimensional normal density in with mean and covariance matrix . 
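For concreteness, here is a minimal sketch of the Metropolis-Hastings update described above. Working with log densities and the particular function signatures are implementation choices of the sketch, not of the paper.

```python
# One Metropolis-Hastings step: propose from q (which may depend on the
# current state and on adapted parameters) and accept with the usual ratio.
import numpy as np

def mh_step(x, log_target, propose, log_q, rng):
    """x: current state (1-d array); log_target(x) = log pi(x);
    propose(rng, x) draws a candidate; log_q(x_to, x_from) = log q(x_to | x_from)."""
    x_prop = propose(rng, x)
    log_alpha = (log_target(x_prop) + log_q(x, x_prop)
                 - log_target(x) - log_q(x_prop, x))
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return x_prop, True
    return x, False

if __name__ == "__main__":
    # toy example: random-walk proposal targeting a standard normal
    rng = np.random.default_rng(0)
    log_target = lambda x: -0.5 * np.sum(x ** 2)
    propose = lambda r, x: x + 0.5 * r.normal(size=x.shape)
    log_q = lambda x_to, x_from: 0.0            # symmetric proposal: q cancels
    x, acc = np.zeros(2), 0
    for _ in range(5000):
        x, a = mh_step(x, log_target, propose, log_q, rng)
        acc += a
    print("acceptance rate %.2f" % (acc / 5000.0))
```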
in , for , with representing the initial iterations , for with ; is a constant covariance matrix , which is taken as the identity matrix by but can be based on the laplace approximation or some other estimate .the matrix is the sample covariance matrix of the first iterates .the scalar is meant to achieve a high acceptance rate by moving the sampler locally , while the scalar is considered to be optimal for a random walk proposal when the target is multivariate normal .we note that the acceptance probability for the adaptive random walk metropolis simplifies to we refine the two component random walk metropolis proposal in by adding a third component with and with .we take if , for and . alternatively, the third component can be a multivariate distribution with small degrees of freedom .we refer to this proposal as the three component adaptive random walk .the purpose of the heavier tailed third component is to allow the sampler to explore the state space more effectively by making it easier to leave local modes . to illustrate this issue we consider the performance of the two and three component adaptive random walk samplers when the target distribution is a two component and five dimensional multivariate mixture of normals .each component in the target has equal probability , the first component has mean vector and the second component has mean vector .both components have identity covariance matrices . for the three componentadaptive random walk we choose .the starting value is for both adaptive random walk samplers .figure [ fig : bimodal : normal ] compares the results and shows that the two component adaptive random walk fails to explore the posterior distribution even after 500 , 000 iterations , whereas the three component adaptive random walk can get out of the local modes .+ the proposal density of the adaptive independent metropolis - hastings approach of is a mixture with four terms of the form with the parameter vector for the density .the sampling scheme is run in two stages , which are described below . throughout each stage ,the parameters in the first two terms are kept fixed .the first term is an estimate of the target density and the second term is a heavy tailed version of .the third term is an estimate of the target that is updated or adapted as the simulation progresses and the fourth term is a heavy tailed version of the third term . in the first stage is a laplace approximation to the posterior if it is readily available and works well ; otherwise , is a gaussian density constructed from a preliminary run of 1000 iterates or more of the three component adaptive random walk . throughout , has the same component means and probabilities as , but its component covariance matrices are ten times those of .the term is a mixture of normals and is also a mixture of normals obtained by taking its component probabilities and means equal to those of , and its component covariance matrices equal to 20 times those of .the first stage begins by using and only with , for example , and , until there is a sufficiently large number of iterates to form .after that we set and .we begin with a single normal density for and as the simulation progresses we add more components up to a maximum of four according to a schedule that depends on the ratio of the number of accepted draws to the dimension of .see appendix [ s : sim details ] . 
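Returning to the random-walk proposals described above, the next sketch is an illustrative three-component adaptive random-walk Metropolis: a small local move, a component scaled by 2.38^2/d times the sample covariance of past iterates, and a heavy-tailed third component intended to leave local modes. The mixture weights, the 0.01/d local scale, the scaled t3 heavy-tail step and the toy bimodal target are assumptions of the sketch, not the constants or the example used in the text.

```python
# Illustrative three-component adaptive random-walk Metropolis.
import numpy as np

def adaptive_rw3(log_target, x0, n_iter=50000, n_init=1000,
                 weights=(0.2, 0.7, 0.1), seed=0):
    rng = np.random.default_rng(seed)
    d = len(x0)
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    chain = np.empty((n_iter, d))
    cov = np.eye(d)                               # used until enough draws exist
    accepted = 0
    for t in range(n_iter):
        if t >= n_init and t % 500 == 0:          # adapt: sample covariance of past iterates
            cov = np.cov(chain[:t].T) + 1e-9 * np.eye(d)
        u = rng.uniform()
        if u < weights[0]:                        # small local move
            step = rng.multivariate_normal(np.zeros(d), (0.01 / d) * np.eye(d))
        elif u < weights[0] + weights[1]:         # adapted component, 2.38^2/d scaling
            step = rng.multivariate_normal(np.zeros(d), (2.38 ** 2 / d) * cov)
        else:                                     # heavy-tailed component
            step = 5.0 * rng.standard_t(df=3, size=d)
        x_prop = x + step
        lp_prop = log_target(x_prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # symmetric proposal: q cancels
            x, lp = x_prop, lp_prop
            accepted += 1
        chain[t] = x
    return chain, accepted / n_iter

if __name__ == "__main__":
    # toy bimodal target: two unit-variance normal components in 2 dimensions
    m1, m2 = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
    def log_target(x):
        return np.logaddexp(-0.5 * np.sum((x - m1) ** 2),
                            -0.5 * np.sum((x - m2) ** 2)) + np.log(0.5)
    draws, acc = adaptive_rw3(log_target, np.zeros(2))
    print("acceptance rate %.2f" % acc)
    print("mass on each side of zero: %.2f / %.2f"
          % (np.mean(draws[:, 0] < 0), np.mean(draws[:, 0] >= 0)))
```

The heavy-tailed component is what should allow the chain to place mass near both modes of the toy target; dropping it reduces the sampler to a two-component random walk of the kind discussed above.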
in the second stage , is set to the value of at the end of the first stage and and are constructed as described above .the heavy - tailed densities and are included as a defensive strategy , as suggested by , to get out of local modes and to explore the sample space of the target distribution more effectively .it is too computationally expensive to update ( and hence ) at every iteration so we update them according to a schedule that depends on the problem and the size of the parameter vector . see appendix [ s : sim details ] .our article estimates the multivariate mixture of normals density for the third component using the method of who identify the marginals that are not symmetric and estimate their joint density by a mixture of normals using k - harmonic means clustering .we also estimated the third component using stochastic approximation but _ without _ first identifying the marginals that are not symmetric .we studied the performance of both approaches to fitting a mixture of normals and found that the clustering based approach was more robust in the sense that it does not require tuning for particular data sets to perform well , whereas for the more challenging target distributions it was necessary to tune the parameters of the stochastic approximation to obtain optimal results .the results for the stochastic approximation approach are reported in .we note that estimating a mixture of normals using the em algorithm also does not require tuning ; see , as well as for the online em. however , the em algorithm is more sensitive to starting values and is more prone to converge to degenerate solutions , particularly in an mcmc context where small clusters of identical observations arise naturally ; see for a discussion .the third proposal distribution is a mixture of a copula and a multivariate distribution .we use the copula as the major component of the mixture because it provides a flexible and fast method for estimating a multivariate density . in our applicationsthis means that we assume that after appropriately transforming each of the parameters , the joint posterior density is multivariate . or more accurately , such a proposal density provides a more accurate estimate of the posterior density than using a multivariate normal or multivariate proposal .the second component in the mixture is a multivariate distribution with low degrees of freedom whose purpose is to help the sampler move out of local modes and explore the parameter space more effectively . and provide an introduction to copulas , with details of the copula given in .let be a -dimensional density with location , scale matrix and degrees of freedom and let be the corresponding cumulative distribution function .let and be the probability density function and the cumulative distribution function of the marginal , with its parameter vector . then the density of the copula based distribution for is where is determined by , . 
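A minimal sketch of drawing from, and evaluating, a t-copula distribution with given marginals, which is the building block just described. For brevity the marginals here are plain normals, whereas the text fits them as mixtures of normals; the scale matrix R and the degrees of freedom used below are assumed values rather than the estimates described next, and scipy's multivariate_t and t distributions are used for the copula part.

```python
# Sketch of a t-copula with given marginals: sampling and log-density.
import numpy as np
from scipy import stats

def sample_t_copula(n, R, nu, marginal_ppfs, rng):
    """Draw n vectors whose dependence is a t-copula(R, nu) and whose i-th
    marginal has the given inverse cdf (ppf)."""
    mvt = stats.multivariate_t(loc=np.zeros(len(R)), shape=R, df=nu)
    z = mvt.rvs(n, random_state=rng)
    u = stats.t(df=nu).cdf(z)                      # uniform marginals
    return np.column_stack([ppf(u[:, i]) for i, ppf in enumerate(marginal_ppfs)])

def log_t_copula_density(x, R, nu, marginal_cdfs, marginal_logpdfs):
    """log density: t-copula term plus the marginal log pdfs."""
    u = np.column_stack([cdf(x[:, i]) for i, cdf in enumerate(marginal_cdfs)])
    z = stats.t(df=nu).ppf(u)
    mvt = stats.multivariate_t(loc=np.zeros(len(R)), shape=R, df=nu)
    log_c = mvt.logpdf(z) - stats.t(df=nu).logpdf(z).sum(axis=1)
    log_marg = np.column_stack(
        [lp(x[:, i]) for i, lp in enumerate(marginal_logpdfs)]).sum(axis=1)
    return log_c + log_marg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    R = np.array([[1.0, 0.6], [0.6, 1.0]])
    nu = 8.0
    marg = [stats.norm(0.0, 1.0), stats.norm(2.0, 0.5)]
    x = sample_t_copula(2000, R, nu, [m.ppf for m in marg], rng)
    lp = log_t_copula_density(x[:5], R, nu,
                              [m.cdf for m in marg], [m.logpdf for m in marg])
    print("sample correlation: %.2f" % np.corrcoef(x.T)[0, 1])
    print("first log densities:", np.round(lp, 3))
```

In the sampler itself, draws of this kind would be proposed and accepted with the usual Metropolis-Hastings ratio, with this log density entering the proposal term.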
for a given sample of observations , or in our case a sequence of draws , we fit the copula by first estimating each of the marginals as a mixture of normals .for the current degrees of freedom , , of the copula , we now transform each observation to using this produces a sample of dimensional observations which we use to estimate the scale matrix as the sample correlation matrix of the s .given the estimates of the marginal distributions and the scale matrix , we estimate the degrees of freedom by maximizing the profile likelihood function given by over a grid of values , with representing the gaussian copula .we could use instead the grid of values in , but this is more expensive computationally and we have found our current procedure works well in practice . the second component of the mixture is a multivariate distribution with its degrees of freedom fixed and small , and with its location and scale parameters estimated from the iterates using the first and second sample moments .the copula component of the mixture has a weight of 0.7 and the multivariate component a weight of 0.3 . to draw an observation from the copula , we first draw from the multivariate distribution with location 0 and scale matrix and degrees of freedom .we then use a newton - raphson root finding routine to obtain for a given from for .the details of the computation for the copula and the schedule for updating the proposals is given in appendix [ s : sim details ] .using antithetic variables in simulations often increases sampling efficiency ; e.g. . proposes using antithetic variables in markov chain monte carlo simulation when the target is symmetric .we apply antithetic variables to the copula based approach which generalizes tierney s suggestion by allowing for nonsymmetric marginals . to the best of our knowledge ,this has not been done before .the antithetic approach is implemented as follows .as above , we determine probabilistically whether to sample from the copula component or the multivariate component . if the copula component is chosen , then is generated as above and is also computed .the values and are then transformed to and respectively and are accepted or rejected one at a time using the metropolis - hastings method . if we decide to sample from the multivariate component to obtain , we also compute , where is the mean of the multivariate , and accept or reject each of these values one at a time using the metropolis - hastings method .we note that to satisfy the conditions for convergence in we would run the sampling scheme in two stages , with the first stage as above . in the second stage we would have a three component proposal , with the first two components the same as above .the third component would be fixed throughout the second stage and would be the second component density at the end of the first stage .the third component would have a small probability , e.g. 
0.05 .however , in our examples we have found it unnecessary to include such a third component as we achieve good performance without it .this section studies the five algorithms discussed in section [ section : adap : samp ] .the two component adaptive random walk metropolis ( rwm ) and the three component adaptive random walk metropolis ( rwm3c ) are described in section [ ss : arwm ] .the adaptive independent metropolis - hastings with a mixture of normals proposal distribution fitted by clustering ( imh - mn - cl ) described in section [ ss : aimh ] .the adaptive independent metropolis - hastings which is a mixture of a copula and a multivariate proposal , with the marginal distributions of the copula estimated by mixtures of normals that are fitted by clustering ( imh - tct - cl ) .the fifth sampler is the antithetic variable version of imh - tct - cl , which we call imh - tct - cl - a .these proposals are described in section [ ss : tct ] .our study compares the performance of the algorithms in terms of the acceptance rate of the metropolis - hastings method , the inefficiency factors ( if ) of the parameters , and an overall measure of effectiveness which compares the times taken by all samplers to obtain the same level of accuracy .we define the acceptance rate as the percentage of accepted values of each of the metropolis - hastings proposals .we define the inefficiency of the sampling scheme for a given parameter as the variance of the parameter estimate divided by its variance when the sampling scheme generates independent iterates .we estimate the inefficiency factor as where is the estimated autocorrelation at lag and the truncated kernel function if and 0 otherwise . as a rule of thumb ,the maximum number of lags is given by the lowest index such that with being the sample size use to compute .we define the equivalent sample size as , where , which can be interpreted as iterates of the dependent sampling scheme are equivalent to independent iterates .the acceptance rate and the inefficiency factor do not take into account the time taken by a sampler . to obtain an overall measure of the effectiveness of a sampler ,we define its equivalent computing time , where is the time per iteration of the sampler and .we interpret as the time taken by the sampler to attain the same accuracy as that attained by independent draws of the same sampler . for two samplers and , is the ratio of times taken by them to achieve the same accuracy .we note that the time per iteration for a given sampling algorithm depends to an important extent on how the algorithm is implemented , e.g. language used , whether operations are vectorized , which affects but not the acceptance rates nor the inefficiencies .this section applies the adaptive sampling schemes to the binary logistic regression model using three different priors for the vector of coefficients .the first is a non - informative multivariate normal prior , the second is a normal prior for the intercept , and a double exponential , or laplace prior , for all the other coefficients , the regression coefficients are assumed to be independent a priori .we note that this is the prior implicit in the lasso .the prior for is , where means an inverse gamma density with shape and scale .the double exponential prior has a spike at zero and heavier tails than the normal prior . 
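Stepping back to the performance measures defined earlier in this section, the following is a small sketch of the inefficiency factor: one plus twice the sum of estimated autocorrelations up to a truncation lag. The truncation rule used here, stopping at the first lag whose autocorrelation falls below 2/sqrt(K) with K the sample size, is a common rule of thumb assumed for the sketch, since the exact cutoff is not reproduced in the text.

```python
# Sketch of the inefficiency factor (IF) for a scalar chain of MCMC draws.
import numpy as np

def autocorrelations(x, max_lag):
    x = np.asarray(x, float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-k], x[k:]) / (len(x) * var)
                     for k in range(1, max_lag + 1)])

def inefficiency_factor(x, max_lag=None):
    k_max = max_lag or min(len(x) // 2, 1000)
    rho = autocorrelations(x, k_max)
    below = np.flatnonzero(np.abs(rho) < 2.0 / np.sqrt(len(x)))
    trunc = below[0] if below.size else k_max      # truncation lag
    return 1.0 + 2.0 * np.sum(rho[:trunc])

if __name__ == "__main__":
    # AR(1) chain with coefficient phi has a known IF of (1 + phi) / (1 - phi)
    rng = np.random.default_rng(0)
    phi, n = 0.8, 100000
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    print("estimated IF %.2f (theory %.2f)"
          % (inefficiency_factor(x), (1 + phi) / (1 - phi)))
```

The equivalent sample size is then the number of draws divided by this factor, and the equivalent computing time multiplies it by the time per iteration, as defined above.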
compared to their posterior distributions under a diffuse normal prior, this prior shrinks the posterior distribution of the coefficients close to zero to values even closer to zero , while the coefficients far from zero are almost unmodified . in the adaptive sampling schemes we work with rather than as it is unconstrained .the third prior distribution takes the prior for the intercept as and the prior for the coefficients as the two component mixture of normals , with the regression coefficients assumed a priori independent . using this prior for bayesian variable selection , with and small and large variances that are chosen by the user . in our article their valuesare given for each of the examples below .the prior for is uniform . in the adaptive sampling we work with the logit of because it is unconstrained .this section models the probability of labor force participation by women , , in 1975 as a function of the covariates listed in table [ t : labor force part ] .this data set is discussed by , p. 537 and has a sample size of 753 ..variables used in labor force participation data regression [ cols= " < , < " , ] [ t : pap truncated ]our article proposes a new copula based adaptive sampling scheme and a generalization of the two component adaptive random walk designed to explore the target space more efficiently than the proposal of .we studied the performance of these sampling schemes as well as the adaptive independent metropolis - hastings sampling scheme proposed by which is based on a mixture of normals .all the sampling schemes performed reliably on the examples studied in the article , but we found that the adaptive independent metropolis - hastings schemes had inefficiency factors that were often much lower and acceptance rates that were much higher than the adaptive random walk schemes .the copula based adaptive scheme often had the smallest inefficiency factors and highest acceptance rates .for acceptance rates over 70% the antithetic version of the copula based approach was the most efficient .our results suggest that the copula based proposal provides an attractive approach to adaptive sampling , especially for higher dimensions .however , the mixture of normals approach of also performed well and is useful for complicated and possibly multimodal posterior distributions .the research of robert kohn , ralph silva and xiuyan mun was partially supported by an arc discovery grant dp0667069 .we thank professor garry barret for the cps data and professor denzil fiebig for the pap smear data .all the computations were done on intel core 2 quad 2.6 ghz processors , with 4 gb ram ( 800mhz ) on a gnu / linux platform using matlab 2007b . however , in the tct algorithm we computed the univariate cumulative distribution functions and inverse cumulative distribution functions of the , normal and mixture of normals distributions using matlab mex files based on the corresponding matlab code .in addition , to speed up the computation , we tested each marginal for normality using the jarque - bera test at the 5% level .if normality was not rejected then we fitted a normal density to the marginal .otherwise , we estimated the marginal density by a mixture of normals . 
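A small sketch of the marginal-fitting rule just described: test each marginal for normality with the Jarque-Bera test at the 5% level, fit a single normal if normality is not rejected, and otherwise fit a mixture of normals. The text fits the mixture by k-harmonic-means clustering; the EM-based GaussianMixture from scikit-learn is used below only as a convenient stand-in.

```python
# Sketch of the per-marginal fitting rule: normal if JB does not reject, else mixture.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

def fit_marginal(draws, max_components=3):
    draws = np.asarray(draws, float)
    _, p_value = stats.jarque_bera(draws)
    if p_value > 0.05:                        # normality not rejected at the 5% level
        return ("normal", float(np.mean(draws)), float(np.std(draws)))
    gm = GaussianMixture(n_components=max_components, n_init=3, random_state=0)
    gm.fit(draws.reshape(-1, 1))
    return ("mixture", gm.weights_.ravel(), gm.means_.ravel(),
            np.sqrt(gm.covariances_.ravel()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    symmetric = rng.normal(size=5000)
    skewed = np.concatenate([rng.normal(-1, 0.5, 3500), rng.normal(2, 1.0, 1500)])
    print(fit_marginal(symmetric)[0])         # expected: "normal"
    print(fit_marginal(skewed)[0])            # expected: "mixture"
```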
in stage 1 of the adaptive sampling schemes imh - mn - cl and imh - mn - sa that use a multivariate mixture of normals , the number of components ( ) used in the third term of the mixtureis determined by the dimension of the parameter vector ( ) and the number of accepted draws ( ) to that stage of the simulation .in particular , if , if , if and if .we now give details of the number of iterations , burn - in and updating schedules for all the adaptive independent metropolis - hastings schemes in the paper .in addition , we update the proposal in stage 1 if in 100 successive iterations the acceptance rate is lower than 0.01 . *normal prior : end of first stage = 5 000 ; burn - in = 75 000 ; number of iterations : 100 000 ; updates = [ 50 , 100 , 150 , 200 , 300 , 500 , 700 , 1000 , 2000 , 5000 , 10000 , 20000 , 30000 , 50000 , 75000 ] . * double exponential prior : end of first stage = 5 000 ; burn - in = 100 000 ; number of iterations : 150 000 ; updates = [ 50 , 100 , 150 , 200 , 300 , 500 , 700 , 1000 , 2000 , 5000 , 10000 , 20000 , 30000 , 50000 , 75000 , 100000 ] . * mixture of normals prior : end of first stage = 100 000 ; burn - in = 300 000 ; number of iterations : 400 000 ; updates = [ 100 , 150 , 200 , 300 , 500 , 700 , 1000 , 2000 , 3000 , 5000 , 7500 , 10000 , 15000 , 20000 , 30000 , 50000 , 75000 , 100000 , 125000 , 150000 , 175000 , 200000 , 225000 , 250000 , 300000 ] .* normal and double exponential priors , quantiles 0.1 , 0.5 and 0.9 : end of first stage = 3 000 ; burn - in = 150 000 ; number of iterations : 200 000 ; updates = [ 100 , 150 , 200 , 300 , 500 , 700 , 1000 , 2000 , 3000 , 5000 , 7500 , 10000 , 15000 , 20000 , 30000 , 50000 , 75000 , 100000 , 150000 ] . * mixture of normals prior , quantiles 0.1 , 0.5 and 0.9 : end of first stage = 200 000 ; burn - in = 400 000 ; number of iterations : 500 000 ; updates = [ 100 , 150 , 200 , 300 , 500 , 700 , 1000 , 2000 , 3000 , 5000 , 7500 , 10000 , 15000 , 20000 , 30000 , 50000 , 75000 , 100000 , 125000 , 150000 , 175000 , 200000 , 225000 , 250000 , 275000 , 300000 , 325000 , 350000 , 375000 , 400000 ] .probit random effects model , pap smear data .for all three priors , end of first stage = 5 000 ; burn - in = 10 000 ; number of iterations : 20 000 ; updates = [ 20 , 50 , 100 , 150 , 200 , 300 , 400 , 500 , 600 , 700 , 800 , 900 , 1000 , 1100 , 1200 , 1300 , 1400 , 1500 , 2000 , 2500 , 3000 , 3500 , 4000 , 4500 , 5000 , 6000 , 7000 , 8000 , 9000 , 10000 , 12000 , 15000 ] . in this examplethe importance sampling density is updated every iterations .we now give the details of the sampling for both adaptive random walk metropolis algorithms .for the hmda data the number of burn - in iterations was 300 000 and the total number of iterations was 500 000 for all three priors .the corresponding numbers for the cps data with normal and double exponential priors are 500 000 and 1000 000 , and for the mixture of normals prior 1000 000 and 1500 000 .the corresponding numbers for the pap smear data are 30 000 and 50 000 .donald , s.g . ,green , d.a . andpaarsch , h.j .differences in wage distributions between canada and the united states : an application of a flexible estimator of distribution functions in the presence of covariates ._ review of economic studies _ , 67(4 ) , 609 - 633 .
our article is concerned with adaptive sampling schemes for bayesian inference that update the proposal densities using previous iterates . we introduce a copula based proposal density which is made more efficient by combining it with antithetic variable sampling . we compare the copula based proposal to an adaptive proposal density based on a multivariate mixture of normals and an adaptive random walk metropolis proposal . we also introduce a refinement of the random walk proposal which performs better for multimodal target distributions . we compare the sampling schemes using challenging but realistic models and priors applied to real data examples . the results show that for the examples studied , the adaptive independent metropolis - hastings proposals are much more efficient than the adaptive random walk proposals and that in general the copula based proposal has the best acceptance rates and lowest inefficiencies . * keywords * : antithetic variables ; clustering ; metropolis - hastings ; mixture of normals ; random effects .
selection , random genetic drift and mutations are the processes underlying darwinian evolution . for a long timepopulation geneticists have analyzed the dynamics in the simplest setting consisting of two genotypes evolving under these processes . in those studies, a genotype represents an individual s genetic makeup , completely determining all relevant properties of the individual .a key concept is the so - called fitness of a genotype which represents the selection pressure for the individuals .the fitness defines the expected number of offspring an individual will produce . thus , selection acts on fitness differences preferring individuals with higher fitness over individuals with lower fitness .usually it is assumed that individuals have fixed fitnesses defined by their genotype alone . yet, experimental studies have revealed that many natural systems exhibit frequency - dependent selection , which means that an individual s fitness not only depends on its genotype , but also on its interactions with other individuals and hence on the frequency of the different genotypes in the population .although such frequency - dependent selection had already been studied early by crow and kimura , only recently has it received more attention . in these theoretical and computational studies ,individuals interactions are represented by interaction matrices from game theory .this leads to a frequency dependence where the fitness depends directly on the interaction parameters in a linear way .however , fitness may depend on many diverse factors such as cooperation ( i.e. individuals acting together to increase their fitness ) and resource competition , so that certain systems may exhibit frequency - dependent fitness that is nonlinear .for example , in experiments certain hermaphrodites exhibit such nonlinear fitness - dependence . to the best of our knowledgethe impact of such nonlinear dependencies on coevolutionary dynamics has not been investigated theoretically . in this articlewe show that nonlinear frequency dependence may induce new stable configurations of the evolutionary dynamics .furthermore , we study the impact of asymmetric mutation probabilities on the dynamics , which was also neglected in most models until now . as in previous works on coevolutionary dynamics we base our work on the moran process in a non - spatial environment which is a well established model to study evolutionary dynamics andwas already used in many applications .the moran process is a stochastic birth - death process which keeps the population size constant .therefore , in a two - genotype model the system becomes effectively one - dimensional , so that the dynamics may be described by a one - dimensional markov chain with transition rates defined by the moran process .we derive the stationary probability distribution of the system dynamics via its fokker - planck equation .sharp maxima of the distribution reveal metastable points of the dynamics and a multitude of such maxima lead to stochastic switching dynamics between multiple stable points .the article is structured as follows . in sectionii we introduce the model details and in section iii we derive the fokker - planck equation describing the probabilistic dynamics of the population .using this equation we derive the stationary probability distribution that describes the long - time behavior of the system . 
in sectioniv we analyze this probability distribution , which yields information about the impact of nonlinear frequency - dependent selection and of different mutation probabilities on the coevolutionary dynamics . in sectionv we give a summary and discuss our results .consider a population of individuals evolving in a non - spatial environment , meaning that the population is well - mixed so that each individual interacts with all other individuals at all times . in this populationthe individuals may assume one out of two genotypes and .the population sizes and ( ) evolve according to the time - continuous moran process described in the following , cf .the number of individuals of genotype completely determines the state of the system as . at all timesthe interactions of the individuals determine the actual ( frequency - dependent ) fitness , so that an individual s fitness of genotype or is defined by a fitness function or respectively .the fitness functions and may be any functions with the only condition that for all ] which can take various shapes .what are the possible shapes for the stationary distribution in the form given by equation ( [ eq : generalsolution ] ) ?until now , the term representing the asymmetric mutation probabilities has not been considered to our knowledge and the interaction functions and in the selection term were considered to be at most linear in .we should therefore be interested in the effects of nonlinear interaction functions and asymmetric mutation rates .let us first analyze the dynamics for nonlinear interaction functions describing the effects of cooperation and limited resources as described by equation ( [ eq : fitnessfunction ] ) , so that already with such interaction functions induce dynamics stochastically switching between three metastable points .an example is shown in figure [ fig : firstexample ] , where the theoretically calculated stationary distribution from equation ( [ eq : generalsolution ] ) is shown together with data from simulations with a population of individuals which is enough to obtain almost perfect fitting ( cf . also figure [ fig : nplot ] ) .the fitness functions ( figure [ fig : firstexample]b ) both show at first an increase on increasing the number of individuals of genotype or from due to cooperation effects and then a strong decrease due to resource competion .as the resulting fitness functions are asymmetric , also the stationary distribution is asymmetric .there is a maximum at due to genetic drift .selection drives the dynamics towards a metastable state at , because for genotype is fitter than thus increasing in frequency and for genotype is less fit than thus decreasing in frequency ( cf .figure [ fig : firstexample]b ) .the maximum of the stationary distribution is thus exactly at the point where . for genotype is fitter than and thus the dynamics are driven towards by selection as well as genetic drift .the mutational force induced by increases the height of the maximum at driving the system away from the maxima at and .so , in this example genetic drift , mutation and selection all significantly influence the dynamics . 
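A discrete-time simulation sketch of the kind of dynamics just described, with nonlinear frequency-dependent fitness and asymmetric mutation probabilities. The quadratic fitness functions (a cooperation term that first raises fitness, then a competition term that lowers it) and all parameter values are illustrative assumptions, not the interaction functions or parameters of the example above, and the continuous-time Moran rates are replaced by their discrete-time counterparts.

```python
# Discrete-time Moran sketch with frequency-dependent fitness and asymmetric mutation.
import numpy as np

def moran_run(n=100, steps=1_000_000, mu_ab=0.01, mu_ba=0.02, seed=0):
    rng = np.random.default_rng(seed)

    # fitness of a genotype-A (resp. B) individual when a fraction x carries A
    f_a = lambda x: 1.0 + 2.0 * x - 2.5 * x ** 2
    f_b = lambda x: 1.0 + 1.5 * (1 - x) - 2.0 * (1 - x) ** 2

    k = n // 2                                   # current number of A individuals
    visits = np.zeros(n + 1)
    for _ in range(steps):
        x = k / n
        wa, wb = k * f_a(x), (n - k) * f_b(x)    # total reproductive weights
        parent_is_a = rng.uniform() < wa / (wa + wb)
        # offspring genotype after (possibly asymmetric) mutation
        if parent_is_a:
            offspring_is_a = rng.uniform() >= mu_ab
        else:
            offspring_is_a = rng.uniform() < mu_ba
        dies_is_a = rng.uniform() < x            # uniformly chosen individual dies
        k += int(offspring_is_a) - int(dies_is_a)
        visits[k] += 1
    return visits / visits.sum()

if __name__ == "__main__":
    dist = moran_run()
    peaks = sorted(np.argsort(dist)[-3:].tolist())
    print("most-visited states (number of A individuals):", peaks)
```

The histogram of visited states plays the role of the stationary distribution; with these illustrative parameters an interior, selection-driven peak should appear where the two fitness functions cross, alongside whatever mass drift and mutation place near the boundaries.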
)( red , solid ) in perfect agreement with data from simulations with ( blue , ) .( b ) shows the fitness functions of genotype ( blue , solid ) and ( red , dashed ) .interaction parameters are , , and ( see equation ( [ eq : fofk ] ) and ( [ eq : interactions ] ) ) while the mutation rate is ( ).,width=566 ] let us now study the influence of the mutation rates in detail .interestingly , for asymmetric mutation rates ( ) the factor always diverges in the interval ] has a maximum .theoretically and can be any function with an arbitrary amount of extreme points in $ ] , so that there is no limit to the stationary distribution s number of maxima , if selection dominates the dynamics .however , for finite the number of possible maxima is naturally limited by .figure [ fig : multiplestability ] shows an example , where we used periodic interaction functions although this is not a realistic interaction function in most applications , it demonstrates what is theoretically possible in the introduced system . )( red , solid ) together with simulation data with ( blue , ) for the interaction functions given in ( [ eq : perfitfunc ] ) .the computed stationary distribution diverges for and while the simulation data remains finite due to the finite number of individuals .( b ) shows a sample path which exhibits switching between the different maxima of the distribution .parameters are , , and .,width=566 ]in this article we analyzed a two - genotype system in a very general setting with ( possibly ) asymmetric mutation probabilities and nonlinear fitness functions in finite populations .the underlying moran process is a well established model to gain an understanding of the interplay of selection , mutation and genetic drift in evolutionary dynamics .however , the moran process is studied mostly with symmetric mutation probabilities and at most linear interaction functions .we reasoned that neither need mutation probabilities be symmetric as experiments have shown , that mutation probabilities are often asymmetric nor can all interaction effects be described by linear interaction functions .for example cooperation in game theory leads to an interaction function increasing linearly in the frequency of the cooperating genotype . yet , in many applications also cooperators in the end compete for the same type of resource which is limited .therefore , due to limited resources a population being too large can not be sustained leading to a decrease in the fitness .there is no linear function that can reflect both of these effects at the same time .we derived the fokker - planck equation describing the dynamics of the number of individuals of genotype in the limit of large population sizes .we quantified the quality of the fokker - planck approach for an example ( cf . figure 4 ) where the difference of simulation data and theoretical solution became almost not detectable for population sizes larger than .actually , if the system exhibits absorbing states then the fokker - planck method does not work to study the corresponding quasi - stationary distributions .instead , wkb methods are more appropriate to describe the system dynamics as for example in , where fixation resulting from large fluctuations was studied . 
in our model system no such absorbing states exist , as long as the mutation rates are positive ( ) , and therefore the fokker - planck equation is appropriate to describe the system dynamics . we identified the individual effects of selection , mean mutation rate and mutation difference as well as genetic drift and derived the stationary probability distribution as determined by the fokker - planck equation . analyzing the distribution , we found that asymmetries in the mutation probabilities may not only shift existing stable points of the dynamics to new positions , but also lead to the emergence of new stable points . thus , a genotype that has a selective disadvantage can nevertheless have a stable dynamical state where its individuals dominate the population , due to a higher mutational stability ( see figure [ fig : asymmetricexample ] ) . further , we found that dynamic fitness leads to multiple stable points of the dynamics induced by selection and also genetic drift . we showed an example ( figure [ fig : firstexample ] ) where three stable points exist , two caused by selection and one by genetic drift . we conclude that frequency - dependent fitness together with asymmetric mutation rates induces complex evolutionary dynamics , in particular if the interactions imply nonlinear fitness functions . theoretically , there is no limit to the number of stable points that the dynamics can exhibit ( see figure [ fig : multiplestability ] ) . all in all , we interpret our results as indicating that in real biological systems multiple metastable equilibria may exist whenever species interact in a way complex enough to imply a fitness that depends nonlinearly on frequency . as a consequence , one species may exhibit a certain frequency for a long time before a sudden shift occurs and a new frequency prevails . such a change may thus occur even in the absence of changes in the environment ; it may equally be induced by a stochastic switch from one metastable state to another due to complex inter - species interactions . the moran process is a standard tool to gain theoretical insights into experimental data . of course , we do not propose here that any experimental setup is exactly described by the moran process . however , we think it should be feasible to develop an experiment where two different mutants evolve with asymmetric mutation rates . finding an experimental setup where the two genotypes also exhibit nonlinear fitness could , however , prove more difficult . rather , ours is a theoretical study indicating that nonlinear fitness may be the cause of multiple stable states when these are observed in experimental data . for further studies on systems with more genotypes it should be useful to combine the considerations presented here with the work of traulsen et al . , where an analysis of systems with more than two genotypes was carried out . extending those results , it may be possible to gain a better understanding of the effects of nonlinear interactions for many different genotypes .
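As a numerical complement to the discussion of multiple stable points above, and to the detailed-balance derivation sketched in the appendix below, the stationary distribution of the finite-N nearest-neighbour chain can be computed directly from pi(k+1)/pi(k) = T+(k)/T-(k+1). The fitness functions and parameters below are the same illustrative assumptions used in the earlier simulation sketch, not the ones behind the paper's figures.

```python
# Stationary distribution of the Moran-type birth-death chain via detailed balance.
import numpy as np

def stationary_distribution(n=100, mu_ab=0.01, mu_ba=0.02):
    # illustrative nonlinear frequency-dependent fitness functions (assumed)
    f_a = lambda x: 1.0 + 2.0 * x - 2.5 * x ** 2
    f_b = lambda x: 1.0 + 1.5 * (1 - x) - 2.0 * (1 - x) ** 2

    def rates(k):
        """Probabilities of k -> k+1 and k -> k-1 in one birth-death step."""
        x = k / n
        wa, wb = k * f_a(x), (n - k) * f_b(x)
        p_off_a = (wa * (1 - mu_ab) + wb * mu_ba) / (wa + wb)  # offspring is type A
        t_plus = p_off_a * (n - k) / n                         # A born, B dies
        t_minus = (1 - p_off_a) * k / n                        # B born, A dies
        return t_plus, t_minus

    log_pi = np.zeros(n + 1)
    for k in range(n):
        t_plus, _ = rates(k)
        _, t_minus = rates(k + 1)
        log_pi[k + 1] = log_pi[k] + np.log(t_plus) - np.log(t_minus)
    pi = np.exp(log_pi - log_pi.max())
    return pi / pi.sum()

if __name__ == "__main__":
    pi = stationary_distribution()
    interior = np.arange(1, len(pi) - 1)
    peaks = [int(k) for k in interior if pi[k] >= pi[k - 1] and pi[k] >= pi[k + 1]]
    print("boundary probabilities: %.4f %.4f" % (pi[0], pi[-1]))
    print("interior local maxima at k =", peaks)
```

Scanning the resulting distribution for local maxima gives the metastable points directly, without simulation, and reproduces the kind of multi-peaked shapes discussed above for suitable fitness functions.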
also , it may be interesting to study the effects of changing interactions , where the interactions change according to the system dynamics .thus , our study might serve as a promising starting point to investigate how nonlinear frequency dependencies impact evolutionary dynamics in complex environments .* acknowledgements * we thank steven strogatz for fruitful discussions during project initiation and stefan eule for helpful technical discussions .stefan grosskinsky acknowledges support by epsrc , grant no .ep / e501311/1 .the master equation ( [ eq : meq ] ) may be transformed to a fokker - planck equation in the limit of large .we introduce the transformation together with the rescaled functions we fix the scaling functions , and such that in the limit all terms in equation ( [ eq : meq ] ) remain finite so that mutation , selection and genetic drift all act on the same scale .we further define and substituting all this into the master equation ( [ eq : meq ] ) , we obtain\rho({x_{-}},s)\right.\nonumber \\ & & \left[(1-\mu_{ba})(1+g_{b}({x_{+}}))(1-{x_{+}}){x_{+}}+\mu_{ab}(1+g_{a}({x_{+}})){x_{+}}^{2}\right]\rho({x_{+}},s)\nonumber \\ & & -\left[(1-\mu_{ab})(1+g_{a}(x))x(1-x)+\mu_{ba}(1+g_{b}(x))(1-x)^{2}\right.\nonumber \\ & & + \left.\left.(1-\mu_{ba})(1+g_{b}(x))x(1-x)+\mu_{ab}(1+g_{a}(x))x^{2}\right]\rho(x , s)\right\ } \label{eq : app1}\end{aligned}\ ] ] we choose and so that in the limit the terms stay finite .further we introduce the mean mutation rate and the mutation rate difference . to not overload the notationwe drop the time argument of in the following calculation .this leads to \right\ } \\ & & + n\left\ { { \tilde{g}_a}({x_{-}}){x_{-}}(1-{x_{-}})\rho({x_{-}})-{\tilde{g}_a}(x)x(1-x)\rho(x)+{\tilde{g}_b}({x_{+}}){x_{+}}(1-{x_{+}})\rho({x_{+}})-{\tilde{g}_b}(x)x(1-x)\rho(x)\right.\\ & & + \frac{{\tilde{\mu}}}{2}\left[(1 - 2{x_{-}})\rho({x_{-}})-(1 - 2{x_{+}})\rho({x_{+}})\right]\\ & & + \delta{\tilde{\mu}}\left[{x_{+}}\rho({x_{+}})-x\rho(x)-(1-{x_{-}})\rho({x_{-}})+(1-x)\rho(x)\right]\\ & & + \frac{{\tilde{\mu}}}{n}\left[\left({\tilde{g}_a}({x_{+}}){x_{+}}^{2}+{\tilde{g}_b}({x_{+}})({x_{+}}^{2}-{x_{+}})\right)\rho({x_{+}})-\left({\tilde{g}_a}(x)x^{2}+{\tilde{g}_b}(x)(x^{2}-x)\right)\rho(x)\right.\\ & & + \left.\left({\tilde{g}_a}({x_{-}})({x_{-}}^{2}-{x_{-}})+{\tilde{g}_b}({x_{-}})(1-{x_{-}})^{2}\right)\rho({x_{-}})-\left({\tilde{g}_a}(x)(x^{2}-x)+{\tilde{g}_b}(x)(1-x)^{2}\right)\rho(x)\right]\\ & & + \frac{\delta{\tilde{\mu}}}{n}\left[\left({\tilde{g}_a}({x_{+}}){x_{+}}^{2}-{\tilde{g}_b}({x_{+}})({x_{+}}^{2}-{x_{+}})\right)\rho({x_{+}})-\left({\tilde{g}_a}(x)x^{2}-{\tilde{g}_b}(x)(x^{2}-x)\right)\rho(x)\right.\\ & & + \left.\left.\left({\tilde{g}_a}({x_{-}})({x_{-}}^{2}-{x_{-}})-{\tilde{g}_b}({x_{-}})(1-{x_{-}})^{2}\right)\rho({x_{-}})-\left({\tilde{g}_a}(x)(x^{2}-x)-{\tilde{g}_b}(x)(1-x)^{2}\right)\rho(x)\right]\right\ } \end{aligned}\ ] ] in the limit the different terms with in front become second order derivatives with respect to , while the other terms become first order derivatives .the terms which have a factor vanish in the limit and thus the above equation becomes (1-x)\rho(x)\right)-\delta{\tilde{\mu}}\rho(x)\right]+\frac{\partial^{2}}{\partial x^{2}}\left[x(1-x)\rho(x)\right]\label{eq : appfpe}\ ] ] which is the focker - planck - equation of the system .we directly derive the stationary solution of the master equation ( [ eq : meq ] ) using the detailed balance equation which applies to any chain with only nearest neighbour transitions , cf .also .thus , using 
the rewritten balance equation iteratively, we obtain the stationary weights up to a constant factor. finally, we may use the normalization condition of the stationary distribution to eliminate this factor. we then obtain the exact stationary solution of the master equation, which can be evaluated numerically for any transition rates and . for more details on the exact solution of the master equation in the moran process see, for example, the work by claussen and traulsen.
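as an illustration of how this exact stationary solution can be evaluated in practice, the following minimal python sketch iterates the detailed balance relation for a two-genotype moran process with asymmetric mutation and frequency-dependent fitness. the transition rates t_plus and t_minus below are written in the standard moran form with mutation at reproduction and are meant as an illustration rather than a transcription of equation ([eq:meq]); the fitness functions and all parameter values are purely illustrative assumptions.

```python
import numpy as np

# illustrative parameters (assumptions, not taken from the article)
N = 100                       # population size
mu_ab, mu_ba = 0.02, 0.08     # asymmetric mutation probabilities a->b and b->a

def g_a(x):                   # frequency-dependent fitness of genotype a (illustrative, nonlinear)
    return 0.5 * x - 0.6 * x**2

def g_b(x):                   # frequency-dependent fitness of genotype b (illustrative)
    return 0.1 * (1.0 - x)

def t_plus(k):
    """rate of k -> k+1: an a-offspring is produced and replaces a b-individual."""
    x = k / N
    birth_a = (1 - mu_ab) * (1 + g_a(x)) * x + mu_ba * (1 + g_b(x)) * (1 - x)
    return birth_a * (1 - x)

def t_minus(k):
    """rate of k -> k-1: a b-offspring is produced and replaces an a-individual."""
    x = k / N
    birth_b = (1 - mu_ba) * (1 + g_b(x)) * (1 - x) + mu_ab * (1 + g_a(x)) * x
    return birth_b * x

# detailed balance for nearest-neighbour chains: p(k) * t_plus(k) = p(k+1) * t_minus(k+1)
p = np.empty(N + 1)
p[0] = 1.0
for k in range(N):
    p[k + 1] = p[k] * t_plus(k) / t_minus(k + 1)
p /= p.sum()                  # normalization eliminates the free factor

print("global maximum of the stationary distribution at k =", int(np.argmax(p)))
print("interior local maxima at k =",
      [k for k in range(1, N) if p[k] > p[k - 1] and p[k] > p[k + 1]])
```

with positive mutation rates both transition rates are strictly positive, so the chain has no absorbing states and the normalized product of rate ratios is the unique stationary distribution; scanning it for local maxima is exactly how the multiple (meta)stable states discussed above can be located numerically.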
evolution is simultaneously driven by a number of processes such as mutation , competition and random sampling . understanding which of these processes is dominating the collective evolutionary dynamics in dependence on system properties is a fundamental aim of theoretical research . recent works quantitatively studied coevolutionary dynamics of competing species with a focus on linearly frequency - dependent interactions , derived from a game - theoretic viewpoint . however , several aspects of evolutionary dynamics , e.g. limited resources , may induce effectively nonlinear frequency dependencies . here we study the impact of nonlinear frequency dependence on evolutionary dynamics in a model class that covers linear frequency dependence as a special case . we focus on the simplest non - trivial setting of two genotypes and analyze the co - action of nonlinear frequency dependence with asymmetric mutation rates . we find that their co - action may induce novel metastable states as well as stochastic switching dynamics between them . our results reveal how the different mechanisms of mutation , selection and genetic drift contribute to the dynamics and the emergence of metastable states , suggesting that multistability is a generic feature in systems with frequency - dependent fitness . * keywords : * population dynamics ; dynamic fitness ; stochastic switching ; multistability
life is intimately related to movement on many different time and length scales , from molecular movements to the motility of cells and organisms .one type of movement which is ubiquitous on the molecular and cellular scale , although not specific to the organic world , is brownian motion or passive diffusion : biomolecules , vesicles , organelles , and other subcellular particles constantly undergo random movements due to thermal fluctuations. within cells , these random movements depend strongly on the size of the diffusing particles , because the effective viscosity of the cytoplasm increases with increasing particle size. while proteins typically diffuse through cytoplasm with diffusion coefficients in the range of /s to tens of /s and therefore explore the volume of a cell within a few minutes to several tens of minutes ( for a typical cell size of a few tens of microns ) , a 100 nm sized organelle typically has a diffusion coefficient of /s within the cell, and would need days to diffuse over the length of the cell . for fast and efficient transport of large cargoes , cells therefore use active transport based on the movements of molecular motors along cytoskeletal filaments. these molecular motors convert the chemical free energy released from the hydrolysis of atp ( adenosinetriphosphate ) into directed motion and into mechanical work .they move in a directed stepwise fashion along the linear tracks provided by the cytoskeletal filaments .there are three large families of cytoskeletal motors , kinesins and dyneins which move along microtubules , and myosins which move along actin filaments .the filaments have polar structures and encode the direction of motion for the motors . a specific motor steps predominantly in one direction , the forward direction of that motor .backward steps are usually rare as long as the motor movement is not opposed by a large force .motor velocities are typically of the order of 1 m/s , which allows a motor - driven cargo to move over typical intracellular distances in a few seconds to a few minutes . on the other hand ,the force generated by a motor molecule is of the order of a few pn , which is comparable or larger than estimates for the viscous force experienced by typical ( 100 nm sized ) motor - driven cargoes in the cytoplasm .a large part of our present knowledge about the functioning of molecular motors is based on _ in vitro _experiments which have provided detailed information about the molecular mechanisms of the motors and which have allowed for systematic measurements of their transport properties. in order to obtain such detailed information , the overwhelming majority of these experiments has addressed the behavior of single motor molecules . within cells ,however , transport is often accomplished by the cooperation of several motors rather than by a single motor as observed by electron microscopy and by force measurements and the analysis of cargo particle trajectories _ in vivo_. in order to understand the cargo transport in cells , it is therefore necessary to go beyond the single molecule level and to address how several motors act together in a team , in particular in cases where the cooperation of different types of motors is required such as bidirectional cargo transport .the latter situation , i.e. 
the presence of different types of motors bound to one cargo particle is rather common and has been observed for kinesins and dyneins, kinesins and myosins, as well as for different members of the kinesin family and even for members of all three motor families. in this article, we review our recent theoretical analysis of the cooperation of several motors pulling one cargo. we emphasize the ability of transport driven by several motors to deal with high viscosities and present an extended discussion of the case where a strong viscous force opposes the movement of the cargo particle. we also discuss how diffusion can be enhanced by motor-driven active transport and conclude with some remarks on the regulation of active transport. [figure caption: cargo pulled by motors along a cytoskeletal filament; the number of motors which actually pull the cargo changes in a stochastic fashion due to the binding and unbinding of motors to and from the filament.] to study the cooperation of several molecular motors theoretically, we have recently introduced a model which describes the stochastic binding and unbinding of motors and filaments as well as the movements of the cargo particle to which these motors are attached. the state of the cargo particle is described by the number of motors bound to the filament. as shown in fig. [f1], this number changes stochastically between 0 and , the total number of motors bound to the cargo, since motors bind to and unbind from the filament. the model is therefore defined by a set of rates and which describe the unbinding and binding of a motor, respectively, and which depend on the number of bound motors, and by a set of velocities with which the cargo particle moves when pulled by motors. in the simplest case, the motors bind to and unbind from the filament independently of each other. in that case, the binding and unbinding rates are given by with the single motor unbinding and binding rates and , respectively. for non-interacting motors, the cargo velocity is independent of the number of pulling motors and given by the single motor velocity, , as shown both by microtubule gliding assays and by bead assays for kinesin motors. for this case we have obtained a number of analytical results. in particular, the model indicates a strong increase of the average run length, i.e., the distance a cargo particle moves along a filament before it unbinds from it. for motors which bind strongly to the filament, so that , the average run length is given by and essentially increases exponentially with increasing number of motors. using the single molecule parameters for conventional kinesin (kinesin 1), we have estimated that run lengths in the centimeter range are obtained if cargoes are pulled by 7 - 8 motors. as these long run lengths exceed the length of a microtubule (typically a few tens of microns), they can however only be realized if microtubules are aligned in a parallel and isopolar fashion and if cargoes can step from one microtubule to another, as observed _ in vitro _ using aligned microtubules. the increase of cargo run lengths with increasing number of motors has been observed in several _ in vitro _ experiments, although it has been difficult to determine the number of motors pulling the cargo in these experiments. one method to determine the motor number is to use a combination of dynamic light scattering measurements and comparison of measured run length distributions with theoretical predictions.
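to make the scaling of the run length with motor number concrete, here is a minimal python sketch of the binding/unbinding dynamics for non-interacting motors. it uses the rates for independently binding motors, epsilon_n = n * epsilon and pi_n = (N - n) * pi, which is the standard choice for non-interacting motors (the explicit formula is not reproduced in the text above); the single-motor parameter values are illustrative assumptions, not fitted numbers from the article.

```python
import random

# illustrative single-motor parameters (assumptions)
eps = 1.0      # unbinding rate per bound motor, 1/s
pi  = 2.0      # binding rate per unbound motor, 1/s
v   = 1.0      # cargo velocity, um/s (independent of n for non-interacting motors)

def mean_run_length(N, runs=2000):
    """average distance travelled (um) before all N motors have unbound,
    simulated with the gillespie algorithm for the chain n -> n +/- 1."""
    total = 0.0
    for _ in range(runs):
        n, length = 1, 0.0          # a run starts when the first motor binds
        while n > 0:
            rate_unbind = n * eps
            rate_bind = (N - n) * pi
            rate_total = rate_unbind + rate_bind
            dt = random.expovariate(rate_total)
            length += v * dt        # cargo moves at v while at least one motor is bound
            if random.random() < rate_unbind / rate_total:
                n -= 1
            else:
                n += 1
        total += length
    return total / runs

for N in (1, 2, 3, 4, 5):
    print(N, "motors:", round(mean_run_length(N), 1), "um")
```

with these illustrative rates the simulated mean run length grows rapidly with the total motor number, reproducing the qualitative trend described above for strongly binding motors.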
if the cargo is pulled against an opposing force , this force is shared among the bound motors , so that each bound motor experiences the force . under the influence of an external force ,the single motor velocity decreases approximately linearly , , and the unbinding rate increases exponentially , as obtained from optical tweezers experiments. the two force scales are the stall force and the detachment force .for a cargo pulled by several motors , the velocities and unbinding rates in the different binding states are then given by since the velocity now depends on the number of bound motors , the velocity of the cargo changes every time a motor unbinds or an additional motor binds to the filament .the trajectory of the cargo therefore consists of linear segments with constant velocity , and the distribution of the instantaneous velocities has several peaks which become more and more distinct if the force is increased .in addition , the sharing of the force induces a coupling between the motors which leads to cascades of unbinding events , since the unbinding of one motor increases the force and , thus , the unbinding rate for the remaining bound motors .such unbinding cascades occur also in many other biophysical systems which have a similar unbinding dynamics , in particular they have been studied extensively for the forced unbinding of clusters of adhesion molecules. for the motors , the most important consequence of this type of force - induced coupling of the motors is that an increase in force not only slows down the motors , but also decreases the number of bound motors .therefore , the force - velocity relation given by the average velocity as a function of the load force is a nonlinear relation for cargoes pulled by several motors , although it is approximately linear for a single motor. rather than being imposed by an optical laser trap or other force fields that can be directly controlled _ in vitro _ , an opposing force can also arise from other motors which pull the cargo into the opposite direction .the presence of two types of motors which move into opposite directions bound to the same cargo is commonly found in cells and is required for bidirectional transport in essentially unidirectional systems of filaments as they are typical for the microtubule cytoskeleton . in general , the two types of motors interact both mechanically by pulling on each other _ and _ via biochemical signals or regulatory molecules .if there are only mechanical interactions our model predicts a tug - of - war - like instability : if the motors pull on each other sufficiently strongly , one species will win , and the cargo performs fast directed motion rather than being stalled by the pulling of motors in both directions . since the number of motors pulling the cargo is typically small , the direction of motion will however be reversed from time to time with a reversal frequency which decreases as the motor numbers are increased .one universal force that is always experienced by molecular motors is the viscous drag caused by the medium through which the cargo is pulled . 
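before turning to the viscous drag, a small numerical sketch may help to see how this force sharing works out. it assumes the load-dependent single-motor relations quoted above in a common explicit form, namely a linearly decreasing velocity v(F) = v (1 - F/F_s) and an exponentially increasing unbinding rate eps(F) = eps exp(F/F_d), with the load divided equally among the n bound motors; the parameter values are again illustrative assumptions.

```python
import math

# illustrative single-motor parameters (assumptions)
v0, eps0 = 1.0, 1.0     # unloaded velocity (um/s) and unbinding rate (1/s)
F_s, F_d = 6.0, 3.0     # stall force and detachment force (pN)

def cargo_velocity(n, F):
    """velocity of a cargo pulled by n bound motors against a load F (pN),
    assuming the load is shared equally: each motor feels F/n."""
    return max(0.0, v0 * (1.0 - F / (n * F_s)))

def cargo_unbinding_rate(n, F):
    """total unbinding rate of the n-motor state under the shared load F."""
    return n * eps0 * math.exp(F / (n * F_d))

F = 5.0   # pN, e.g. an opposing trap force or a large viscous load
for n in (1, 2, 3, 4):
    print(n, "motors: v =", round(cargo_velocity(n, F), 2), "um/s,",
          "unbinding rate =", round(cargo_unbinding_rate(n, F), 2), "1/s")
```

the printed table illustrates the two effects discussed above: adding motors both restores the velocity towards its unloaded value and reduces the per-motor unbinding rate, whereas the loss of one motor raises the load on the remaining ones and thereby promotes unbinding cascades.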
in water or aqueous solutions , however , the viscous drag of the cargo is usually negligible since it corresponds to a force of only a small fraction of the motor stall force .for example , a bead with diameter 1 m which moves at 1 m/s through water experiences a viscous force of 0.02 pn which is tiny compared to a motor stall force of a few pn .therefore , _ in vitro _ experiments are hardly affected by the viscosity of the solution , and changes in motor number do not lead to a change of the cargo velocity unless the viscosity is increased to times that of water. in highly viscous environments , this is different : if the viscous drag force is of the same order of magnitude as the single motor stall force , the velocity can be increased if the number of motors which share this force is increased . the latter effect has been observed in microtubule gliding assays with high solution viscosity where for low motor density on the surface the velocity decreases as a function of the microtubule length , while for high motor density the velocity is independent of the microtubule length. for a cargo pulled by motors , inserting the stokes friction force ( with the friction coefficient ) into the linear force velocity relation leads to this equation shows that the velocity increases with increasing number of motors if is not negligibly small compared to one . in particular , in the limit of high viscosity or large , for which the last approximation in eq .( [ eq : vn_gamma ] ) is valid , the velocity is proportional to the number of pulling motors .. as a consequence of this , an increase of to motor numbers large compared to will increase the consumption of atp without substantially increasing the cargo velocity . ] in a highly viscous environment , the cargo s velocity distribution therefore exhibits maxima at integer multiples of a minimal velocity .similar velocity distributions have recently been observed for vesicles and melanosomes in the cytoplasm , see refs . .to first order in the motors experience the force which implies that the force per bound motor is independent of the number of bound motors and that the motors behave as independent motors for large viscous force , however with an increased effective single motor unbinding rate .we have emphasized that large particles experience a strong viscous drag in the cytoplasm and that therefore brownian motion is too slow to drive transport of large particles in the cell .while this observation suggests that active transport is necessary within cells , it does not imply that the active transport must necessarily be directed transport .alternatively , active transport could also be used to generate effectively diffusive motion , which is faster than passive brownian motion , e.g. if a cargo particle performs a sequence of active molecular motor - driven runs in random direction .we call this effectively diffusive motion , which depends on chemical energy , active diffusion. in cells , active diffusion can be achieved either by ( i ) switching the direction of motion by switching between different types of motors which walk along a unipolar array of filaments or by ( ii ) a single type of motor and isotropic ( e.g. 
, bidirectional or random ) arrangements of filaments .the first case is typical for microtubule - based transport : microtubules are often arranged in a directed fashion , either in radial systems emanating from a central microtubule organizing center with their plus ends pointing outwards or in unidirectional systems where microtubules are aligned in a parallel and isopolar fashion such as in axons .bidirectional movements along these unidirectional microtubule arrangements have been observed for a large variety of intracellular cargoes. these movements are driven by a combination of plus end and minus end directed motors . on the other hand , actin - based movements are often of the second type , since the actin cytoskeleton usually forms an isotropic random mesh , on which , e.g. , myosin v - driven cargoes perform random walks. active diffusion has also been observed for random arrays of microtubules in cell extracts. let us mention that these two types of active diffusion are highly simplified .more complex scenarios include bidirectional , but biased movements along microtubules and the switching of cargoes between microtubules and actin filaments. in vitro , one can use various techniques such as chemically or topographically structured surfaces, motor filament self - organization, and filament crosslinking on micropillars to create well - defined patterns of filaments for active diffusion which may be useful to enhance diffusion in bio - nanotechnological transport systems. the maximal effective diffusion coefficient which can be achieved by molecular motor - driven active diffusion is given by where is the length of essentially unidirectional runs , given by either the average run length before unbinding from filaments or the mesh size of the filament pattern , and is the probability that the cargo is bound to a filament. in order to obtain large effective diffusion one therefore has to make sure that the cargo particle is bound to filaments most of the time , and that it has an average run length which is comparable or larger than the pattern mesh size .one possibility to satisfy both conditions is to use a sufficiently large number of motors .a larger number of motors also decreases the probability of switching direction at an intersection, so that unidirectional runs exceeding the mesh size can be achieved .since the motor velocity is rather insensitive to the viscosity for small viscosities , active diffusion is much less affected by the viscous drag of the solution than passive brownian motion, in particular if a cargo is pulled by several motors . 
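the scaling behind this comparison can be made explicit with a short sketch. it compares passive brownian motion of a 100 nm cargo, estimated from the stokes-einstein relation, with motor-driven active diffusion estimated as D_act ~ p_b v L / 2; the factor 1/2 is an assumed one-dimensional prefactor (the exact prefactor depends on the filament geometry and is not specified in the text), and all numbers are illustrative.

```python
import math

kT = 4.1e-21          # thermal energy at room temperature, J
r = 50e-9             # cargo radius (100 nm diameter), m
eta_water = 1e-3      # viscosity of water, Pa s

def passive_D(eta):
    """stokes-einstein diffusion coefficient, returned in um^2/s."""
    return kT / (6.0 * math.pi * eta * r) * 1e12

def active_D(v=1.0, L=1.0, p_b=0.8):
    """active diffusion estimate in um^2/s for run speed v (um/s),
    run length / mesh size L (um) and bound probability p_b.
    the prefactor 1/2 is an assumed one-dimensional value."""
    return 0.5 * p_b * v * L

for factor in (1, 10, 100):
    print("viscosity x", factor,
          ": passive D =", round(passive_D(factor * eta_water), 2), "um^2/s,",
          "active D ~", active_D(), "um^2/s (unchanged in this sketch)")
```

the comparison reproduces the qualitative point made above: increasing the viscosity by a factor of 10 or 100 suppresses passive diffusion proportionally, while the active estimate is essentially unaffected as long as the resulting viscous force stays well below the (shared) stall force.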
for a cargo of size 100 nm , brownian motion in wateris characterized by a diffusion coefficient in the order of a few /s .an increase in viscosity by a factor of 10 leads to a decrease of the diffusion coefficient by that factor .the active diffusion coefficient , on the other hand can be estimated to be of similar size or slightly smaller ( using m/s , m and ) , however the latter value is essentially unaffected by an increase of the viscosity by up to a factor 100 compared to the viscosity of water , since the viscous force which arises from the movement of such a bead is only a fraction of of the motor stall force .the effect is increased if a cargo is pulled by several motors , since the viscous force starts to affect the motor movement only if it is of the order .therefore only viscosities which lead to viscous forces that exceed have a considerable effect on active diffusion .in the preceding sections we have discussed how cells achieve cargo transport over large distances through a viscous environment by the cooperation of a small number of molecular motors . in principle , cells could also use a single motor which generates a larger force and binds more strongly to the corresponding filament rather than a team of motors , but motor cooperation appears to be preferred .the use of several motors has the advantage that the transport parameters can be easily controlled by controlling the number of pulling motors , e.g. by activating motors , by increasing their binding to filaments or by recruiting additional motors to the cargo .the use of multiple weak bonds rather than a single strong bond in order to enable simple ways of control appears to be a general principle in cellular biology which applies to various cellular processes as diverse as the build - up of strong , but at the same time highly dynamic clusters of adhesion molecules and the binding of transcription factors to dna where programmable specificity of binding is achieved by sequence - dependent binding of the transcription factor to a short stretch of dna. if it is true that the main purpose of motor cooperation is the controllability of motor - driven movements , one may ask whether this function imposes constraints on the properties of the motors . for example , in order to both up- and down - regulate the cargo velocity or run length , it is clearly desirable to be able to both increase and decrease the number of pulling motors and , thus , to have an average number , , of pulling motors that is not too close to either 1 or , the total number of motors , but rather is of the order of . itself may be fixed by the tradeoff between increased cargo velocity and efficient atp usage , such that . ] the average number of pulling motors can be estimated by $ ] , so that the requirement implies or , for cargoes subject to a strong viscous force , .the latter condition represents a relation between the single motor parameters possibly imposed by a functional constraint related to motor cooperation .the parameters of conventional kinesin approximately satisfy this relation , which could however be coincidental. 
it would therefore be interesting to see whether this relation is also satisfied by the parameters of other motors .if this is not the case , these differences might provide hints towards functional differences between different types of motors .cargo transport in cells is often carried out by small teams of molecular motors rather than by single motor molecules .we have described how one can analyze the transport by several motors using a phenomenological model based on the understanding of single motors that has been obtained from single molecule experiments during the last decade .our theoretical approach is sufficiently versatile to be extended to more complex situations where additional molecular species are present such as the cooperation of two types of motors or the interaction of motors with regulatory proteins . in particular, it will be interesting to use this model to study control mechanisms of motor - driven transport .sk acknowledges support by deutsche forschungsgemeinschaft ( kl 818/1 - 1 ) .
molecular motors power directed transport of cargoes within cells . even if a single motor is sufficient to transport a cargo , motors often cooperate in small teams . we discuss the cooperative cargo transport by several motors theoretically and explore some of its properties . in particular we emphasize how motor teams can drag cargoes through a viscous environment .
every physically relevant computational model must be mapped into physical space - time and _ vice versa _ . in this line of thought , von neumann s self - reproducing cellular automata have been envisioned by zuse and other researchers as `` calculating space ; '' i.e. , as a locally connected grid of finite automata capable of universal algorithmic tasks , in which intrinsic observers are embedded .this model is conceptually discreet and noncontinuous and resolves the eleatic `` arrow '' antinomy against motion in discrete space by introducing the concept of information about the state of motion in between time steps .alas , there is no direct physical evidence supporting the assumption of a tessellation of configuration space or time . given enough energy , and without the possible bound at the planck length of about m, physical configuration space seems to be potentially infinitely divisible .indeed , infinite divisibility of space - time has been utilized for proposals of a kind of `` zeno oracle '' , a progressively accelerated turing machine capable of hypercomputation .such accelerated turing machines have also been discussed in the relativistic context .in general , a physical model capable of hypercomputation by some sort of `` zeno squeezing '' has to cope with two seemingly contradictory features : on the one hand , its infinite capacities could be seen as an obstacle of evolution and therefore require a careful analysis of the principal possibility of motion in finite space and time _ via _ an infinity of cycles or stages . on the other hand ,the same infinite capacities could be perceived as an advantage , which might yield algorithms beyond the turing bound of universal computation , thus extending the church - turing thesis .the models presented in this article unify the connectional clarity of von neumann s cellular automaton model with the requirement of infinite divisibility of cell space . informally speaking ,the scale - invariant cellular automata presented `` contain '' a multitude of `` spatially '' and `` temporally '' ever decreasing copies of themselves , thereby using different time scales at different layers of cells .the cells at different levels are also capable to communicate , i.e. , exchange information , with these copies , resulting in ever smaller and faster cycling cells .the second model is based on petri nets which can enlarge themselves .the advantage over existing models of accelerated turing machines which are just turing machines with a geometrically progression of accelerated time cycles resides in the fact that the underlying computational medium is embedded into its environment in a uniform and homogeneous way . in these new models , the entire universe , and not just specially localized parts therein , is uniformly capable of the same computational capacities .this uniformity of the computational environment could be perceived as one further step towards the formalization of continuous physical systems in algorithmic terms . in this respects ,the models seem to be closely related to classical continuum models , which are at least in principle capable of unlimited divisibility and information flows at arbitrary small space and time dimensions . at present however , for all practical purposes , there are finite bounds on divisibility and information flow . to obtain a taste of some of the issues encountered in formalizing this approach , note that an infinite sequence of ever smaller and faster cycling cells leads to the following situation . 
informally speaking ,let a _ self - similar cellular automaton _be a variant of a one - dimensional elementary cellular automaton , such that each cell is updated twice as often as its left neighbor .the cells of a self - similar cellular automaton can be enumerated as .starting at time 0 and choosing an appropriate time unit , cell is updated at times .remarkably , this definition leads to indeterminism . to see this ,let be the state of cell at time .now , the state depends on , which itself depends on and so on , leading to an infinite regress . in general , in analogyto thomson s paradox , this results in an undefined or at least nonunique and thus indeterministic behavior of the automaton .this fact relates to the following variant of zeno s paradox of a runner , according to which the runner can not even get started .he must first run to the half way point , but before that he must run half way to the half way point and so on indefinitely . whereas zeno s runner can find rescue in the limit of convergent real sequences, there is no such relieve for the discrete systems considered .later on , two restrictions on self - similar automata ( build from scale - invariant cellular automata ) are presented , which are sufficient conditions for deterministic behavior , at least for finite computations .furthermore , a similar model based on a variant of petri nets will be introduced , that avoids indeterminism and halts in the infinite limit , thereby coming close to the spirit of zeno s paradox .the article is organized as follows .section [ sec - tm ] defines the turing machine model used in the remainder of the article , and introduces two hypercomputing models : the accelerated and the right - accelerated turing machine . in section [ chap : sica ] self - similar as well as scale - invariant cellular automata are presented .section [ chap : hypercomputer ] is devoted to the construction of a hypercomputer based on self - similar cellular automata .there is a strong resemblance between this construction and the right - accelerated turing machine , as defined in section [ sec - tm ] .a new computing model , the self - similar petri net is introduced in section [ chap : petri ] .this model features a step - to - step equivalence to self - similar cellular automata for finite computations , but halts in the infinite case .the same construction as in section [ chap : hypercomputer ] is used to demonstrate that self - similar petri nets are capable of hypercomputation .the final section contains some concluding remarks and gives some directions for future research .the turing machine is , beside other formal systems that are computationally equivalent , the most powerful model of classical computing .we use the following model of a turing machine . formally , a _turing machine _ is a tuple , where is the finite set of states , is the finite set of tape symbols , is the set of input symbols , is the start state , is the blank , and is the set of final states . the next move function or transition function is a mapping from to , which may be undefined for some arguments .the turing machine works on a tape divided into cells that has a leftmost cell but is infinite to the right .let .one step ( or move ) of in state and the head of positioned over input symbol consists of the following actions : scanning input symbol , replacing symbol by , entering state and moving the head one cell either to the left ( ) or to the right ( ) . 
in the beginning , starts in state with a tape that is initialized with an input word , starting at the leftmost cell , all other cells blank , and the head of positioned over the first symbol of .we need sometimes the function split up into three separate functions : .the configuration of a turing machine is denoted by a string of the form , where and . here is the current state of , is the tape content to the left , and the tape content to the right of the head including the symbol that is scanned next .leading and trailing blanks will be omitted , except the head has moved to the left or to the right of the non - blank content .let and be two configurations of .the relation states that with configuration changes in one step to the configuration .the relation denotes the reflexive and transitive closure of .the original model of a turing machine as introduced by alan turing contained no statement about the time in which a step of the turing machine has to be performed .in classical computation , a `` yes / no''-problem is therefore decidable if , for each problem instance , the answer is obtained in a finite number of steps .choosing an appropriate time scheduling , the turing machine can perform infinitely many steps in finite time , which transcends classical computing , thereby leading to the following two hypercomputing models .the concept of an accelerated turing machine was independently proposed by bertrand russell , ralph blake , hermann weyl and others ( see refs . ) .an accelerated turing machine is a turing machine which performs the -th step of a calculation in units of time .the first step is performed in time 1 , and each subsequent step in half of the time before . since , the accelerated turing machine can perform infinitely many steps in finite time .the accelerated turing machine is a hypercomputer , since it can , for example , solve the halting problem , see e.g. , ref . .if the output operations are not carefully chosen , the state of a cell becomes indeterminate , leading to a variation of thomson s lamp paradox .the open question of the physical dynamics in the limit reduces the physical plausibility of the model .the following model of a hypercomputing turing machine has a different time scheduling , thereby avoiding some of the paradoxes that might arise from the previous one .let the cells of the tape be numbered from the left to the right .a right - accelerated turing machine is a turing machine that takes units of time to perform a step that moves the head from cell to one of its neighbor cells .[ th - right - acc - tm ] there exists a right - accelerated turing machine that is a hypercomputer .let be a universal turing machine .we construct a turing machine that alternates between simulating one step of and shifting over the tape content one cell to the right .we give a sketch of the construction , ref . contains a detailed description of the used techniques .the tape of contains one additional track that is used to mark the cell that is read next by the simulated .the finite control of is able to store simultaneously the state of the head of as well as a tape symbol of .we assume that the input of is surrounded by two special tape symbols , say . at the start of a cycle , the head of is initially positioned over the left delimiter . scans the tape to the right , till it encounters a flag in the additional track that marks the head position of . 
accessing the stored state of , simulates one step of thereby marking either the left or the right neighbor cell as the cell that has to be visited next in the simulation of .if necessary , a blank is inserted left to the right delimiter , thereby extending the simulated tape of . afterwardsthe head of moves to the right delimiter to start the shift over that is performed from the right to the left . repeatedly stores the symbols read in its finite control and prints them to the cell to the right . after the shift over ,the head of is positioned over the left delimiter which finishes one cycle .we now give an upper bound of the cycle time .let be the number of cells , from the first to the second one .without loss of generality we assume that contains the left . scans from the left to the right and simulates one step of which might require to go an additional step to the left . if cell is to be read next , the head of can not move to the right , otherwise it would fall off the tape of . therefore the worst case occurs if the cell is marked as cell that has to be read next . in this casewe obtain .the head of is now either over cell , or over cell if a insertion was performed .the shift over visits each cell three times , and two times .therefore the following upper bound of the time of the shift over holds : .we conclude that if the cycle started initally in cell it took less than time .if halts on its input , finishes the simulation in a time less than . therefore solves the halting problem of turing machines .we remark that if does not halt , the head of vanishes in infinity , leaving a blank tape behind .a right - accelerated turing machine is , in contrast to the accelerated one , in control over the acceleration .this can be used to transfer the result of a computation back to slower cells .the construction of an infinite machine , as proposed by davies , comes close to the model of a right - accelerated turing machine , and his reasoning shows that a right - accelerated turing machine could be build within a continuous newtonian universe ._ cellular automata _ are dynamical systems in which space and time are discreet . the states of cells in a regular lattice are updated synchronously according to a local deterministic interaction rule .the rule gives the new state of each cell as a function of the old states of some `` nearby '' neighbor cells .each cell obeys the same rule , and has a finite ( usually small ) number of states . for a more comprehensive introduction to cellular automata, we refer to refs . .a _ scale - invariant cellular automaton _ operates like an ordinary _ cellular automaton _ on a cellular space , consisting of a regular arrangement of cells , whereby each cell can hold a value from a finite set of states . 
whereas the cellular space of a cellular automaton consists of a regular one- or higher dimensional lattice , a scale - invariant cellular automaton operates on a cellular space of recursively nested lattices which can be embedded in some euclidean space as well .the time behavior of a scale - invariant cellular automaton differs from the time behavior of a cellular automaton : cells in the same lattice synchronously change their state , but as cells are getting smaller in deeper nested lattices , the time steps between state changes in the same lattice are assumed to _ decrease _ and approach zero in the limit .thereby , a finite speed of signal propagation between adjacent cells is always maintained .the scale - invariant cellular automaton model gains its computing capabilities by introducing a local rule that allows for interaction between adjacent lattices .we will introduce the scale - invariant cellular automaton model for the one - dimensional case , the extension to higher dimensions is straightforward .a scale - invariant cellular automaton , like a cellular automaton , is defined by a cellular space , a topology that defines the neighborhood of a cell , a finite set of states a cell can be in , a time model that determines when a cell is updated , and a local rule that maps states of neighborhood cells to a state .we first define the cellular space of a scale - invariant cellular automaton . to this end , we make use of standard interval arithmetic . for a scalar and a ( half - open ) interval : and .we denote the unit interval by . the cellular space , the set of all cells of the scale - invariant cellular automaton , is the set .the neighborhood of a cell is determined by the following operators .for a cell in let be the left neighbor , the right neighbor , the parent , the left child , and the right child of .the predicate is true if and only if the cell is the left child of its parent .the cellular space is the union of all lattices , where is an integer .this topology is depicted in fig .[ fig:1-dim - interaction ] . for notationalconvenience , we introduce a further operator , this time from to , that maps a cell to its both child cells : .we remark that according to the last definition for each cell either or is true .later on , we will consider scale - invariant cellular automata where not each cell has a parent cell . if is such a cell , we set by convention if , otherwise .all cells in lattice are updated synchronously at time instances where is an integer .the time interval between two cell updates in lattice is again a half - open interval and the cycle time , that is the time between two updates of the cell , is therefore .a simple consequence of this time model is that child cells cycle twice as fast and the parent cell cycle half as fast as the cell itself .the time scale is the set of all possible time intervals , which is in the one - dimensional case equal to the set : .the temporal dependencies of a cell are expressed by the following time operators . 
for a timeinverval let , , , and .the predicate is true if and only if the state change of a cell at the beginning of occurs simultaneously with the state change of its parent cell .the usage of time intervals instead of time instances , has the advantage that a time interval uniquely identifies the lattice where the update occurs .[ fig : timeops ] depicts the temporal dependencies of a cell : to the left it shows a coupled state change , to the right an uncoupled one .we remark that we denoted space and time operators by the same symbols , even if their mapping is different . in applying these operators, we take in the remainder of this paper care , that the context of the operator is always clearly defined . at any time, each cell is in one state from a finite state set .the cell state in a given time interval is described by the state function , which maps cells and time intervals to the state set .the space - time scale of the scale - invariant cellular automaton describes the set of allowed pairs of cells and time intervals : .then , the state function can be expressed as a mapping .the local rule describes the evolution of the state function .for a cell and a time interval , where is in , the evolution of the state is given by the local rule of the scale - invariant cellular automaton in accordance with the definition , the expanded form of a expression of the kind is .the local rule is a mapping from to . beside the dependencies on the states of the neighbor cells ,the new state of the cell further depends on whether the cell is the left or the right child of its parent cell and whether the state change is coupled or uncoupled to the state change of its parent cell .formally , a scale - invariant cellular automaton is denoted by the tuple .there are some simplifications of the local rule possible , if one allows for a larger state set .for instance , the values of the predicates and could be stored as substate in the initial configuration .if the local rule accordingly updates the value of , the dependencies on the boolean predicates could be dropped from the local rule .as noted in the introduction the application of the local rule in its general form might lead to indeterministic behavior .the next subsection introduces two restrictions of the general model that avoid indeterminism at least for finite computations .a special case of the local rule is a rule of the form , which is the constituting rule of a one - dimensional 3-neighborhood cellular automaton . in this case , the scale - invariant cellular automaton splits up in a sequence of infinitely many nonconnected cellular automata .this shows that the scale - invariant cellular automaton model is truly an extension of the cellular automaton model and allows us to view a scale - invariant cellular automaton as an infinite sequence of interconnected cellular automata .we now examine the signal speed that is required to communicate state changes between neighbor cells . 
to this end, we select the middle point of a cell as the source and the target of a signal that propagates the state change of a cell to one of its neighbor cells. a simple consideration shows that the most restricting cases are the paths from the space-time points , , to if not . the simple calculation delivers the results , and , respectively; hence a signal speed of 1 is sufficient to deliver the updates within the given time frame. a more general examination also takes the processing time of a cell into account. if a cell in takes time to process its inputs and if we assume a finite signal speed of , the cycle time of a cell in must be at least . in sum, as long as the processing time is proportional to the diameter of a cell, we can always find a scaling factor such that the scale-invariant cellular automaton has cycle times that conform to the time scale. the construction of a hypercomputer in section [chap:hypercomputer] uses a simplified version of a scale-invariant cellular automaton, which we call a self-similar cellular automaton. a _ self - similar cellular automaton _ has the cellular space , the time scale , and the finite state set . the space-time scale of a self-similar cellular automaton is the set . the self-similar cellular automaton has the following local rule: for all the local rule is a mapping from to . formally, a self-similar cellular automaton is denoted by a tuple . by restricting the local rule of a scale-invariant cellular automaton, a self-similar cellular automaton can also be obtained from a scale-invariant cellular automaton: consider a scale-invariant cellular automaton whose local rule does not depend on the cell neighbors , , and ; then the resulting scale-invariant cellular automaton contains the self-similar cellular automaton as a subautomaton. we introduce the following notation for self-similar cellular automata. we index a cell by the integer , that is, a cell with index has a cycle time of . we call the cell the upper neighbor and the cell the lower neighbor of cell .
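to make the indexing and the time model more tangible, the following minimal python sketch enumerates the update instants of the first few cells of a self-similar cellular automaton over one time unit of cell 0, with cell i cycling at period 2^-i as described above, and marks which updates of a cell coincide with an update of its upper (slower) neighbor. the state set and local rule are deliberately left out, so this only illustrates the scheduling, not a particular automaton.

```python
from fractions import Fraction

CELLS = 4                     # cells 0 .. 3; cell i has cycle time 2**-i

def update_times(i, horizon=Fraction(1)):
    """update instants of cell i in (0, horizon], as exact fractions."""
    period = Fraction(1, 2**i)
    return [k * period for k in range(1, int(horizon / period) + 1)]

for i in range(CELLS):
    marks = []
    for t in update_times(i):
        # an update of cell i is coupled to its upper neighbor (cell i-1)
        # if it happens at one of the upper neighbor's update instants
        coupled = i > 0 and (t / Fraction(1, 2**(i - 1))).denominator == 1
        marks.append(f"{t}{'*' if coupled else ''}")
    print("cell", i, "updates at:", ", ".join(marks))
```

the asterisks mark the coupled updates: every second update of a cell coincides with an update of its upper neighbor, which is exactly the situation distinguished by the coupling predicate in the local rule.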
time instances can be conveniently expressed as a binary number .if not stated otherwise , we use the cycle time of cell 0 as time unit .we noted already in the introduction that the evolution of a scale - invariant cellular automaton might lead to indeterministic behavior .we offer two solutions , one based on a special quiescent state , the other one based on a dynamically growing lattice .a state in the state set is called a quiescent state with regard to the short - circuit evaluation , if , where the question mark sign `` '' either represents an arbitrary state or a boolean value , depending on its position .whenever a cell is in state , the cell does not access its lower neighbor .the cell remains as long in the quiescent state as long as the upper neighbor is in the quiescent state , too .this modus of operandi corresponds to the short - circuit evaluation of logical expressions in programming languages like c or java .if the self - similar cellular automaton starts in an initial configuration of the form at cell , the infinite regress is interrupted , since cell evolves to without being dependent on cell .let be a state in the state set , called the quiescent state .a dynamically growing self - similar cellular automaton initially starts with the finite set of cells and the following boundary condition .whenever cell or the cell with the highest index is evolved , the state of the missing neighbor cell is assumed to be .the self - similar cellular automaton dynamically appends cells to the lower end when needed : whenever the cell with the highest index enters a state that is different from the quiescent state , a new cell is appended , initialized with state , and connected to the cell . to be more specific :if is the highest index , and cell evolves at time to state , a new cell in state is appended .the cell performs its first transition at time , assuming state for its missing lower neighbor cell .we note that the same technique could also be applied to append upper cells to the self - similar cellular automaton , although in the remainder of this paper we only deal with self - similar cellular automata which are growing to the bottom .both enhancements ensure a deterministic evaluation either for a configuration where only a finite number of cells is in a nonquiescent state or for a finite number of cells . a configuration of a self - similar cellular automaton is called finite if only a finite number of cells is different from the quiescent state .let be a finite configuration and the next configuration in the evolution that is different to . is again finite .we denote this relationship by .the relation is again the reflexive and transitive closure of .a self - similar cellular automaton as a scale - invariant cellular automaton can not halt by definition and runs forever without stopping .the closest analogue to the turing machine halting occurs , when the configuration stays constant during evolution .such a configuration that does not change anymore is called final .in this section , we shall construct an accelerated turing machine based on a self - similar cellular automaton . 
a self - similar cellular automaton which simulates the turing machine specified in the proof of theorem [ th - right - acc - tm ] in a step - by - step manneris a hypercomputer , since the resulting turing machine is a right - accelerated one .we give an alternative construction , where the shift over to the right is directly embedded in the local rule of the self - similar cellular automaton .the self - similar cellular automaton will simultaneously simulate the turing machine and shift the tape content down to faster cycling cells .the advantages of this construction are the smaller state set as well as a resulting faster simulation .let be an arbitrary turing machine .we construct a self - similar cellular automaton that simulates as follows .first , we simplify the local rule by dropping the dependency on , obtaining the state set of is given by we write for an element in , for an element in , and for an element in . to simulate on the input in , , is initialized with the sequence starting at cell 0 , all other cells shall be in the quiescent state .if , is initialized with the sequence , and if , the empty word , is initialized with the sequence .we denote the initial configuration by , or by if we want to emphasize the dependency on the input word .the computation is started at time 0 , i.e. the first state change of cell occurs at time . the elements and act as head of the turing machine including the input symbol of the turing machine that is scanned next . to accelerate the turing machine , we have to shift down the tape content to faster cycling cells of the self - similar cellular automaton , thereby taking care that the symbols that represent the non - blank content of the turing machine tape are kept together .we achieve this by sending a pulse , which is just a symbol from a subset of the state set , from the left delimiter to the right delimiter and back .each zigzag of the pulse moves the tape content one cell downwards and triggers at least one move of the turing machine .furthermore a blank is inserted to the right of the simulated head if necessary .the pulse that goes down is represented by exactly one element of the form , or , the upgoing pulse is represented by the element . the specification of the values for the local rule for all possible arguments is tedious , therefore we use the following approach . a coupled transition of two neighbor cells can perform a simultaneous state change of the two cells . if the state changes of these two neighbor cells is independent of their other neighbors , we can specify the state changes as a transformation of a state pair into another one .let be elements in .we call a mapping of the form a block transformation . the block transformation defines a function mapping of the form and for all in .furthermore , we will also allow block transformations that might be ambiguous for certain configurations .consider the block transformations and that might lead to an ambiguity for a configuration that contains . instead of resolving these ambiguities in a formal way, we will restrict our consideration to configurations that are unambiguous .the evolution of the self - similar cellular automaton is governed by the following block transformations : 1 ._ pulse moves downwards . _set if set if set set 2 ._ pulse moves upwards_. 
set if to a certain cell no block transformation is applicable the cell shall remain in its previous state .furthermore , we assume a short - circuit evaluation with regard to the quiescent state : , whereby the lower neighbor cell is not accessed . [ cols="^,^,^,^,^,^ " , ] we illustrate the working of by a simple example .let be the formal language consisting of strings with 0 s , followed by 1 s : .a turing machine that accepts this language is given by with the transition function depicted in fig .[ fig : example - delta ] .note that is a context - free language , but will serve for demonstration purposes .the computation of on input is given below : fig .[ fig : example - hyper - sca-2 ] depicts the computation of on the turing machine input 01 .the first column of the table specifies the time in binary base . performs 4 complete pulse zigzags and enters a final configuration in the fifth one after the turing machine simulation has reached the final state .[ fig : evolution ] depicts the space - time diagram of the computation .it shows the position of the left and right delimiter ( gray ) and the position of the pulse ( black ) .we split the proof that is a hypercomputer into several steps .we first show that the block transformations are well - defined and the pulse is preserved during evolution . afterwardswe will prove that simulates correctly and we will show that represents an accelerating turing machine .let be the set of elements that represent the downgoing pulse , be the singleton that contains the upgoing pulse , , and the remaining elements .the following lemma states that the block transformations are unambiguous for the set of configurations we consider and that the pulse is preserved during evolution .if the finite configuration contains exactly one element of then the application of the block transformations [ tr : start - state ] [ tr : up - lhd ] is unambiguous and at most one block transformation is applicable .if a configuration with exists , then contains exactly one element of as well .note that the domains of all block transformations are pairwise disjoint .this ensures that for all pairs in at most one block transformation is applicable . block transformations [ tr : start - state ] [ tr : new - blank ] are all subsets or elements of , block transformation [ tr : reflection - right ] is element of , block transformations [ tr : up ] and [ tr : up - state ] are subsets of , and finally block transformation [ tr : up - lhd ] is element of .since the domain is either a subset of or the block transformations are unambiguous if contains at most one element of . a configuration with must be the result of the application of exactly one block transformation . since each block transformation preserves the pulse, contains one pulse if and only if contains one .we introduce a mapping that aims to decode a self - similar cellular automaton configuration into a turing machine configuration .let be a finite configuration .then is the string in that is formed of as following : 1 . all elements in are omitted .all elements of the form are replaced by and all elements of the form or are replaced by the two symbols and .all other elements of the form are added as they are .4 . leading or trailing blanks of the resulting string are omitted .the following lemma states that correctly simulates .let , be configurations of . if , then there exist two finite configurations , of such that , , and . 
especially if the initial configuration of satisfies , then there exists a finite configuration of , such that and .if has the form we consider without loss of generality .therefore let .if or and we choose .if and we insert an additional blank : . in any case holds .we show the correctness of the simulation by calculating a complete zigzag of the pulse for the start configuration : .the number of the block transformation that is applied , is written above the derivation symbol .we split the zigzag up into three phases . 1. pulse moves down from the left delimiter to the left neighbor cell of the simulated head .+ for we obtain + if the pulse piggybacked by the left delimiter is already in the left neighbor cell of the head and this phase is omitted .2 . downgoing pulse passes the head .+ if in the beginning of the zigzag the head was to the right of the left delimiter then if no further block transformation is applicable and the configuration is final .the case will be handled later on .we now continue the derivation [ der : start ] . if then + if then we distinguish two cases : and .if then if the next steps of are moving the head again to the right , block transformation [ tr : right-2 ] will repeatedly applied , till the head changes its direction or till the head is left of the right delimiter . if the turing machine changes its direction before the right delimiter is reached , we obtain or if the direction change happens just before the right delimiter then if or if the right - moving head hits the right delimiter the derivation has the following form which inserts a blank to the right of the simulated head .downgoing pulse is reflected and moves up . + we proceed from configurations of the form .then which finishes the zigzag .note that the continuation of derivations [ der : right ] and [ der : new - blank ] is handled by the later part of derivation [ der : up ] .we also remark that the zigzag has shifted the whole configuration one cell downwards .all block transformations except transformations [ tr : right-2 ] and [ tr : left-1 ] keep the -value of the configuration unchanged .block transformations [ tr : right-2 ] and [ tr : left-1 ] correctly simulate one step in the calculation of the turing machine : if , , and then .let be the resulting configuration of the zigzag .we conclude that holds .we have chosen in such a way that at least one step of is performed , if does not halt , either by block transformation [ tr : right-2 ] or [ tr : left-1 ] . if does not halt the configuration after the zigzag is again of the form . the case and is excluded by derivation [ der : new - blank ] , which inserts a blank to the right of the head , if .this means that has the same form as and that any subsequent zigzag will perform at least one step of as well if does not halt .in summary , we conclude that reaches after a finite number of zigzags a configuration such that . on the other hand , if halts , enters a final configuration since derivations [ der : tm - step - left ] or [ pulse - passed ] are not applicable anymore and the pulse can not cross the simulated head .since we have chosen to be of the same form as in the beginning of the proof , the addendum of the lemma regarding the initial configuration is true .next , the time behavior of the self - similar cellular automaton will be investigated .let be a finite configuration of that starts in cell .if does not halt , the zigzag of the pulse takes 3 cycles of cell and is afterwards in a finite configuration that starts in cell . 
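The decoding map used in this lemma strips delimiter and pulse symbols and restores the simulated head and tape. Since the actual cell encodings are stripped from this extract, the sketch below assumes a hypothetical encoding in which a head-carrying cell is a (state, symbol) pair; it is meant only to make the four decoding steps concrete.

```python
def decode(config, delimiters=("<", ">"), pulse=("v", "^"), blank="_"):
    """Sketch of the decoding map: drop delimiter and pulse symbols, expand
    head-carrying cells (assumed here to be (state, symbol) tuples) into two
    symbols, keep all other tape symbols, and strip leading/trailing blanks."""
    out = []
    for cell in config:
        if cell in delimiters or cell in pulse:
            continue                      # step 1: omit delimiters and pulse
        if isinstance(cell, tuple):       # step 2: assumed head encoding (q, a)
            out.extend(str(part) for part in cell)
        else:
            out.append(str(cell))         # step 3: other symbols are kept as they are
    return "".join(out).strip(blank)      # step 4: strip leading/trailing blanks
```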
without loss of generality , we assume that the finite configuration starts in cell 0 . we follow the zigzag of the pulse , thereby tracking all times ; compare with fig . [ fig : example - hyper - sca-2 ] and fig . [ fig : evolution ] . the pulse reaches cell 1 at time 1 , and cell 2 at time . in general , the downgoing pulse reaches cell in time . at time the cell changes to , which marks the reversal of direction of the pulse . the next configuration change ( ) occurs at . the pulse reaches cell in time and in general cell in time . the final configuration change of the zigzag ( ) , which also marks the beginning of a new pulse zigzag , occurs synchronously in cell 0 and cell 1 at time 3 . we remark that the overall time of the pulse zigzag remains unchanged if the simulated head inserts a blank between the two delimiters . [ th - rca ] if halts on and is initialized with then enters a final configuration in a time less than 6 cycles of cell 0 , containing the result of the calculation between the left and right delimiter . if does not halt , enters after 6 cycles of cell 0 the final configuration that consists of an infinite string of the quiescent element : . needs 3 cycles of cell 0 to perform the first zigzag of the pulse . after these 3 cycles the configuration is shifted one cell downwards , starting now in cell 1 . the next zigzag takes 3 cycles of cell 1 , which are 3/2 cycles of cell 0 , and so on . each zigzag performs at least one step of the turing machine if does not halt . we conclude that if halts , enters a final configuration in a time less than cycles of cell 0 . if does not halt , the zigzag disappears into infinity after 6 cycles of cell 0 , leaving a trail of s behind . if is a universal turing machine , we immediately obtain the following result , which proves that is a hypercomputer for certain turing machines . let be a universal turing machine . then solves the halting problem for turing machines . initialize with an encoded turing machine and an input word . then enters a final configuration with the result of on in less than 6 cycles of cell 0 if and only if halts . in the current form of the turing machine simulation the operator has to scan a potentially unlimited number of cells to determine whether has halted or not , which limits its practical value . if has halted , we would like to propagate at least this fact back to the upper cells . the following obvious strategy fails in a subtle way . add a rule to that , whenever has no next move , replaces it by the new symbol . add the rule to that propagates upwards to cell . the propagation upwards is only possible if we also change the block transformation [ tr : up - lhd ] to , thereby introducing a new symbol that is not subject to the short - circuit evaluation . the last point , even if necessary , causes the strategy to fail : if does not halt , is after 6 cycles in the configuration that leads to indeterministic behavior of . this is problematic insofar as we can not be sure whether a state in cell is really the outcome of a halting turing machine or the result of indeterministic behavior .
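The bound of 6 cycles of cell 0 used above follows from summing the zigzag durations over the ever faster cells: the first zigzag takes 3 cycles of cell 0, the second takes 3 cycles of cell 1, i.e. 3/2 cycles of cell 0, and so on. Written out as a geometric series,

```latex
\underbrace{3}_{\text{1st zigzag}}
+ \underbrace{\tfrac{3}{2}}_{\text{2nd}}
+ \underbrace{\tfrac{3}{4}}_{\text{3rd}}
+ \dots
= \sum_{k=0}^{\infty} 3\cdot 2^{-k}
= \frac{3}{1-\tfrac{1}{2}}
= 6 \quad \text{(cycles of cell 0).}
```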
instead of enhancing the self - similar cellular automaton model, we will introduce in the next section a computing model that is computational equivalent for finite computations , but avoids indeterminism for infinite computations .the evolution of a cellular automaton as well as the evolution of a self - similar cellular automaton depends on an extrinsic clock representing a global time that triggers the state changes .since a self - similar cellular automaton can not halt , a self - similar cellular automaton is forced to perform a state change , even if no state with a causal relationship to the previous one exists , leading to indeterministic behavior , as described in the introduction . in this section ,we present a model based on petri nets , the self - similar petri nets , with a close resemblance to self - similar cellular automata . even though petri nets in general are not deterministic , there exist subclasses that are . as will be shown below, self - similar petri nets are deterministic .they are also capable of hypercomputing , but compared to self - similar cellular automata , their behavior differ in the limit . whereas a self - similar cellular automaton features indeterministic behavior , the self - similar petri net halts .petri introduced petri nets in the 1960s to study asynchronous computing systems .they are now widely used to describe and study information processing systems that are characterized as being concurrent , asynchronous , distributed , parallel , nondeterministic , and/or stochastic .it is interesting to note that very early , and clearly ahead of its time , petri investigated the connections between physical and computational processes , see e.g. , ref . . inwhat follows , we give a brief introduction to petri nets to define the terminology . for a more comprehensive treatmentwe refer to the literature ; e.g. , to ref . .a petri net is a directed , weighted , bipartite graph consisting of two kinds of nodes , called places and transitions .the weight is the weight of the arc from place to transition , is the weight of the arc from transition to place .a marking assigns to place a nonnegative integer , we say that is marked with tokens .if a place is connected with a transition by an arc that goes from to , is an input place of , if the arc goes from to , is an output place .a petri net is changed according to the following transition ( firing ) rule : 1 .a transition may fire if each input place of is marked with at least tokens , and 2 . a firing of an enabled transition removes tokens from each input place of , and adds tokens to each output place of .formally , a petri net is a tuple where is the set of places , is the set of transitions , is the set of arcs , is the weight function , and is the initial marking . in graphical representation , places are drawn as circles and transitions as boxes . if a place is input place of more than one transition , the petri net becomes in general indeterministic , since a token in this place might enable more than one transition , but only one can actually fire and consume the token .the subclass of petri nets given in the following definition avoids these conflicts and is therefore deterministic . in a standard petri net ,tokens are indistinguishable . if the petri net model is extended so that the tokens can hold values , the petri net is called a colored petri net . 
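The firing rule described above can be stated compactly in code. The following minimal sketch is not tied to the self-similar construction; it only encodes the standard semantics: a transition is enabled if every input place holds at least as many tokens as the arc weight requires, and firing removes and adds tokens accordingly. The example net is illustrative.

```python
class PetriNet:
    """Minimal (uncolored) Petri net with weighted arcs and the standard firing rule."""
    def __init__(self, marking, in_arcs, out_arcs):
        self.m = dict(marking)      # place -> token count
        self.in_arcs = in_arcs      # transition -> {input place: weight}
        self.out_arcs = out_arcs    # transition -> {output place: weight}

    def enabled(self, t):
        return all(self.m.get(p, 0) >= w for p, w in self.in_arcs[t].items())

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        for p, w in self.in_arcs[t].items():
            self.m[p] -= w
        for p, w in self.out_arcs[t].items():
            self.m[p] = self.m.get(p, 0) + w

# toy example: place p1 --(weight 2)--> transition t1 --(weight 1)--> place p2
net = PetriNet({"p1": 3, "p2": 0}, {"t1": {"p1": 2}}, {"t1": {"p2": 1}})
net.fire("t1")
print(net.m)   # {'p1': 1, 'p2': 1}; t1 is now disabled
```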
a marked graph is a petri net such that each place has exactly one input transition and exactly one output transition .a colored petri net is a petri net where each token has a value .it is well - known that cellular automata can be modeled as colored petri nets . to do this , each cell of the cellular automatonis replaced by a transition and a place for each neighbor .the neighbor transitions send their states as token values to their output places , which are the input places of the transition under consideration .the transition consumes the tokens , calculates the new state , and send its state back to its neighbors .a similar construction can be done for self - similar cellular automata , leading to the class of self - similar petri nets . underlying graph of a self - similar petri net . ]a _ self - similar petri net _ is a colored petri net with some extensions .a self - similar petri net has the underlying graph partitioned into cells that is depicted in fig .[ petri ] .we denote the transition of cell by , the place to the left of the transition by , the place to the right of the transition by and the central place , in the figure the place above the transition , by .let be a finite set , the state set , be the quiescent state , and be a ( partial ) function .the set is the value set of the tokens .tokens are added to a place and consumed from the place according to a first - in first - out order .initially , the self - similar petri net starts with a finite number of cells , and is allowed to grow to the right .the notation defines the following action : create a token with value and add it to place .the firing rule for a transition in cell of a self - similar petri net extends the firing rule of a standard petri net in the following way : 1 .if the transition is enabled , the transition removes token from place , token from and tokens from .the value of token shall be of the form in , the other token values and shall be in .if the tokens do not conform , the behavior of the transition is undefined .2 . the transition calculates .( left boundary cell ) _ if then , , , .( inner cell ) _ if and is not the highest index , then : , , , .( right boundary cell ) _if is the highest index then : 1 . _( quiescent state ) _ [ firing - rule - quiescent ] if then , , , 2 . _( new cell allocation ) _ if then a new cell is created and connected to cell . furthermore : , , , , , , , .formally , we denote the self - similar petri net by a tuple .a self - similar petri net is a marked graph and therefore deterministic .the initial markup is chosen in such a way that initially only the rightmost transition is enabled .( initial markup ) let be an input word in and let be a self - similar petri net with cells , whereby . the initial markup of the petri net is as follows : * , ( , ) for , ( , ) for * for , for , * for , for , and .note that the place is initialized with two tokens .we identify the state of a cell with the value of its -token. if is empty , because the transition is in the process of firing , the state shall be the value of the last consumed token of . token flow in a self - similar petri net .] 
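Since the exact value tuples, token counts, and boundary-case routing are not reproduced in this extract, the following fragment only sketches the data structures suggested by the definition above: colored tokens held in first-in first-out places, and a per-cell transition with a left, right, and central place. The enablement counts are parameters here because the actual numbers are an assumption.

```python
from collections import deque

class Place:
    """FIFO place holding colored tokens (token values), as in the definition above."""
    def __init__(self, tokens=()):
        self.fifo = deque(tokens)
    def put(self, value):
        self.fifo.append(value)
    def take(self):
        return self.fifo.popleft()
    def __len__(self):
        return len(self.fifo)

class Cell:
    """One cell of a self-similar Petri net: a transition with places l, c, r.
    `delta` is the (partial) state-update function; `needs` gives the assumed
    number of tokens required from (l, c, r) for the transition to be enabled.
    The routing of produced tokens to neighbor cells is omitted in this sketch."""
    def __init__(self, delta, initial_state, needs=(1, 2, 1)):
        self.delta = delta
        self.needs = needs
        self.l, self.c, self.r = Place(), Place([initial_state]), Place()

    def enabled(self):
        nl, nc, nr = self.needs
        return len(self.l) >= nl and len(self.c) >= nc and len(self.r) >= nr
```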
fig .[ token - flow ] depicts the token flow of a self - similar petri net consisting of cells under the assumption that the self - similar petri net does not grow .tokens that are created and consumed by the same cell are not shown .the numbers indicate whether the firing is uncoupled ( 0 ) or coupled ( 1 ) .the only transition that is enabled in the begin is , since was initialized with 2 tokens .the firing of bootstraps the self - similar petri net by adding a second token to , thereby enabling , and so on , until all transitions have fired , and the token flow enters periodic behavior .we now compare self - similar petri nets with self - similar cellular automata .we call a computation finite , if it involves either only a finite number of state updates of a self - similar cellular automaton , or a finite number of transition firings of a self - similar petri net , respectively .[ lemma : comp - equivalence ] for finite computations , a dynamically growing self - similar cellular automaton and a self - similar petri net are computationally equivalent on a step - by - step basis if the start with the same number of cells and the same initial configuration .let be a self - similar petri net which has cells initially .for the sake of the proof consider an enhanced self - similar petri net that is able to timestamp its token .a token of does not hold only a value , but also a time interval .we refer to the time interval of by and to the value of by .we remark that the timestamps serve only to compare the computations of a self - similar cellular automaton and a self - similar petri net and do not imply any time behavior of the self - similar petri net .the firing rule of works as for , but has an additional pre- and postprocessing step : * _ ( preprocessing ) _ let , , , and be the consumed token , where the alphabetical subscript denotes the input place and the numerical subscript the order in which the tokens were consumed . calculate , where is the inverse time operator of .if or or the firing fails and the transition becomes permanently disabled . * _ ( postprocessing ) _ for each created token , set .the initial marking must set the -field , otherwise the first transitions will fail . for the initial tokens in cell ,set for both tokens in place , , and .set for the second token in .the firings of cell add tokens with timestamps to the output place .if transition does not fail , the state function for the arguments and is well - defined : if cell has produced or was initialized in place with a token with and .let be the state function of the scale - invariant cellular automaton .due to the initialization , the two state functions are defined for the first cells and first time intervals .assume that the values of and differ for some argument or that their domains are different .consider the first time interval where the difference occurs : , or exactly one of or is undefined .if there is more than one time interval choose an arbitrary one of these . 
since was the first time interval where the state functions differ , we know that , , , and .we handle the case that the values of the state functions are different or that is undefined for whereas is .the other case ( defined , but not ) can be handled analogously .if , we conclude that tokens with timestamps , , , were sent to cell , and no other tokens were sent afterwards to cell , since the timestamps are created in chronological order .hence , the precondition of the firing rule is satisfied and we conclude that , which contradicts our assumption .the allocation of new cells introduces some technicalities , but the overall strategy of going back in time and concluding that the conditions for a state change or cell allocation were the same in both models works here also .we complete the proof , by the simple observation that and perform the same computation .the proof can be simplified using the following more abstract argumentation .a comparison of fig .[ token - flow ] with fig .[ fig : timeops ] shows that each computation step has in both models the same causal dependencies .since both computers use the same rule to calculate the value of a cell , respectively the value of a token , we conclude that the causal nets of both computations are the same for a finite computation , and therefore both computers yield the same output , in case the computation is finite .a large number of different approaches to introducing time concepts to petri nets have been proposed since the first extensions in the mid 1970s .we do not delve into the depths of the different models , but instead , define a very simple time schedule for the class of self - similar petri nets .a timed self - similar petri net is a self - similar petri net that fires as soon as the transition is enabled and where a firing of an enabled transition takes the time . in the beginning of the firing ,the tokens are removed from the input places , and at the end of the firing the produced tokes of the firing are simultaneously entered into the output places .this time model can be satisfied if the cells of the timed self - similar petri net are arranged as the cells of a self - similar cellular automaton . under the assumption of a constant token speed , a firing time that is proportional to the cell length , and an appropriate unit of time we yield again cycle times of .we now come back to the simulation of turing machines and construct a hypercomputing timed self - similar petri net , analogous to the hypercomputing self - similar cellular automaton in section [ chap : hypercomputer ] .let be an arbitrary turing machine .let be the state set that we used in the simulation of a turing machine by a self - similar cellular automaton , and let the local rule that is defined by the block transformations [ tr : start - state ] - [ tr : up - lhd ] , without the short - circuit evaluation . by lemma [ lemma : comp - equivalence ]we know that the timed self - similar petri net simulates correctly for a finite number of turing machine steps .hence , if halts on input , enters a final configuration in less than 6 cyles of cell 0 .we examine now the case that does not halt . 
a pivotal difference between a self - similar cellular automaton and a self - similar petri netis the ability of the latter one to halt on a computation .this happens if all transitions of the self - similar petri net are disabled .[ lemma : apn - halting ] let be an arbitrary turing machine and an input word in .if does not halt on , the timed self - similar petri net halts on after 6 cycles of cell 0 . as long as the number of cells is finite , the boundary condition [ firing - rule - quiescent ] of the firing rule adds by each firing two tokens to the -place of the rightmost cell that successively enable all other transitions as well .this holds no longer for the infinite case .let be a turing machine , and an input word , such that does not halt on .we consider again the travel of the pulse zigzags down to infinity for the timed self - similar petri net with initial configuration , thereby tracking the marking of the -places for times after the zigzag has passed by .the first states of cell are , , , and , including the initial one .the state is the result of the firing at time 3 , exhausting thereby the tokens in place . at time 3the left delimiter ( ) of the pulse zigzag is now in cell 1 .cell 1 runs from time 3 on through the same state sequence , , , and , thereby adding in summary 4 tokens to . after creating the token with value , is empty as well .we conclude that after the zigzag has passed by a cell , the lower cell sends in summary 4 tokens to the upper cell , till the zigzag has left the lower cell as well .for each cell these four tokens in enable two firings of cell thereby adding two tokens to .these two tokens of enable again one firing of cell thereby adding one token to .we conclude that each cell fires 3 times after the zigzag has passed by and that the final marking of each is one .hence , no has the necessary two tokens that enable the transition , therefore all transitions are disabled and halts at time 6 . since halts for nonhalting turing machines , there are no longer any obstacles that prevent the construction of the proposed propagation of the halting state back to upper cells .we replace block transformation [ tr : start - state ] with the following two and add one new .if set if or is not defined set if is not defined set the following definition propagates the state up to cell : we denote the resulting timed self - similar petri net by .the following theorem makes use of the apparently paradoxical fact , that halts if and only if the simulated turing machine does not halt .let be a universal turing machine .then solves the halting problem for turing machines .consider a turing machine and an input word .initialize with where is the encoding of and . if does not halt on , halts at time 6 by lemma [ lemma : apn - halting ] .if halts on , then one cell of enters the state by block transformation [ tr : h ] or [ tr : down - to - head2 ] according to theorem [ th - rca ] and lemma [ lemma : comp - equivalence ] and taking the changes in into account . the mapping [ eq : h - up ] propagates up to cell 0 .an easy calculation shows that cell 0 is in state , in time 7 or less .we have proven that is indeed a hypercomputer without the deficiencies of the scale - invariant cellular automaton - based hypercomputer .we end this section with two remarks .the timed self - similar petri net sends a flag back to the upper cells , if the simulated turing machine halts . 
strictly speaking , this is not necessary if the operator is able to recognize whether the timed self - similar petri net has halted or not . on the other hand , a similar construction is essential if the operator is interested in the final tape content of the simulated turing machine . transferring the whole tape content of the simulated turing machine upwards could be achieved by implementing a second pulse that performs an upwards - moving zigzag . this construction is even simpler than the described one , since the tape content of the turing machine becomes static as soon as the turing machine halts . the halting problem of turing machines is not the only problem that can be solved by self - similar cellular automata , scale - invariant cellular automata , or timed self - similar petri nets but that is unsolvable for turing machines . a discussion of other problems unsolvable by turing machines , and of techniques to solve them within infinite computing machines , can be found in davies . we have presented two new computing models that implement the potential infinite divisibility of physical configuration space . these models are purely information theoretic and do not take into account kinetic and other effects . with these provisos , it is possible , at least in principle , to use the potential infinite divisibility of space - time to perform hypercomputation , thereby extending the algorithmic domain to hitherto unsolvable decision problems . both models are composed of elementary computation primitives . the two models are closely related but are very different ontologically . a cellular automaton depends on an _ extrinsic _ time requiring an _ external _ clock and a rigid synchronization of its computing cells , whereas a petri net implements a causal relationship leading to an _ intrinsic _ concept of time . scale - invariant cellular automata as well as self - similar petri nets are built in the same way from their primitive building blocks . each unit is recursively coupled with a sized - down copy of itself , potentially leading to an infinite sequence of ever decreasing units . their close resemblance leads to a step - by - step equivalence of finite computations , yet their ontological difference yields different behaviors for the case that the computation involves an infinite number of units : a scale - invariant cellular automaton exhibits indeterministic behavior , whereas a self - similar petri net halts . that two supertasks which operate identically in the finite case differ in their limit is a puzzling observation which might question our present understanding of supertasks . this may be considered an analogy to a theorem in recursive analysis about the existence of recursive monotone bounded sequences of rational numbers whose limit is not a computable number . one striking feature of both models is their scale - invariance . the computational behavior of these models is therefore the first example of what might be called scale - invariant or self - similar computing , which might be characterized by the property that any computational space - time pattern can be arbitrarily squeezed into finer and finer regions of space and time . although the basic definitions have been given , and elementary properties of these new models have been explored , a great number of questions remain open for future research . the construction of a hypercomputer was a first demonstration of the extraordinary computational capabilities of these models . further investigations are necessary to determine
their limits , and to relate them with the emerging field of hypercomputation . another line of research would be the investigation of their phenomenological properties , analogous to the statistical mechanics of cellular automata . m. margenstern and k. morita , `` a polynomial solution for 3-sat in the space of cellular automata in the hyperbolic plane , '' journal of universal computer science * 5 * , 563 - 573 ( 1999 ) . j. durand - lose , `` abstract geometrical computation for black hole computation , '' in _ machines , computations , and universality , 4th international conference , mcu 2004 , saint petersburg , russia , september 21 - 24 , 2004 , revised selected papers _ , m. margenstern , ed . ( springer , 2005 ) , pp . i. németi and g. dávid , `` relativistic computers and the turing barrier , '' applied mathematics and computation * 178 * , 118 - 142 ( 2006 ) , special issue on hypercomputation . o. bournez and m. l. campagnolo , `` a survey on continuous time computations , '' in _ new computational paradigms . changing conceptions of what is computable _ , s. cooper , b. löwe , and a. sorbi , eds . ( springer verlag , new york , 2008 ) , pp . 383 - 423 . h. gutowitz , `` cellular automata : theory and experiment , '' physica * d45 * , 34 - 83 ( 1990 ) , previous ca conference proceedings in _ international journal of theoretical physics _ * 21 * , 1982 ; as well as in _ physica _ , * d10 * , 1984 and in * complex systems * * 2 * , 1988 . e. specker , `` nicht konstruktiv beweisbare sätze der analysis , '' the journal of symbolic logic * 14 * , 145 - 158 ( 1949 ) , reprinted in ; english translation : _ theorems of analysis which can not be proven constructively _ .
two novel computing models based on an infinite tessellation of space - time are introduced . they consist of recursively coupled primitive building blocks . the first model is a scale - invariant generalization of cellular automata , whereas the second one utilizes self - similar petri nets . both models are capable of hypercomputations and can , for instance , `` solve '' the halting problem for turing machines . these two models are closely related , as they exhibit a step - by - step equivalence for finite computations . on the other hand , they differ greatly for computations that involve an infinite number of building blocks : the first one shows indeterministic behavior whereas the second one halts . both models are capable of challenging our understanding of computability , causality , and space - time .
in recent years , advances in quantum control have emerged through the introduction of appropriate and powerful tools coming from mathematical control theory and by the use of sophisticated experimental techniques to shape the corresponding control fields .all these efforts have lead nowadays to an unexpected and satisfactory agreement between theory and experiment . on the theoretical side ,one major tool to design the control field is optimal control theory ( oct ) . over the past few years, numerical iterative methods have been developed in quantum control to solve the optimization problems .basically , they can be divided into two families , the gradient ascent algorithms and the krotov or the monotonic ones .the design of optimal control fields by standard iterative algorithms can require the computation of several hundreds numerical propagations of the dynamical quantum system .while the efficiency of this procedure has been established for low dimensional quantum systems , this approach can numerically be prohibitive for large dimensions . in this latter case, it is possible to use more easily accessible numerical methods , such as the local control approach . roughly speaking, the optimization procedure is built from a lyapunov function over the state space , which is minimum ( or maximum ) for the target state .a control field that ensures the monotonic decrease ( or increase ) of is constructed with the help of the first derivative of .note that this approach has largely been explored in quantum control .thanks to the progresses in numerical optimization techniques , it is now possible to design high quality control along with some experimental imperfections and constraints .recent studies have shown how to extend the standard optimization procedures in order to take into account some experimental requirements such as the spectral constraints , the non - linear interaction between the system and the laser field , the robustness against experimental errors . in view of experimental applications in quantum control , it is also desirable to design pulse sequences with a zero global time - integrated area .several works have pointed out the inherent experimental difficulties associated with the use of non - zero area fields , in particular for laser fields in the thz regime .since the dc component of the field is not a solution of maxwell s equation , such pulses are distorted when they propagate in free space as well as through focusing optics .the standard optimization procedures do not take into account this basic requirement , designing thus non - physical control fields . 
in this framework ,a question which naturally arises is whether one can adapt the known optimization algorithms to this additional constraint .this paper aims at taking a step toward the answer of this open question by proposing new formulations of optimal and local control algorithms .the zero - area requirement for the laser fields is mathematically fulfilled by the introduction of a lagrange multiplier and the derivation of the corresponding optimal equations .the goal of this paper is to explore the efficiency of the generalized optimal and local control algorithms on two key control problems of molecular dynamics : orientation and photodissociation .the remainder of this paper is organized as follows .the new formulations of optimization control algorithms are presented in sec .section [ sec3 ] is devoted to the application of the optimal control algorithm to the enhancement of molecular orientation of co by thz laser fields at zero temperature .local control is used in sec .[ sec4 ] to manipulate efficiently the photodissociation of heh .conclusion and prospective views are given in sec .we consider a quantum system whose dynamics is governed by the following hamiltonian : where is the control field .the state of the system satisfies the differential equation : with the initial ( ) state .the units used throughout the paper are atomic units .as mentioned in the introduction , we add the physical constraint of zero - area on the control field : where is the control ( or total pulse ) duration .the goal of the control problem is to maximize the following standard cost functional at time : -\lambda \int_0^{t_f } e(t)^2dt , \label{eq : joc}\ ] ] where is the target state .other cost functionals could of course be chosen .more specifically , a monotonic optimal control algorithm can be formulated to satisfy the zero - area constraint .for such a purpose let us consider the following cost functional -\mu \left[\int_0^{t_f}{e(t)dt}\right]^2-\lambda \int_0^{t_f } { [ e(t)-e_{ref}(t)]^2/s(t)dt}\ , , \label{eq : newjoc}\ ] ] where is a reference pulse and an envelope shape given by .the function ensures that the field is smoothly switched on and off at the beginning and at the end of the control .the positive parameters and , expressed in a.u . , weight the different parts of , which penalize the area of the field ( - term ) and its energy ( - term ) . in this algorithm , we determine the field at step from the field at step , such that the variation . at step , the reference field is taken as and we denote by its time - integrated area .note that the choice leads to a smooth transition of the control field between two iterations of the algorithm .the variation can be expressed as follows : -\lambda ( e_{k+1}-e_k)/s(t)-2\mu a_k ( e_{k+1}-e_k)]dt,\ ] ] is obtained from backward propagation of the target taken as an intial state for eq.(2 ) and we assume that .one deduces that the choice : }{\lambda}-(e_{k+1}-e_k)-2\frac{\mu}{\lambda}a_k s(t)\ ] ] ensures a monotonic increase of .this leads to the following correction of the control field : }{2\lambda}-\frac{\mu}{\lambda}s(t)a_k .\label{eq : newfield}\ ] ] the monotonic algorithm can thus be schematized as follows : 1 .guess an initial control field , which at step is taken as .2 . starting from , propagate backward with 3 .evaluate the correction of the control field as obtained from eq .( [ eq : newfield ] ) , while propagating forward the state of the system from with .4 . 
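The monotonic algorithm of the previous section can be summarized by the schematic loop below. The propagation routines and the imaginary-part coupling term are placeholders (they depend on the system), and the loop is written with an explicit update even though, in the actual algorithm, the forward propagation and the construction of the new field are interleaved time step by time step. Only the structure is meant to be conveyed: backward propagation of the target, forward propagation with the updated field, and an extra penalty term proportional to the running area of the current field.

```python
import numpy as np

def monotonic_zero_area(E0, S, dt, lam, mu, propagate_fwd, propagate_bwd,
                        overlap_term, n_iter=50):
    """Schematic monotonic-update loop with a zero-area penalty (a sketch, not
    the exact implementation of the paper).

    E0            : initial guess field, array of length nt
    S             : envelope shape s(t), array of length nt
    lam, mu       : weights of the energy and area penalties
    propagate_fwd : field -> trajectory of psi(t)            [placeholder]
    propagate_bwd : field -> trajectory of chi(t)            [placeholder]
    overlap_term  : (chi_t, psi_t) -> real coupling term     [placeholder]
    """
    E = np.asarray(E0, dtype=float).copy()
    for _ in range(n_iter):
        A = E.sum() * dt                  # time-integrated area of the current field
        chi = propagate_bwd(E)            # backward propagation from the target
        psi = propagate_fwd(E)            # forward propagation (interleaved in practice)
        for i in range(len(E)):
            grad = overlap_term(chi[i], psi[i])
            # previous field + S/(2*lam) * coupling  -  (mu/lam) * S * area
            E[i] = E[i] + S[i] / (2.0 * lam) * grad - (mu / lam) * S[i] * A
    return E
```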
with the new control ,go to step 2 by incrementing the index by 1 .the same formalism can be used to extend the local control approach to the zero - area constraint .we refer the reader to for a complete introduction of this method . using the same notations as in the previous section , we consider as lyapunov function the cost functional : where is any operator such that $ ] .the cost is maximum at a given time if is maximum and .the computation of the time derivative of gives : }{i}|\psi(t ) \rangle-2\mu a(t ) e(t).\ ] ] one can choose : }{i}|\psi(t ) \rangle -2\mua(t)\big ) , \label{eq : fieldlc}\ ] ] to ensure the monotonic increase of from the initial state of the system .the parameter is a small positive parameter which is adjusted to limit the amplitude of the control field . in order to illustrate the efficiency of the algorithm with the zero - area constraint , we introduce two measures : where and are the optimized field with and without the zero - area constraint , respectivelyin this section , we present the theoretical model for the control of molecular orientation by means of thz laser pulses . we consider the linear co molecule described in a rigid rotor approximation and driven by a linearly polarized electric field along the - axis of the laboratory frame .the molecule is assumed to be in its ground vibronic state .the hamiltonian of the system is given by : the first term is the field - free rotational hamiltonian , where is the rotational constant of the molecule and the angular momentum operator .the second term represents the interaction of the system with the laser field , the parameter being the permanent molecular dipole moment .the spatial position of the diatomic molecule is given in the laboratory frame by the spherical coordinates . due to the symmetry of revolution of the problem with respect to the - axis, the hamiltonian depends only on , the angle between the molecular axis and the polarization vector .numerical values of the molecular parameters are taken as and . in the case of zero rotational temperature ( ) which is assumed hereafter ,the system is described at time by the wave function and its dynamics by the time - dependent schrdinger equation ( eq . ( [ eq : dyn ] ) ) .the dynamics is solved numerically using a third - order split operator algorithm .we finally recall that the degree of orientation of the molecular system is given by the expectation value .in this paragraph , we test the new formulation of the optimal control algorithm presented in sec .[ secopti ] on the specific example of co orientation dynamics .the initial state is the ground state denoted in the spherical harmonics basis set . due to symmetry, the projection of the angular momentum on the is a good quantum number .this property leads to the fact that only the states , , will be populated during the dynamics .in the numerical computations , we consider a finite hilbert space of size .note that this size is sufficient for an accurate description of the dynamics considering the intensity range of the laser field used in this work . to define the cost functional , rather than maximizing the expectation value , we focus on a target state maximizing this value in a finite - dimensional hilbert space spanned by the states with . the details of the construction of can be found in refs . . here, the parameter is taken to be . 
the guess field is a gaussian pulse of 288 fs full width at half maximum , centered at .the optimization time , , is taken to be the rotational period of the molecule .the parameter defined in the cost functional is fixed to 100 .both , with and without zero - area constraint , the optimized field achieves a fidelity higher than 99 .the results are shown in fig.[fig : jt_area ] .note that , since the ratio has the dimension of time , we found convenient to plot the evolution of the measures with respect to ( panel ( b ) ) .the upper right panel , fig.[fig : jt_area](a ) , shows the deviation from unity of the fidelity ( or final objective ) as a function of the number of iterations . from unity as a function of the number of iterations for corresponding to the optimization without zero - area constraint ( dashed dotted blue ) and for ( solid black , dashed black ) .( b ) plot of ( solid black ) and ( solid green ) as a function of the parameter times the final time .( c ) the guess field ( dashed red ) and the optimized fields without ( dotted dashed blue ) and with ( solid black ) zero - area constraint .( d ) comparison of the time evolution of the expectation value of with the guess field ( dashed red ) and with the fields without ( dotted dashed blue ) and with ( solid black ) zero - area constraint . ]as could be expected , optimization with zero - area constraint requires a large number of iterations to reach a high fidelity . for ,i.e. without zero - area constraint , the final cost reaches a value of 0.99 after 30 iterations , while in the case for which , this value is reached after 40 iterations .the variations of the and with respect to the parameter are shown in fig.[fig : jt_area](b ) . for ,the area of the optimized field obtained with the zero - area constraint is two orders of magnitude smaller than the one obtained without the zero - area constraint .note that the values of and displayed in fig.[fig : jt_area](b ) are computed with optimized fields which achieve a fidelity higher than 99 .figure [ fig : jt_area](c ) compares the guess ( reference ) field and the optimized fields with and without the zero - area constraint . the optimized field with the zero - area constraint shown in fig . [ fig : jt_area](c ) is obtained with .the optimized fields are similar to the guess field at the beginning of the control and become slightly different for .figure [ fig : jt_area](d ) corresponds to the time evolution of during two rotational periods ( one with the field and one without ) .it compares the dynamics of induced by the three fields shown in the lower left panel .clearly , the dynamics with the guess field is very different from the one obtained with the optimized fields , which have similar features .we consider the photodissociation of heh through the singlet excited states coupled by numerous nonadiabatic interactions .a local control strategy has recently been applied to find a field able to enhance the yield in two specific dissociation channels either leading to he , or to he .the diatomic system is described by its reduced mass and the internuclear distance . 
due to the short duration of the pulses (as compared with the rotational period ) a frozen rotation approximated is valid .in addition , the molecule is assumed to be pre - aligned along the -direction of the laboratory frame .the initial state is the vibrationless ground electronic adiabatic state .dynamics is performed in the diabatic representation .the total hamiltonian is written as : where is the field free hamiltonian , is the number of diabatic electronic states under consideration and .all the adiabatic potential energy curves and the nonadiabatic couplings have been computed in ref . are the diabatized dipole transition matrix elements .we shall consider only parallel transitions among the singlet states induced by the dipole operator by assuming the internuclear axis pointing along the -direction .the adiabatic - to - diabatic transformation matrix has been derived by integrating from the asymptotic region where both representations coincide .the local control of a nonadiabatic dissociation requires a careful choice of the operator referred to in eq .( [ eq : jlc ] ) since it has to commute with the field free hamiltonian . in the nonadiabatic case ,the projectors on either adiabatic or diabatic states are thus not relevant as they do not commute with this hamiltonian due to kinetic or potential couplings respectively .this crucial problem can be overcome by using projectors on eigenstates of , i.e. on scattering states correlating with the controlled exit channels . in this example, the operator takes the form : where represents the two channels leading to the target he fragments , the objective being .the local control field now reads involving two adjustable parameters and . ,dashed red : = 0.05 , dotted blue : = 0.10 .the parameter is expressed in atomic units .( b ) objectives of the local control measured by the total he yield ( same color code than in ( a ) ) .( c ) ( solid black ) and ( solid green ) as a function of .( d ) fields after filtration of the low frequencies of the spectrum removing the static stark component .( e ) objectives with the filtered fields .( f ) and ( same color code than in ( c ) ) as a function of for the filtered fields . ]the ingoing scattering states are estimated using a time - dependent approach based on mller - operators as derived in ref .this control strategy remains local in time but can preemptively account for nonadiabatic transitions that occur later in the dynamics .the photodissociation cross section shows that there is no spectral range where the he dissociation channels dominate .the local control finds a very complicated electric field which begins by an oscillatory pattern followed after 10 fs by a strong complex positive shape whose area is obviously not zero ( see fig . 
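A schematic version of the local-control loop used here is sketched below: at each time step the field is computed from the current wave function and from the accumulated area (the term enforcing the zero-area constraint), and the system is then propagated one step. The operator `A_op`, the dipole `mu_op`, the propagator, and the commutator form of the drive term are placeholders or assumptions; the actual construction uses projectors on scattering states correlating with the controlled exit channels, as described above.

```python
import numpy as np

def local_control_zero_area(psi0, A_op, mu_op, propagate_step, dt, eta, mu_pen,
                            n_steps):
    """Sketch of local control with a zero-area term (placeholder operators)."""
    psi = psi0.copy()
    area = 0.0
    fields = []
    for _ in range(n_steps):
        # Assumed drive term: Im <psi| [A, mu] |psi>; the exact commutator
        # structure depends on the chosen operator A.
        commutator = A_op @ mu_op - mu_op @ A_op
        drive = np.imag(np.vdot(psi, commutator @ psi))
        E = eta * (drive - 2.0 * mu_pen * area)   # field with area penalty
        area += E * dt                            # running time-integrated area
        psi = propagate_step(psi, E, dt)          # one propagation step [placeholder]
        fields.append(E)
    return np.array(fields)
```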
[fig : heh_area](a ) ) .we therefore choose this example to check the efficiency of the zero - area constraint algorithm .we use 4.2 and different values of to assess the effect of the zero - area constraint on the control field .figure [ fig : heh_area](a ) shows the dramatic evolution of the pulse profile for two values of ( 0.05 and 0.10 ) .the main change is the suppression of the erratic positive structure which can be interpreted as a stark field .however , as shown in fig .[ fig : heh_area](c ) the variation of is not monotonic and the best correction is actually obtained for a given value of around , leading to almost zero value for .close to the optimal , one observes a clear improvement of the zero - area constraint , but at the price of a decrease of the objective ( from without constraint to with ) as shown in fig .[ fig : heh_area](b ) .however , a brute force strategy by filtration of near - zero frequencies already provides a large correction to the non vanishing area while also decreasing the yield . the field with and after filtration of the low frequencies is shown in black line in fig .[ fig : heh_area](d ) and the corresponding yield is given in fig .[ fig : heh_area](e ) .the objective is divided by a factor 2 .we also filter the fields generated by the constrained control with equal to 0.05 and 0.10 which improves even further the zero - area constraint .the measures evaluated with the filtered fields ( fig . [ fig : heh_area](f ) ) are smaller than the unfiltered ones by about two orders of magnitude , while the objectives are reduced by about a factor 1.5 , as seen by comparing fig .[ fig : heh_area](e ) and fig .[ fig : heh_area](b ) .we have presented new formulations of optimization algorithms in order to design control fields with zero area .the procedure is built from the introduction of a lagrange multiplier aiming at penalizing the area of the field .the value of the corresponding parameter is chosen to express the relative weight between the area and the other terms of the cost functional , i.e. the projection onto the target state and the energy of the electric field .the efficiency of the algorithms is exemplified on two key control problems of molecular dynamics : namely , the molecular orientation of co and the photodissociation of heh . from a theoretical view point , the zero - area constraint is crucial since all electromagnetic fields which are physically realistic must fulfill this requirement .surprisingly , this physical requirement has been considered , up to date , by only very few works in quantum control . assuch , this work can be viewed as a major advance of the state - of - the - art of optimization algorithms that could open the way to promizing future prospects .s. v. acknowledges financial support from the fonds de la recherche scientifique ( fnrs ) of belgium . financial support from the conseil rgional de bourgogne and the quaint coordination action ( ec fet - open )is gratefully acknowledged by d. s. and m. n ..
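The "brute force" filtration of near-zero frequencies mentioned above can be implemented with a discrete Fourier transform. The sketch below is generic; the cutoff value and the toy field are illustrative choices, not values from the paper.

```python
import numpy as np

def remove_low_frequencies(field, dt, cutoff):
    """Zero out spectral components with |frequency| below `cutoff` (including DC)."""
    spectrum = np.fft.rfft(field)
    freqs = np.fft.rfftfreq(len(field), d=dt)
    spectrum[freqs < cutoff] = 0.0
    return np.fft.irfft(spectrum, n=len(field))

# toy check: after filtering, the time-integrated area is (numerically) zero
t = np.linspace(0.0, 1.0, 2048, endpoint=False)
dt = t[1] - t[0]
E = np.exp(-((t - 0.5) / 0.1) ** 2)              # toy field with a large DC component
E_f = remove_low_frequencies(E, dt, cutoff=2.0)
print(E.sum() * dt, E_f.sum() * dt)              # the second value is ~0
```

Removing the DC bin forces the sample mean, and hence the area, of the filtered field to vanish, which is exactly why this brute-force approach reduces the area while also distorting the pulse and the control yield.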
we propose a new formulation of optimal and local control algorithms which enforces the constraint of a time - integrated zero - area control field . the fulfillment of this requirement , crucial in many physical applications , is mathematically implemented by the introduction of a lagrange multiplier aimed at penalizing the pulse area . this method allows one to design a control field with an area as small as possible , while bringing the dynamical system close to the target state . we test the efficiency of this approach on two control problems in molecular dynamics , namely , orientation and photodissociation .
the identification of a deterministic linear operator from the operator s response to a probing signal is an important problem in many fields of engineering .concrete examples include system identification in control theory and practice , the measurement of dispersive communication channels , and radar imaging .it is natural to ask under which conditions ( on the operator ) identification is possible , in principle , and how one would go about choosing the probing signal and extracting the operator from the corresponding output signal .this paper addresses these questions by considering the ( large ) class of linear operators that can be represented as a continuous weighted superposition of time - frequency shift operators , i.e. , the operator s response to the signal can be written as where denotes the spreading function associated with the operator . the representation theorem ( * ? ? ?14.3.5 ) states that the action of a large class of continuous ( and hence bounded ) linear operators can be represented as in . in the communications literature operators with input - output relation as in are referred to as linear time - varying ( ltv ) channels / systems and is the delay - doppler spreading function . for the special case of linear time - invariant ( lti ) systems , we have , so that reduces to the standard convolution relation the question of identifiability of lti systems is readily answered by noting that the system s response to the dirac delta function is given by the impulse response , which by fully characterizes the system s input - output relation .lti systems are therefore always identifiable , provided that the probing signal can have infinite bandwidth and we can observe the output signal over an infinite duration . for ltv systems the situation is fundamentally different . specifically , kailath s landmark paper shows that an ltv system with spreading function compactly supported on a rectangle of area is identifiable if and only if . this condition can be very restrictive .measurements of underwater acoustic communication channels , such as those reported in for example , show that the support area of the spreading function can be larger than . the measurements in exhibit , however , an interesting structural property : the nonzero components of the spreading function are scattered across the -plane and the sum of the corresponding support areas , henceforth called `` overall support area '' , is smaller than .a similar situation arises in radar astronomy .bello shows that kailath s identifiability result continues to hold for arbitrarily fragmented spreading function support regions as long as the corresponding overall support area is smaller than .kozek and pfander and pfander and walnut found elegant functional - analytical identifiability proofs for setups that are more general than those originally considered in and .however , the results in require the support region of to be known prior to identification , a condition that is very restrictive and often impossible to realize in practice .in the case of underwater acoustic communication channels , e.g. , the support area of depends critically on surface motion , water depth , and motion of transmitter and receiver . 
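For a spreading function consisting of finitely many point scatterers, the input-output relation reduces to a finite sum of delayed and Doppler-shifted copies of the input. The discrete-time sketch below is only an illustration of that structure; the sampling rate and the scatterer parameters are arbitrary example values.

```python
import numpy as np

def ltv_output(x, scatterers, fs):
    """y[n] = sum_k a_k * x[n - tau_k] * exp(2j*pi*nu_k*n/fs) for a purely
    discrete spreading function with taps (a_k, tau_k in samples, nu_k in Hz).
    Illustrative discretization of the delay-Doppler relation."""
    n = np.arange(len(x))
    y = np.zeros(len(x), dtype=complex)
    for a, tau, nu in scatterers:
        shifted = np.roll(x, tau)      # cyclic delay ...
        shifted[:tau] = 0.0            # ... made non-cyclic
        y += a * shifted * np.exp(2j * np.pi * nu * n / fs)
    return y

fs = 1000.0
x = np.random.randn(256)
y = ltv_output(x, [(1.0, 3, 20.0), (0.5, 10, -40.0)], fs)
```

Setting all Doppler shifts to zero recovers the LTI special case, in which the output is simply the convolution of the input with a sparse impulse response.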
for wireless channels , knowing the spreading function s support region would amount to knowing the delays and doppler shifts induced by the scatterers in the propagation medium .[ [ contributions ] ] contributions + + + + + + + + + + + + + we show that an operator with input - output relation is identifiable , without prior knowledge of the operator s spreading function support region and without limitations on its total extent , if and only if the spreading function s total support area satisfies .what is more , this factor - of - two penalty relative to the case where the support region is known prior to identification can be eliminated if one asks for identifiability of _ _ almost all _ _ operators only .this result is surprising as it says that ( for almost all operators ) there is no price to be paid for not knowing the spreading function s support region in advance .our findings have strong conceptual parallels to the theory of spectrum - blind sampling of sparse multi - band signals .furthermore , we present algorithms which , in the noiseless case , provably recover all operators with , and almost all operators with , without requiring prior knowledge of the spreading function s support region ; not even its area has to be known .specifically , we formulate the recovery problem as a continuous multiple measurement vector ( mmv ) problem .we then show that this problem can be reduced to a finite mmv problem .the reduction approach we present is of independent interest as it unifies a number of reduction approaches available in the literature and presents a simplified treatment . in the case of wireless channels or radar systems ,the spreading function s support region is sparse and typically contained in a rectangle of area . in the spirit of compressed sensing , where sparse objects are reconstructed by taking fewer measurements than mandated by their `` bandwidth '' ,we show that in this case sparsity ( in the spreading function s support region ) can be exploited to identify the system while undersampling the response to the probing signal . in the case of channel identificationthis allows for a reduction of the identification time , and in radar systems it leads to increased resolution .[ [ relation - to - previous - work ] ] relation to previous work + + + + + + + + + + + + + + + + + + + + + + + + + recently , taubck et al . and bajwa et al . considered the identification of ltv systems with spreading function compactly supported in a rectangle of area . while assume that the spreading function consists of a finite number of dirac components whose delays and doppler shifts are unknown prior to identification , the methods proposed in do not need this assumption .in the present paper , we allow general ( i.e. , continuous , discrete , or mixed continuous - discrete ) spreading functions that can be supported in the entire -plane with possibly fragmented support region .herman and strohmer , in the context of compressed sensing radar , and pfander et al . 
considered the problem of identifying finite - dimensional matrices that are sparse in the basis of time - frequency shift matrices .this setup can be obtained from ours by discretization of the input - output relation through band - limitation of the input signal and time - limitation and sampling of the output signal .the signal recovery problem in is a standard _ single measurement _ recovery problem .as we start from a continuous - time formulation we find that , depending on the resolution induced by the discretization through time / band - limitation , the resulting recovery problem can be an mmv problem .this is relevant as multiple measurements can improve the recovery performance significantly . in fact , it is the mmv nature of the recovery problem that allows identification of _ almost all _ operators with .[ [ organization - of - the - paper ] ] organization of the paper + + + + + + + + + + + + + + + + + + + + + + + + + the remainder of the paper is organized as follows . in section[ sec : prform ] , we formally state the problem considered .section [ sec : mainresults ] contains our main identifiability results with the corresponding proofs given in sections [ sec : proofmthm ] and [ sec : almostall ] . in sections [ sec :reconstruct ] and [ sec : almostall ] , we present identifiability algorithms along with corresponding performance guarantees . in section [ sec : discior ] , we consider the identification of systems with sparse spreading function compactly supported within a rectangle of area . section [ sec : numerical_recres ] contains numerical results . [ [ notation ] ] notation + + + + + + + + the superscripts , , and stand for complex conjugation , hermitian transposition , and transposition , respectively .we use lowercase boldface letters to denote ( column ) vectors , e.g. , , and uppercase boldface letters to designate matrices , e.g. , .the entry in the row and column of is {k , l} ] .the euclidean norm of is denoted by , and stands for the number of non - zero entries in .the space spanned by the columns of is , and the nullspace of is denoted by . designates the cardinality of the smallest set of linearly dependent columns of . stands for the cardinality of the set . for sets and ,we define set addition as .for a ( multi - variate ) function , denotes its support set . for ( multi - variate ) functions and , both with domain , we write for their inner product and for the norm of .we say that a set of functions with domain is linearly independent if there is no vector , such that , where }^t} ] is given by where is the spreading function corresponding to the ( single - input ) operator between input and the output .for the case where the support regions of all spreading functions are known prior to identification , pfander showed in that the operator is identifiable if and only if .when the support regions are unknown , an extension of theorem [ th : sufficiency ] shows that is identifiable if and only if .if one asks for identifiability of `` almost all '' operators only , the condition is replaced by .finally , we note that these results carry over to the case of operators with multiple inputs and multiple outputs ( mimo ) .specifically , a mimo channel is identifiable if each of its miso subchannels is identifiable , see for the case of known support regions .to prove necessity in theorem [ pr : identifiabilitynecsparse ] , we start by stating an equivalence condition on the identifiability of . 
this condition is often easier to verify than the condition in definition [ def : defstableidentification ] , and is inspired by a related result on sampling of signals in unions of subspaces (2 ) . identifies if and only if it identifies all sets with and , where .[ pr : blind_identification ] first , note that the set of differences of operators in can equivalently be expressed as from definition [ def : defstableidentification ] it now follows that identifies if there exist constants such that for all we have next , note that for , we have that .we can therefore conclude that is equivalent to for all , and for all and with . recognizing that is nothing but saying that identifies for all and with , the proof is concluded .necessity in theorem [ th : sufficiency ] now follows by choosing such that and .this implies and hence application of theorem [ pr : necessity ] to the corresponding set establishes that is not identifiable . by lemma [ pr : blind_identification] this then implies that is not identifiable .we provide a constructive proof of sufficiency by finding a probing signal that identifies , and showing how can be obtained from .concretely , we take to be a weighted -periodic train of dirac impulses the specific choice of the coefficients }^t} ] with as defined in . since for , the vector , , fully characterizes the spreading function . with these definitionscan now be written as with the matrix , \quad { \mathbf } a_{{\mathbf } c , k } { \coloneqq}{\mathbf } c_{{\mathbf } c , k } { { { \mathbf } f}^h } \label{eq : defac}\ ] ] where {p , m } = e^{-j2\pi \frac{pm}{l } } , \ ; p , m=0, ... ,l-1 ] and set , p = 0, ... ,l-1\} ] and contains the expansion coefficients of in the basis .it follows from , p = 0, ...,l-1 \} ] , which stands in contradiction to , p = 0, ... ,l-1 \ } = k ] . by, we then have \,{{{\mathbf b}}^h}_z = { \mathbf b}_{z } { { { \mathbf b}}^h}_z.\ ] ] from it follows that there exists a unitary matrix such that , which is seen as follows .we first show that any solution to can be written as , where is unitary ( * ? ? ?* exercise on p. 406 ) .indeed , we have where the last equality follows since is self adjoint , according to ( * ? ? ?7.2.6 ) . from itis seen that is unitary , and hence , with unitary .therefore , we have and , where and are unitary , and hence .as is unitary , we proved that there exists a unitary matrix such that .with , the minimization variable of is given by , where is the minimization variable of , hence and are equivalent .another approach to reducing to an mmv problem was put forward in ( * ? ? ?2 ) . in our setting and notation the resulting mmv problemis given by where the minimization is performed over all and all corresponding . here, the matrix can be taken to be any matrix whose column span is equal to . to explain this approach in our basis expansion framework , we start by noting that implies that .we can therefore take to equal .on the other hand , for every with , we can find a basis such that . choosing different matrices in therefore simply amounts to choosing different bases .we are now ready to study uniqueness conditions for .specifically , we will find a necessary and sufficient condition for to deliver the correct solution to the continuously indexed linear system of equations in .this condition comes in the form of a threshold on that depends on the `` richness '' of the spreading function , specifically , on .let , with . 
then applied to recovers if and only if [ thm : uniqueness_imv ] since , theorem [ thm : uniqueness_imv ] guarantees exact recovery if , and hence by ( see section [ sec : prform ] ) , if , which is the recovery threshold in theorem [ th : sufficiency ] . recovery for will be discussed later .sufficiency in theorem [ thm : uniqueness_imv ] was shown in ( * ? ? ?* prop . 1 ) and in the context of spectrum - blind sampling in ( * ? ? ?necessity has not been proven formally before , but follows directly from known results , as shown in the proof of the theorem below . the proof is based on the equivalence of and , established in the previous section , and on the following uniqueness condition for the mmv problem .let with .then applied to recovers if and only if [ prop : cond_uniq_mmv ] sufficiency was proven in ( * ? ? ?1 ) , ( * ? ? ?1 ) , ( * ? ? ?2.4 ) , necessity in ( * ? ? ?we present a different , slightly simpler , argument for necessity in appendix [ app : neccmmv ] . in section [ sec : redtommv ] , we showed that , p = 0, ... ,l-1 \ } = k ] .analogously , by using the fact that contains the expansion coefficients of in the basis , it can be shown that .it now follows , by application of proposition [ prop : cond_uniq_mmv ] , that correctly recovers the support set if and only if is satisfied . by equivalence of and , recovers the correct support set , provided that is satisfied .once is known , is obtained by solving .solving the mmv problem is np - hard .various alternative approaches with different performance - complexity tradeoffs are available in the literature .mmv - variants of standard algorithms used in single measurement sparse signal recovery , such as orthogonal matching pursuit ( omp ) and -minimization ( basis - pursuit ) can be found in .however , the performance guarantees available in the literature for these algorithms fall short of allowing to choose to be linear in as is the case in the threshold . a low - complexity algorithm that provably yields exact recovery under the threshold in is based on ideas developed in the context of subspace - based direction - of - arrival estimation , specifically on the music - algorithm .it was first recognized in the context of spectrum - blind sampling that a music - like algorithm can be used to solve a problem of the form .the algorithm described in implicitly first reduces the underlying infinite measurement vector problem to a ( finite ) mmv problem .recently , a music - like algorithm and variants thereof were proposed to solve the mmv problem directly . as we will see below, this class of algorithms imposes conditions on and will hence not guarantee recovery for all .we will present a ( minor ) variation of the music algorithm as put forward in , and used in the context of spectrum blind sampling ( * ? ? ?1 ) , in section [ sec : almostall ] below .for , theorem [ thm : uniqueness_imv ] hints at a potentially significant improvement over the worst - case threshold underlying theorem [ th : almost_all ] whose proof will be presented next .the basic idea of the proof is to show that applied to , recovers the correct solution if the set condition implies that .therefore , with in theorem [ thm : uniqueness_imv ] , we get that delivers the correct solution if , i.e. , if , which is guaranteed by .we next present an algorithm that provably recovers with , i.e. , almost all with . 
specifically , this low - complexity music - like algorithm solves is np - hard ( as noted before ) , since it `` only '' solves for _ almost all _ . ] ( which is equivalent to ) and can be shown to identify the support set correctly for provided that condition is satisfied .the algorithm is a minor variation of the music algorithm as put forward in , and used in the context of spectrum blind sampling ( * ? ? ?the following algorithm recovers all , provided that . 1 . given the measurement , find a basis ( not necessarily orthogonal ) , for the space spanned by , p=0, ... ,l-1\} ] , and determine the coefficient matrix in the expansion .2 . compute the matrix of eigenvectors of corresponding to the zero eigenvalues of .3 . identify with the indices corresponding to the columns of that are equal to .[ thm : music_recovery ] in the remainder of the paper , we will refer to steps and above as the mmv - music algorithm . as shown next, the mmv - music algorithm provably solves the mmv problem given that has full rank . the proof is effected by establishing that for under condition the support set is uniquely specified through the indices of the columns of that are equal to . to see this ,we first obtain from ( where and are as defined in section [ sec : reconstruct ] ) next , we perform an eigenvalue decomposition of in to get where contains the eigenvectors of corresponding to the non - zero eigenvalues of . as mentiond in section [ sec : suff ] , each set of or fewer columns of is necessarily linearly independent , if is chosen judiciously .hence has full rank if , which is guaranteed by .thanks to condition , and hence ( this was shown in the proof of theorem [ thm : uniqueness_imv ] ) , which due to implies that .consequently , we have where the second equality follows from . is the orthogonal complement of in .it therefore follows from that . hence , the columns of that correspond to indices are equal to .it remains to show that no other column of is equal to .this will be accomplished through proof by contradiction .suppose that where is any column of corresponding to an index pair . since is the orthogonal complement of in , .this would , however , mean that the or fewer columns of corresponding to the indices would be linearly dependent , which stands in contradiction to the fact that each set of or fewer columns of must be linearly independent .the results presented thus far rely on probing signals of infinite bandwidth and infinite duration .it is therefore sensible to ask whether identification under a bandwidth - constraint on the probing signal and under limited observation time of the operator s corresponding response is possible .we shall see that the answer is in the affirmative with the important qualifier of identification being possible up to a certain resolution limit ( dictated by the time- and bandwidth constraints ) .the discretization through time- and band - limitation underlying the results in this section will involve approximations that are , however , not conceptual .the discussion in this section serves further purposes .first , it will show how the setups in can be obtained from ours through discretization induced by band - limiting the input and time - limiting and sampling the output signal .more importantly , we find that , depending on the resolution induced by the discretization , the resulting recovery problem can be an mmv problem .the recovery problem in is a standard ( i.e. 
, single measurement ) recovery problem , but multiple measurements improve the recovery performance significantly , according to the recovery threshold in theorem [ thm : uniqueness_imv ] , and are crucial to realize recovery beyond .second , we consider the case where the support area of the spreading function is ( possibly significantly ) below the identification threshold , and we show that this property can be exploited to identify the system while undersampling the response to the probing signal . in the case of channel identification , this allows to reduce identification time , and in radar systems it leads to increased resolution .consider an operator and an input signal that is band - limited to , and perform a time - limitation of the corresponding output signal to .then , the input - output relation becomes ( for details , see , e.g. ) for , where band - limiting the input and time - limiting the corresponding output hence leads to a discretization of the input - output relation , with `` resolution '' in -direction and in -direction .it follows from that for a compactly supported the corresponding quantity will not be compactly supported .most of the volume of will , however , be supported on , so that we can approximate by restricting summation to the indices satisfying . note that the quality of this approximation depends on the spreading function as well as on the parameters , and . here, we assume that and .these constraints are not restrictive as they simply mean that we have at least one sample per cell .we will henceforth say that is identifiable with resolution , if it is possible to recover , for , from .we will simply say `` is identifiable '' whenever the resolution is clear from the context . in the ensuing discussion , is referred to as the discrete spreading function .the maximum number of non - zero coefficients of the discrete spreading function to be identified is .next , assuming that , as defined in , satisfies , it follows that is approximately band - limited to . from wecan therefore conclude that lives in a -dimensional signal space ( here , and in the following , we assume , for simplicity , that is integer - valued ) , and can hence be represented through coefficients in the expansion in an orthonormal basis for this signal space .the corresponding basis functions can be taken to be the prolate spheroidal wave functions . denoting the vector containing the corresponding expansion coefficients by ,the input - output relation becomes where the columns of contain the expansion coefficients of the time - frequency translates in the prolate spheroidal wave function set , and contains the samples for , of which at most are non - zero , with , however , unknown locations in the -plane .we next show that the recovery threshold continues to hold , independently of the choice of and .[ [ necessity ] ] necessity + + + + + + + + + it follows from ( * ? ? ?1 ) that is necessary to recover from given . with and , which follows trivially, we trivially have . ]since is of dimension , we get and hence . since, by definition , we have shown that is necessary for identifiability .[ [ sufficiency ] ] sufficiency + + + + + + + + + + + sufficiency will be established through explicit construction of a probing signal and by sampling the corresponding output signal . since is ( approximately ) band - limited to , we can sample at rate , which results in for . 
in the following ,denote the number of samples of per cell , in -direction as and in -direction as ; see figure [ fig : disctnp ] for an illustration .note that , since , and is sampled at integer multiples of in -direction and of in -direction , we have and .the number of samples per cell will turn out later to equal the number of measurement vectors in the corresponding mmv problem . to have multiple measurements , and hence make identification beyond possible , it is therefore necessary that is large relative to .as mentioned previously , the samples , for , fully specify the discrete spreading function .we can group these samples into the active cells , indexed by , by assigning , for , to the cell with index , where , and .discretization of the -plane , with . ]the probing signal is taken to be such that where the coefficients , are chosen as discussed in section [ sec : suff ] .note that the sequence is -periodic .algebraic manipulations yield the discrete equivalent of as = { \mathbf } a_{{\mathbf } c } { { \mathbf s}}[n , r ] , \quad ( n , r ) \in { u_d}. \label{eq : syseqdisc}\ ] ] here , was defined in , and \right]_p { \coloneqq}{z}_p[n , r ] e^{-j2\pi \frac{rp}{{d}l } } , \quad p = 0 , ... , l-1\ ] ] with { \coloneqq}z_{y}^{({e}l,{d})}[n+{e}p , r ] ] . for general properties of the discrete zak transform we refer to .further , { \coloneqq}[{s}_{0,0}[n , r],\allowbreak { s}_{0,1}[n , r ] , \allowbreak ... , \allowbreak { s}_{0,l-1}[n , r ] , { s}_{1,0}[n , r],\allowbreak ... , { s}_{l-1,l-1}[n , r ] ] ^t ] , fully specifies the discrete spreading function .the identification equation can be rewritten as where the columns of and are given by the vectors ] , respectively , .hence , each row of corresponds to the samples of in one of the cells .since the number of samples per cell , , is equal to the number of columns of , we see that the number of samples per cell corresponds to the number of measurements in the mmv formulation .denote the matrix obtained from by retaining the rows corresponding to the active cells , indexed by , by and let be the matrix containing the corresponding columns of . then becomes once is known , can be solved for . hence , recovery of the discrete spreading function amounts to identifying from the measurements , which can be accomplished by solving the following mmv problem : where the minimization is performed over all and all corresponding .it follows from proposition [ prop : cond_uniq_mmv ] that is recovered exactly from by solving , whenever , where .correct recovery is hence guaranteed whenever . since and , this shows that is sufficient for identifiability . as noted before, is np - hard . however , if then mmv - music provably recovers with , i.e. , when ( this was shown in the proof of theorem [ thm : music_recovery ] ) . for to have full rank ,it is necessary that the number of samples satisfy . for all have full rank .the development above shows that the mmv aspect of the recovery problem is essential to get recovery for values of beyond .we conclude this discussion by noting that the setups in in the context of channel estimation and in in the context of compressed sensing radar are structurally equivalent to the discretized operator identification problem considered here , with the important difference of the mmv aspect of the problem not being brought out in . 
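The MMV-MUSIC support-recovery step invoked above admits a compact numerical sketch. The following Python code is an illustration only, not the implementation used in the paper: Z stands for the matrix whose columns are the measurement vectors, A for the dictionary matrix (the role played by the matrix defined in the text, or its row-restricted version), the tolerances are arbitrary, and the rank and spark conditions discussed above are assumed to hold.

import numpy as np

def mmv_music_support(Z, A, tol=1e-8):
    # signal subspace: orthonormal basis for the column span of the measurements
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))        # numerical rank of Z
    Us = U[:, :r]
    # projector onto the orthogonal complement of the signal subspace
    P = np.eye(A.shape[0]) - Us @ Us.conj().T
    # a column of A belongs to the support iff it lies in the signal subspace,
    # i.e. its residual after projection is (numerically) zero
    residuals = np.linalg.norm(P @ A, axis=0) / np.linalg.norm(A, axis=0)
    return np.flatnonzero(residuals < 1e-6)

def recover_rows(Z, A, support):
    # with the support known, the non-zero rows follow from a least-squares
    # solve restricted to the selected columns of A
    S_sub, *_ = np.linalg.lstsq(A[:, support], Z, rcond=None)
    return S_sub

In this sketch the recovered support plays the role of the set of active cells, and the least-squares step corresponds to solving the restricted system once the support has been identified.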
in the preceding sections, we showed under which conditions identification of an operator is possible if the operator s spreading function support region is not known prior to identification .we now turn to a related problem statement that is closer to the philosophy of sparse signal recovery , where the goal is to reconstruct sparse objects , such as signals or images , by taking fewer measurements than mandated by the object s `` bandwidth '' .we consider the discrete setup and assume that is ( possibly significantly ) smaller than the identifiability threshold .concretely , set for an integer .we ask whether this property can be exploited to recover the discrete spreading function from a subset of the samples , only .we will see that the answer is in the affirmative , and that the corresponding practical implications are significant , as detailed below . for concreteness , we assume that , with . to keep the exposition simple ,we take , in which case becomes a vector .note that , since ( this follows from ) , the operator can be identified by simply solving for , which we will refer to as `` reconstructing conventionally '' .here and contain the columns of and rows of , respectively , corresponding to the indices in .since , the area of the ( unknown ) support region of the spreading function satisfies .we next show that the discrete spreading function can be reconstructed from only of the rows of .the index set corresponding to these rows is denoted as , and is an ( arbitrary ) subset of ( of cardinality ) .let and be the matrices corresponding to the rows of and , respectively , indexed by .the matrix is a function of the samples only ; hence , reconstruction from amounts to reconstruction from an undersampled version of .note that we can not reconstruct the discrete spreading function by simply inverting since is a wide matrix is of limited interest and will not be considered . ] .next , implies ( see also ) that theorem 4 in establishes that for almost all , .hence , according to proposition [ prop : cond_uniq_mmv ] , can be recovered uniquely from provided that and hence , by solving where the minimization is performed over all and all corresponding .we have shown that identification from an undersampled observation is possible , and the undersampling factor can be as large as . a similar observation has been made in the context of radar imaging .recovery of from has applications in at least two different areas , namely in radar imaging and in channel identification . [[ increasing - the - resolution - in - radar - imaging ] ] increasing the resolution in radar imaging + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in radar imaging , targets correspond to point scatterers with small dispersion in the delay - doppler plane . since the number of targets is typically small , the corresponding spreading function is sparsely supported . in our model, this corresponds to a small number of the in being non - zero .take .the discussion above then shows that , since only the samples , which in turn only depend on for ] ,we have it therefore follows from that ( ) is consistent with , which concludes the proof .p. feng and y. bresler , `` spectrum - blind minimum - rate sampling and reconstruction of multiband signals , '' in _ proc . of ieee int .speech sig .( icassp ) _ , vol . 3 , atlanta , ga , usa , may 1996 , pp. 16881691 .g. taubck , f. hlawatsch , d. eiwen , and h. 
rauhut , `` compressive estimation of doubly selective channels in multicarrier systems : leakage effects and sparsity - enhancing processing , '' _ ieee j. sel . topics signal process . _ , vol . 4 , no . 2 , pp . 255 - 271 , 2010 . w. u. bajwa , k. gedalyahu , and y. c. eldar , `` identification of parametric underspread linear systems and super - resolution radar , '' _ ieee trans . signal process . _ , vol . 59 , no . 6 , pp . 2548 - 2561 , jun . 2011 . h. g. feichtinger and g. zimmermann , `` a banach space of test functions for gabor analysis , '' in _ gabor analysis and algorithms _ , ser . applied and numerical harmonic analysis , h. g. feichtinger and t. strohmer , eds . birkhäuser boston , jan . 1998 , pp . 123 - 170 . s. cotter , b. rao , k. engan , and k. kreutz - delgado , `` sparse solutions to linear inverse problems with multiple measurement vectors , '' _ ieee trans . signal process . _ , vol . 53 , no . 7 , pp . 2477 - 2488 , 2005 .
we consider the problem of identifying a linear deterministic operator from its response to a given probing signal . for a large class of linear operators , we show that stable identifiability is possible if the total support area of the operator s spreading function satisfies . this result holds for an arbitrary ( possibly fragmented ) support region of the spreading function , does not impose limitations on the total extent of the support region , and , most importantly , does not require the support region to be known prior to identification . furthermore , we prove that stable identifiability of _ almost all operators _ is possible if . this result is surprising as it says that there is no penalty for not knowing the support region of the spreading function prior to identification . algorithms that provably recover all operators with , and almost all operators with are presented .
restarts have been shown to boost the performance of backtracking sat solvers ( see for example , ) . a restart strategy ( ,, , ... ) is a sequence of restart lengths that the solver follows in the course of its execution .the solver first performs steps ( in case of sat solvers a step is usually a conflict ) .if a solution is not found , the solver abandons its current partial assignment and starts over . the second time it runs for steps , and so on .luby , sinclair and zuckerman show that for each instance there exists , an optimal restart length that leads to the optimal restart strategy ( ,, , ... ) . in order to calculate , one needs to have full knowledge of the runtime distribution ( rtd ) of the instance , a condition which is rarely met in practical cases .since the rtd is not known , solvers commonly use `` universal restart strategies '' .these strategies do not assume prior knowledge of the rtd and they attempt to perform well on any given instance .huang shows that when applied with conflict driven clause learning solvers ( cdcl ) , none of the commonly used universal strategies dominates all others on all benchmark families .he also demonstrates the great influence on the runtime of different restart strategies , when all its other parameters are fixed . in this paperwe show that the recent success in applying machine learning techniques to estimate solvers runtimes can be harnessed to improve solvers performance .we start by discussing the different universal strategies and recent machine learning success in sect .[ sec : background ] . in sect .[ sec : restart_strategy_portfolio ] we present _ lmpick _ , a restart strategy portfolio based solver .experimental results are presented and analyzed in sect .[ sec : results ] .we conclude and suggest optional future study in sect .[ sec : conclusion ] .competitive dpll solvers typically use restarts .most use `` universal '' strategies , while some use `` dynamic '' restart schemes , that induce or delay restarts ( such as the ones presented in and ) . currently , the most commonly used universal strategies fall into one of the following categories : * _ fixed strategy - _ ( ) . in this strategya restart takes place every constant number of conflicts .while some solvers allow for a very short interval between restarts , others allow for longer periods , but generally fixed strategies lead to a frequent restart pattern .examples of its use can be found in berkmin ( where the fixed restart size is 550 conflicts ) and seige ( fixed size is 16000 conflicts ) . *_ geometric strategy - _ ( ) . in this strategythe size of restarts grows geometrically .this strategy is defined using an initial restart size and a geometric factor . wu andvan beek show that the expected runtime of this strategy can be unbounded worse than the optimal fixed strategy in the worst case .they also present several conditions which , if met , guarantee that the geometric strategy would yield a performance improvement .this strategy is used by minisat v1.14 with initial restart size of 100 conflicts and a geometric factor of 1.5 . *_ luby strategy - _ ( ) . in this strategythe length of restart is when is a constant `` unit size '' and + the first elements of this sequence are 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,1,1 , ... 
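For concreteness, the Luby sequence can be generated with the short recursion below. This is an illustrative sketch in Python rather than code from any of the solvers discussed; the unit multiplier corresponds to the constant unit run defined above.

def luby(i):
    # i-th element (1-indexed) of the sequence 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,...
    k = 1
    while (1 << k) - 1 < i:               # find k with i <= 2^k - 1
        k += 1
    if i == (1 << k) - 1:                 # i = 2^k - 1 gives the value 2^(k-1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)   # otherwise repeat the earlier prefix

def restart_lengths(unit, n):
    # restart lengths actually followed by the solver: unit * luby(i)
    return [unit * luby(i) for i in range(1, n + 1)]

For example, restart_lengths(32, 7) yields [32, 32, 64, 32, 32, 64, 128], matching a Luby strategy with a unit run of 32 conflicts.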
luby , sinclair and zuckerman show that the performance of this strategy is within a logarithmic factor of the true optimal strategy , and that any universal strategy that outperforms their strategy will not do it by more than a constant factor .these results apply to pure las vegas algorithms , and do not immediately apply to cdcl solvers in which learnt clauses are kept across restarts .the effectiveness of the strategy in cdcl solvers appears mixed ( , ) and there is still no theoretical work that that analyzes its effectiveness in such solvers .however , luby s restart strategy is used by several competitive solvers including minisat2.1 and tinisat . * _ nested restart strategy - _ ( ) this strategy and can be seen as a simplified version of the luby strategy . after every iteration the restart length grows geometrically until it reaches a higher bound , at this point the restart size is reset to the initial value and the higher bound is increased geometrically .this strategy is used by picosat and barcelogic .previous work shows that restart strategies perform differently on different data sets .huang compares the performance of different strategies both for benchmark families and different benchmarks .he shows that there is no one strategy that outperformed all others across all benchmark families which suggests that adapting a strategy to a benchmark family , or even a single benchmark , could lead to performance gain .this suggests that choosing the best strategy from a set of strategies could improve the overall runtime , and for some benchmark families , improves it significantly .machine learning was previously shown to be an effective way to predict the runtime of sat solvers .satzilla is a portfolio based solver that uses machine learning to predict which of the solvers it uses is optimal for a given instance .satzilla uses an hierarchical approach and can use different evaluation criteria for solver performance .satzilla utilizes runtime estimations to pick the best solver from a solver portfolio .the solvers are used as - is , and satzilla does not have any control over their execution .satzilla was shown to be very effective in the sat competition of 2007 .two other machine learning based approaches for local search and cdcl are presented in and respectively .ruan et al . suggest a way to use dynamic programming to derive dynamic restart strategies that are improved during search using data gathered in the beginning of the search . this idea corresponds with the `` observation window '' that we will discuss in the next section .there are several differences between this work and ours .one important difference is that their technique chooses a different instance from the ensemble at each restart .while our intention is to solve each of the instances in the ensemble , it seems their technique is geared towards a different goal , where the solver is given an ensemble of instances and is required to solve as many of them as possible .another approach for runtime estimation was presented in our previous work . 
in that paperwe introduce a linear model predictor ( lmp ) which demonstrates that runtime estimation can also be achieved using parameters that are gathered in an online manner , while a search is taking place , as opposed to the mostly static features gathered by satzilla .another difference between the methods is the way training instances are used .while satzilla uses a large number of instances that are not tightly related , lmp uses a much smaller set of problems , but these should be more homogeneous .since restart strategies are an important factor in the performance of dpll style sat solvers , a selection of a good restart strategy for a given instance should improve the performance of the solver for that instance .we suggest that by using supervised machine learning , it is possible to select a good restart strategy for a given instance .we present _ lmpick _ , a machine learning based technique which enhances cdcl solvers performance ._ lmpick _ uses a portfolio of restart strategies from which it chooses the best one for a given instance .following we recognize several restart strategies that have shown to be effective on one benchmark family or more .we chose 9 restart strategies that represent , to our understanding , a good mapping of commonly used restart strategies . * _ luby-32 - _ a luby restart strategy with a `` unit run '' of 32 conflicts .this strategy represents a luby restart strategy with a relatively small `` unit run '' .this technique was shown to be effective by huang . * _ luby-512 - _ a luby restart strategy with a `` unit run '' of 512 conflicts .this strategy represents a luby restart strategy with a larger `` unit run '' .this is the original restart scheme used by tinisat , the solver we used in this study . *_ fixed-512 - _ a fixed restart scheme with a restart size of 512 conflicts .similar restart schemes that are used by solvers are berkmin s _ fixed-550 _ , and the 2004 version of zchaff , _ fixed-700_ . *_ fixed-4096 - _ a fixed balance scheme with a restart size of 4096 conflicts .we chose this restart scheme because it balances the short and long fixed schemes , and because it performed very well in our preliminary tests . * _ fixed-16384 - _ a fixed balance scheme with a restart size of 16384 conflicts .a longer fixed strategy , similar to the one used by siege ( _ fixed-16000 _ ) . * _ geometric-1.1 - _ a geometric restart scheme with a first restart size of 32 conflicts and a geometric factor of 1.1 .inspired by the one used by hunag .* _ geometric-1.5 - _ a geometric restart scheme with a first restart size of 100 conflicts and a geometric factor of 1.5 .this is the restart scheme used in minisat v1.14 . *_ nested-1.1 - _ a nested restart strategy , with an inner value of 100 conflicts , an outer value of 1000 values and a geometric factor of 1.1 .this strategy is parameterized as the one used by picosat . *_ nested-1.5 - _ a nested restart strategy , with an inner value of 100 conflicts , an outer value of 1000 values and a geometric factor of 1.5 .inspired by the results presented in .satsifiable and unsatisfiable instances from the same benchmark family tend to have different runtime distributions . a runtime prediction model that is trained using both sat and unsat instances performs worse than a homogeneous model .it is better to train a layer of two models , one trained with satisfiable instances ( ) and the other with unsatisfiable instances ( ) . 
since in most caseswe do not know whether a given instance is satisfiable or not we need to determine which of the models is the correct one to query for a given instance according to its probability to be satisfiable .previous work ( , ) suggests that machine learning can be successfully used for this task as well .a classifier can be trained to estimate the probability of an instance to be satisfiable .some classification techniques perform better than others , but it seems that for most benchmark families , a classifier with 80% accuracy or more is achievable . using supervised machine learning , we train models offline in order to use them for predictions online . for every training example , where is the training set, we gather the feature vector using the features presented in section [ sec : featurevector ] . once the raw datais gathered , we perform a feature selection .we repeatedly remove the feature with the smallest standardised coefficient until no improvement is observed based on the standard aic ( akaike information criterion ) . we then searched and eliminate co - linear features in the chosen set .the reduced feature vector is then used to train a classifier and several runtime prediction models .the classifier predicts the probability of an instance to be satisfiable and the runtime models predict cpu - runtime ._ lmpick _ trains one classifier , but two runtime models for each restart strategy ( where is the set of all participating strategies ) to the total of models .each training instance is used to train the satisfiability classifier , labeled with its satisfiability class , and runtime models , for each model it is labeled with the appropriate runtime . as the classifier , we used a logistic regression technique .any classifier that returns probabilities would be suitable .we found logistic regression to be a simple yet effective classifier which was also robust enough to deal with different data sets .we have considered both sparse multinomial linear regression ( suggested to be effective for this task in ) , and the classifiers suggested by devlin and osullivan in , but the result of all classifiers were on par when using the presented feature vector on our datasets . for the runtime prediction models we used ridge linear regression .using ridge linear regression , we fit our coefficient vector to create a linear predictor .we chose ridge regression , since it is a quick and simple technique for numerical prediction , and it was shown to be effective in the linear model predictor ( lmp ) . while lmp predicts the log of number of conflicts , in this work we found that predicting cpu - runtime is more effective as a selection criterion for restart strategies . 
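The offline training stage just described can be summarized in a brief sketch. The code below is illustrative only: scikit-learn estimators are used as stand-ins for the logistic-regression classifier and the ridge-regression runtime models, the AIC-based feature selection is omitted, and the arrays X (one reduced feature vector per training instance), is_sat (satisfiability labels) and runtime[s] (measured CPU-runtimes under strategy s) are hypothetical numpy arrays.

from sklearn.linear_model import LogisticRegression, Ridge

def train_models(X, is_sat, runtime, strategies):
    # one satisfiability classifier for the whole training set
    classifier = LogisticRegression(max_iter=1000).fit(X, is_sat)
    models = {}
    for s in strategies:
        # two runtime models per strategy: one fitted on satisfiable
        # instances, one on unsatisfiable instances
        models[s] = (
            Ridge(alpha=1.0).fit(X[is_sat == 1], runtime[s][is_sat == 1]),
            Ridge(alpha=1.0).fit(X[is_sat == 0], runtime[s][is_sat == 0]),
        )
    return classifier, models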
using the number of conflicts as a selection criterion tends to bias the selection towards frequent restart strategies for large instances .this is because an instance with many variables spends more time going down the first branch to a conflict after a restart .this work is unaccounted for when conflicts are used as the cost criterion .hence a very frequent restart strategy might be very effective in the number of conflict while much less effective in cpu - time .there are 4 different sets of features that we used in this study , all are inspired by the two previously discussed techniques - satzilla and lmp .the first set include only the number of variables and the number of clauses in the original clause database .these values are the only ones that are not normalized .the second set includes variables that are gathered before the solver starts but after removing clauses that are already satisfiable , shrinking clauses with multiple appearances and propagating unit clauses in the original formula .these features are all normalized appropriately .they are inspired by satzilla and were first suggested in . the third set include statistics that are gathered during the `` observation window '' , this is a period where we analyze the behavior of the solver while solving the instance .the `` observation window '' was first used in . the way the observation windowis used in this study will be discussed shortly .the variables in this set are the only ones which are dpll dependent .the last set includes the same features as the second , but they are calculated at the end of the observation window . a full list of the features is presented in fig .[ fig : featurvector ] . for further explanation about these features see and .[ fig : steps ] .,title="fig:",width=309 ] once all runtime models are fitted and the satisfiability classifier is trained , we can use them to improve performance for future instances .the steps that are taken by _lmpick _ are presented in fig .[ fig : steps ] . since no prediction can be made before the observation window is terminated , and since we favor an early estimation , it is important that the observation window should terminate early in the search . in our preliminary testingswe have noticed that the first restart tends to be very noisy , and that results are better if data is collected in the second restart onwards .we have tried several options for the observation window location and size , eventually we opted for a first restart which is very short ( 100 conflicts ) , followed by a second restart ( of size 2000 ) which hosts the observation window .hence the observation window is closed and all data is gathered after 2100 conflicts .once the feature vector is gathered it is used with the classifier to determine the probability of the instance to be satisfiable , . 
for each of the strategies ,both models are queried and a best strategy is picked using .\ ] ] the restart strategy which is predicted to be the first to terminate is picked , and the solver starts following this strategy from the next restart onwards .although restart strategies are usually followed from the beginning of the search , we do not want to lose the learned clauses from the first 2100 conflicts .therefore , we continue the current solving process and keep the already learnt clauses .we denote the restart sequence that takes place from the first restart to termination as .it is important to note that .[ fig : script : bmc : sat ] dataset .different instances are made using different array_size values and a different number of unwinding iterations.,title="fig:",width=196 ] [ fig : script : bmc : unsat ] dataset .different instances are made using different array_size values and a different number of unwinding iterations.,title="fig:",width=196 ]in this study we used tinisat ( version 0.22 ) as the basic solver .tinisat is a lightweight dpll style solver that was first presented in the sat race of 2006 .tinisat is a modern solver that uses clause learning and a unique decision heuristic that generally favours variables from recent assignments ( as in berkmin ) and uses vsids over the literals as a backup .we chose to use tinisat since ( i ) it is tuned in a way that would make comparison of restart strategies more meaningful and ( ii ) it is a compact and straightforward implementation which allows for greater ease of use .tinisat is not equipped with a pre - processor , and we have not used any in our study . by default , tinisat uses a luby restart strategy with a run unit of 512 conflicts .all our experiments were conducted on a cluster of 14 dual intel xeon cpus with em64 t ( 64-bit ) extensions , running at 3.2ghz with 4 gb of ram under debiangnu / linux 4.0 . by implementing a runtime cutoff of 90 minutes per instance, we managed to complete all experiments in approximately 290 cpu days ..summary of features of datasets . for each datasetthe following details are presented : the dataset s classification ( class ) , the number of instances ( ins . ) and its size in mb ( all file are zipped ) . also , we present the time ( in hours ) it took for all 9 restart strategies to solve these datasets .we present the mean time ( mean ) , the standard deviation ( sd ) and the minimal and maximal time .runtime cutoff is 5400 seconds and it is the maximal runtime per instance . [ cols="<,<,<,<,^,^,^,^ " , ] in order for _ lmpick _ to enhance the solver s performance , runtime estimation models need to perform well .nevertheless , it is not crucial that each model s prediction is accurate .the important factor is the relative order of these predictions .we would prefer the chosen strategy ( ) to be amongst the best strategies for that instance .table [ table : bestpick_location ] demonstrates the quality of the chosen strategy .the table presents the percentage of strategies that outperform .this table shows that in all cases is better than picking a random strategy , and that for the _ crypto - unsat _ instances it performs very well .the difference in the performance on the _ crypto - sat _ data set is not easily explained by the data presented so far . 
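Put together, the online selection step can be sketched as follows. This is an illustrative reading of the selection rule rather than the authors' code: classifier and models are assumed to be the objects produced by a training stage such as the sketch above, x is the feature vector gathered at the end of the observation window, and the expected CPU-runtime of each strategy is taken as the combination of its two runtime predictions weighted by the predicted satisfiability probability, which is consistent with picking the strategy predicted to terminate first.

def pick_strategy(classifier, models, x):
    # probability that the instance is satisfiable (labels: 1 = satisfiable)
    p_sat = classifier.predict_proba([x])[0][1]
    expected = {}
    for s, (model_sat, model_unsat) in models.items():
        # expected runtime under strategy s, weighting the two runtime models
        expected[s] = (p_sat * model_sat.predict([x])[0]
                       + (1.0 - p_sat) * model_unsat.predict([x])[0])
    # the strategy with the smallest expected runtime is followed from
    # the next restart onwards
    return min(expected, key=expected.get)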
while both the classifier and runtime prediction models perform better than on the _ velev - sat _ , the overall performance is worse .we conjecture that the cause of this difference is the differing likelihood of an instance being solved by multiple strategies .[ fig : compare - sat - datasets : bars ] [ fig : compare - sat - datasets : plots ] figure [ fig : compare - sat - datasets ] compares the 3 non - random satisfiable data sets . in fig .[ fig : compare - sat - datasets : bars ] bars represent the percent of instances in the data set that were solved within the cutoff time by each number of strategies .there is a clear difference between _ crypto - sat _ and the other 2 data sets . while for both industrial verification based data sets , most instances were solved by the majority of strategies , for_ crypto - sat _ many of the instances are solved by a small set of strategies .the effect of this difference is demonstrated in fig .[ fig : compare - sat - datasets : plots ] .this plot presents the probability of a randomly picked instance being solved within the cutoff time given the quality of selected strategy . from left to right , the picked strategy shift from best to worse .if _ lmpick _ picks one of the the two best strategies , the probability of all three data sets is quite similar , but when the chosen strategy is 3rd or 4th , the probability of solving a _ crypto - sat _ instance drops significantly compared to the other two .many instances in the _ crypto - sat _ data set are only solved within the cutoff by a small subset of strategies , making this data set harder as a sub - optimal selection is likely to lead to a timeout .restart strategies have an important role in the success of dpll style sat solvers .the performance of different strategies varies over different benchmark families .we harness machine learning to enhance the performance of sat solvers .we have presented _ lmpick _ , a technique that uses both satisfiability classification and solver runtime estimation to pick promising restart strategies for instances . we have demonstrated the effectiveness of _ lmpick _ and compared its results with the most commonly used restart strategies .we have established that in many cases _ lmpick _ outperforms any single restart strategy and that it is never worse than a randomly picked strategy .we have also discussed the influence of different components of _ lmpick _ on its performance .while universal restart strategies are more commonly used than dynamic ones , dynamic strategies are getting more attention lately . an interesting continuation tothis work would be to use machine learning to develop a fully dynamic restart strategy .such a strategy could use unsupervised machine learning algorithm to develop a dynamic restart policy for a benchmark family of problems .restart strategies are not the only aspect of sat solving influencing performance on different benchmark families .machine learning can be also used to tune other parameters that govern the behavior of modern dpll based solvers , namely , parameters that are commonly set manually as a result of a trial and error process , such as decision heuristic parameters , clause deletion policy , etc .nicta is funded by the department of broadband , communications and the digial economy , and the arc through backing australia s ability and the ict centre of excellence program .hutter , f. , hamadi , y. , hoos , h. , leyton - brown , k. 
: performance prediction and automated tuning of randomized and parametric algorithms . proc . of the 12th int . conf . on principles and practice of constraint programming ( 2006 ) krishnapuram , b. , figueiredo , m. , carin , l. , hartemink , a. : sparse multinomial logistic regression : fast algorithms and generalization bounds . ieee transactions on pattern analysis and machine intelligence , 27:957 - 968 ( 2005 ) nudelman , e. , leyton - brown , k. , hoos , h. h. , devkar , a. , shoham , y. : understanding random sat : beyond the clauses - to - variables ratio . proc . of the 10th int . conf . on principles and practice of constraint programming ( 2004 )
restart strategies are an important factor in the performance of conflict - driven davis putnam style sat solvers . selecting a good restart strategy for a problem instance can enhance the performance of a solver . inspired by recent success applying machine learning techniques to predict the runtime of sat solvers , we present a method which uses machine learning to boost solver performance through a smart selection of the restart strategy . based on easy to compute features , we train both a satisfiability classifier and runtime models . we use these models to choose between restart strategies . we present experimental results comparing this technique with the most commonly used restart strategies . our results demonstrate that machine learning is effective in improving solver performance .
the advent of smartphones and tablets has made the use of touchscreen keyboards pervasive in modern society .however , the ubiquitous qwerty keyboard was not designed with the needs of a touchscreen keyboard in mind , namely accuracy and speed .the introduction of gesture or stroke - based input methods significantly increased the speed that text could be entered on touchscreens [ ] .however , this method introduces some new problems that can occur when the gesture input patterns for two words are too similar , or sometimes completely ambiguous , leading to input errors .an example gesture input error is illustrated in figure [ fig : example - swipe - collision ] .a recent study showed that gesture input has an error rate that is about 5 - 10% higher compared to touch typing [ ] . with the fast and inherently imprecise nature of gesture inputthe prevalence of errors is unavoidable and the need to correct these errors significantly slows down the rate of text entry .the qwerty keyboard in particular is poorly suited as a medium for swipe input .characteristics such as the `` u '' , `` i '' , and `` o '' keys being adjacent lead to numerous gesture ambiguities and potential input errors .it is clearly not the optimal layout for gesture input .the rise of digital keyboard use , first on stylus based keyboards in the 90 s and then on modern touchscreens a decade later , has led to a lot of research and development in breaking away from qwerty to a layout that s statistically more efficient .this work resulted in various improved layouts for digital stylus keyboards such as the opti keyboard [ ] , the metropolis and hooke keyboards [ ] , and the atomik keyboard [ ] .in addition to statistical efficiency , attempts were also made to improve statistical efficiency while simultaneously making the new layout as easy to use for novices as possible [ ] .more recently , a few keyboards have been introduced that improve text input for certain situations on modern smartphones and tablets : optimizing for the speed of two - thumb text entry on tablets [ ] ; optimizing tap - typing ambiguity ( the swrm keyboard ) and simultaneously optimizing single - finger text entry for speed , reduced tap - typing ambiguity , and familiarity with the qwerty keyboard ( the sath keyboard ) [ ] ; and optimizing the autocorrect feature itself to simultaneously increase the accuracy of word correction and completion [ ] .most of the aforementioned work was done specifically for touch typing since that is the most common form of text input on touchscreen devices .however , the relatively recent rise in popularity of gesture typing has led to some interesting new keyboard layouts that were specifically optimized for improved gesture typing performance .the square osk and hexagon osk keyboards were optimized to maximize gesture input speed using fitt s law [ ] .various optimizations were also done by smith , bi , and zhai while maintaining some familiarity with qwerty by using the same layout geometry and only changing the letter placements .this resulted in four new keyboards : the gk - c keyboard , which is optimized to maximize gesture input clarity ; the gk - s keyboard , which is optimized for gesture input speed ; the gk - d keyboard , which was simultaneously optimized for gesture clarity and speed using pareto optimization ; and the gk - t keyboard , which was simultaneously optimized for gesture clarity , gesture speed , and qwerty similarity [ ] . 
]evaluating and comparing various keyboard layouts is a difficult problem given the complexity and variability associated with text entry .measuring text entry and error rates from user based trials is typically done to evaluate or directly compare the effectiveness of various keyboards and input methods .these studies usually require the test subjects to transcribe a set of predefined phrases using a specified input device .text entry evaluations of mini - qwerty keyboards [ ] , chording keyboards [ ] , handwriting recognition systems [ ] , and various gesture input systems [ , ] have all been done using this approach .the main downside of this approach is the fact that in day - to - day use most users spend very little time transcribing phrases .the majority of text entry is done by composing original phrases .therefore , text entry evaluations from transcription based user studies are not realistic and can introduce unintended biases into the results .vertanen and kristensson showed how these biases can be mitigated by including composition tasks in user trials to complement the standard transcription tasks [ ] . despite the recent work done to improve text entry evaluations with user based studies the metrics used for optimizationare typically based on surrogate models of the actual performance characteristic of interest .for example , the gesture clarity metric used by is correlated with how frequently words are correctly reconstructed but does not measure this directly .the reason that these approximate measures have been used is that accurately evaluating real keyboard reconstruction error rates would require an immense amount of user input data .modern optimization techniques typically evaluate and compare hundreds of thousands of different keyboard layouts , making it completely infeasible to obtain the necessary user data .the methodology that we propose allows for the direct evaluation of gesture reconstruction error rates , or any other desired metric , by simulating realistic user interactions with a keyboard .this is similar to the approach used by fowler et al when they simulated noisy tap typing input to estimate the effect of language model personalization on word error rate [ ] . to demonstrate the effectiveness of our methodologywe will show how it can be used to find a keyboard layout that minimizes gesture input errors .this requires accurately modeling gesture input for a given layout , interpreting the input , and quantifying how frequently typical inputs would be misinterpreted .we employ several different models for gesture input and a dictionary of the most common words in the english language to simulate realistic usage and take into account variations between users .we also attempt to develop a highly accurate algorithm for recognizing gesture inputs that is not limited to a specific keyboard layout .it should be noted that although this paper focuses on the error rate performance , the overall methodology can be used to evaluate and compare keyboard layouts based on any performance measure . 
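A minimal sketch of this simulate-and-count evaluation is given below. It is illustrative only and not part of the authors' software: generate_gesture stands for whichever input model produces a noisy gesture for a word on a given layout, recognize for the recognition algorithm under test, and the error rate is estimated over words drawn according to their frequency of use.

import random

def estimated_error_rate(keyboard, words, frequencies, n_trials,
                         generate_gesture, recognize):
    # draw test words with probability proportional to their frequency
    sampled = random.choices(words, weights=frequencies, k=n_trials)
    errors = 0
    for word in sampled:
        vector = generate_gesture(word, keyboard)   # simulated user input
        if recognize(vector, keyboard, words) != word:
            errors += 1
    # fraction of simulated inputs reconstructed as the wrong word
    return errors / n_trials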
finally , in order to address the problem we designed and built an open source software framework , called dodona , for exploring different input methods .this framework is well suited for examining a wide range of possible keyboard designs and input models .it was built with optimization in mind and has a focus on efficient implementations and extensibility .the library is freely available on github [ ] and was used to perform the analysis and generate all keyboard related graphics presented here .an extremely large dataset of gesture inputs is needed in order to accurately evaluate the error rate of a given keyboard layout .the only way to obtain such a dataset on a reasonable time - scale is to generate gesture input data based on models of user input . to accomplish this we developed several models which can take a word and produce what we refer to as a gesture input vector , a sequential series of points that represent discrete samples along a gesture input pattern .we then used words that were randomly generated based on their frequency of use in the english language to feed into these models and generate realistic sets of input .in general , our input model can produce either a `` random vector '' or a `` perfect vector '' .the former is used for realistic , inexact gesture input while the latter represents the ideal input pattern that is free from variation . to construct random vectorswe begin by drawing control points for each letter in a given word from a two dimensional gaussian distribution that s centered around each corresponding key on the keyboard .the and widths of the gaussian , in addition to the correlation in the offsets between subsequent control points , can be changed as parameters of the input model .we then interpolate between these control points for each letter to produce a continuous gesture input as a function of time .this is then sampled at evenly spaced intervals along the entire interpolation in order to produce an input vector with a set number of points .perfect vectors are constructed in the same way but use the centers of the keys as control points . the idea that their exists a unique perfect vector for each word in the lexicon was first introduced by kristensson and zhai in their seminal paper about the shark text input system for stylus keyboards [ ] . in their workthey refer to perfect vectors as _+ we chose to implement a variety of different interpolations to account for the variations in individual gesture input style .we settled on five different interpolation techniques : a straight - line spatial interpolation , a natural cubic spline , a cubic hermite spline [ ] , a monotonic cubic hermite spline [ ] , and a modified natural cubic spline where the first and last segments are required to be straight lines . using randomly generated control points with various interpolation techniques allows us to capture a large range of input possibilities .this is demonstrated in figure [ fig : five - different - random ] , which shows five different possible swipe patterns corresponding to the word `` cream '' .each pattern was constructed with a different interpolation and random variations of the control points .we used the google web trillion word corpus as compiled by peter norvig to provide the dictionary for our input model [ ] .this dictionary contains approximately 300,000 of the most commonly used words in the english language and their frequencies . 
however , more than half of the entries are misspelled words and abbreviations .our final dictionary only contained words from the google web trillion word corpus that also occurred in the official scrabble dictionary , the most common boys / girls names , or winedts us dictionary [ ] .the result was a dictionary containing 95,881 english words and their frequencies .the individual word frequencies ( magenta ) and the associated cumulative distribution ( green ) are shown in figure [ fig : individual - word - frequencies ] . in order to reduce the computational needsassociated with using a dictionary this large we elected to only use the top words in the optimization procedures described later . even though this is only of the words contained in the dictionary it accounts for of the total word occurrences .furthermore , the average vocabulary size of a college educated adult in the u.s . is words with a range extending from about to words , which is consistent with the size of the dictionary used in this analysis [ ] .word marker , which is where we cut off for inclusion in the analysis dictionary.[fig : individual - word - frequencies ] ]in order to evaluate the gesture clarity of a given keyboard layout , a quantitative metric must be defined . in the recent paper by smith , bi , andzhai they define gesture clarity as the average distance between a vector representation of each word and its nearest neighbor on a given keyboard layout [ ] .the vector used to represent each word corresponds to what they call its _ ideal trace _( identical to _ perfect vectors _ defined in this paper ) .this definition is naturally related to how effective a keyboard will be since keyboards with smaller distances between nearest neighbors will tend to have more frequent reconstruction errors .however , there are a number of important factors that it does not take into account : more than one neighboring word can be a source of recognition errors , there are threshold distances beyond which the impact on gesture clarity is negligible , and there are countless subtle interplays between the specific layout of a keyboard and the way that users input imperfect gestures .therefore , a different procedure for computing something akin to gesture clarity is required if we want to take these effects into account and more accurately reflect the frequency of words being successfully reconstructed .for this reason we decided to use the _ gesture recognition error rate _ for a given keyboard layout and lexicon as our metric . ideally , this would be measured experimentally but this is time consuming , expensive , and essentially impossible when the goal is to evaluate a large number of keyboards as would be done in an optimization . in the absence of real user testing , modeling and simulating how users input gestures allows for an approximate evaluation of the actual error rate .the error rate , , for a given keyboard layout can be approximated by generating random input vectors from random words in the lexicon .the generated input vectors can then be reconstructed using a realistic algorithm and checked against the words that they were meant to represent . 
if input vectors were reconstructed as the wrong word thenthe recognition error rate is simply .this quantity can very roughly be thought of as relating to the gesture clarity , , according to , though this relationship is just a heuristic .as mentioned in the previous paragraph , there are subtle nuances that contribute to gesture input errors that can affect the error rate of a given keyboard layout but not its gesture clarity .since our error rate metric depends on how accurately we can recognize gesture inputs on a given keyboard layout we need a gesture input recognition algorithm that can accurately recognize input vectors on any keyboard layout . using our input modelwe can define gesture input recognition as the ability correctly match a random input vector with the word that generated it .this is a difficult problem to solve because as a user passes their finger / stylus over the keyboard they typically pass over many more characters than those required to spell the desired word .this means that we must be able to pick out a particular word from an ambiguous input .if you look at the example gesture input pattern for the word `` whole '' in figure [ fig : example - swipe - collision ] , it is easy to see how even differentiating between two words can be a challenge .our first approach to this problem was to simply take the euclidean distance between two input vectors .this requires each input vector to be normalized so that they each have the same number of interpolation points , which are equally spaced along the interpolation .implementing the euclidean distance approach is then straightforward and given by the equation , },\label{eq : cartdist}\ ] ] where is the total number of interpolation points in the gesture input vector and is the -component of the interpolation point in the first of the two input vectors being compared .this is very similar to the _ proportional shape matching _channel used in the shark writing system [ ] and in the gesture clarity metric used by .although this method can correctly identify when two gesture inputs match exactly , it could also return a large distance between two input vectors that are qualitatively very similar .for example , a user may start near the bottom of the key for the first letter of the word and end up shifting the entire gesture input pattern below the centers of the subsequent keys .this input pattern could pass over all of the correct keys but still result in a large euclidean distance when compared to the perfect vector for the word .the shortcomings of this approach made it clear that we were not utilizing all of the useful information contained in the input vectors . if a poor distance measure were to cause misidentifications that would not happen in practice then this could introduce significant biases during the optimization procedure , resulting in a keyboard that is not optimal for real use .kristensson and zhai accounted for this in the shark writing system by incorporating language and location information in addition to proportional shape matching in their recognition algorithm . similarly , in order to reduce the impact of these systematic effects , we needed to identify additional features that would improve our gesture input recognition .our first step was to uncouple the and coordinates and treat them individually . 
given the anisotropic nature of most keyboards , the relative importance of biases between the two spatial dimensions is not clear a priori . therefore , we decided that the first two features should be the euclidean distance ( eq . [ eq : cartdist ] ) between two input vectors for the $x$ and $y$ components individually . [ fig : a - closer - look : the $x$ and $y$ components of a random and a perfect input vector ; their first and second derivatives are also shown in each plot . ] in order to address the issue with offset input vectors , translational symmetry needed to be taken into account . to do this we decided to look at the time derivatives of the input vectors with respect to their $x$ and $y$ coordinates . since the input vectors are sets of sequential , discrete points we can easily estimate the derivative at each point . we can then construct a distance measure by taking the euclidean distance between the time derivatives of two swipe patterns at each point . the equation for the derivative distance in the $x$-dimension is given by : $$d_{x'}=\sqrt{\sum_{i=1}^{n}\left[x'_{1}(i)-x'_{2}(i)\right]^{2}},\label{eq : ddist}$$ where $x'_{1}(i)$ and $x'_{2}(i)$ are the $x$-components of the first derivatives of the two input vectors being compared . we assume a constant input velocity for the models considered here , so we have implicitly rescaled the time coordinate to simplify the equations . we also wanted a distance measure that would be more sensitive to the positions of sharp turns in a gesture input pattern . this led us to include the distance between the second derivatives of the input vectors in a similar fashion to the first derivatives ( eq .
[ eq : ddist ] ) .the quantity is rotationally invariant as well so we can see how these might help allow for more leniency in situations where there might be some variation in the orientation of a touchscreen device relative to a users hand .the utility of these features in regards to correctly identifying gesture input is apparent when you take a closer look at the differences between a random input vector and a perfect vector for a given word .the and values as a function of time for a random input vector and a perfect vector for the word `` cream '' , as well as their first and second derivatives , are shown in figure [ fig : a - closer - look ] .this example illustrates how the first and second derivatives can be useful for finding the correct match even when the swipe pattern is shifted significantly from the center of the appropriate keys .two additional distinguishing features of each gesture input pattern are the start and end positions .these points carry information about the overall position of an input vector while being less sensitive to the shape than and .the addition of this information to the euclidean distance was shown by to reduce the number of perfect vector ambiguities by 63% in their 20,000 word lexicon .consequently , the distance between the and components of the first and last points of the input vectors were included in the feature set .this gives us four additional features to add to the six previously discussed .finally , we realized that the length of each gesture input pattern can be a very distinguishing characteristic , leading us to include the difference in the length of each gesture input pattern as the final addition to our feature set , giving us a total of eleven features which are related to the difference between two gesture input patterns . however , in order avoid repeatedly calculating square roots we decided to put the squared value of each distance in the feature set .to summarize , the set contains : the squared euclidean distance between the and components , the squared distance between the and components of the first derivatives , the squared distance between the and components of the second derivatives , the squared distance between the and components of the first point , the squared distance between the and components of the last point , and the difference in the squared length of the two gesture input patterns being compared . the last step in creating our desired distance measure was figuring out a way to combine the eleven elements in our feature set to a single output representing the `` distance '' between two gesture input vectors . despite the intuitive basis of our feature set, the relationship of each element to the overall performance of a classifier is highly non - trivial given the complexity of gesture input vectors .fortunately , this is a problem that is well suited for a neural network based solution . 
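a compact sketch of this eleven - element feature set is given below , assuming both gestures have already been resampled to the same number of points and that numpy's np.gradient is an acceptable stand - in for the finite - difference derivatives ; the function name feature_vector is illustrative rather than part of the dodona library .

```python
import numpy as np

def feature_vector(a, b):
    """eleven squared-distance features between two (n, 2) gesture vectors,
    assuming both were resampled to the same number of points and a constant
    input velocity (time implicitly rescaled)."""
    def sq_dist(u, v):
        return float(np.sum((u - v) ** 2))

    da, db = np.gradient(a, axis=0), np.gradient(b, axis=0)       # 1st derivatives
    dda, ddb = np.gradient(da, axis=0), np.gradient(db, axis=0)   # 2nd derivatives

    feats = [
        sq_dist(a[:, 0], b[:, 0]),     sq_dist(a[:, 1], b[:, 1]),      # x, y
        sq_dist(da[:, 0], db[:, 0]),   sq_dist(da[:, 1], db[:, 1]),    # x', y'
        sq_dist(dda[:, 0], ddb[:, 0]), sq_dist(dda[:, 1], ddb[:, 1]),  # x'', y''
        (a[0, 0] - b[0, 0]) ** 2,   (a[0, 1] - b[0, 1]) ** 2,          # start point
        (a[-1, 0] - b[-1, 0]) ** 2, (a[-1, 1] - b[-1, 1]) ** 2,        # end point
    ]
    len_a = np.sum(np.linalg.norm(np.diff(a, axis=0), axis=1))
    len_b = np.sum(np.linalg.norm(np.diff(b, axis=0), axis=1))
    feats.append(len_a ** 2 - len_b ** 2)      # difference of squared path lengths
    return np.array(feats)
```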
to build a classifier using our feature set, we created a deep , fully - connected artificial neural network with eleven input nodes ; one for each of the variables in the previously discussed feature set .the network architecture consists of three hidden layers with 11 nodes each and a fourth hidden layer with only two nodes .the activation function for each hidden node and the output node is an elliot function , where is the steepness and is set to 0.5 for each layer .this is a computationally efficient sigmoid function that asymptotes slower than a standard sigmoid .this was necessary since we employed the rprop algorithm [ ] to train the network , which is susceptible to the flat - spot problem when using steep sigmoid activation functions .the artificial neural network used in the analysis was implemented using the fast artificial neural network software library ( fann ) [ ] and is available in the repository as a fann binary file and as a text file listing the weights of every connection in the deep neural network [ ] .-axis ) and the neural network distance measure ( -axis ) .the dashed line is just and is only meant to help the comparison . the average error rate for each method is represented by the bold green dot.[fig : the - performance - of - nn ] ] the neural network was trained on a dataset of pairs of random input vectors and perfect vectors .the random input vectors were constructed with each of the different interpolation models used ( ,000 for each model ) but the perfect vectors were restricted to be linear interpolations .this was done to make the algorithm as realistic as possible since a practical gesture recognition algorithm would not be able to make any assumptions about a user s input style .the training set was divided up so that of the input pairs corresponded to the same word , corresponded to completely random words , and the remaining corresponded to different but similar words . the exact definition of `` similar words '' is given in the next section. the performance of the neural network recognition algorithm can be seen in figure [ fig : the - performance - of - nn ] .this plot shows a comparison between the neural network method and the euclidean distance method . in this study, 50 random keyboards layouts were created and for each layout 5,000 random gesture input vectors were generated .the input vectors were then matched to a word using the two methods .the fraction of attempts that each method got wrong is shown as the error rate ( as discussed in much more detail in the next section ) .it is easy to see that for each keyboard layout the neural network recognition algorithm outperformed the euclidean distance algorithm .the average error rate using the neural network algorithm on random keyboard layouts is compared to for the euclidean distance measure , which is a reduction of in the gesture input recognition error rate . 
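the network itself is small enough that its forward pass fits in a few lines . the sketch below is a library - agnostic numpy rendition of the architecture described above ( 11 - 11 - 11 - 11 - 2 - 1 with elliott activations of steepness 0.5 ) rather than the actual fann model ; the elliott formula used here is one common parameterization , and the random weights are placeholders for the trained weights distributed with the repository .

```python
import numpy as np

def elliott(x, steepness=0.5):
    """elliott sigmoid (one common parameterization): bounded like a logistic
    but approaching its asymptotes more slowly."""
    z = steepness * x
    return 0.5 * z / (1.0 + np.abs(z)) + 0.5

def nn_distance(features, weights, biases):
    """forward pass through the fully connected network; the scalar output is
    interpreted as the learned 'distance' between two gesture vectors."""
    h = np.asarray(features, dtype=float)
    for w, b in zip(weights, biases):
        h = elliott(h @ w + b)
    return float(h[0])

# layer sizes from the text: 11 inputs, three 11-node hidden layers,
# a 2-node hidden layer and a single output node
sizes = [11, 11, 11, 11, 2, 1]
rng = np.random.default_rng(1)
weights = [rng.normal(scale=0.5, size=(m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

score = nn_distance(rng.normal(size=11), weights, biases)  # placeholder weights
```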
with a more accurate gesture recognition algorithm we can confidently evaluate a given keyboard's gesture recognition error rate . the general approach is to use a monte carlo based algorithm to determine the error rate . this technique can be described as follows : a random word is chosen from the lexicon with a probability proportional to its frequency of occurrence . a random gesture input vector is then generated for this word based on a given input model . the gesture recognition algorithm then determines the word that is the best match for that specific random input vector . if the selected word matches the original word then the match is considered a success . this process is repeated a large number of times , and the error rate is given by the ratio of unsuccessful matches to the total number of attempts . due to the statistical nature of this technique there will be an uncertainty in each measurement . as with most efficiency calculations , the uncertainty is obtained from the variance of a binomial distribution scaled by the number of attempts . although effective , this method is very computationally intensive . a reasonable optimization procedure requires a large number of matching attempts in each efficiency calculation to reduce the effects of statistical fluctuations ( specifically , the number used here produces error rate calculations with statistical uncertainties of about 0.7% ) . each matching attempt requires a comparison for every word in the lexicon , so every efficiency determination requires a very large number of distance measure calculations . since the goal is to use the error rate calculation in an optimization procedure , increasing the total time by several orders of magnitude , another approach was needed . consider the case where a user is trying to input the word `` pot '' on a standard qwerty keyboard . clearly words such as `` cash '' and `` rules '' are not going to be calculated as the best match by the distance measure because they have dramatically different gesture input patterns . therefore , there is no need to spend time comparing the perfect vectors of these words as part of the error rate calculation . the error rate calculation can be made much faster without sacrificing much accuracy by comparing only to the perfect vectors of more plausible candidate words such as `` pit '' , `` put '' , `` lit '' , etc . the difficulty lies only in determining which words are plausible candidates .
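the basic monte carlo estimate described above , without any pruning , can be sketched as follows ; gen_input and reconstruct stand in for the gesture input model and the recognition algorithm and are assumptions of this sketch , not dodona functions .

```python
import numpy as np

def error_rate(keyboard, lexicon, freqs, n_trials, gen_input, reconstruct, rng):
    """monte carlo estimate of the gesture recognition error rate:
    the fraction of randomly generated inputs that are reconstructed as the
    wrong word, together with its binomial statistical uncertainty."""
    p = np.asarray(freqs, dtype=float)
    p /= p.sum()
    errors = 0
    for _ in range(n_trials):
        word = str(rng.choice(lexicon, p=p))       # frequency-weighted word draw
        trace = gen_input(word, keyboard, rng)     # random gesture for that word
        if reconstruct(trace, keyboard) != word:   # failed reconstruction
            errors += 1
    e = errors / n_trials
    sigma = np.sqrt(e * (1.0 - e) / n_trials)      # statistical uncertainty
    return e, sigma
```

the pruning strategy that makes this affordable is developed next .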
] to determine what words should be included in the candidate list we created what we call the `` string form '' for each gesture input vector .the string form is just the sequential list of letters that the gesture input pattern traversed .for example , if a user inputs the word `` pot '' the corresponding string form might be `` poiuyt '' .if we were implementing an algorithm for determining candidates given a fixed keyboard layout we could first generate a large number of random input vectors for every word in the dictionary .we could then build a lookup table where each observed string form corresponds to a list of all words that had been observed to ever produce that string form .that approach is unfortunately not possible when optimizing the keyboard layout because the lookup table would need to be rebuilt for every new keyboard .instead , we generate a large number of random vectors for the word that would be the correct match and find the string form for each of those .we then allow any words that are contained within any of these string forms to be potential candidates .this would not be possible if we did nt know the intended word but it results in a superset of the candidates we would find using the precomputed lookup table given that a sufficient number of random vectors are used .the words that are consistent with each string form are determined by recursively searching a radix tree representation of all of the words in the dictionary as shown in figure [ fig : a - radix - tree ] . in this examplewe would start by looking at the radix tree that begins with the letter `` p '' .then we would look to see if this tree has a child node corresponding to the letter `` p '' ; repeated letters are not always distinguishable in a swipe pattern so we have to always explicitly check for the possibility . since there is no word that begins with `` pp '' we then move on to the next letter in the string form , `` o '' , and look for a child node corresponding to this letter .the search is then done recursively for the sub - tree beginning on the child node corresponding to `` o '' with the string form `` oiuyt '' .the search continues in this recursive manner , traversing all of the branches of the subtree that are consistent with the string form and returning any leaf that is found as a candidate word .this will effectively find all candidate words beginning with `` po '' that could potentially be contained in the string form .once this subtree is exhausted we move on to the next letter of the original string form , `` i '' , and recursively search the subtree corresponding to this letter with the string form `` iuyt '' for candidate words beginning with `` pi '' .this process continues until the final subtree corresponding to the letter `` t '' is searched , thus finding all candidate words contained in the string form . ]this approach , which we call _ radix tree pruning _ ( abbreviated as `` radixmc '' when combined with the standard monte carlo algorithm ) , reduces the number of comparisons to make for each input vector and subsequently speeds up the calculation significantly .the time required scales roughly linearly with the number of random vectors which are used to find candidates so some balance between efficiency and accuracy is required . in order to determine a suitable threshold we calculated the error for a given random keyboard as a function of the number of random vectors used in the radix tree pruning as shown in figure [ fig : radix_vs_nvectors ] . 
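the recursive search described above can be sketched with a plain character trie ( a compressed radix tree behaves the same way , it simply merges single - child chains ) . in this sketch the first letter of the string form is required to match , repeated letters are handled by allowing the search to stay on the current position of the form , and whether the final key must also match is left out as a design choice not specified here .

```python
class TrieNode:
    __slots__ = ("children", "word")
    def __init__(self):
        self.children = {}   # letter -> TrieNode
        self.word = None     # set when a dictionary word ends at this node

def build_trie(words):
    root = TrieNode()
    for w in words:
        node = root
        for c in w:
            node = node.children.setdefault(c, TrieNode())
        node.word = w
    return root

def _search(node, form, out):
    """collect words reachable from `node` whose remaining letters appear,
    in order, somewhere in `form` (the keys traversed by the gesture)."""
    if node.word is not None:
        out.add(node.word)
    for i, c in enumerate(form):
        child = node.children.get(c)
        if child is not None:
            # recurse with form[i:] (not form[i+1:]) so that a repeated
            # letter, which a swipe cannot distinguish, can still be matched
            _search(child, form[i:], out)

def candidate_words(trie, string_form):
    out = set()
    first = trie.children.get(string_form[0])   # first key must match
    if first is not None:
        _search(first, string_form, out)
    return out

trie = build_trie(["pot", "pit", "put", "pout", "lit", "cash"])
print(candidate_words(trie, "poiuyt"))   # {'pot', 'pit', 'put', 'pout'}
```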
the flattening out of the error rateis expected since the words that are most similar are typically found in the earlier iterations .we chose to use 20 random vectors in the pruning step since they allow for the vast majority of nearest neighbors to be found and can be generated in a reasonable amount of time .we conducted a small study where we calculated the error rate while varying the number of random vectors from one to 25 for the qwerty keyboard and four random keyboard layouts .the results showed that the relative error rate of the random keyboards to qwerty remained roughly constant across the entire range of the number of random vectors .the relationship between the error rate and the number of input vectors is largely independent of the keyboard layout , which means the relative error rates remain approximately unchanged .when performing an optimization , where only relative error rates are important , this effect becomes largely negligible .when used in the error rate calculation this algorithm outperforms the standard monte carlo approach in terms of computational efficiency by two orders of magnitude when the full dictionary is used , as seen in figure [ fig : the - cpu - time ] .it is also obvious that the radix tree based monte carlo algorithm scales much more favorably with the size of the dictionary .a similar problem was faced by kristensson and zhai during the development of shark .however , they opted for a different filtering technique , which they called _template pruning_. this required that the start and end positions of the input vector and a perfect vector be less than a given threshold in order to be considered for a potential match [ ] . ]both of these approaches have the same goal : to drastically reduce the number of comparisons necessary to determine the intended input word .the main difference lies in the trade - off between the computational complexity of the filtering and the number of comparisons made during the matching stage .while computationally very efficient , template pruning has the consequence of passing many words to the comparison step that would be filtered using the radix tree approach . using kristensson s and zhai s method with a pruning threshold large enough to allow for consistent word reconstruction, we were able to prune the list of comparison words by 95.08% on average .the radix tree pruning technique with 20 random vectors was able to reduce the necessary list of comparison words by 99.92% . by removing so many additional candidate words that have nearby starting and ending keys but dissimilar input vectors , we were able to significantly reduce the time required for the combined filtering and matching stages .in order to demonstrate the utility of our methodology and the dodona open source framework we optimized permutations of a standard keyboard layout to the minimize gesture recognition error rate . the specific optimization algorithm we employed begins with generating a random keyboard layout and calculating its error rate . 
at each successive iterationa new keyboard is created by swapping pairs of keys in the previous keyboard .the error rate of the new keyboard is calculated for each interpolation method and averaged .the average error rate is then compared to the previous keyboard .if the error rate has decreased then the new keyboard is kept and the previous one is discarded , otherwise the new keyboard gets discarded .this process is repeated times where the number of key swaps , , is repeatedly decreased by one at set intervals .this results in successive keyboards differing by only one swap at the end of the optimization procedure . for our final analysiswe ran 256 separate optimizations , each running through 200 iterations , and starting with .the average error rate for all of the input models at each step in the optimization procedure is shown in figure [ fig : the - average - error ] . the minimum and maximum error rate at each step and the error rate of the qwerty keyboardare also shown in the figure .interestingly , we see that the qwerty keyboard error rate of % is less efficient for gesture input than the average randomly generated keyboard layout ( error rate : % ) .however , the optimization procedure quickly finds keyboard layouts with even lower error rates .after two hundred iterations the average error rate found in each trial is approximately 8.1% .this represents an improvement in the error rate of 47% over the qwerty keyboard .] the optimal keyboard for gesture input clarity found in the analysis is shown in figure [ fig : the - most - optimal ] and we will refer to it by its first four letters : _ dghp_. the keys are colored to represent their relative frequency in the lexicon .this keyboard is found to have an error rate of , which is a improvement compared to the qwerty keyboard .this is higher than the minimum shown in figure [ fig : the - average - error ] because of the limited resolution of the error rate measurement used in the optimization procedure. the quoted error rate of the optimal keyboard was determined by a final high precision measurement .it is important to note that the values of the error rates computed by our method depend heavily on the parameters of the input model .thus , the error rates themselves hold little general meaning . instead , it is more meaningful to speak of the relative change in error rate compared to the standard qwerty keyboard .we have found the ratios of the error rates produced from different keyboards to be largely independent of the input model parameters .this permits us to state , in general , whether the keyboard layout resulting from the above procedure is more efficient than the qwerty keyboard for swipe input and quantify the relative improvement . ]one major feature of the dghp keyboard is the appearance of the most frequently used keys along the edges .the qwerty keyboard has many of the most frequently used keys near its center , which results in large number of gesture input errors . in this gestureinput optimized keyboard the keys at the center of the layout are less frequently used .this makes sense because having the most frequently used keys at the edges will decrease the probability that the user passes over them without intending to . by removing the most common letters from appearing arbitrarily in gesture input patterns we naturally reduce the number of errorshowever , there are more subtle characteristics of the keyboard that arise due to the way words are structured in the english language . 
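the swap - based search loop just described can be sketched as follows . the starting number of simultaneous swaps , the schedule on which it decreases and the iteration count were partly lost in extraction , so the numeric defaults below are illustrative ; the accept - only - if - the - error - rate - decreases rule follows the description above .

```python
import random

def optimize_layout(initial_keys, error_rate_fn, n_iter=200, start_swaps=20):
    """greedy swap search: propose a new layout by swapping key pairs, keep it
    only if the (averaged) error rate decreases, and reduce the number of
    simultaneous swaps at set intervals until single swaps remain."""
    best = list(initial_keys)
    best_err = error_rate_fn(best)
    interval = max(1, n_iter // start_swaps)        # how often to drop one swap
    for it in range(n_iter):
        swaps = max(1, start_swaps - it // interval)
        cand = best[:]
        for _ in range(swaps):                      # swap `swaps` random key pairs
            i, j = random.sample(range(len(cand)), 2)
            cand[i], cand[j] = cand[j], cand[i]
        err = error_rate_fn(cand)                   # averaged over input models
        if err < best_err:                          # keep only improvements
            best, best_err = cand, err
    return best, best_err
```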
for example , the letters `` i '' , `` o '' , and `` u '' are no longer clustered together , which eliminates the ambiguity between words like `` pout '' , `` pit '' , and `` put '' .in addition , another notable feature is the separation of the letters `` s '' , `` f '' , and `` t '' .this helps to distinguish between the words `` is '' , `` if '' , and `` it '' , which are very common in the english language .it s interesting to try and understand some of the reasons why the keyboard has such a low error rate but , in reality , it is a finely tuned balance that depends on the structure and frequency of every word used in the analysis . out of curiositywe also decided to see what would happen if we optimized a keyboard to maximize swipe errors .we ran five similar optimization procedures through 100 iterations to find the least optimal keyboard layout for swipe input .the worst keyboard we could find is shown in figure [ fig : the - least - optimal ] and has an error rate of 27.2% , which is about 78% worse than the qwerty keyboard . in this keyboardthe most frequently used keys are all clustered together , making swipe patterns more ambiguous and resulting in more swipe errors . ] besides running optimization procedures , evaluating and comparing existing keyboard layouts is another important reason to have a robust framework for evaluating keyboards .we evaluated the gesture recognition error rate for a number of existing virtual keyboard layouts , similarly to what was done by when he compared the tapping speed for almost the same set of keyboard layouts .in addition to the ones presented in his paper we also included the four keyboards from ( gk - c , gk - s , gk - d , gk - t ) and the optimal keyboard found here that was presented in the previous section , dghp .the results are displayed in table [ table : keyboard_evaluations ] .the error rate and statistical uncertainty associated with each calculation are displayed for each keyboard .the error rates were calculated using 20 , 000 monte carlo iterations , 20 random vectors in the radix tree pruning step , and random input vectors containing 50 spatially ( linearly ) interpolated points . as has been noted previously ,the absolute value of the error rates is not meaningful unless training data is used to constrain the model .when only the permutations of keys on a single keyboard geometry are considered , however , then the relative error rates of the different layouts can be directly compared because the degree of randomness in the input model scales all of the error rates in a very similar way .the comparison of error rates becomes far more subtle when considering entirely different keyboard geometries , particularly if they included keys of different shapes ( e.g. squares and hexagons ) .when working with multiple keyboard geometries , there are several different possible approaches for adjusting the input model uncertainty and scaling the relative sizes of the keyboards . in the results shown here ,each keyboard was scaled such that the total area of its keys would be the same for all keyboards .we then set the input model uncertainties , and , to be the same for all keyboards with values equal to , where is the area of a single key .it is quite possible , if not likely , that the size and shape of different keys may have an affect on the magnitude of and in real usage . 
as an alternative approach, we also considered the case where and scale directly with the horizontal and vertical extent of each key , respectively .this resulted in a 10 - 25% increase in the error rates of the hexagonal keyboards relative to those with square keys and so this should be taken into account when considering the results in table [ table : keyboard_evaluations ] .l*2c _ keyboard layout _ & _ gesture recognition error rate _ & _ error rate uncertainty _ + gk - c & & + swrm & & + gk - d & & + gk - t & & + hexagon qwerty & & + atomik & & + metropolis ii & & + square alphabetic & & + gag i & & + wide alphabetic & & + metropolis i & & + sath - trapezoid & & + sath - rectangle & & + chubon & & + getschow et al . & & + opti i & & + square osk & & + square atomik & & + hexagon osk & & + qwerty & & + fitaly & & + quasi - qwerty & & + hooke & & + gk - s & & + gag ii & & + lewis et al . & & + opti ii & & + dvorak & & + dghp & & + we can see that the results vary widely , which is expected given the drastic differences between some of the keyboard layouts .the first thing to note is that the top two performing keyboards - dghp and gk - c - were the only two keyboards that were optimized for gesture input clarity or gesture recognition error rate and show a reduction in error rate compared to qwerty by and , respectively .the fact that dghp outperforms gk - c is also expected since dghp was optimized using the exact metric used in this evaluation .the swrm keyboard came in third even though this keyboard was optimized for input clarity with respect to tap input , not gesture input .the gk - d and gk - t keyboards were optimized for gesture clarity but they were simultaneously optimized for one and two other performance metrics , respectively , so it is not surprising that they came in fourth and fifth .although the difference between the two is not statistically significant , the order they appear is exactly what we would expect .the worst keyboard by far is dvorak which has an error rate that is higher than qwerty .we can also see that the hexagon keyboards tend to perform better than the square keyboards , although this comparison is not necessarily accurate given the subtleties mentioned in the previous paragraph .it also interesting to see that just using an alphabetic keyboard will give you a much lower error rate than the majority of the keyboards listed in table [ table : keyboard_evaluations ] .the results presented here show a clear demonstration of our proposed methodologies effectiveness with respect to calculating gesture recognition error rates and performing keyboard optimizations .in contrast to recent work , such as the gesture clarity optimization performed by smith , bi , and zhai , this new approach allows for the direct estimation of the error rate [ ] . this distinction may appear subtle , but it allows for ambiguities other than nearest neighbors to be taken into account .these secondary ambiguities have a sizable impact on the real - world performance of any particular keyboard .additionally , threshold effects are more realistically taken into account with the reconstruction error rate estimation .words that have distant nearest neighbors will all have reconstruction rates that are effectively . increasing the distance between these already distant words will increase the gesture clarity metric while having no effect on the actual reconstruction error rate . 
in an optimization setting , these ineffectual changes might be preferred at the expense of making words with closer nearest neighbors more ambiguous , resulting in keyboards that are less effective overall .although these changes are significant , we consider the primary advancement of this methodology to be its ability to extend even further in quantifying metrics that accurately reflect the performance of different keyboards . in this section, we will discuss some of the directions where this work and the dodona framework can be built upon in future research .one of the key advantages of the monte carlo evaluation approach is that any desired metric can be computed . in the smith ,bi , and zhai paper , linear combinations of gesture clarity and gesture speed are optimized and then pareto optimal configurations are found .this is effective for finding keyboards that have relatively high gesture clarity and gesture speed metrics but requires making assumptions about the relative importance of each metric when choosing a keyboard that is optimal overall . in past work, an optimal keyboard has typically been chosen from along the pareto front under the assumption that each metric has equal importance [ , , ] . while this is a reasonable approach ,there are certain cases where two metrics can be combined to more directly measure the overall performance .this is the case with gesture clarity / error rate and gesture speed since the overall goal of minimizing the error rate is to speed up gesture input by minimizing the time required to correct mistakes . with the monte carlo approach it is possible to do this by estimating the time it takes to input a gesture and then adding the correction time when a word is incorrectly reconstructed .for example , the gesture input time could be estimated with the model used in and the correction time could be estimated by adapting the approach used in for gesture input .this would allow for the direct estimation of how many words per minute can be entered on any given keyboard layout , leading to a metric that intuitively relates much more closely to how effective that layout would be in real - word usage .the ability to reduce gesture speed and error rate into a single metric has the additional benefit of reducing the problem to one of scalar , rather than vector , optimization . roughly speaking ,this allows you to focus the search in the direction that will directly maximize the underlying desired metric instead of optimizing in many different directions to extend the pareto front ( as was done in ) .this significantly reduces the computational requirements of the optimization procedure .extensions like this are another possible area for future work and can be made very easily with the dodona framework while adding virtually nothing to the overall computation time .another major strength of this approach is that it can be applied to an extremely wide variety of input methods with only the input models , and possibly the keyboards , needing to be modified .detailed discussion is beyond the scope of this paper , but the dodona framework includes a model for touch typing which can be used to evaluate autocorrect on traditional keyboards or disambiguation in text entry on a t9 touchpad ( on a touchpad keyboard which is also included in the framework ) . 
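as a toy illustration of how the two metrics could collapse into a single number , the expected time per word can be written as the gesture time plus the correction time weighted by the error rate ; the numbers below are made up for the example and are not measurements from this paper .

```python
def words_per_minute(gesture_time_s, error_rate, correction_time_s):
    """expected entry rate when every misrecognized word costs extra
    correction time on top of the gesture itself."""
    expected_time = gesture_time_s + error_rate * correction_time_s
    return 60.0 / expected_time

# e.g. a 0.9 s gesture, an 8% error rate and 4 s to notice and fix a mistake
wpm = words_per_minute(0.9, 0.08, 4.0)   # about 49 words per minute
```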
new keyboard designs that use any sort of touch or gesture typing , with or without disambiguation , can be evaluated using the same overall methodology and framework .this allows for extremely quick prototyping and testing of novel text input methods .reconstruction algorithms can also be easily modified to reflect more state of the art techniques . instead of considering individual word frequencies , word pair frequenciescould be used to generate pairs of words and the previous word could modify the prior probabilities of the reconstructed words .this would more closely mimic the behavior of commercially available gesture typing software and allow for more realistic estimation of metrics .the algorithms could also be extended to include forgiveness of certain types of mid - gesture mistakes or allowing for common spelling mistakes .this degree of flexibility is something that can only be achieved in a framework where actual reconstruction is taking place instead of relying on heuristic metrics .one of the disadvantages of the monte carlo methodology is that it depends on the accuracy of the model .a key improvement that could be made in future analyses is to utilize user data to tune and validate the input models .the input models could also be extended to include more subtle aspects of user behavior such as mid - gesture mistakes and misspellings . with well trained models , user behaviorcould be incorporated that would otherwise be impossible to take into account .this would additionally make the exact values of the metrics meaningful while the untrained models used in this paper can only be used for relative keyboard comparisons .one way this could be done is to incorporate touch accuracy into the gesture input models .the model presented in this paper assumes that the accuracy of each key is identical .however , henze , rukzio , and boll used a crowd sourced dataset to show that tap accuracy is systematically skewed on mobile devices [ ] .furthermore , they showed that the frequency of touch errors for a specific target is correlated with the absolute position of the target on the touchscreen .this could lead one to believe that certain keys are more susceptible to touch , and possibly gesture , inaccuracies than others .therefore , an obvious improvement to our user input model would be to systematically incorporate these results to adjust the accuracy at each location for a given keyboard geometry .in addition , language model personalization could be incorporated and compared to the overall effect of keyboard layout on the error rate .fowler et al showed that including language model personalization can dramatically reduce the word error rate for touch typing [ ] .we have described a new way to model gesture input and a procedure to evaluate the error rate of any touchscreen keyboard layout . 
using this methodwe have evaluated and compared the error rates for numerous existing virtual keyboards and , specifically , shown that the qwerty keyboard is far from optimal for using a gesture input mechanism .we have also described an optimization procedure for finding a keyboard that is better suited for gesture input .we presented the most optimal keyboard that was found in our analysis and showed that it decreased the error rate for gesture input by when compared to the qwerty keyboard .we thank duncan temple lang and paul baines of the statistics department at uc davis for allowing us access to the computing cluster which provided the necessary computational power to run the final optimizations .in addition , we thank daniel cebra and manuel calderon de la barca sanchez of the physics department at uc davis for allowing us to make use of the nuclear physics group computing resources while designing and testing the analysis software .dunlop:2012:mpo:2207676.2208659 mark dunlop and john levine .multidimensional pareto optimization of touchscreen keyboards for speed , familiarity and improved spell checking . in _ proceedings of the sigchi conference on human factors in computing systems _ _ ( chi 12)_. acm , new york , ny , usa , 26692678 . 978 - 1 - 4503 - 1015 - 4 http://dx.doi.org/10.1145/2207676.2208659 shark2 per - ola kristensson and shumin zhai .shark2 : a large vocabulary shorthand writing system for pen - based computers . in _ proceedings of the 17th annual acm symposium on user interface software and technology _ _ ( uist 04)_. acm , new york , ny , usa , 4352 . 1 - 58113 - 957 - 8 + http://dx.doi.org/10.1145/1029632.1029640 fastswiping jochen rick .performance optimizations of virtual keyboards for stroke - based text entry on a touch - based tabletop . in _ proceedings of the 23nd annual acm symposium on user interface software and technology _ _ ( uist 10)_. acm , new york , ny , usa , 7786 .978 - 1 - 4503 - 0271 - 5 http://dx.doi.org/10.1145/1866029.1866043 googlekeyboard brian a. smith , xiaojun bi , and shumin zhai .2015 . optimizing touchscreen keyboards for gesture typing . in _ proceedings of the 33rd annual acm conference on human factors in computing systems_ _ ( chi 15)_. acm , new york , ny , usa , 33653374 .978 - 1 - 4503 - 3145 - 6 http://dx.doi.org/10.1145/2702123.2702357 shark shumin zhai and per - ola kristensson .shorthand writing on stylus keyboard . in _ proceedings of the sigchi conference on human factors in computing systems_ _ ( chi 03)_. acm , new york , ny , usa , 97104 .1 - 58113 - 630 - 7 http://dx.doi.org/10.1145/642611.642630 shapewriter shumin zhai , per ola kristensson , pengjun gong , michael greiner , shilei allen peng , liang mico lui , and anthony dunnigan .shapewriter on the iphone - from the laboratory to the real world _( chi 2009)_. acm .http://dx.doi.org/10.1145/1520340.1520380 splines richard h. bartels , john c. beatty , and brian a. barsky .hermite and cubic spline interpolation .ch . 3 in _ an introduction to splines for use in computer graphics and geometric modeling . _( 1998 ) monotonicsplines randall l. dougherty , alan edelman , and james m. hyman .positivity- , monotonicity- , or convexity - preserving cubic and quintic hermite interpolation ._ mathematics of computation_. * 52 * ( 186 ) : 471 - 494 .http://www.ams.org/journals/mcom/1989-52-186/s0025-5718-1989-0962209-1/home.html touchaccuracy niels henze , enrico rukzio , and| susanne boll .100,000,000 taps : analysis and improvement of touch performance in the large . 
_mobilehci 11 - proceedings of the 13th international conference on human computer interaction with mobile devices and services_. pages 133 - 142 .http://doi.acm.org/10.1145/2037373.2037395 composition keith vertanen and per ola kristensson .complementing text entry evaluations with a composition task ._ acm transactions on computer - human interaction_. volume 21 issue 2 , february 2014 .miniqwerty edward clarkson , james clawson , kent lyons , and thad starner .an empirical study of typing rates on mini - qwerty keyboards . in _chi05 : extended abstracts on human factors in computing systems_. http://doi.acm.org/10.1145/1056808.1056898 twiddler kent lyons , thad starner , and brian gane . 2006 .experimental evaluations of the twiddler one - handed chording mobile keyboard . _human - computer interaction_. volume 21 , pp .343 - 392 .http://dx.doi.org/10.1207/s15327051hci2104_1 graffiti steven j. castellucci and scott i. mackenzie .graffiti vs. unistrokes : an empirical comparison . in _chi08 : proceedings of the sigchi conference on human factors in computing systems ._ http://doi.acm.org/10.1145/1357054.1357106joystick jacob o. wobbrock , duen horng chau , and brad a. myers . 2007 .an alternative to push , press , and tap - tap - tap : gesturing on an isometric joystick for mobile phone text entry . in _chi07 : proceedings of the sigchi conference on human factors in computing sytems_. http://doi.acm.org/10.1145/1240624.1240728 handwriting per ola kristensson and leif c. denby .text entry performance of state of the art unconstrained handwriting recognition : a longitudinal user study . in _chi09 : proceedings of the sigchi conference on human factors in computing systems_. http://doi.acm.org/10.1145/1518701.1518788 fowler andrew fowler , kurt partridge , ciprian chelba , xiaojun bi , tom ouyang , and shumin zhao .effects of language modeling and its personalization on touchscreen typing performance . in _chi15 : proceedings of the 33rd annual acm conference on human factors in computing systems_. http://doi.acm.org/10.1145/2702123.2702503 arif ahmed sabbir arif and wolfgang stuerzlinger .predicting the cost of error correction in character - based text entry technologies . in emchi10 : proceedings of the sigchi conference on human factors in computing systems .
gesture typing is a method of text entry that is ergonomically well - suited to the form factor of touchscreen devices and allows for much faster input than tapping each letter individually . the qwerty keyboard was , however , not designed with gesture input in mind and its particular layout results in a high frequency of gesture recognition errors . in this paper , we describe a new approach to quantifying the frequency of gesture input recognition errors through the use of modeling and simulating realistically imperfect user input . we introduce new methodologies for modeling randomized gesture inputs , efficiently reconstructing words from gestures on arbitrary keyboard layouts , and using these in conjunction with a frequency weighted lexicon to perform monte carlo evaluations of keyboard error rates or any other arbitrary metric . an open source framework , dodona , is also provided that allows for these techniques to be easily employed and customized in the evaluation of a wide spectrum of possible keyboards and input methods . finally , we perform an optimization procedure over permutations of the qwerty keyboard to demonstrate the effectiveness of this approach and describe ways that future analyses can build upon these results . touchscreen keyboards , gesture input , model - based design , monte carlo simulation
wireless sensor networks ( wsns ) can be utilized as target tracking systems that detect a moving target , localize it and report its location to the sink .so far , the wsn - based tracking systems have found various applications , such as battlefield monitoring , wildlife monitoring , intruder detection , and traffic control .this paper deals with the problem of target tracking by a mobile sink which uses information collected from sensor nodes to catch the target .main objective of the considered system is to minimize time to catch , i.e. , the number of time steps in which the sink reaches the moving target .moreover , due to the limited energy resources of wsn , also the minimization of data communication cost ( hop count ) is taken into consideration .it is assumed in this study that the communication between sensor nodes and the sink involves multi - hop data transfers .most of the state - of - the - art data collection methods assume that the current location of the target has to be reported to sink continuously with a predetermined precision .these continuous data collection approaches are not suitable for developing the wsn - based target tracking applications because the periodical transmissions of target location to the sink would consume energy of the sensor nodes in a short time .therefore , the target tracking task requires dedicated algorithms to ensure the amount of data transmitted in wsn is as low as possible .intuitively , there is a trade - off between the time to catch minimization and the minimization of data communication cost . in this studytwo algorithms are proposed that enable substantial reduction of the data collection cost without significant increase in time to catch .the introduced communication - aware algorithms optimize utilization of the sensor node energy by selecting necessary data readings ( target locations ) that have to be transmitted to the mobile sink .simulation experiments were conducted to evaluate the proposed algorithms against state - of - the - art methods .the experimental results show that the presented algorithms outperform the existing solutions .the paper is organized as follows .related works are discussed in section 2 .section 3 contains a detailed description of the proposed target tracking methods .the experimental setting , compared algorithms and simulation results are presented in section 4 .finally , conclusion is given in section 5 .in the literature , there is a variety of approaches available that address the problem of target tracking in wsns .however , only few publications report the use of wsn for chasing the target by a mobile sink . most of the previous works have focused on delivering the real - time information about trajectory of a tracked target to a stationary sink .this section gives references to the wsn - based tracking methods reported in the literature that deal explicitly with the problem of target chasing by a mobile sink .a thorough survey of the literature on wsn - based object tracking methods can be found in references .kosut et al . 
have formulated the target chasing problem , which assumes that the target performs a simple random walk in a two - dimensional lattice , moving to one of the four neighbouring lattice points with equal probability at each time step .the target chasing method presented in was intended for a system composed of static sensors that can detect the target , with no data transmission between them .each static sensor is able to deliver the information about the time of the last target detection to the mobile sink only when the sink arrives at the lattice point where the sensor is located .a more complex model of the wsn - based target tracking system was introduced by tsai et al .this model was used to develop the dynamical object tracking protocol ( dot ) which allows the wsn to detect the target and collect the information on target track .the target position data are transferred from sensor nodes to a beacon node , which guides the mobile sink towards the target .a similar method was proposed in , where the target tracking wsn with monitor and backup sensors additionally takes into account variable velocity and direction of the target . in this paper two target tracking methods are proposed that contribute to performance improvement of the above - mentioned target tracking approaches by reducing both the time to catch ( i.e. , the time in which mobile sink can reach the target ) and the data communication costs in wsn . in this study ,the total hop count is analysed to evaluate the overall cost of communications , however it should be noted that different metrics can also be also used , e.g. , number of data transfers to sink , number of queries , number of transmitted packets , and energy consumption in sensor nodes .the introduced algorithms provide decision rules to optimize the amount of data transfers from sensor nodes to sink during target chasing .the research reported in this paper is a continuation of previous works on target tracking in wsn , where the data collection was optimized by using heuristic rules and the uncertainty - based approach .the algorithms proposed in that works have to be executed by the mobile sink . in the present studythe data collection operations are managed by distributed sensor nodes . to reduce the number of active sensor nodes the proposed algorithms adopt the prediction - based tracking method . according to this methoda prediction model is applied , which forecasts the possible future positions of the target . on this basisonly the sensor nodes expected to detect the target are activated at each time step .in this section two methods are proposed that enable reduction of data transfers in wsn during target tracking .the wsn - based target tracking procedure is executed in discrete time steps . at each time step both the target and the sink move in one of the four directions : north , west , south or east .their maximum velocities ( in segments per time step ) are assumed to be known .movement direction of the target is random . for sink the directionis decided on the basis of information delivered from wsn . 
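a sketch of the prediction - based activation step is given below . the exact neighbourhood definition used in the paper was lost in extraction , so the manhattan - distance bound here is an assumption ( for the target speed of 1 segment per step it coincides with the four admissible single - axis moves ) .

```python
def possible_target_locations(x, y, v_max):
    """segments the target may occupy one step after being detected at (x, y),
    assuming a manhattan-distance bound of v_max segments per time step."""
    return {(x + dx, y + dy)
            for dx in range(-v_max, v_max + 1)
            for dy in range(-v_max, v_max + 1)
            if abs(dx) + abs(dy) <= v_max}

def nodes_to_activate(last_seen, v_max):
    """only the sensor nodes covering these segments are switched on."""
    return possible_target_locations(last_seen[0], last_seen[1], v_max)

print(sorted(nodes_to_activate((100, 100), 1)))
# [(99, 100), (100, 99), (100, 100), (100, 101), (101, 100)]
```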
during one time step the sink can reach the nearest segments that satisfy the maximum velocity constraint , where the coordinates describe the previous position of the sink . the sink moves into the segment for which the euclidean distance to the last reported position of the target is minimal . the proposed methods are based on the probability $p[dir(x , y)]$ that a move of the sink in a given direction will minimize its distance to the segment in which the target will be caught . the target coordinates are transferred to the sink only if the difference $p[dir(x_c , y_c)]-p[dir(x_d , y_d)]$ justifies the transmission ; the precise condition differs between the two proposed methods , as explained below . in order to calculate these probabilities , the target node determines an area where the target can be caught . this area is defined as a set of segments $a=\{(x , y ) : t_t(x , y)\geq t_s(x , y)\}$ , where $t_t(x , y)$ and $t_s(x , y)$ are the minimum times required for the target and the sink to reach segment $(x , y)$ . let $(x_s , y_s)_c$ and $(x_s , y_s)_d$ denote the segments into which the sink will enter at the next time step if it moves in directions $c$ and $d$ , respectively . in area $a$ two subsets of segments are distinguished : subset $c$ , which consists of segments that are closer to $(x_s , y_s)_c$ than to $(x_s , y_s)_d$ , and subset $d$ of segments that are closer to $(x_s , y_s)_d$ than to $(x_s , y_s)_c$ : $$c=\{(x , y)\in a : d[(x , y),(x_s , y_s)_c ] < d[(x , y),(x_s , y_s)_d]\},$$ $$d=\{(x , y)\in a : d[(x , y),(x_s , y_s)_d ] < d[(x , y),(x_s , y_s)_c]\}.$$ on this basis the probabilities $p[dir(x_c , y_c)]$ and $p[dir(x_d , y_d)]$ are calculated . [ fig . 1 : an example of the probability calculations . ] the operations discussed above are illustrated by the example in fig . 1 , where the positions of the target and the sink are indicated by their respective symbols . the velocity of the target is 1 segment per time step . for the sink the velocity equals 2 segments per time step . gray color indicates the area in which the sink will be able to catch the target . direction $c$ is shown by the arrow with number 1 and direction $d$ is indicated by the arrow with number 2 . subset $c$ includes the gray segments that are denoted by 1 . the segments with label 2 belong to $d$ . in the analysed example the cardinalities of these subsets can be read from fig . 1 . according to eq . ( 4 ) one of the probabilities equals 0.54 , and the difference of these probabilities equals 0.16 . if the first proposed method is applied to the analysed example then the data transfer to the sink will be executed , as shown by the arrows in fig . 1 . in case of the second method , the target node will send the coordinates to the sink provided that the difference of probability ( 0.16 ) is higher than a predetermined threshold . the threshold value should be interpreted as a minimum required increase in the probability of selecting the optimal movement direction , which is expected to be obtained after transferring the target position data . experiments were performed in a simulation environment to compare performance of the proposed methods against state - of - the - art approaches . the comparison was made by taking into account two criteria : time to catch and hop count . the time to catch is defined as the number of time steps in which the sink reaches the moving target . hop count is used to evaluate the cost of data communication in wsn . in the experiments , it was assumed that the monitored area is a square of 200 x 200 segments . each segment is equipped with a sensor node that detects presence of the target . thus , the number of sensor nodes in the analysed wsn equals 40 000 . communication range of each node covers the eight nearest segments . maximum velocity equals 1 segment per time step for the target , and 2 segments per time step for the sink . experiments were performed using simulation software that was developed for this study . the results presented in sect . 4.3 were registered for 10 random tracks of the target . each simulation run starts with the same location of both the sink ( 5 , 5 ) and the target ( 100 , 100 ) .
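returning to the decision rule described before the experimental setup , the sketch below estimates the two probabilities by counting catchable segments closer to each candidate next position of the sink . the pairing of cand_c and cand_d with the sink's current plan and the newly observed target position , the manhattan travel - time estimate and the handling of ties are interpretations , since the corresponding formulas were garbled in the source .

```python
from itertools import product
from math import ceil, hypot

def transfer_decision(target, sink, cand_c, cand_d, v_t, v_s, grid,
                      threshold=None):
    """decide whether the detecting node should report the target position.
    cand_c / cand_d are the segments the sink would enter next under its
    current plan and under the newly observed target position."""
    def steps(src, dst, v):
        # minimum time steps to cover the manhattan distance at speed v
        return ceil((abs(src[0] - dst[0]) + abs(src[1] - dst[1])) / v)

    def dist(a, b):
        return hypot(a[0] - b[0], a[1] - b[1])

    # area where the sink can reach a segment no later than the target
    area = [(x, y) for x, y in product(range(grid[0]), range(grid[1]))
            if steps(sink, (x, y), v_s) <= steps(target, (x, y), v_t)]

    closer_c = sum(1 for s in area if dist(s, cand_c) < dist(s, cand_d))
    closer_d = sum(1 for s in area if dist(s, cand_d) < dist(s, cand_c))
    total = max(closer_c + closer_d, 1)
    p_c, p_d = closer_c / total, closer_d / total

    if threshold is None:               # method 1: report if the new move is better
        return p_d > p_c
    return (p_d - p_c) > threshold      # method 2: report only above the threshold

# e.g. transfer_decision((100, 100), (5, 5), (5, 7), (7, 5), 1, 2, (200, 200), 0.2)
```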
during simulationthe hop counts are calculated assuming that the shortest path is used for each data transfer to sink , the time to catch is measured in time steps of the control procedure .the simulation stops when target is caught by the sink . in the present study ,the performance is analysed of four wsn - based target tracking algorithms .algorithms 1 and 2 are based on the approaches that are available in literature , i.e. the prediction - based tracking and the dynamical object tracking .these algorithms were selected as representative for the state - of - the - art solutions in the wsn - based systems that control the movement of a mobile sink which has to reach a moving target .the new proposed methods are implemented in algorithms 3 and 4 .the pseudocode in tab .1 shows the operations that are common for all the examined algorithms .each algorithm uses different condition to decide if current position of the target will be transmitted to the sink ( line 6 in the pseudocode ) .these conditions are specified in tab .2 . for all considered algorithms ,the prediction - based approach is used to select the sensor nodes that have to be activated at a given time step .prediction of the possible target locations is based on a simple movement model , which takes into account the assumptions on target movement directions and its maximum velocity .if for previous time step the target was detected in segment , then at time step the set of possible target locations can be determined as follows : where is the maximum velocity of target in segments per time step ..pseudocode for wsn - based target tracking algorithms [ cols= " < , < " , ] algorithm 2 is based on the tracking method which was proposed for the dynamical object tracking protocol . according to this approachsink moves toward location of so - called beacon node .a new beacon node is set if the sink enters segment . in such case , the sensor node which currently detects the target in segment , becomes new beacon node and its location is communicated to the sink .when using this approach , the cost of data communication in wsn can be reduced because the data transfers to sink are executed less frequently than for the prediction - based tracking method .the proposed communication - aware tracking methods are applied in algorithm 3 and algorithm 4 ( see tab .details of these methods were discussed in sect .simulation experiments were carried out in order to determine time to catch values and hop counts for the compared algorithms . as it was mentioned in sect .3 , the simulations were performed by taking into account ten different tracks of the target .average results of these simulations are shown in fig .it is evident that the best results were obtained for algorithm 4 , since the objective is to minimise both the time to catch and the hop count .it should be noted that fig .3 . presents the results of algorithm 4 for different threshold values .the relevant threshold values between 0.0 and 0.9 are indicated in the chart by the decimal numbers . according to these results , the average time to catch increases when the threshold is above 0.2 . for the threshold equal to or lower than 0.2the time to catch takes a constant minimal value . the same minimal time to catchis obtained when using algorithm 3 , however in that case the hop count is higher than for algorithm 4 . 
in comparison with algorithm 1 both proposed methods enables a considerable reduction of the data communication cost .the average hop count is reduced by 47% for algorithm 3 and by 87% for algorithm 4 with threshold 0.2 .algorithm 2 also reduces the hop count by about 87% but it requires much longer time to catch the target .the average time to catch for algorithm 2 is increased by 52% .detailed simulation results are presented in fig .these results demonstrate the performance of the four examined algorithms when applied to ten different tracks of the target .the threshold value in algorithm 4 was set to 0.2 .the shortest time to catch was obtained by algorithms 1 , 3 and 4 for all tracks except the 5th one . in case of track 5 , when using algorithm 4 slightly longer time was needed to catch the target . for the remaining tracksthe three above - mentioned algorithms have resulted in equal values of the time to catch . in comparison with algorithm 1 ,the proposed algorithms ( algorithm 3 and algorithm 4 ) significantly reduce the data communication cost ( hop count ) for all analysed cases . for each considered track algorithm 2needs significantly longer time to reach the moving target than the other algorithms .the hop counts for algorithm 2 are close to those observed in case of algorithm 4 . according to the presented results, it could be concluded that algorithm 4 , which is based on the proposed method , outperforms the compared algorithms .it enables a significant reduction of the data communication cost .this reduction is similar to that obtained for algorithm 2 .moreover , the time to catch for algorithm 4 is as short as in case of algorithm 1 , wherein the target position is communicated to the sink at each time step .the cost of data communication in wsns has to be taken into account when designing algorithms for wsn - based systems due to the finite energy resources and the bandwidth - limited communication medium . in order to reduce the utilization of wsn resources ,only necessary data shall be transmitted to the sink .this paper is devoted to the problem of transferring target coordinates from sensor nodes to a mobile sink which has to track and catch a moving target .the presented algorithms allow the sensor nodes to decide when data transfers to the sink are necessary for achieving the tracking objective . according to the proposed algorithms ,only selected data are transmitted that can be potentially useful for reducing the time in which the target will be reached by the sink .performance of the proposed algorithms was compared against state - of - the - art approaches , i.e. , the prediction - based tracking and the dynamical object tracking .the simulation results show that the introduced algorithms outperforms the existing solutions and enable substantial reduction in the data collection cost ( hop count ) without significant decrease in the tracking performance , which was measured as the time to catch .the present study considers an idealistic wsn model , where the information about current position of target is always successfully delivered through multi - hop paths to the sink and the transmission time is negligible . 
In order to take into account the uncertainty of the delivered information, the precise target coordinates should be replaced by a (fuzzy) set. Relevant modifications of the presented algorithms will be considered in future experiments. Although the proposed methods consider a simple case with a single sink and a single target, they can also be useful for compound tracking tasks with multiple targets and multiple sinks. Such tasks need an additional higher-level procedure for coordination of the sinks, which has to be implemented at a designated control node, e.g., a base station or one of the sinks. The extension of the presented approach to tracking of multiple targets in complex environments is an interesting direction for future work.
Zheng, J., Yu, H., Zheng, M., Liang, W., Zeng, P.: Coordination of multiple mobile robots with limited communication range in pursuit of single mobile target in cluttered environment. Journal of Control Theory and Applications, vol., 441-446 (2010)
This paper introduces algorithms for target tracking in wireless sensor networks (WSNs) that enable a reduction of the data communication cost. The objective of the considered problem is to control the movement of a mobile sink that has to reach a moving target in the shortest possible time. Consumption of the WSN energy resources is reduced by transferring only the necessary data readings (target positions) to the mobile sink. Simulations were performed to evaluate the proposed algorithms against existing methods. The experimental results confirm that the introduced tracking algorithms allow the data communication cost to be considerably reduced without a significant increase in the time the sink needs to catch the target.
after more than two decades of investigations , black hole thermodynamics is still one of the most puzzling subjects in theoretical physics .one approach to studying the thermodynamical aspects of a black hole involves considering the evolution of quantum matter fields propagating on a classical ( curved ) background spacetime .this gives rise to the phenomenon of black hole radiation that was discovered by hawking in 1974 .combining hawking s discovery of black hole radiance with the classical laws of black hole mechanics , leads to the laws of black hole thermodynamics .the entropy of a black hole obtained from this approach may be interpreted as resulting from averaging over the matter field degrees of freedom lying either inside the black hole or , equivalently , outside the black hole , as was first anticipated by bekenstein even before hawking s discovery .the above approach was further developed in the following years .a second route to black hole thermodynamics involves using the path - integral approach to quantum gravity to study _ vacuum _ spacetimes ( i.e. , spacetimes without matter fields ) . in this method ,the thermodynamical partition function is computed from the propagator in the saddle point approximation and it leads to the same laws of black hole thermodynamics as obtained by the first method .the second approach was further developed in the following years .the fact that the laws of black hole thermodynamics can be derived without considering matter fields , suggests that there may be a purely geometrical ( spacetime ) origin of these laws .however , a complete geometrical understanding of black hole thermodynamics is not yet present . in general , a basic understanding of the thermodynamical properties of a system requires a specification of the system s ( dynamical ) degrees of freedom ( d.o.f . ) .obtaining such a specification is a nontrivial matter in quantum gravity . in the path - integral approachone avoids the discussion of the dynamical d.o.f .. there , the dominant contribution to the partition function comes from a saddle point , which is a classical euclidean solution . calculating the contribution of such a solution to the partition functiondoes not require an identification of what the dynamical d.o.f.s of this solution are .though providing us with an elegant way of getting the laws of black hole thermodynamics , the path - integral approach does not give us the basic ( dynamical ) d.o.f .from which we can have a better geometrical understanding of the origin of black hole thermodynamics .it was only recently that the dynamical geometric d.o.f . for a spherically symmetric vacuum schwarzschild black holewere found under certain boundary conditions .in particular , by considering general foliations of the complete kruskal extension of the schrawzschild spacetime , kucha finds a reduced system of _ one _ pair of canonical variables that can be viewed as global geometric d.o.f .. one of these is the schwarzschild mass , while the other one , its conjugate momentum , is the difference between the parametrization times at right and left spatial infinities . 
using the approach of kucha ,recently louko and whiting ( henceforth referred to as lw ) studied black hole thermodynamics in the hamiltonian formulation .as shown in fig .2 , they considered a foliation in which the spatial hypersurfaces are restricted to lie in the right exterior region of the kruskal diagram and found the corresponding reduced phase space system .this enabled them to find the unconstrained hamiltonian ( which evolves these spatial hypersurfaces ) and canonically quantize this reduced theory .they then obtain the schrdinger time - evolution operator in terms of the reduced hamiltonian . the partition function is defined as the trace of the euclideanised time - evolution operator , namely , , where the hat denotes a quantum operator .this partition function has the same expression as the one obtained from the path - integral approach and expectedly yields the laws of black hole thermodynamics . in a standard thermodynamical systemit is not essential to consider _euclidean_-time action in order to study the thermodynamics .if is the lorentzian time - independent hamiltonian of the system , then the partition function is defined as where is the inverse temperature of the system in equilibrium .however , in many cases ( especially , in time- independent systems ) the euclidean time - evolution operator turns out to be the same as .nevertheless , there are cases where , as we will see in section [ subsec : lwham ] , the euclidean time - evolution operator is not the same as .this is the case for example in the lw approach , i.e. , , where is the reduced hamiltonian of the quantized lw system .there is a geometrical reason for this inequality and in this work we discuss it in detail . in this paper , we ask if there exists a hamiltonian ( which is associated with certain foliations of the schwarzschild spacetime ) appropriate for finding the partition function of a schwarzschild black hole enclosed inside a finite - sized box using ( [ partition - trace ] ). such a procedure will not resort to euclideanization . in our quest to obtain the hamiltonian that is appropriate for defining the partition function for ( [ partition - trace ] ), we also clarify the physical significance of the lw hamiltonian . by doingso we hope to achieve a better understanding of the geometrical origin of the thermodynamical aspects of a black hole spacetime . in a previous work , brown and york ( henceforth referred to as by ) found a general expression for the quasilocal energy on a timelike two - surface that bounds a spatial three - surface located in a spacetime region that can be decomposed as a product of a spatial three - surface and a real line interval representing time . from this expressionthey obtained the quasilocal energy inside a spherical box centered at the origin of a four - dimensional spherically symmetric spacetime .they argued that this expression also gives the correct quasilocal energy on a box in the schwarzschild spacetime . in this paperwe show that , although their expression for the quasilocal energy on a box in the schwarzschild spacetime is correct , the analysis they use to obtain it requires to be extended when applied to the case of schwarzschild spacetime . in this case, one needs to impose extra boundary conditions at the timelike boundary inside the hole ( see fig .3 ) . 
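For reference, the canonical partition function invoked in eq. ([partition-trace]) is the standard trace over the spectrum of the (Lorentzian, time-independent) Hamiltonian,

\[
Z(\beta)\;=\;\operatorname{Tr}\,e^{-\beta \hat H}\;=\;\sum_{n} g(E_n)\,e^{-\beta E_n},
\]

where $g(E_n)$ denotes the degeneracy of the energy eigenvalue $E_n$ (the symbols here are chosen for readability); it is precisely this degeneracy, i.e. the density of states, whose absence obstructs a direct evaluation of the trace later in the paper.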
as mentioned above , in principle, one can use the hamiltonian so obtained to evaluate the partition function , .this partition function corresponds to the canonical ensemble and describes the thermodynamics of a system whose volume and temperature are fixed but whose energy content is permitted to vary .such a hamiltonian , would then lead to a description of black hole thermodynamics without any sort of euclideanisation .the only obstacle to this route to the partition function is that the trace can be evaluated only if one knows the density of the energy eigenstates .unfortunately , without knowing what the thermodynamical entropy of the system is , it is not clear how to find this density in terms of the reduced phase - space variables of kucha .so how can one derive the thermodynamical laws of the schwarzschild black hole using a lorentzian hamiltonian without knowing the density of states ?based on an observation that identifies the thermodynamical roles of the by and the lw hamiltonians we succeed in studying black hole thermodynamics within the hamiltonian formulation but without euclideanization . in section [ sec : thermoconsi ] we describe the thermodynamical roles of the by and the lw hamiltonians .identifying these roles allows us to immediately calculate the partition function and recover the thermodynamical properties of the schwarzschild black hole . in section [ sec : geoconsi ] we study the geometrical significance of these hamiltonians .in particular , we extend the work of brown and york to find the nature of the spatial slices that are evolved by the by hamiltonian in the full kruskal extension of the schwarzschild spacetime . in section [ geothermo ]we use the observations made in sections [ sec : thermoconsi ] and [ sec : geoconsi ] to ascribe geometrical basis to the thermodynamical parameters of the system , thus gaining insight into the geometrical nature of black hole thermodynamics .we conclude the paper in section [ sec : conclu ] by summarising our results and discussing the connection between the foliation geometry and equilibrium black hole thermodynamics . in appendixa we extend our results to the case of two - dimensional dilatonic black holes . in appendixb we discuss an alternative foliation ( see fig .4 ) , in which the spatial slices are again evolved by the by hamiltonian . 
this illustrates the non - uniqueness of the foliation associated with the by hamiltonian .we shall work throughout in `` geometrized - units '' in which .it was shown by brown and york that in 4d spherically symmetric einstein gravity , the quasilocal energy of a system that is enclosed inside a spherical box of finite surface area and which can be embedded in an asymptotically flat space is where is the adm mass of the spacetime and is the fixed curvature radius of the box with its origin at the center of symmetry .we will call the brown - york hamiltonian .the brown and york derivation of the quasilocal energy can be summarised as follows .the system they consider is a spatial three - surface bounded by a two - surface in a spacetime region that can be decomposed as a product of a spatial three - surface and a real line interval representing time ( see fig .the time evolution of the two - surface boundary is the timelike three - surface boundary .they then obtain a surface stress - tensor on the boundary by taking the functional derivative of the action with respect to the three - metric on .the energy surface density is the projection of the surface stress tensor normal to a family of spacelike two - surfaces like that foliate .the integral of the energy surface density over such a two - surface is the quasilocal energy associated with a spacelike three - surface whose _ orthogonal _ intersection with is the two - boundary . as argued by by , eq .( [ quasih ] ) also describes the total energy content of a box enclosing a schwarzschild hole .one would thus expect to obtain the corresponding partition function from it by the prescription . as mentioned above, the only obstacle to this calculation is the lack of knowledge about the density of states of the system , which is needed to evaluate the trace .however , as we discuss in the next subsection , there is another hamiltonian associated with the schwarzschild spacetime that allows us to obtain the relevant partition function without euclideanization .this is the louko - whiting hamiltonian . in their quest to obtain the partition function for the schwarzschild black hole in the hamiltonian formulation, lw found a hamiltonian that time - evolves spatial hypersurfaces in a schwarzschild spacetime of mass such that the hypersurfaces extend from the bifurcation 2-sphere to a timelike box - trajectory placed at a constant curvature radius of ( see fig .as we will show in the next subsection , the lw hamiltonian describes the correct free energy of a schwarzschild black hole enclosed inside a box in the thermodynamical picture .the lw hamiltonian is where generically and are functions of time , which labels the spatial hypersurfaces .physically , , where is the time - time component of the spacetime metric on the box . on the other hand , the physical meaning of is as follows . on a classical solution ,consider the future timelike unit normal to a constant hypersurface at the bifurcation two - sphere ( see fig .then is the rate at which the constant hypersurfaces are boosted at the bifurcation 2-sphere : where is the initial hypersurface and is the boosted hypersurface .if one is restricted to a foliation in which the spatial hypersurfaces approach the box along surfaces of constant proper time on the box , then .on classical solutions , the spatial hypersurfaces approach the bifurcation 2-sphere along constant killing - time hypersurfaces ( see the paragraph containing eqs .( [ ls - r ] ) in section [ subsec : lwgeoham ] ) . 
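For concreteness, the Brown-York quasilocal energy referred to in eq. ([quasih]) is usually quoted, in geometrized units, for a box of curvature radius $R$ in a Schwarzschild spacetime of ADM mass $M$, as

\[
E_{\mathrm{BY}}(M,R)\;=\;R\left(1-\sqrt{1-\frac{2M}{R}}\,\right),
\]

which tends to the ADM mass $M$ as $R\to\infty$ and to $2M$ when the box approaches the horizon $R=2M$. The symbols $M$ and $R$ are chosen here for readability and are meant to correspond to the quantities described in the text.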
the lw fall - off conditions ( [ ls - r ] ) , which are imposed on the adm variables at the bifurcation 2-sphere , can be used to show that on solutions , , where is the killing time and is the surface gravity of a schwarzschild black hole . in the particular case where the label time is taken to be the proper time on the box , we have ^{-1 } \ \ .\ ] ] with such an identification of the label time , eq .( [ lwht ] ) shows that on classical solutions the time - evolution of these spatial hypersurfaces is given by the hamiltonian where is given by ( [ n0sol ] ) .one may now ask if one can use the lw hamiltonian ( [ lwh ] ) to obtain a partition function for the system and study its thermodynamical properties .unfortunately , one can not do so in a straightforward manner .first , one can not replace in ( [ partition - trace ] ) by , the quantum counterpart of ( [ lwh ] ) , to obtain the partition function .the reason is that classically does not give the correct energy of the system ; the by hamiltonian of ( [ quasih ] ) does . to avoid this problem , lw first construct the schrdinger time - evolution operator .they then euclideanize this operator and use it to obtain the partition function . the partition function so obtained does not equal , but rather it turns out to be the same as that obtained via the path integral approach of gibbons and hawking .however , apart from this end result , a justification at some fundamental level has been lacking as to why the lw hamiltonian ( [ lwh ] ) and not any other hamiltonian ( eg . , ( [ quasih ] ) ) should be used to obtain the partition function using the lw procedure . in the next subsection , we will find the thermodynamical roles played by the by and lw hamiltonians .we will also show how this helps us in obtaining the partition function without euclideanization . this way we will avoid the ambiguity mentioned above that arises in the lw - method of constructing the partition function .as argued by brown and york , on solutions , the by hamiltonian in eq .( [ quasih ] ) denotes the internal energy residing within the box : in fact eq .( [ e ] ) can be shown to yield the first law of black hole thermodynamics where is the surface pressure on the box - wall the first term on the rhs of ( [ de ] ) is negative of the amount of work done by the system on its surroundings and , with hindsight , the second term is the product , where is the temperature and is the entropy of the system .we will not assume the latter in the following analysis ; rather we will deduce the form of and from first principles .we next show that the lw hamiltonian of eq .( [ lwh ] ) plays the role of helmholtz free energy of the system .recall that the helmholtz free energy is defined as where is the internal energy .thus in an isothermal and reversible process , the first law of thermodynamics implies that the amount of mechanical work done by a system , , is equal to the decrease in its free energy , i.e. , as a corollary to this statement it follows that for a mechanically isolated system at a constant temperature , the state of equilibrium is the state of minimum free energy .we now show that under certain conditions on the foliation of the spacetime with spatial hypersurfaces , the lw hamiltonian in eq .( [ lwh ] ) plays the role of _free energy_. we choose a foliation such that on solutions the lapse obeys ( [ n0sol ] ) . using the expression ( [ e ] ) for , the hamiltonian in eq . 
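The thermodynamic identities used in this passage are the standard ones. Writing $E$ for the internal energy, $T$ for the temperature, $S$ for the entropy, $A$ for the box area and $\sigma$ for the surface pressure (symbols chosen here for illustration), they read

\[
F \;=\; E - TS, \qquad dE \;=\; T\,dS-\sigma\,dA, \qquad \bigl(dF\bigr)_{T=\mathrm{const}}\;=\;-\sigma\,dA\;=\;-\,\delta W,
\]

so that in an isothermal, reversible process the work extracted from the system equals the decrease of its Helmholtz free energy, as stated above.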
( [ lwh ] ) can be rewritten as now let us perturb about a solution by perturbing and such that itself is held fixed .then note that keeping fixed , i.e. , , does not necessarily imply through ( [ n0sol ] ) that and are not independent perturbations .this is because , in general , the perturbed may not correspond to a solution and hence the perturbed need not have the form ( [ n0sol ] ) .however , here we will assume that the perturbations do not take us off the space of static solutions and , therefore , the perturbed has the form ( [ n0sol ] ) .hence in our case and are not independent perturbations . using ( [ de ] ) and ( [ n0sol ] ) in ( [ dhitode ] ) yields finally , from ( [ dhdw ] ) and ( [ fw ] ) we get where is a constant independent of .to find , we take the limit . in this limitboth and vanish and , therefore , has to be zero .another way to see that should vanish is to identify the geometric quantity with the temperature , , of the system ( up to a multiplicative constant ). then the perturbation ( [ dhitode ] ) in , keeping ( and , therefore , ) fixed , describes an isothermal process .but eq . ( [ hfc ] ) shows that has to be an extensive function of thermodynamic invariants of the isothermal process since and are both extensive .the only thermodynamic quantity that we assume to be invariant in this isothermal process is the temperature .but since is not extensive , has to be zero .the fact that indeed determines the temperature of the system will be discussed in detail in a later section .the above proof of the lw hamiltonian being the helmholtz free energy immediately allows us to calculate the partition function for a canonical ensemble of such systems , by simply putting . in this way we recover the thermodynamical properties of a schwarzschild black hole without euclideanization .we will do so in detail in section [ geothermo ] but first we establish the geometrical significance of by and lw hamiltonians in the next section .a study of the geometrical roles of the by and lw hamiltonians provides the geometrical basis for the thermodynamical parameters associated with a black hole that were discussed in the preceeding section . in this sectionwe begin by setting up the hamiltonian formulation appropriate for the two sets of boundary conditions that lead to the by and lw hamiltonians as being the unconstrained hamiltonians that generate time - evolution of foliations in schwarzschild spacetime .the notation follows that of kucha and lw . 
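The step "by simply putting" amounts to the elementary relation between the Helmholtz free energy and the canonical partition function; with the Louko-Whiting Hamiltonian identified as the free energy it reads

\[
Z(\beta)\;=\;e^{-\beta F}\;=\;e^{-\beta H_{\mathrm{LW}}},\qquad F\;=\;-\,\beta^{-1}\ln Z,
\]

which is the sense in which the thermodynamics follows without Euclideanization.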
a general spherically symmetric spacetime metric on the manifold can be written in the adm form as where , , and are functions of and only , and is the metric on the unit two - sphere .we will choose our boundary conditions in such a way that the radial proper distance on the constant surfaces is finite .this implies that the radial coordinate have a finite range , which we take to be ] in ( [ s - ham ] ) is well defined under the above conditions .consider the total action = s_\sigma [ \lambda , r , p_\lambda , p_r ; n , n^r ] + s_{\partial\sigma } [ \lambda , r , p_\lambda , p_r ; n , n^r ] \ \ , \label{s - total}\ ] ] where the boundary action is given by \nonumber \\ & & = \int dt { \biggl [ n r r ' \lambda^{-1 } - n^r \lambda p_\lambda - \case{1}{2 } r { \dot r } \ln \left| { n + \lambda n^r \over n - \lambda n^r } \right| \biggr]}_{r=1 } \ \ , \label{s - boundary}\end{aligned}\ ] ] where ] ( [ s2-total ] ) .we shall reduce the action to the true dynamical degrees of freedom by solving the constraints .the constraint ( [ eom2-mm ] ) implies that is independent of .we can therefore write substituting this and the constraint ( [ eom2-psfr ] ) back into ( [ s2-total ] ) yields the true hamiltonian action = \int dt \left ( { \bf p } { \dot { \bf m } } - { \bf h } \right ) \ \ , \label{s - red}\ ] ] where the reduced hamiltonian in ( [ s - red ] ) takes the form here and are the values of and at the timelike boundary , and . as mentioned before , and prescribed functions of time , satisfying and .note that is , in general , explicitly time - dependent .the variational principle associated with the reduced action ( [ s - red ] ) fixes the initial and final values of .the equations of motion are [ red - eom ] equation ( [ red - eom1 ] ) is readily understood in terms of the statement that is classically equal to the time - independent value of the schwarzschild mass . to interpret equation ( [ red - eom2 ] ) , recall from sec .[ subsec : transformation ] that equals classically the derivative of the killing time with respect to , and therefore equals by ( [ bfp ] ) the difference of the killing times at the left and right ends of the constant surface .as the constant surface evolves in the schwarzschild spacetime , ( [ red - eom2 ] ) gives the negative of the evolution rate of the killing time at the right end of the spatial surface where it terminates at the outer timelike boundary at .note that gets no contribution from the inner timelike boundary located at in the dynamical region .this is a consequence of the fall - off conditions ( [ s - r ] ) which ensure that on solutions , the rate of evolution of the killing time at is zero .the case of interest is when the radius of the ` outer ' boundary two - sphere does not change in time , i.e. , . in that casethe second term in ( [ bfhb ] ) vanishes , and in ( [ red - eom2 ] ) is readily understood in terms of the killing time of a static schwarzschild observer , expressed as a function of the proper time and the blueshift factor .the reduced hamiltonian is given by where is the time - independent value of .unfortunately , the above hamiltonian does not vanish as goes to zero .the situation is remedied by adding the term of gibbons and hawking to .physically , this added term arises from the extrinsic curvature of the ` outer ' boundary two - sphere when embedded in flat spacetime . 
with the added term ,the hamiltonian becomes this is the quasilocal energy of brown and york when .the choice of determines the choice of time in the above hamiltonian .setting geometrically means choosing a spacetime foliation in which the rate of evolution of the spatial hypersurface on the box is the same as that of the proper time .then the new hamiltonian is namely , the quasilocal energy ( [ quasih ] ) in schwarzschild spacetime . in the next section ,we discuss the geometric relevance of the lw hamiltonian that , as we showed earlier , yields the correct free energy of the system .we now summarize the lw choice of the foliation of the schwarzschild spacetime , state the corresponding boundary conditions they imposed , and briefly mention how they obtain their reduced hamiltonian .the main purpose of this section is to facilitate a comparison between the lw boundary conditions and our choice of the boundary conditions ( as discussed in the preceeding subsections ) that yield the by hamiltonian . as shown in fig .2 , lw considered a foliation in which the spatial hypersurfaces are restricted to lie in the right exterior region of the kruskal diagram .each spatial hypersurface in this region extends from the box at the right end up to the bifurcation 2-sphere on the left end .the boundary conditions imposed by lw are as follows . at , they adopt the fall - off conditions [ ls - r ] where and are positive , and . equations ( [ ls - r - lambda ] ) and ( [ ls - r - r ] ) imply that the classical solutions have a positive value of the schwarzschild mass , and that the constant slices at are asymptotic to surfaces of constant killing time in the right hand side exterior region in the kruskal diagram , all approaching the bifurcation two - sphere as .the spacetime metric has thus a coordinate singularity at , but this singularity is quite precisely controlled . in particular , on a classical solutionthe future unit normal to a constant surface defines at a future timelike unit vector at the bifurcation two - sphere of the schwarzschild spacetime , and the evolution of the constant surfaces boosts this vector at the rate given by at , we fix and to be prescribed positive - valued functions of .this means fixing the metric on the three - surface , and in particular fixing this metric to be timelike . in the classical solutions ,the surface is located in the right hand side exterior region of the kruskal diagram . to obtain an action principle appropriate for these boundary conditions ,consider the total action = s_\sigma [ \lambda , r , p_\lambda , p_r ; n , n^r ] + s_{\partial\sigma } [ \lambda , r , p_\lambda , p_r ; n , n^r ] \ \ , \label{ls - total}\ ] ] where the boundary action is given by \nonumber \\ & & = \case{1}{2 } \int dt \, { \left [ r^2 n ' \lambda^{-1 } \right]}_{r=0 } \ ; \ ; + \int dt { \biggl [ n r r ' \lambda^{-1 } - n^r \lambda p_\lambda - \case{1}{2 } r { \dot r } \ln \left| { n + \lambda n^r \over n - \lambda n^r } \right| \biggr]}_{r=1 } \ \ .\label{ls - boundary}\end{aligned}\ ] ] the variation of the total action ( [ ls - total ] ) can be written as a sum of a volume term proportional to the equations of motion , boundary terms from the initial and final spatial surfaces , and boundary terms from and . 
to make the action ( [ ls - total ] ) appropriate for a variational principle ,one fixes the initial and final three - metrics , the box - radius , and the three - metric on the timelike boundary at .these are similar to the boundary conditions that we imposed to obtain the by hamiltonian . however , for the lw boundary conditions , one has to also fix the quantity at the bifurcation 2-sphere .each classical solution is part of the right hand exterior region of a kruskal diagram , with the constant slices approaching the bifurcation two - sphere as , and giving via ( [ n - boost ] ) the rate of change of the unit normal to the constant surfaces at the bifurcation two - sphere .although we are here using geometrized units , the argument of the in ( [ n - boost ] ) is a truly dimensionless boost parameter " even in physical units . to obtain the ( reduced ) lw hamiltonian , one needs to solve the super - hamiltonian and the supermomentum constraints . just as in the case of the by hamiltonian ( see subsection [ subsec : transformation ] ) , it helps to first make a canonical transformation to the kucha variables ( see lw for details ) . after solving the constraints , one obtains the following reduced action = \int dt \left ( { \bf p } { \dot { \bf m } } - { \bf h } \right ) \ \ , \label{ls - red}\ ] ] where and the reduced hamiltonian in ( [ ls - red ] ) is which is the same as the one given in eq .( [ lwht ] ) . in obtaining the above reduced form , we have assumed that the box - radius is constant in time , , just as we did in obtaining the by hamiltonian ( [ byhq ] ) .having established the geometrical significance of the by and lw hamiltonians , the basis for their thermodynamical roles becomes apparent .we showed that the by hamiltonian evolves spatial hypersurfaces in such a way that they span the spacetime region both inside and outside the event horizon .this is what one would expect from the fact that it corresponds to the quasilocal energy of the complete spacetime region enclosed inside the box . on the other hand ,the lw hamiltonian evolves spatial slices that are restricted to lie outside the event horizon . with our choice of the boost parameter , this corresponds to the helmholtz free energy of the system , which is less than the quasilocal energy : this is expected since the lw slices span a smaller region of the spacetime compared to the by slices .also , the fact that the lw slices are limited to lie outside the event horizon implies that the energy on these slices can be harnessed by an observer located outside the box .this is consistent with the fact that it corresponds to the helmholtz free energy of the system which is the amount of energy in a system that is available for doing work by the system on its surroundings . using the thermodynamical roles played by the lw hamiltonian and the by hamiltonian ( see section [ sec : thermoconsi ] ) ,we now derive , at the classical level , many of the thermodynamical quantities associated with the schwarzschild black hole enclosed inside a box .we begin by finding the temperature on the box . from eq .( [ lwh ] ) , the helmholtz free energy is the above equation , along with eqs .( [ f ] ) and ( [ e ] ) , implies that or , where .equation ( [ s1 ] ) gives an expression for the entropy in terms of the geometrical quantity . 
on the other hand one can find also from the thermodynamic identity where is the partition function defined by eq .( [ partitionh ] ) and eq .( [ lwhf1 ] ) .( [ s2 ] ) gives the entropy to be the above equation gives another expression for the entropy , now in terms of the derivative of .comparing eqs .( [ s1 ] ) and ( [ s3 ] ) we find where is some undetermined quantity that is independent of . the exact form of as a function of is found by noting that the free energy should be a minimum at equilibrium . since in a canonical ensemble the box - radius and the temperature ( which is proportional to )are fixed , the only quantity in that can vary is . thus the question we ask is the following : for a fixed value of the curvature radius and the boost parameter , what is the value of that minimizes ?> from the expression for in eq .( [ lwhf1 ] ) one finds this value of , to be a function of .inverting this relation gives > from ( [ n0beta ] ) and ( [ n0 m ] ) we find that the equilibrium temperature on a box of radius enclosing a black hole of mass obeys in agreement with known results .significantly , eq . ( [ n0beta ] ) shows that the equilibrium temperature geometrically corresponds to a particular value of the boost parameter .> from eqs .( [ s1 ] ) and ( [ n0beta ] ) we find that the entropy of a schwarzschild black hole is quadratic in its mass .unfortunately , in this formalism one can not determine the correct constants of proportionality in and .however , notice that our derivation is purely classical .although simple mathematically , this derivation is incomplete due to the lack of the constant of proportionality in eq .( [ n0beta ] ) . the correct value for this constant , , can be obtained only from a quantum treatment .finally , we note that for the spatial slices that obey , the free energy can be obtained from ( [ lwh ] ) to be the above equation shows that if the radius of the box is kept fixed , then the free energy of the system is minimum for the configuration with a black hole of mass .in this work our goal was to seek a geometrical basis for the thermodynamical aspects of a black hole .we find that the value of the brown - york hamiltonian can be interpreted as the internal energy of a black hole inside a box . whereas the value of the louko - whiting hamiltonian gives the helmholtz free energy of the system . after finding these thermodynamical roles played by the by and lw hamiltonians, we ask what the geometrical significance of these hamiltonians is . in this regard the geometrical role of the lw hamiltonian was already known .it was recently shown by lw that their hamiltonian evolves spatial hypersurfaces in a special foliation of the kruskal diagram .the characteristic feature of this foliation is that it is limited to only the right exterior region of this spacetime ( see fig .2 ) and the spatial hypersurfaces are required to converge onto the bifurcation 2-sphere , which acts as their inner boundary ( the box itself being the outer boundary ) . on the other hand , the geometrical significance of the by hamiltonian as applied to the black hole case was not fully known , although it had been argued that its value is the energy of the schwarzschild spacetime region that is enclosed inside a spherical box . in this workwe establish the geometric role of the by hamiltonian by showing that it is the generator of time - evolution of spatial hypersurfaces in certain foliations of the schwarzschild spacetime . 
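The "known results" recovered here are the blue-shifted (Tolman) Hawking temperature on the box and the Bekenstein-Hawking entropy, which in units $G=c=\hbar=k_{B}=1$ read

\[
T(R)\;=\;\frac{1}{8\pi M}\left(1-\frac{2M}{R}\right)^{-1/2},\qquad S\;=\;4\pi M^{2}\;=\;\frac{A_{\mathrm{horizon}}}{4};
\]

as emphasized in the text, the purely classical argument fixes only the proportionalities $T\propto M^{-1}(1-2M/R)^{-1/2}$ and $S\propto M^{2}$, while the factor $1/8\pi$ requires the quantum treatment.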
establishing the thermodynamic connection of the by and lw hamiltonians allowed us to obtain a geometrical interpretation for the equilibrium temperature of a black hole enclosed inside a box , i.e. , as measured by a stationary observer on the box .geometrically , the temperature turns out to be the rate at which the lw spatial hypersurfaces are boosted at the bifurcation 2-sphere .one could however ask what happens if the lw hypersurfaces are evolved at a different rate , i.e. , if the label time is chosen to be boosted with respect to the proper time of a stationary observer on the box . in that case , it can be shown that the by hamiltonian and the rate at which the lw hypersurfaces are evolved at the bifurcation 2-sphere get `` blue - shifted '' by the appropriate boost - factor . on the other hand , the entropy of the system can still be interpreted as the change in free energy per unit change in the temperature of the system .we thank abhay ashtekar , viqar husain , eric martinez , jorg pullin , lee smolin , and jim york for helpful discussions .we would especially like to thank jorma louko for critically reading the manuscript and making valuable comments .financial support from iucaa is gratefully acknowledged by one of us ( sb ) .this work was supported in part by nsf grant no .phy-95 - 07740 .the approach we describe above in studying the thermodynamics of 4d spherically symmetric einstein gravity can also be extended to the case of the 2d vacuum dilatonic black hole in an analogous fashion . in the case of a 2d black hole ,the event horizon is located at a curvature radius , where is a positive constant that sets the length - scale in the 2d models .the quasilocal energy of a system comprising of such a black hole in the presence of a timelike boundary situated at a curvature radius can be shown to be which strongly resembles the 4d counterpart in ( [ quasih ] ) . evolves constant spatial hypersurfaces that extend from an inner timelike boundary lying on a constant killing - time surface in the dynamical region up to a timelike boundary ( the box ) placed in the right exterior region ( see fig .3 ) . the hamiltonian that evolves the two - dimensional counterpart of the louko and whiting spatial slices that extend from the bifurcation point up to the box ( see fig .2 ) is where in general and are functions of time .the above hamiltonian was found in ref .there it was found that is the rate at which the spatial hypersurface are boosted at the bifurcation point . on the other hand , , being the time - time component of the spacetime metric on the box .if one restricts the spatial hypersurfaces to approach the box along constant proper - time hypersurfaces , then , as in 4d , . 
using fall - off conditions on the adm variables at the bifurcation point analogous to the lw fall - off conditions ( [ ls - r ] ) , it can be shown that on solutions where is the killing time , and is the surface gravity of a witten black hole .the time - evolution of these restricted spatial hypersurfaces is given by the hamiltonian like the 4d case , here too it can be shown that is analogous to the internal energy , whereas denotes the helmholtz free energy of the 2d system .a similar analysis also shows that and which is inverse of the blue - shifted temperature on the box .the temperature of a 2d black hole at infinity on the other hand is , which is independent of the black hole mass .in section [ sec : thermoconsi ] , we found a choice of spatial hypersurfaces that were evolved by the by hamiltonian under a specific set of boundary conditions . in this appendixwe find a different choice of spatial hypersurfaces , i.e. , with a different inner boundary , that is evolved by the by hamiltonian under a different set of boundary conditions .we begin by stating the boundary conditions and specifying the spacetime foliation they define . at the inner boundary ,we fix and to be prescribed positive - valued functions of .this means fixing the metric on the three - surface , and in particular fixing this metric to be spacelike there . on the other hand , at , we fix and to be prescribed positive - valued functions of .this means fixing the metric on the three - surface to be timelike . in the classical solutions ,the surface is located in the right exterior region of the kruskal diagram .we now wish to give an action principle appropriate for these boundary conditions .note that the surface action ] implies the difference in the values of the _ term _ evaluated at and at .the variation of the total action ( [ app : s - total ] ) can be written as a sum of a volume term proportional to the equations of motion , boundary terms from the initial and final spatial surfaces , and boundary terms from and .the boundary terms from the initial and final spatial surfaces take the usual form with the upper ( lower ) sign corresponding to the final ( initial ) surface .these terms vanish provided we fix the initial and final three - metrics .the boundary term from and read }_{r=0}^{r=1 } \ \ , \label{app : bt-1}\end{aligned}\ ] ] where ^b_a ] ( [ s2-total ] ) to the true dynamical degrees of freedom by solving the constraints ( [ eom2-mm ] ) and ( [ eom2-psfr ] ) as before .this gives the true hamiltonian action to be = \int dt \left ( { \bf p } { \dot { \bf m } } - { \bf h } \right ) \ \ , \label{app : s - red}\ ] ] where and are defined as in section [ subsec : reduction ] . the reduced hamiltonian in ( [ app : s - red ] ) takes the form with here ( ) and ( ) are the values of and at the timelike ( spacelike ) boundary ( ) , and . , , , and are considered to be prescribed functions of time , satisfying and .note that is , in general , explicitly time - dependent .the interpretation of ( [ app : red - eom1 ] ) remains unchanged .to interpret equation ( [ app : red - eom2 ] ) , note that equals by ( [ bfp ] ) the difference of the killing times at the left and right ends of the constant surface .as the constant surface evolves in the schwarzschild spacetime , the first term in ( [ app : red - eom2 ] ) gives the evolution rate of the killing time at the left end of the hypersurface , where the hypersurface terminates at a spacelike surface located completely in the future dynamical region ( see fig .4 ) . 
the second term in ( [ app : red - eom2 ] )gives the negative of the evolution rate of the killing time at the right end of the surface , where the surface terminates at the timelike boundary .the two terms are generated respectively by ( [ app : bfhs ] ) and ( [ app : bfhb ] ) .the case of interest is when the ` inner ' spacelike boundary lies on the schwarzschild singularity , i.e. , , and when the radius of the ` outer ' boundary two - sphere does not change in time , . in that case ( [ app : bfhs ] ) and the second term in ( [ app : bfhb ] ) vanish .one can also make the first term in ( [ app : red - eom2 ] ) vanish provided one restricts the slices to approach the surface at in such a way that vanishes faster than there . the second term in ( [ app : red - eom2 ] )is readily understood in terms of the killing time of a static schwarzschild observer , expressed as a function of the proper time and the blueshift factor .the reduced hamiltonian is given by where is the time - independent value of .following the same arguments as given in sec .[ subsec : reduction ] , we find that the appropriate hamiltonian under the new boundary conditions of this appendix is which is the by hamiltonian . similarly , from the time - reversal symmetry of the kruskal extension of the schwarzschild spacetime , the by hamiltonian could also be interpreted to evolve spatial slices that extend from the box upto an inner boundary that is the past white hole spacelike singularity .m. srednicki , phys .lett . * 71 * , 666 ( 1993 ) ; + g. t hooft , nucl .phys . * b256 * , 727 ( 1985 ) ; + l. susskind and j. uglum , phys .d * 50 * , 2700 ( 1994 ) ; + m. maggiore , nucl .* b429 * , 205 ( 1994 ) ; + c. callan and f. wilczek , phys .* b333 * , 55 ( 1994 ) ; + s. carlip and c. teitelboim , phys .d * 51 * , 622 ( 1995 ) ; + s. carlip , phys .d * 51 * , 632 ( 1995 ) ; + y. peleg , `` quantum dust black holes '' , brandeis university report no.brx-th-350 , hep - th/93077057 ( 1993 ) .figure 1 : a bounded spacetime region with boundary consisting of initial and final spatial hypersurfaces and and a timelike three - surface . itself is the time - evolution of the two - surface that is the boundary of an arbitrary spatial slice .figure 2 : the louko - whiting choice of a foliation of the schwarzschild spacetime .the spatial slices of this foliation extend from the bifurcation two - sphere to the box .the initial and final spatial hypersurfaces have label time and , respectively .figure 3 : a choice of foliating the schwarzschild spacetime that is different from the louko - whiting choice . herethe spatial slices extend from the box to a timelike inner boundary that is located completely inside the hole .
In this work, we extend the analysis of Brown and York to find the quasilocal energy in a spherical box in the Schwarzschild spacetime. The quasilocal energy is the value of the Hamiltonian that generates unit-magnitude proper-time translations on the box, orthogonal to the spatial hypersurfaces foliating the Schwarzschild spacetime. We call this Hamiltonian the Brown-York Hamiltonian. We find different classes of foliations that correspond to time evolution by the Brown-York Hamiltonian. We show that although the Brown-York expression for the quasilocal energy is correct, one needs to supplement their derivation with an extra set of boundary conditions on the interior end of the spatial hypersurfaces inside the hole in order to obtain it from an action principle. Replacing this set of boundary conditions with another set yields the Louko-Whiting Hamiltonian, which corresponds to time evolution of spatial hypersurfaces in a different foliation of the Schwarzschild spacetime. We argue that, in the thermodynamical picture, the Brown-York Hamiltonian corresponds to the _internal energy_ whereas the Louko-Whiting Hamiltonian corresponds to the _Helmholtz free energy_ of the system. Unlike the usual routes to black hole thermodynamics taken in the past, this observation immediately allows us to obtain the partition function of such a system without resorting to any kind of Euclideanization of either the Hamiltonian or the action. In the process, we obtain some interesting insights into the geometrical nature of black hole thermodynamics.
the distribution of wealth , income or company size is one of important issues not only in economics but also in econophysics . in these distributions ,a cumulative number obeys a power - law for which is larger than a certain threshold : this power - law and the exponent are called pareto s law and pareto index , respectively . here is wealth , income , profits , assets , sales , the number of employees and etc .the study of power - law distributions in the high range is quite significant . because a large part of the total wealth , income or profits is occupied by persons or companies in the high region ,although the number of them is a few percent .they have the possibility to influence economics .power - law distributions are observed in fractal systems which have self - similarity .this means that there is self - similarity in economic systems .the power - law distribution in the high region is well investigated by using various models in econophysics .recently , fujiwara et al . find that pareto s law ( and the reflection law ) can be derived kinematically form the law of detailed balance and gibrat s law which are observed in high region . in the proof , they assume no model and only use these two underlying laws in empirical data .the detailed balance is time - reversal symmetry : here and are two successive incomes , profits , assets , sales , etc , and is a joint probability distribution function ( pdf ) .gibrat s law states that the conditional probability distribution of growth rate is independent of the initial value : here growth rate is defined as the ratio and is defined by using the pdf and the joint pdf as in ref . , by using profits data of japanese companies in 2002 ( ) and 2003 ( ) , it is confirmed that the law of detailed balance ( [ detailed balance ] ) holds in all regions and ( fig .[ profit2002vsprofit2003 ] ) .we also find that gibrat s law still holds in the extended region and . from these observations, we have shown that pareto index is also induced from the growth rate distribution empirically . on the other hand , it is also well known that the power - law is not observed below the threshold .for instance , we show the profits distributions of japanese companies in 2002 and 2003 ( fig .[ profitdistribution ] ) .we find that pareto s law holds in the high profits region and it fails in the middle one .the study of distributions in the middle region is as important as the study of power - law those . because a large number of persons or companies is included in the middle region .furthermore , it is interesting to study the breaking of fractal . in order to obtain the distribution in the middle region ,we examine data below the threshold .it is reported that gibrat s law is valid only in the high region ( for instance ) .the recent works about the breakdown of gibrat s law is done by stanley s group .aoyama et al .also report that gibrat s law does not hold in the middle region by using data of japanese companies . in the analysis of gibrat s law in ref . , we concentrate our attention to the region and . in this paper , we examine the growth rate distribution in all regions and and identify the law which is the extension of gibrat s law . and non - gibrat s law . ]we employ profits data of japanese companies in 2002 and 2003 which are available on the database `` cd eyes '' published by tokyo shoko research , ltd . . 
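Both empirical inputs of the derivation, a power-law (Pareto) tail of the cumulative number and Gibrat's law for the conditional growth-rate distribution in the high region, can be probed with a few lines of code. The sketch below works on synthetic two-year "profits" generated from a Pareto-distributed company size with independent year-to-year fluctuations; the thresholds, sample sizes and function names are illustrative assumptions and are not taken from the CD Eyes data set itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-year "profits": a persistent company size with Pareto tail
# (index ~ 1), multiplied by independent year-to-year fluctuations.
base = (rng.pareto(1.0, 200_000) + 1.0) * 1.0e3
x1 = base * rng.lognormal(0.0, 0.3, base.size)
x2 = base * rng.lognormal(0.0, 0.3, base.size)

def hill_pareto_index(x, tail_fraction=0.01):
    """Hill estimator of the power-law index over the upper tail."""
    tail = np.sort(x)[-int(len(x) * tail_fraction):]
    return 1.0 / np.mean(np.log(tail / tail[0]))

print("estimated Pareto index:", round(hill_pareto_index(x2), 2))

# Gibrat's law check: the conditional distribution of the log growth rate
# r = log10(x2/x1) should not depend on the initial bin in the high region.
r = np.log10(x2 / x1)
for lo, hi in [(1e4, 1e5), (1e5, 1e6), (1e6, 1e7)]:
    sel = (x1 >= lo) & (x1 < hi)
    print(f"x1 in [{lo:.0e}, {hi:.0e}): std of r = {r[sel].std():.3f}")
```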
by using the extended gibrat s law, we derive the distribution function in the high and middle profits region under the law of detailed balance .it explains empirical data with high accuracy .notice that the distribution function has no fitting parameter .the parameters of the function are already decided in the extended gibrat s law .in this section , we reconfirm gibrat s law in the high profits region and identify non - gibrat s law in the middle one .we divide the range of into logarithmically equal bins as $ ] thousand yen with .. ] in fig .[ profitgrowthratel ] , [ profitgrowthratem ] and [ profitgrowthrateh ] , the probability densities for are expressed in the case of , and , respectively . the number of the companies in fig. [ profitgrowthratel ] , [ profitgrowthratem ] and [ profitgrowthrateh ] is " , " and " , respectively .here we use the log profits growth rate . the probability density for defined by related to that for by from fig .[ profitgrowthratel ] , [ profitgrowthratem ] and [ profitgrowthrateh ] , we express the relation between and as follows : by the use of eq .( [ qandq ] ) , these relations are rewritten in terms of as with . by applying the expressions ( [ approximation1 ] ) and ( [ approximation2 ] ) to data in fig .[ profitgrowthratel ] , [ profitgrowthratem ] and [ profitgrowthrateh ] , the relation between and is obtained ( fig .[ x1vst ] ) . in fig .[ x1vst ] , hardly responds to for .this means that gibrat s law holds in the high profits region .on the other hand , ( ) increases ( decreases ) linearly with for . in the middle profits region , not gibrat s law but the other law( non - gibrat s law ) holds as follows : , we examine another type of non - gibrat s law . ] in fig .[ x1vst ] , and are estimated as where thousand yen ( million yen ) and thousand yen . in this paperwe call the combination of gibrat s law ( ( [ t+ ] ) , ( [ t- ] ) and ( [ alphah ] ) ) and non - gibrat s law ( ( [ t+ ] ) , ( [ t- ] ) and ( [ alpham ] ) ) extended gibrat s law .in refs . , pareto s law ( [ pareto ] ) and the pareto index can be derived from the detailed balance ( [ detailed balance ] ) and gibrat s law ( [ gibrat ] ) in the high profits region . in this section ,we derive profits distribution not only in the high profits region but also in the middle one by using the detailed balance and the extended gibrat s law . due to the relation of under the change of variables from to ,these two joint pdfs are related to each other , by the use of this relation , the detailed balance ( [ detailed balance ] ) is rewritten in terms of as follows : substituting the joint pdf for the conditional probability defined in eq . ( [ conditional ] ) ,the detailed balance is expressed as in the preceding section , the conditional probability is identified as ( [ rline1 ] ) or ( [ rline2 ] ) . under the change of variables ,the conditional probability is expressed as from the detailed balance and the extended gibrat s law , one finds the following : for . herewe assume that the dependence of is negligible in the derivation , the validity of which should be checked against the results . 
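One way to read the quantities $t_{\pm}$ above is as the slopes of $\log_{10}q(r\,|\,x_{1})$ against $r$ on the positive and negative flanks of the growth-rate distribution within each logarithmic bin of $x_{1}$; Gibrat's law then corresponds to bin-independent slopes, and the non-Gibrat's law to slopes varying linearly with $\log_{10}x_{1}$. The following sketch estimates such flank slopes per bin on synthetic data; it illustrates the procedure and is not the authors' fitting code, and the bin edges, histogram resolution and sample-size guard are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
base = (rng.pareto(1.0, 300_000) + 1.0) * 1.0e3
x1 = base * rng.lognormal(0.0, 0.3, base.size)
x2 = base * rng.lognormal(0.0, 0.3, base.size)
r = np.log10(x2 / x1)                        # log profits growth rate

def flank_slopes(r_bin, nbins=40):
    """Slopes of log10 of the growth-rate density against r on the
    positive and negative flanks, inside one bin of the initial profit."""
    hist, edges = np.histogram(r_bin, bins=nbins, density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    ok = hist > 0
    pos, neg = ok & (mid > 0), ok & (mid < 0)
    s_pos = np.polyfit(mid[pos], np.log10(hist[pos]), 1)[0]
    s_neg = np.polyfit(mid[neg], np.log10(hist[neg]), 1)[0]
    return s_pos, s_neg

# Logarithmically equal bins of the initial profit x1.  With these
# size-independent fluctuations the slopes come out roughly constant (the
# Gibrat case); size-dependent fluctuations would make them drift with
# log10(x1), as in the middle-profit region described above.
for lo, hi in [(1e3, 1e4), (1e4, 1e5), (1e5, 1e6), (1e6, 1e7)]:
    sel = (x1 >= lo) & (x1 < hi)
    if sel.sum() > 500:
        print((lo, hi), np.round(flank_slopes(r[sel]), 2))
```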
by expanding eq .( [ de0 ] ) around , the following differential equation is obtained p(x ) + x~ p'(x ) = 0,\end{aligned}\ ] ] where denotes .the same differential equation is obtained for .the solution is given by with .here we use the relation which is confirmed in ref .in the previous section , we derive the profits distribution function ( [ handm ] ) in the high and middle profits region from the detailed balance and the extended gibrat s law . in this section, we directly examine whether it fits with profits distribution data . in order to average the scattering of data points , we employ the cumulative number of companies ( fig .[ profitdistribution ] ) . for , by using eqs .( [ alphah ] ) and ( [ handm ] ) the cumulative distribution in the high profits region is expressed as on the other hand , by the use of eqs .( [ alpham ] ) and ( [ handm ] ) the cumulative distribution in the middle profits region is given by here is error function defined by . in fig .[ profitdistributionfit ] , the distribution functions ( [ cdh ] ) in the high profits region ( ) and ( [ cdm ] ) in the middle one ( ) explain empirical data in 2003 with high accuracy .this guarantees the validity of the assumption in the previous section .notice that there is no ambiguity in parameter fitting , because indices , and the bounds , is already given in the extended gibrat s law .is especially called zipf s law . ]in section [ sec - gibrat s law and non - gibrat s law ] , we present the non - gibrat s law ( [ t+ ] ) , ( [ t- ] ) and ( [ alpham ] ) as a linear approximation of in fig .[ x1vst ] . in this section ,we examine another linear approximation in fig .[ x1vst2 ] , the vertical axis of which is the logarithm . in fig .[ x1vst2 ] , ( ) increases ( decreases ) linearly with for .this relation is expressed as where here and are same values in section [ sec - gibrat s law and non - gibrat s law ] . from the detailed balance and this extended gibrat s law, one finds for . by expanding this equation around , the following differential equationis obtained p(x ) + x~ p'(x ) = 0~ , \label{de } \end{aligned}\ ] ] where denotes .the same differential equation is obtained for . in order to take limit into account ,we rewrite this as follows : p(x ) + x~ p'(x ) = 0~.\end{aligned}\ ] ] the solution is given by with . herewe use the expansion . if we neglect terms in eq .( [ handmanother ] ) , the profits distribution function ( [ handmanother ] ) can be identified with ( [ handm ] ) , because numerically . in other words, there is no essential difference between two expressions ( [ t+ ] ) , ( [ t- ] ) and ( [ t+another ] ) , ( [ t - another ] ) in the middle region .in this paper , we have kinematically derived the profits distribution function in the high and middle region from the law of detailed balance and the extended gibrat s law by employing profits data of japanese companies in 2002 and 2003 .firstly , we have reconfirmed gibrat s law in the high profits region ( ) .the value of is estimated to be about million yen in fig .[ x1vst ] or [ x1vst2 ] . 
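A direct numerical comparison of the two regimes can be organised along the same lines: a power-law fit of the cumulative number in the tail and a log-normal-type body (an error-function cumulative) in the middle range. The sketch below does this on synthetic log-normal data; the functional forms are the generic power-law and log-normal survival functions, written here for illustration rather than copied from eqs. ([cdh]) and ([cdm]), and the quantile cut-offs are arbitrary.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(2)
profits = np.exp(rng.normal(10.0, 2.0, 100_000))   # synthetic stand-in data

x = np.sort(profits)
n_gt = np.arange(len(x), 0, -1)                    # descending rank ~ N(>= x)

# High region: power-law index from the slope of log N(>x) versus log x.
hi = x > np.quantile(x, 0.99)
mu = -np.polyfit(np.log(x[hi]), np.log(n_gt[hi]), 1)[0]

# Middle region: log-normal-type body, N(>x) ~ (N/2) erfc((ln x - m)/(sqrt(2) s)).
mid = (x > np.quantile(x, 0.40)) & (x <= np.quantile(x, 0.99))
m, s = np.log(profits).mean(), np.log(profits).std()
body = 0.5 * len(x) * erfc((np.log(x[mid]) - m) / (np.sqrt(2.0) * s))

print("power-law index fitted in the tail:", round(float(mu), 2))
print("median relative error of the log-normal body:",
      round(float(np.median(np.abs(body / n_gt[mid] - 1.0))), 3))
```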
at the same time, we have found that gibrat s law fails and another law holds in the middle profits region ( ) .we have identified the non - gibrat s law and the value of is estimated to be about thousand yen in fig .[ x1vst ] or [ x1vst2 ] .we have called the combination of gibrat s law and the non - gibrat s law the extended gibrat s law .secondly , we have derived not only pareto s law in the high profits region but also the distribution in the middle one from the detailed balance and the extended gibrat s law .the derivation has been described uniformly in terms of the extended gibrat s law .the profits distribution in the middle region is very similar to log - normal one : ~ , \label{log - normal}\end{aligned}\ ] ] where is mean value and is variance .the difference between two distributions ( [ handm ] ) and ( [ log - normal ] ) is only the power of .it brings a translation along the horizontal axis in the log - log plot . in this sense ,the distribution in the middle profits region obtained in this paper is essentially equivalent to the log - normal one .notice that it has no fitting parameter that the log - normal distribution , prepared only for data fitting , has .indies , and the bounds , is already given in the extended gibrat s law .in the derivation of the profits distribution , we have used the detailed balance and the extended gibrat s law .the detailed balance is observed in a relatively stable period in economy .the extended gibrat s law is interpreted as follows .companies , classified in small - scale profits category , have more ( less ) possibilities of increasing ( decreasing ) their profits than companies classified in large - scale one .in other words , it is probably difficult that companies gain large - scale profits in two successive years. this is the non - gibrat s law and it leads the distribution in the middle profits region .this phenomenon is not observed above the threshold . for ,companies , classified in small - scale profits category , have same possibilities of increasing ( decreasing ) their profits with companies classified in large - scale one .this is the gibrat s law and it leads the power distribution in the high profits region . in this paper, we can not mention the distribution in the low profits region , because no law is observed in the region ( fig .[ x1vst ] or [ x1vst2 ] ) .this may be caused by insufficient data in the low region .lastly , we speculate possible distributions in the region . if we do not take limit , we should solve the differential equation ( [ de ] ) .the solution is given by ~. \label{profitdistributionl}\end{aligned}\ ] ] for the case and , the distribution ( [ profitdistributionl ] ) takes the exponential form in refs . . on the other hand , for the case and , the distribution ( [ profitdistributionl ] ) takes weibull form in ref . can decide the distribution in the low profits region if sufficient data in the region are provided .we have showed that a distribution function is decided by underlying kinematics . for profits data we used ,the distribution is power in the high region and log - normal type in the middle one .this does not claim that all the distributions in the middle region are log - normal types .for instance , the personal income distribution may take a different form , if the extended gibrat s law changes . even in the case, the other extended gibrat s law will decide the other distribution by the use of the method in this paper .the author is grateful to professor h. 
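For completeness, the log-normal density referred to in eq. ([log-normal]) is the standard

\[
p_{\mathrm{LN}}(x)\;=\;\frac{1}{x\sqrt{2\pi\sigma^{2}}}\,\exp\!\left[-\frac{(\ln x-\bar{x})^{2}}{2\sigma^{2}}\right],
\]

with $\bar{x}$ the mean and $\sigma^{2}$ the variance of $\ln x$ (symbols chosen here for readability); multiplying it by a power of $x$ only shifts the parabola horizontally in the log-log plot, which is the sense in which the middle-region distribution obtained above is essentially log-normal.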
aoyama for useful discussions about his lecture . 99 r.n .mategna and h.e .stanley , an introduction to econophysics , cambridge university press , uk , 2000 . v. pareto , cours deconomique politique , macmillan , london , 1897. h. aoyama , w. souma , y. nagahara , h.p .okazaki , h. takayasu and m. takayasu , cond - mat/0006038 , fractals 8 ( 2000 ) 293 ; + w. souma , cond - mat/0011373 , fractals 9 ( 2001 ) 463 .a. dr and v.m .yakovenko , cond - mat/0103544 , physica a299 ( 2001 ) 213 .y. fujiwara , w. souma , h. aoyama , t. kaizoji and m. aoki , cond - mat/0208398 , physica a321 ( 2003 ) 598 ; + h. aoyama , w. souma and y. fujiwara , physica a324 ( 2003 ) 352 ; + y. fujiwara , c.d .guilmi , h. aoyama , m. gallegati and w. souma , cond - mat/0310061 , physica a335 ( 2004 ) 197 ; + y. fujiwara , h. aoyama , c.d .guilmi , w. souma and m. gallegati , physica a344 ( 2004 ) 112 ; + h. aoyama , y. fujiwara and w. souma , physica a344 ( 2004 ) 117 .r. gibrat , les inegalites economiques , paris , sirey , 1932 .a. ishikawa , pareto index induced from the scale of companies , physics/0506066 . w.w .badger , in : b.j .west ( ed . ) , mathematical models as a tool for the social science , gordon and breach , new york , 1980 , p. 87 ; + e.w .montrll and m.f .shlesinger , j. stat .32 ( 1983 ) 209 .k. okuyama , m. takayasu and h. takayasu , physica a269 ( 1999 ) 125 .stanley , l.a.n .amaral , s.v .buldyrev , s. havlin , h. leschhorn , p. maass , m.a . salinger and h.e .stanley , nature 379 ( 1996 ) 804 ; + l.a.n .amaral , s.v .buldyrev , s. havlin , h. leschhorn , p. maass , m.a .salinger , h.e .stanley and m.h.r .stanley , j. phys .( france ) i7 ( 1997 ) 621 ; + s.v .buldyrev , l.a.n .amaral , s. havlin , h. leschhorn , p. maass , m.a .salinger , h.e .stanley and m.h.r .stanley , j. phys .( france ) i7 ( 1997 ) 635 ; + l.a.n .amaral , s.v .buldyrev , s. havlin , m.a . salinger and h.e .stanley , phys .80 ( 1998 ) 1385 ; + y. lee , l.a.n .amaral , d. canning , m. meyer and h.e .stanley , phys .81 ( 1998 ) 3275 ; + d. canning , l.a.n .amaral , y. lee , m. meyer and h.e .stanley , economics lett .60 ( 1998 ) 335 .9th annual workshop on economic heterogeneous interacting agents ( wehia 2004 ) ; + the physical society of japan 2004 autumn meeting .tokyo shoko research , ltd ., http://www.tsr - net.co.jp/. g.k .gipf , human behavior and the principle of least effort , addison - wesley , cambridge , 1949 .m. nirei and w. souma , sfi/0410029 .m. anazawa , a. ishikawa , t. suzuki and m. tomoyose , cond - mat/0307116 , physica a335 ( 2004 ) 616 ; + a. ishikawa and t. suzuki , cond - mat/0403070 , physica a343 ( 2004 ) 376 ; + a. ishikawa , cond - mat/0409145 , physica a349 ( 2005 ) 597 .
employing profits data of japanese companies in 2002 and 2003 , we identify the non - gibrat s law which holds in the middle profits region . from the law of detailed balance in all regions , gibrat s law in the high region and the non - gibrat s law in the middle region , we kinematically derive the profits distribution function in the high and middle range uniformly . the distribution function accurately fits the empirical data without any fitting parameter . pacs code : 04.60.nc + keywords : econophysics ; pareto law ; gibrat law ; detailed balance
sharpe ratio has become a `` gold standard '' for measuring performance of hedge funds and other institutional investors ( this note uses the generic term `` portfolio '' ) .it is sometimes argued that it is applicable only to i.i.d .gaussian returns , but we will follow a common practice of ignoring such assumptions . for simplicitywe assume that the benchmark return ( such as the risk - free rate ) is zero .the ( ex post ) _ sharpe ratio _ of a sequence of returns is defined as , where ( none of our results will be affected if we replace , assuming , by , as in , ( 6 ) . )intuitively , the sharpe ratio is the return per unit of risk .another way of measuring the performance of a portfolio whose sequence of returns is is to see how this sequence of returns would have affected an initial investment of 1 assuming no capital inflows and outflows after the initial investment .the final capital resulting from this sequence of returns is .we are interested in conditions under which the following anomaly is possible : the sharpe ratio is large while .( more generally , if we did not assume zero benchmark returns , we would replace by the condition that in the absence of capital inflows and outflows the returns underperform the benchmark portfolio . )suppose the return is over periods , and then it is in the period . as , and .therefore , making large enough , we can make the sharpe ratio as large as we want , despite losing all the money over the periods .if we want the sequence of returns to be i.i.d ., let the return in each period be with probability and with probability , for a large enough . with probability onethe sharpe ratio will tend to a large number as , despite all money being regularly lost .of course , in this example the returns are far from being gaussian ( strictly speaking , returns can not be gaussian unless they are constant , since they are bounded from below by ) .it is easy to see that our examples lead to the same conclusions when the sharpe ration is replaced by the _ sortino ratio _ , where examples of the previous section are somewhat unrealistic in that there is a period in which the portfolio loses almost all its money . in this sectionwe show that only in this way a high sharpe ratio can become compatible with losing money .for each ] ( left ) and ] ( left ) and ] and over ] the slope of is roughly 1 .we can see that even for a relatively large value of , the sharpe ratio of a losing portfolio never exceeds 0.5 ; according to table [ tab : f1 ] , ( much less than the conventional threshold of 1 for a good sharpe ratio ) ..[tab : f1]the approximate values of , , and for selected . [ cols="^,^,^,^,^,^,^,^",options="header " , ] the values of and for selected are shown in table [ tab : g ] , on the left and on the right .the meaning of is the same as in tables [ tab : f1 ] and [ tab : f2 ]. we do not give the values of ; they are huge on the left - hand side of the table and equal to on the right - hand side .the left - hand side suggests that , and this can be verified analytically .figures [ fig : f1][fig : g2 ] can be regarded as a sanity check for the sharpe and sortino ratio .not surprisingly , they survive it , despite the theoretical possibility of having a high sharpe and , _ a fortiori _ , sortino ratio while losing money . in the case of the sharpe ratio , such an abnormal behaviour can happen only when some one - period returns are very close to . 
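A minimal sketch of the construction just described, assuming the usual convention Sharpe = mean(return) / std(return) with a zero benchmark (the precise normalization does not matter for the point being made, as noted above): a constant positive return over N periods followed by a single return of -100%. The values r = 0.05 and N = 10 000 are arbitrary illustrations; increasing N makes the Sharpe ratio as large as desired while the final capital is still exactly zero.

```python
import numpy as np

def sharpe(returns):
    """Ex post Sharpe ratio with zero benchmark: mean return over std of returns."""
    r = np.asarray(returns, dtype=float)
    return r.mean() / r.std()

def final_capital(returns):
    """Capital after applying the returns to an initial investment of 1."""
    return float(np.prod(1.0 + np.asarray(returns, dtype=float)))

# A constant positive return for N periods followed by a single -100% return.
# r and N are arbitrary; the Sharpe ratio grows roughly like r*sqrt(N), so it
# can be made as large as desired even though the final capital is exactly 0.
r, N = 0.05, 10_000
returns = np.array([r] * N + [-1.0])

print("sharpe ratio :", sharpe(returns))
print("final capital:", final_capital(returns))
```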
in the case of the sortino ratio, such an abnormal behaviour can happen only when some one - period returns are very close to or when some one - period returns are huge .
a simple example shows that losing all money is compatible with a very high sharpe ratio ( as computed after losing all money ) . however , the only way that the sharpe ratio can be high while losing money is that there is a period in which all or almost all money is lost . this note explores the best achievable sharpe and sortino ratios for investors who lose money but whose one - period returns are bounded below ( or both below and above ) by a known constant .
the balance between excitatory and inhibitory elements is of great relevance for processes of self - organization in various neurological structures .synchronization is one prominent phenomenon that is critically influenced by the balance of excitation and inhibition and is itself an important mechanism involved in processes such diverse as learning and visual perception on the one hand and the occurrence of parkinson s disease and epilepsy on the other hand .exploring the preconditions for synchronization on a theoretical level can be effectively performed within the framework of complex dynamical networks a field that has gained much attention during the last decades .the influence of a particular network architecture can be rich and is widely studied .however , the interaction between the elements of the network can be modeled not only by the strength of the connecting links , but in addition , time - delayed coupling can be incorporated to take into account finite signal transmission times .a single constant delay time , referred to as discrete time - delay , already influences the system s dynamics and its synchronization properties dramatically . yet, more complex concepts of time - delay have been considered recently to achieve a better correspondence between the model and natural systems .a distribution of delay - times arises naturally , if a model is constructed from experimental data .for instance , the delay distribution in a specific avian neural feedback loop can be well approximated by a gamma distribution .additionally , time - dependent delay - times can as well be represented by distributed delay , if the delay - time varies rapidly compared to intrinsic time - scales of the system .recent theoretical studies of systems with distributed delay have addressed amplitude death and non - invasive stabilization of periodic orbits . in this work ,we study the effects of distributed time - delay on synchronization in a network model with inhibitory nodes .the stability of synchronization is investigated using the master stability function ( msf ) approach . with this techniquethe effect of the network topology can be separated from the dynamics of the network s constituents , which is described by the master stability function of a complex parameter .the eigenvalues of the coupling matrix representing the network structure then yield the stability for a specific network when they are inserted for the parameter .thus , distributed delay , which affects the dynamics , and inhibition , which alters the eigenvalue spectrum of the coupling matrix , can both be investigated independently .this work extends previous works on inhibition - induced desynchronization in time - delayed networks in two ways .first , distributed delay is considered instead of discrete delay .second , a more realistic network model based on inhibitory nodes instead of inhibitory links is proposed and studied. such a model might be more appropriate especially in the context of neuroscience , where for the majority of neurons it is established that the same set of neurotransmitters is released at each synapse , a rule which is often referred to as dale s law .thus , it is reasonable to assume that in most cases a neuron either excites all its neighbors or inhibits all its neighbors , and not a combination of both .in contrast to previous work where the inhibitory links have been added randomly , we develop a highly symmetric model , which has two main advantages . 
first , the eigenvalue spectrum of the coupling matrix is real and can be calculated analytically .second , in networks with randomly added inhibition , the increase of inhibition is accompanied by the increase of asymmetry .thus , it is not possible to study the pure effect of inhibition , but only a combined effect of asymmetry and inhibition .this difficulty does not arise in the network model proposed here . as the networks constituents we choose stuart landau oscillators .in contrast to simple phase - oscillators like the kuramoto oscillator , this paradigmatic non - linear oscillator has an additional radial degree of freedom , which leads to more complex dynamics , i.e. , coupled amplitude and phase dynamics .further , it can be used to describe generally any system close to a hopf bifurcation , which occurs in a variety of prominent systems used to model e.g. , semiconductor lasers , neural systems like the fitzhugh - nagumo model or the morris - lecar model .this paper is structured as follows . in section [ sec_balance ], we introduce the network model .we derive and characterize the eigenvalue spectrum of its coupling matrix .in section [ sec_sync ] , we investigate the existence of synchronous oscillations with respect to the distributed delay parameters .subsequently , we analyze their stability in section [ sec_stable ] . in section [ sec_trans ] , we discuss how the interplay of distributed delay and inhibitory nodes influences the stability for the specific network model proposed here .the robustness of our results against asymmetric perturbations of the network topology is investigated numerically in section [ sec_asym ] .we conclude our work in section [ sec_conc ] .the topology of a network is encoded in its coupling matrix . here , we consider networks without self - coupling and normalized row sum which allows for the existence of synchronous solutions . asthe stability of the synchronous state depends on the eigenvalue spectrum of , we are interested in the question how the balance between excitatory and inhibitory nodes influences the latter . it is known that inhibition corresponding to negative entries in the coupling matrix increases the spreading of the eigenvalue distribution in the complex plane . for a symmetric network with excitatory nodes only ,all eigenvalues are real and located within the interval ] .hence , networks below the critical inhibition ratio given in eq .( [ inhi ] ) exhibit stable synchronous oscillations .the same applies to mean delays exactly between multiples of half the eigenperiod , i.e. , multiples of a quarter eigenperiod ( cf .[ fig_msf_rect]c ) . here ,the unit interval is contained within the stability regions for any .additionally , a second stability island appears above a certain delay distribution width .the second stability island can induce stable synchronization for networks with an inhibition ratio above the critical one .this is the case if the eigenvalues outside the unit interval happen to lie inside the additional stability island .for any mean delay different from the special cases mentioned above , the msf shows a very rich structure ( fig . 
[ fig_msf_rect]b ) .depending on a single connected or two disconnected stability regions of various size can occur .further , the gap between the two islands can lie left or right to the line .note that similar behavior has been found in case of a discrete delay and different local dynamics .the stability gap in the master stability function immediately implies several interesting consequences for the stability of synchronous solutions in inhibitory networks , which are discussed in detail in the following .as the interval ] and is bounded by .thus , all networks up to the critical inhibition ratio exhibit stable synchronous solutions and above the critical inhibition ratio , the stability collapses and the network desynchronizes .\(ii ) for larger , i.e. , a second stability island appears in the msf for and its extension grows with .this causes a resynchronization at relatively high inhibition ratios , when the largest eigenvalue increasing with reenters the stability region . for is depicted in fig .[ fig_desync]c . since for stabilityall eigenvalues of the coupling matrix are required to lie inside the stability regions , the gap of the eigenvalue spectrum of is crucial . it must contain the stability gap of the msf in order to induce resynchronization above the critical inhibition ratio .\(iii ) in a narrow -range around , the two stability islands of the msf almost touch each other . here , synchronization is stable up to extremely high inhibitory ratios of about .\(iv ) for even larger , i.e. , , the stability gap of the msf lies within the interval 12 & 12#1212_12%12[1][0] * * , ( ) link:\doibase 10.1038/nature01616 [ * * , ( ) ] link:\doibase 10.1523/jneurosci.5297 - 05.2006 [ * * , ( ) ] link:\doibase 10.1038/nn.2276 [ * * , ( ) ] \doibase http://dx.doi.org/10.1016/0959-4388(95)80012-3 [ * * , ( ) ] * * , ( ) * * , ( ) * * , ( ) link:\doibase 10.1103/physrevlett.101.078102 [ * * , ( ) ] link:\doibase 10.1126/science.1216483 [ * * , ( ) ] , link:\doibase 10.1103/physrevlett.106.194101 [ * * , ( ) ] link:\doibase 10.1103/physreve.86.061903 [ * * , ( ) ] \doibase doi : 10.1016/s0896 - 6273(00)80821 - 1 [ * * , ( ) ] * * , ( ) _ _ , ed .( , , ) link:\doibase 10.1103/revmodphys.74.47 [ * * , ( ) ] \doibase doi : 10.1016/j.physrep.2005.10.009 [ * * , ( ) ] * * , ( ) link:\doibase 10.1103/physreve.79.056207 [ * * , ( ) ] link:\doibase 10.1103/physreve.81.025205 [ * * , ( ) ] link:\doibase 10.1209/0295 - 5075/96/60013 [ * * , ( ) ] link:\doibase 10.1140/epjb / e2012 - 30810-x [ * * , ( ) ] in _ _ , ( , , ) chap . , pp . * * , ( ) link:\doibase 10.1137/060673813 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.94.158104 [ * * , ( ) ] link:\doibase 10.1140/epjb / e2014 - 40985 - 7 [ * * , ( ) ] link:\doibase 10.1007/s00422 - 008 - 0239 - 8[ * * , ( ) ] * * , ( ) link:\doibase 10.1103/physreve.88.032912 [ * * , ( ) ] * * , ( ) * * , ( ) link:\doibase 10.1140/epjb / e2011 - 20677 - 8 [ * * , ( ) ] link:\doibase 10.1098/rsta.2012.0466 [ * * , ( ) ] link:\doibase 10.1007/s40435 - 013 - 0049 - 2 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.80.2109 [ * * , ( ) ] * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) link:\doibase 10.1103/physrevlett.105.254101 [ * * , ( ) ] _ _ , springer theses ( , , ) _ _ , annals of mathematics studies ( , ) link:\doibase 10.1103/physreve.89.032915 [ * * , ( ) ] * * , ( ) link:\doibase 10.1103/physreve.84.046107 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.99.238106 [ * * , ( ) ] * * , ( ) * * , ( ) http://opac.inria.fr/record=b1049107[__ ] , ( , )
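The eigenvalue-spreading mechanism discussed in this section can be illustrated numerically. The sketch below is not the paper's symmetric construction: it uses a simple ring in which every m-th node is inhibitory (all of its outgoing links are negative, in the spirit of Dale's law) and each row is rescaled to unit net sum so that a synchronous solution exists. With purely excitatory coupling the spectrum lies in the unit interval; as the inhibition ratio grows, the normalization by a smaller net input inflates the entries and tends to spread the spectrum.

```python
import numpy as np

def coupling_matrix(N=120, k=6, m=None):
    """Ring of N nodes coupled to k neighbours on each side; every m-th node is
    inhibitory (all of its outgoing links negative); rows scaled to unit sum."""
    sign = np.ones(N)
    if m is not None:
        sign[::m] = -1.0
    G = np.zeros((N, N))
    for i in range(N):
        for d in range(1, k + 1):
            for j in ((i - d) % N, (i + d) % N):
                G[i, j] = sign[j]          # an incoming link carries the sign of the sender
        s = G[i].sum()
        assert s > 0, "net input must stay positive for this normalization"
        G[i] /= s                          # unit row sum, so a synchronous solution exists
    return G

# Purely excitatory coupling versus inhibitory fractions of roughly 1/6, 1/4, 1/3.
for m in (None, 6, 4, 3):
    lam = np.linalg.eigvals(coupling_matrix(m=m))
    print(f"inhibitory spacing m={m}: max |lambda| = {np.abs(lam).max():.3f}, "
          f"min Re(lambda) = {lam.real.min():.3f}")
```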
we investigate the combined effects of distributed delay and the balance between excitatory and inhibitory nodes on the stability of synchronous oscillations in a network of coupled stuart landau oscillators . to this end , a symmetric network model is proposed for which the stability can be investigated analytically . it is found that beyond a critical inhibition ratio , synchronization tends to be unstable . however , increasing distributional widths can counteract this trend , leading to multiple resynchronization transitions at relatively high inhibition ratios . the extended applicability of the results is confirmed by numerical studies on asymmetrically perturbed network topologies . all investigations are performed on two distribution types , a uniform distribution and a gamma distribution .
imaging detector arrays based on transition edge sensors ( tes ) are a maturing technology for a range of both earth and space - base applications , such as infrared astronomy , x - ray astronomy , and material analysis .squid - based multiplexed readout is needed to obtain a detector noise limited readout chain .multiplexing in this context implies that multiple tes signals which overlap in time and phase space are made mathematically independent so that they can be transported through a shared readout chain without losing information , by multiplying each tes signal with an independent carrier and subsequently adding these modulated tes signals .the various multiplexing variants which are being developed can be ordered based on their modulating element .we distinguish the variants in which the squid is the multiplying element which mounts the signal on a carrier , and the variant in which the tes mounts the signal on the carrier ( fdm ) .for the first category the tes is biased with a direct voltage ( dc ) , and the modulated signals are separated in the time domain ( tdm) , or frequency domain ( microwave squid multiplexing or code domain multiplexing ) . for the second category the tes is biased with an alternating voltage ( ac ) , and the modulated signals are separated in the frequency domain ( fdm) . for both variants of multiplexing the signalshave to be bandwidth limited to prevent addition of wide - band noise by using an inductor as low - pass filter , or an inductor and capacitor in series as band - pass filter , respectively .an important difference between the two categories of multiplexing within the context of this paper , is the property that for case of squid - based modulation the bias sources of the teses are commonly shared between pixels to lower the total wire count , with the consequence that the set points of the pixels can not be chosen individually , whereas for the case of tes - based modulation the tes bias sources are independent and therefore multiplexed .this reduces the wire count and makes the working points of the pixels individually adjustable .the individual bias setting per pixel can also be used to change the impedance of the tes bias source for each pixel by applying feedback to the applied bias voltage based on the measured tes current .this type of feedback has been proposed in different incarnations and for different purposes .it has been shown that with this type of feedback the effective thermal decay time of x - ray detectors can be made shorter when the effective internal resistance of bias source is made negative , i.e. , where is the operating resistance of the tes , and the total series resistance in the tes bias circuit .the same feedback topology has also been applied to compensate parasitic resistances in the tes bias circuit , to increase the stiffness of the voltage bias and thereby the electro thermal feedback ( etf ) loop gain , leaving a ( slightly ) net positive resistance in the voltage source .recently , the authors of this paper proposed to use the regime of for bolometer operation , so that the tes resistance is kept constant under optical loading , and that the small signal parameters of the tes change minimally under optical loading .as a result , the linearity of the detector is optimised , and the dependence of the cross talk on the signal level is minimised . 
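To make the resistance-locked-loop idea concrete, the following steady-state sketch compares a fixed voltage bias with a loop that holds V/I at a set-point resistance. The TES model (a logistic R(T) transition and a single thermal link to the bath) and every numerical value are illustrative assumptions only; they do not represent the device, bias circuit or firmware used in the experiments described here.

```python
import numpy as np

# Toy TES: logistic R(T) transition and a single thermal link to the bath.
# All numbers are placeholders in arbitrary units.
R_N, T_C, W  = 1.0, 0.100, 0.002     # normal resistance, transition temperature, width
G_TH, T_BATH = 1.0, 0.090            # thermal conductance, bath temperature
V_BIAS       = 0.05                  # fixed voltage bias (assumed)
R_SET        = 0.30 * R_N            # RLL set-point resistance (assumed)

def R_of_T(T):
    return R_N / (1.0 + np.exp(-(T - T_C) / W))

def T_voltage_bias(p_opt):
    """Bisection on the heat balance V**2 / R(T) + p_opt = G_TH * (T - T_BATH)."""
    lo, hi = T_BATH, T_C + 10.0 * W
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        excess = V_BIAS**2 / R_of_T(mid) + p_opt - G_TH * (mid - T_BATH)
        lo, hi = (mid, hi) if excess > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# The RLL holds V / I = R_SET, which pins the operating temperature on the transition.
T_SET = T_C + W * np.log(R_SET / (R_N - R_SET))

for p_opt in (0.000, 0.002, 0.004):                  # increasing optical loading
    R_vb  = R_of_T(T_voltage_bias(p_opt))            # voltage bias: R drifts with loading
    P_rll = G_TH * (T_SET - T_BATH) - p_opt          # RLL: Joule power absorbs the loading
    V_rll = R_SET * np.sqrt(P_rll / R_SET)           # applied bias, V = R_SET * I
    print(f"p_opt={p_opt:.3f}  R under voltage bias={R_vb:.3f}  "
          f"R under RLL={R_SET:.3f}  RLL bias voltage={V_rll:.4f}")
```

In this toy setting the voltage-biased operating resistance drifts along the transition as the loading increases, while the locked loop keeps it fixed and lets the Joule power absorb the change, which is the behaviour exploited above for linearity and cross-talk.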
in this paper, we show the first experimental results which demonstrate that operation in the regime where is stable both for single pixel and for multiplexed operation , and that the signal - to - noise ratio of the tes - based detector is conserved under the rll with respect to voltage bias . for practical reasonsthe experiments have been divided over two setups .the comparison of the signal - to - noise ratio was performed in a bolometer pixel with a low nep of / .the multiplexing experiment was performed with x - ray micro calorimeters , because pulse - tube induced micro - vibrations allowed for stable operation of only one pixel in the bolometer multiplexer during this particular experiment .-curve of a bolometer pixel as taken under voltage bias .the pixel consists of a tes and absorber on a sin island , which is suspended by four sin legs of .the size of the tiau tes equals m , with mk , a bias power level of fw , and a measured dark nep of / .the bias points under which the pixel has been characterised using the rll are indicated with the crosses . ]the experiments were performed with a 3-pixel fdm setup with integrated lithographic 8-pixel bandpass filters , and a two - pixel x - ray fdm setup , both with baseband feedback ( bbfb ) electronics which provides squid linearisation and signal demodulation .the rll was integrated in the firmware of the bbfb electronics , following the block diagram as sketched in fig .[ fig : setup ] .the system is adjusted such that the dissipative tes current is aligned fully in the -channel of the readout .the -channel feedback is required to ensure stability of the bbfb system .the output of the -channel , i.e. the measured tes current , is multiplied with a constant ( ) , which is proportional to the resulting set point resistance of the tes , as the applied bias voltage .the switch toggles between the ( standard ) voltage bias condition and the rll mode .to demonstrate the stability of the rll , and to compare the signal - to - noise ratio of a bolometer pixel under voltage bias and the rll , a typical bolometer pixel under development for the sw band of safari was used , at a bias frequency of 1.46 mhz .the pixel consists of a tes and absorber on a sin island , which is suspended by four sin legs of .the size of the tiau tes equals m , with mk , a bias power level of fw , and a measured dark nep of / .the pixel has been characterised for a range of bias points in the transition under both voltage bias and the rll .the current - voltage characteristic of the pixel as obtained under voltage bias is shown in fig . [fig : ivcurve ] .the bias points which have been characterised using the rll are marked with crosses .the latter range is limited because of electrothermal stability constraints , which originate from the much higher electrothermal loop gain under the rll , in combination with the fact that the circuit was originally designed for operation under voltage bias only .before discussing the results , we need to consider that under voltage bias the observed current noise spectra density and the noise equivalent power ( nep ) scale linearly , as the bias voltage is kept constant . under the rllthe situation is different , as not the bias voltage is kept constant , but the tes resistance instead . as a result ,the detector nep is proportional to the tes bias current squared , as the dissipated power equals . 
in the small signal limitthis implies that to calculate the nep the observed current noise spectral density must be multiplied by 2 , as the square of the observed current equals in the small signal limit . from the rll small signal model it follows that the detector nep is equal under voltage bias and the rllthis implies that also the information bandwidth of the detector , i.e. the frequency at which the phonon noise and johnson noise of the detector are equal , is the same under rll and voltage bias .it also follows from the small signal model that the measured current noise spectral densities under voltage bias and the rll cross sect at the information bandwidth . for frequencies above the information bandwidth of the detector , but below the electrical bandwidth , the behaviours diverge . for that regime under voltage biaswe expect to observe the current noise spectral density corresponding to a resistor of , whereas under the rll this resistor value equals .the measured current noise power spectral densities for four different bias points ( 0.55 , 0.58 , 0.62 , and 0.74 ) in the transition are shown in fig .[ fig : rll - vb - noise ] . to make a direct comparison with the noise power spectral density under voltage bias possible ,the spectra as measured under the rll have been multiplied by a factor of two , as discussed above . as a result of this multiplication ,the information band is not found anymore at the cross section between the spectra , but at the frequency where the difference equals a factor of 2 .note that as we intend to do a relative comparison , and because we do nt have an optical signal source available to measure the responsivity of the detector , the observed current noise spectral density has not been converted into calibrated nep values .instead , we present the current noise spectral densities .it is clear that in the low frequency region , i.e. hz which is within the information bandwidth of the detector , the noise spectra overlap , as was to be expected as discussed above .we also observe that for higher frequencies the observed noise power is much higher for the rll than under voltage bias .this qualitatively scales with the expected factor , assuming that .unfortunately impedance data is lacking for this detector so that a quantitative comparison is not possible .however , the onset of electrothermal oscillations between 1 and 2 khz shows that the thermal bandwidth of the detector , which is approximately equal to the information bandwidth ( hz ) , has indeed shifted with a significant factor of with respect to the situation under voltage bias . to demonstrate that the rll can also be applied for multiple pixels simultaneously , we put two x - ray microcalorimeter pixels simultaneously under resistance locking .it was found that the pixels operated stable , and that the observed dynamic behaviour and nep did not change when the second pixel was added ( not shown ) .note that the observed currents are used for biasing the tess , to close the rll ( see fig .[ fig : setup ] ) . 
as an illustration of the resulting beating in the currentis shown in the time domain plot in fig .[ fig:2pix ] .frequency domain multiplexing makes it possible to create a resistance locked loop per pixel , as the pixel bias voltages are multiplexed and therefore individually tuneable .the rll improves detector speed , linearity and dynamic range for bolometer operation .it was shown that for fdm resistance locking can easily be integrated in the digital baseband feedback electronics which is used for demodulation and squid linearisation .it was found that within the information bandwidth of the detector the observed neps are equal , and that closing multiple loops simultaneously does not impair the stability of the system . when combined with the earlier observation that for tes - based bolometers the performance under ac bias is equal to under dc bias , we believe that this mode of operation is suitable for larger tes - based bolometer applications .o. noroozian , j. a. b. mates , d. a. bennett , j. a. brevik , j. w. fowler , j. gao , g. c. hilton , r. d. horansky , k. d. irwin , z. kang , d. r. schmidt , l. r. vale , and j. n. ullom , `` high - resolution gamma - ray spectroscopy with a microwave - multiplexed transition - edge sensor array , '' _ applied physics letters _ , vol .103 , no . 20 , pp . ,[ online ] .available : http://scitation.aip.org/content/aip/journal/apl/103/20/10.1063/1.4829156 m. a. dobbs , m. lueker , k. a. aird , a. n. bender , b. a. benson , l. e. bleem , j. e. carlstrom , c. l. chang , h .-cho , j. clarke , t. m. crawford , a. t. crites , d. i. flanigan , t. de haan , e. m. george , n. w. halverson , w. l. holzapfel , j. d. hrubes , b. r. johnson , j. joseph , r. keisler , j. kennedy , z. kermish , t. m. lanting , a. t. lee , e. m. leitch , d. luong - van , j. j. mcmahon , j. mehl , s. s. meyer , t. e. montroy , s. padin , t. plagge , c. pryke , p. l. richards , j. e. ruhl , k. k. schaffer , d. schwan , e. shirokoff , h. g. spieler , z. staniszewski , a. a. stark , k. vanderlinde , j. d. vieira , c. vu , b. westbrook , and r. williamson , `` frequency multiplexed superconducting quantum interference device readout of large bolometer arrays for cosmic microwave background measurements , '' _ review of scientific instruments _ , vol .83 , no . 7 , pp . ,[ online ] .available : http://scitation.aip.org/content/aip/journal/rsi/83/7/10.1063/1.4737629 j. van der kuur , j. beyer , m. bruijn , j. gao , r. den hartog , r. heijmering , h. hoevers , b. jackson , b. van leeuwen , m. lindeman , m. kiviranta , p. de korte , p. mauskopf , p. de korte , h. van weers , and s. withington , `` the spica - safari tes bolometer readout : developments towards a flight system , ''_ j. low temp . phys . _ , vol .167 , pp . 561567 , 2012 .s. w. nam , b. cabrera , p. colling , r. clarke , e. figueroa - feliciano , a. miller , and r. romani , `` a new biasing technique for transition edge sensor with electrothermal feedback , '' _ ieee trans ._ , vol . 9 , no . 2 ,pp . 42094212 , 1999 .j. van der kuur and m. kiviranta , `` operation of transition edge sensors in a resistance locked loop , '' _ applied physics letters _102 , no . 2 , p. 023505 , 2013 .[ online ] .available : http://link.aip.org/link/?apl/102/023505/1 m. bruijn , l. gottardi , r. den hartog , j. van der kuur , a. van der linden , and b. jackson , `` , '' _ _ , vol .176 , no . 3 - 4 ,pp . 421425 , 2014 .[ online ] .available : http://dx.doi.org/10.1007/s10909-013-1003-6 b. d. jackson , p. a. j. de korte , j. 
van der kuur , p. d. mauskopf , j. beyer , m. p. bruijn , a. cros , j .-gao , d. griffin , r. den hartog , m. kiviranta , g. de lange , b .-van leeuwen , c. macculi , l. ravera , n. trappe , h. van weers , and s. withington , `` the spica - safari detector system : tes detector arrays with frequency division multiplexed squid readout , '' _ ieee trans ._ , vol . 2 , no . 1 , pp . 1221 , 2012 .l. gottardi , h. akamatsu , m. bruijn , j .-gao , r. den hartog , r. hijmering , h. hoevers , p. khosropanah , a. kozorezov , j. van der kuur , a. van der linden , and m. ridder , `` weak - link phenomena in ac - biased transition edge sensors , '' _ journal of low temperature physics _ , vol .176 , pp . 279284 ,
tes - based bolometer and microcalorimeter arrays with thousands of pixels are under development for several space - based and ground - based applications . a linear detector response and low levels of cross talk facilitate the calibration of the instruments . in an effort to improve the properties of tes - based detectors , fixing the tes resistance in a resistance - locked loop ( rll ) under optical loading has recently been proposed . earlier theoretical work on this mode of operation has shown that the detector speed , linearity and dynamic range should improve with respect to voltage biased operation . this paper presents an experimental demonstration of multiplexed readout in this mode of operation in a tes - based detector array with noise equivalent power values ( nep ) of / . the measured noise and dynamic properties of the detector in the rll will be compared with the earlier modelling work . furthermore , the practical implementation routes for future fdm systems for the readout of bolometer and microcalorimeter arrays will be discussed .
critical phenomena in anisotropic systems without equivalent nearest neighbors constitute an interesting research topic . a universal formula for percolation thresholds , that involves the dimension of the anisotropic lattice and an arithmetic average of the coordination number for different anisotropic lattices , has been recently postulated in ref. .the extension of these studies to more complex problems , such as directed percolation ( dp ) , and more complex systems , such as anisotropic random systems , is yet to be addressed . in this context ,random systems are good candidates to model anisotropy since they do not have equivalent nearest neighbors nor equivalent sites at all lengths . in this workwe propose a simple simulation model to study the properties of dp in two - dimensional ( 2d ) anisotropic random media .the degree of anisotropy is computed by means of the ratio between the axes of a semi - ellipse enclosing the bonds that promote percolation in one direction , such that ( see fig.1 ) . as a function of the order parameter and at the percolation threshold , we measure the correlation length exponent and the fractal dimension of the largest percolating clusters ( in systems of up to 51200 random sites ) . in the present model ,the well - known scaling exponents of isotropic dp follow by simply setting . at percolation threshold ,our model shows that the average number of bonds per site for dp in anisotropic 2d random systems is an invariant ( ) independently of .this result suggests that the sinai theorem , proposed originally for isotropic percolation ( ip ) , is also valid for anisotropic dp problems .the new invariant also yields a constant for all , which corresponds to the value of isotropic dp .the paper is organized as follows . in the next sectionwe outline our model . in sec.iii , we present the results of our simulations and discuss the effects of on the scaling exponents .in order to simulate dp in 2d anisotropic random media we develop a simulation algorithm similar to the one used in ref. .the coordinates of sites are generated at random in a square box of size .the simulation length unit is chosen such that the density of sites , namely , in the box is always unity regardless of the total number of sites .the percolation is then checked over sites from the left edge towards the right edge of the simulation box ( _ i.e. _ , along the x - axis in fig.1 ) . a periodical boundary condition is applied in the vertical -direction . in fig.1 we show a ` particle ' that moves from to .the moving is allowed whenever the site is contained within the shaded elliptical area . in our simulations , the degree of anisotropy is given by the parameter , where is the longer and is the shorter axis of a semi - ellipse , _i.e. _ , is the ratio of the maximum ` hopping distances ' along the - and -axes . in the standard 2d isotropic dpthere are three possible equivalent directions to move : up , down and forward .this situation in our model is attained by setting = 1 . in the limit , the model tends to the one - dimensional ( 1d ) percolation problem .thus , simulation results using the present 2d percolation model will reveal features of the crossover from the standard ( say , isotropic ) dp to the 1d percolation problem . for intermediate values of model features anisotropic dp . for a given value of the anisotropy parameter and for a given realization of random site coordinates , in a sample of size , we study percolation from the left- to the right- simulation box edge . 
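The percolation check just described can be sketched as follows: random sites are placed in an L x L box with unit density, the y-direction is periodic, and a hop is allowed only to sites inside the forward semi-ellipse with longitudinal semi-axis b and transverse semi-axis b/X, where X is the anisotropy ratio. The seeding of the left and right edges, the bisection on b, and the helper names are simplifying assumptions for illustration, not necessarily the exact rules used for the published runs.

```python
import numpy as np
from collections import deque

def percolates(pts, L, b, X):
    """Directed percolation check: hops go forward in x, inside a semi-ellipse
    with semi-axes b (along x) and b/X (along y), with periodic y."""
    a = b / X
    x, y = pts[:, 0], pts[:, 1]
    reached = x <= b                            # sites reachable from the left edge
    queue = deque(np.flatnonzero(reached))
    while queue:
        i = queue.popleft()
        if x[i] >= L - b:                       # can exit through the right edge
            return True
        dx = x - x[i]
        dy = np.abs(y - y[i])
        dy = np.minimum(dy, L - dy)             # periodic boundary in y
        ahead = (dx > 0) & ((dx / b) ** 2 + (dy / a) ** 2 <= 1.0) & ~reached
        for j in np.flatnonzero(ahead):
            reached[j] = True
            queue.append(j)
    return bool(np.any(x[reached] >= L - b))

def critical_b(pts, L, X, lo=0.1, hi=None, iters=25):
    """Bisect on the longitudinal semi-axis b for one disorder realization."""
    hi = hi if hi is not None else L
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if percolates(pts, L, mid, X) else (mid, hi)
    return 0.5 * (lo + hi)

N = 800
L = np.sqrt(N)                                  # unit site density
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, L, size=(N, 2))
for X in (1.0, 2.0, 4.0):                       # X = 1 is the isotropic DP limit
    b_c = critical_b(pts, L, X)
    area = 0.5 * np.pi * b_c * (b_c / X)        # area of the critical semi-ellipse
    print(f"X={X}: b_c ~ {b_c:.2f}, critical semi-ellipse area ~ {area:.2f}")
```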
at the percolation threshold , we obtain the critical value of the semi - ellipse - axis : and the mass of the critical cluster : = `` _ total number of sites belonging to the largest cluster at percolation _ '' .these quantities , are then averaged over a great number of random realizations of site coordinates for the same sample size which result on the average quantities and , respectively . in general , the dependence of the averages and on the samples size is a consequence of the finite size effects of the percolation problem . in order to quantify these effects ,the present simulations were performed at different = 400 , 800 , 1600 , 3200 , 6400 , 12800 , 25600 and 51200 .accordingly , the number decreases from to such that the product of the numbers is approximately the same for all sample sizes in our study . along with these average quantities ,we also calculate the moments ^{2 } > ^{1/2 } \;\;\ ; , \;\;\ ; \label{eq : delta } \\\delta m(n ) & = & < [ m(n ) - { \cal m}(n)]^{2 } > ^{1/2 } \;\;\ ; , \end{aligned}\ ] ] and also the next - order moments , which are used to estimate the statistical errors of our simulation results . the present measurements are performed for various values of and . as can be seen from the results discussed in the next section ,the greater the value of , the stronger the finite size effects are .we verify that for simulations can only been carried out in samples of size . following the well - known finite - size scaling procedure suggested in ref., the critical exponent of the percolation problem is defined from the scaling expression where is given in eq.([eq : delta ] ) .note that in the present study percolation is checked by the longitudinal direction only ( the -axes in fig.1 ) , then the exponent in eq.([eq : mu ] ) should be identified with the parallel ( see ) .in fig.2(a ) the quantities are plotted versus for different values of the order parameter .the slopes of the fitting lines give the corresponding values for the exponent .thus we found that for the largest , and for we measured .other values are given in the figure . from these calculated moments and the linear fitting procedure, we estimate the statistical error to be less than for all values of shown in this figure .results for the dp limiting case has been previously reported by one of us . in this case , the value is known as the universal value of for a whole class of istropic dp models in 2d . as the amount of anisotropy increases , _i.e. _ , the correlation length exponent decreases .since this decrease is initially very fast to then become smoothly , it is not possible to obtain the whole crossover from 2d to 1d directed percolation for the behaviour of .that is , the decrease from in 2d - dp to the limit in 1d is limited by the size of our simulation box .the finite - size effects in correspondence to the different values of ( and , therefore , to the different degrees of anisotropy in the 2d random systems ) are in fact equivalent to those discussed in great detail in refs. for anisotropic percolation and isotropic dp . by using the values of in fig.2(a ) ,the critical ` _ radius _ ' is determined from the scaling expression are plotted versus . 
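The exponent extraction described above reduces to a straight-line fit on a log-log plot. The sketch below shows only that step: the Delta values are arbitrary placeholders rather than measured data, and the conversion of the slope into nu assumes a standard two-dimensional finite-size scaling Delta(N) ~ N**(-1/(2*nu)) for samples with L = sqrt(N), which may differ from the exact scaling expression used in the paper.

```python
import numpy as np

# The Delta values below are arbitrary placeholders (not the measured data), so
# the fitted nu is only a demonstration of the procedure: fit the log-log slope
# and convert it to an exponent, here assuming Delta(N) ~ N**(-1/(2*nu)) for 2d
# samples with L = sqrt(N).
N_vals = np.array([400, 800, 1600, 3200, 6400, 12800, 25600, 51200], dtype=float)
rng = np.random.default_rng(2)
delta = 0.9 * N_vals ** -0.38 * np.exp(rng.normal(0.0, 0.02, N_vals.size))

slope, _ = np.polyfit(np.log(N_vals), np.log(delta), 1)
nu_estimate = -1.0 / (2.0 * slope)
print(f"log-log slope = {slope:.3f}  ->  nu ~ {nu_estimate:.2f}")
```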
from these plotswe obtain by taking the asymptotic values for all studied .the estimated values of are also shown in this figure .very remarkably , our simulations show that for all considered the quantity is in fact a constant .since is the area of the critical semi - ellipse at percolation , then our results suggests that sinai s theorem , proposed originally for ip , is also valid for 2d anisotropic dp problems . in this respect, we emphasize again that our length unit should be taken as for a system with site concentration .thus , our simulations leads to the new invariance where is the site concentration ( _ e.g. _ , the donor concentration in doped semiconductors ) , is the area of the critical semi - ellipse and is the mean number of connected bonds per site at percolation .the invariance of eq.([eq : inv ] ) may be somehow related to the fractal behavior of the critical clusters as we shall discuss below .let us determine first the fractal dimension of the critical percolation cluster using a standard procedure based on the scaling expression in fig.2(b ) the quantities are plotted against for different values of the anisotropy parameter .very surprisingly we found that the fractal dimensions , as determined from the slopes of the fitting lines for various values of in fig.2(b ) , seem indeed to be constant and independent of within our simulation errors .we estimate for all , which corresponds to about the same value of the isotropic dp model with . at a first glancethis result might rise some doubts , but we believe it can be understood in connection with the invariant given in eq.([eq : inv ] ) .the invariance of , with respect to changes in the anisotropy parameter , implies that the average number of connected bonds at percolation is independent of .if we assume the percolation process within an elementary semi - ellipse ( as in fig.1 ) to be the ` originating percolation rule ' , then the invariance of eq.([eq : inv ] ) could mean that the law to generate percolation clusters remains unchanged as varies .if this conjecture is right , we could suggest here a more general statement for all types of percolation models which are related to each other by the sinai theorem ; in these cases , the fractal dimensions of the percolation clusters could all be the same .it should be noted that our simulations are limited to .it is in this range that we observed the invariance of and the constant value for .we believe that these features are maintained for a larger range of values .however , it is not feasible to increase and the sample size simultaneously and get to the point where the present 2d simulation model crosses to the 1d case ( _ i.e. _ , ) . to conclude ,we have suggested a model for anisotropic directed percolation ( adp ) and have presented the first simulation results for the main critical exponents of the model in 2d random systems .quite surprisingly , we have found a new invariance for the average number of connected bonds at percolation due to presence of a suitable external force ( _ e.g. 
_ , shear stress , magnetic field , _ etc _ ) .our simulations show that the product is a constant for all s considered .this invariance should be in close relation to the value of .we strongly believe the present model of adp could be important to describe some physical phenomena such as hopping conduction in anisotropic -ge and -si under strong electrical fields , where the impurity wave functions are anisotropic and the conduction band splits into one ellipsoid .our measurements could be useful , for instance , in the expressions for the hopping resistivity in 2d anisotropic random media .the new invariance could be used in these systems similarly to the invariance for ip in a circle problem .we hope the present model will stimulate further investigations on this direction .one of the authors ( n.v.l . ) would like to thank the condensed matter group at ictp , trieste , for financial support .99 for a review , see d. stauffer , phys .rep . * 54 * 1 ( 1979 ) ; physica a * 242 * 1 ( 1997 ) ; also j.w .essam , rep .prog . phys . * 43 * 833 ( 1980 ) .i.g . enting and j. oitmaa , j. phys . a * 10 * , 1151 ( 1977 ) .s. galam and a. mauger , phys .e * 56 * , 322 ( 1997 ) . v. lien nguyen and a. rubio , solid state commun . * 95 * 833 ( 1995 ) .levinstein , b.i .shklovskii , m.s .shur and a.l .efros , sov .jetp * 42 * 197 ( 1975 ) v. lien nguyen , b.i .shklovskii and a.l .efros , sov .* 13 * 1281 ( 1979 ) .ya.g . sinai ( _ theorem _ ) : `` if a surface can be obtained from the surface via a linear transformation of coordinates that involves both rotation and dilatation , then the values ( _ i.e. _ , the mean number of bonds per site ) for the two surfaces are identical '' , see page 116 .b. mandelbrot in _ fractal geometry of nature _ , freeman ( san francisco 1982 ) b.i .shklovskii and a.l .efros in _ electronic properties of doped semiconductors _ , springer - verlag ( berlin , 1984 ) chapters 5 and 6 .
we propose a simulation model to study the properties of directed percolation in two - dimensional ( 2d ) anisotropic random media . the degree of anisotropy in the model is given by the ratio between the axes of a semi - ellipse enclosing the bonds that promote percolation in one direction . at percolation , this simple model shows that the average number of bonds per site in 2d is an invariant equal to 2.8 independently of . this result suggests that sinai s theorem , proposed originally for isotropic percolation , is also valid for anisotropic directed percolation problems . the new invariant also yields a constant fractal dimension for all , which is the same value found in isotropic directed percolation ( _ i.e. _ , ) .
in many applications , in particular in the pricing of financial securities , we are interested in the effective computation by monte carlo methods of the quantity , where is a diffusion process and a given function .the monte carlo euler method consists of two steps .first , approximate the diffusion process by the euler scheme with time step . then approximate by , where is a sample of independent copies of .this approximation is affected , respectively , by a discretization error and a statistical error on one hand , talay and tubaro prove that if is sufficiently smooth , then with a given constant and in a more general context , kebaier proves that the rate of convergence of the discretization error can be for all values of ] ) , the optimal choice is obtained for , and . with this choice ,the complexity of the statistical romberg method is of order , which is lower than the classical complexity in the monte carlo method .more recently , giles generalized the statistical romberg method of kebaier and proposed the multilevel monte carlo algorithm , in a similar approach to heinrich s multilevel method for parametric integration ( see also creutzig et al . , dereich , giles , giles , higham and mao , giles and szpruch , heinrich , heinrich and sindambiwe and hutzenthaler , jentzen and kloeden for related results ) .the multilevel monte carlo method uses information from a sequence of computations with decreasing step sizes and approximates the quantity by where the fine discretization step is equal to thereby . for , processes , , are independent copies of whose components denote the euler schemes with time steps and .however , for fixed , the simulation of and has to be based on the same brownian path .concerning the first empirical mean , processes , , are independent copies of which denotes the euler scheme with time step . here , it is important to point out that all these monte carlo estimators have to be based on different independent samples . due to the above independence assumption on the paths , the variance of the multilevel estimator is given by where .assuming that the diffusion coefficients of and the function are lipschitz continuous , then it is easy to check , using properties of the euler scheme that for some positive constant ( see proposition [ p1 ] for more details ) .giles uses this computation in order to find the optimal choice of the multilevel monte carlo parameters .more precisely , to obtain a desired root mean squared error ( rmse ) , say of order , for the multilevel estimator , giles uses the above computation on to minimize the total complexity of the algorithm .it turns out that the optimal choice is obtained for ( see theorem 3.1 of ) hence , for an error , this optimal choice leads to a complexity for the multilevel monte carlo euler method proportional to .interesting numerical tests , comparing three methods ( crude monte carlo , statistical romberg and the multilevel monte carlo ) , were processed in korn , korn and kroisandt . 
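A compact sketch of the multilevel estimator described above, written for a geometric Brownian motion test case with payoff f(x) = max(x - K, 0); the model, the payoff and the sample sizes are placeholders for illustration only. The essential points are that the two Euler schemes of a level are driven by the same Brownian increments (the coarse scheme aggregates the fine increments), while different levels use independent samples.

```python
import numpy as np

rng = np.random.default_rng(3)

# Test case (assumed for illustration): geometric Brownian motion
# dX = mu*X dt + sigma*X dW, X_0 = x0, and payoff f(x) = max(x - K, 0).
mu, sigma, x0, K, T = 0.05, 0.2, 100.0, 100.0, 1.0
f = lambda x: np.maximum(x - K, 0.0)

def euler_pair(n_paths, n_fine, m):
    """f(X_T) from Euler schemes with steps T/n_fine and m*T/n_fine,
    both levels driven by the same Brownian increments."""
    h = T / n_fine
    dW = rng.normal(0.0, np.sqrt(h), size=(n_paths, n_fine))
    x_f = np.full(n_paths, x0)
    for k in range(n_fine):
        x_f = x_f + mu * x_f * h + sigma * x_f * dW[:, k]
    n_coarse = n_fine // m
    dW_c = dW.reshape(n_paths, n_coarse, m).sum(axis=2)   # aggregated increments
    x_c = np.full(n_paths, x0)
    for k in range(n_coarse):
        x_c = x_c + mu * x_c * (m * h) + sigma * x_c * dW_c[:, k]
    return f(x_f), f(x_c)

def mlmc_estimate(m, L, n_samples):
    """Sum over levels of the empirical means, as in the estimator above;
    n_samples[l] is the sample size on level l (l = 0 is the coarsest)."""
    fine0, _ = euler_pair(n_samples[0], 1, 1)              # level 0: single-step Euler
    est = fine0.mean()
    for l in range(1, L + 1):
        fine, coarse = euler_pair(n_samples[l], m ** l, m)
        est += (fine - coarse).mean()
    return est

m, L = 2, 6
n_samples = [20_000 // (2 ** l) + 100 for l in range(L + 1)]   # assumed, decreasing with l
print("MLMC estimate of E[f(X_T)]:", mlmc_estimate(m, L, n_samples))
```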
in the present paper ,we focus on central limit theorems for the inferred error ; a question which has not been addressed in previous research .to do so , we use techniques adapted to this setting , based on a central limit theorem for triangular array ( see theorem [ lindeberg ] ) together with toeplitz lemma .it is worth to note that our approach improves techniques developed by kebaier in his study of the statistical romberg method ( see remark [ rem - keb ] for more details ) .hence , our main result is a lindeberg feller central limit theorem for the multilevel monte carlo euler algorithm ( see theorem [ cltsr ] ) .further , this allows us to prove a berry esseen - type bound on our central limit theorem . in order to show this central limit theorem, we first prove a stable law convergence theorem , for the euler scheme error on two consecutive levels and , of the type obtained in jacod and protter .indeed , we prove the following functional result ( see theorem [ th - acc ] ) : where is the same limit process given in theorem 3.2 of jacod and protter .our result uses standard tools developed in their paper but it can not be deduced without a specific and laborious study .further , their result , namely is neither sufficient nor appropriate to prove our theorem [ cltsr ] , since the multilevel monte carlo euler method involves the error process rather than .thanks to theorem [ cltsr ] , we obtain a precise description for the choice of the parameters to run the multilevel monte carlo euler method . afterward , by a complexity analysis we obtain the optimal choice for the multilevel monte carlo euler method .it turns out that for a total error of order the optimal parameters are given by this leads us to a complexity proportional to which is the same order obtained by giles . by comparing relations ( [ gilesparam ] ) and ( [ bakparam ] ), we note that our optimal sequence of sample sizes does not depend on any given constant , since our approach is based on proving a central limit theorem and not on obtaining an upper bound for the variance of the algorithm .however , some numerical tests comparing the runtime with respect to the root mean square error , show that we are in line with the original work of giles .nevertheless , the major advantage of our central limit theorem is that it fills the gap in the literature for the multilevel monte carlo euler method and allows to construct a more accurate confidence interval compared to the one obtained using chebyshev s inequality .all these results are stated and proved in section [ main ] .the next section is devoted to recall some useful stochastic limit theorems and to introduce our notation .let be a sequence of random variables with values in a polish space defined on a probability space .let be an extension of , and let be an -valued random variable on the extension .we say that converges in law to stably and write , if for all bounded continuous and all bounded random variable on .this convergence is obviously stronger than convergence in law that we will denote here by `` . ''according to section 2 of jacod and lemma 2.1 of jacod and protter , we have the following result .[ lemma ] let and be defined on with values in another metric space . conversely , if and generates the -field , we can realize this limit as with defined on an extension of and .now , we recall a result on the convergence of stochastic integrals formulated from theorem 2.3 in jacod and protter .this is a simplified version but it is sufficient for our study . 
let be a sequence of -valued continuous semimartingales with the decomposition where , for each and , is a predictable process with finite variation , null at and is a martingale null at .[ th - conv - int ] assume that the sequence is such that is tight .let and be a sequence of adapted , right - continuous and left - hand side limited processes all defined on the same filtered probability space . if then is a semimartingale with respect to the filtration generated by the limit process , and we have .we recall also the following lindeberg feller central limit theorem that will be used in the sequel ( see , e.g. , theorems 7.2 and 7.3 in ) .[ lindeberg ] let be a sequence such that as . for each , let be independent random variables with finite variance such that for all .suppose that the following conditions hold : .lindeberg s condition : for all , moreover , if the have moments of order , then the lindeberg s condition can be obtained by the following one : lyapunov s condition : .let be the process with values in , solution to where is a -dimensional brownian motion on some given filtered probability space with is the standard filtration , and are , respectively , and valued functions .we consider the continuous euler approximation with step given by \delta.\ ] ] it is well known that under the global lipschitz condition \\[-10pt ] \eqntext{x , y\in\mathbb r^d,}\end{aligned}\ ] ] the euler scheme satisfies the following property ( see , e.g. , bouleau and lpingle ) : \\[-8pt ] & & \mathbb { e } \bigl[{\sup_{0\leq t \leq t}}\bigl| x_t - { x}^n_t\bigr|^{p } \bigr ] \leq\frac{k_p(t)}{n^{p/2}},\qquad k_p(t)>0.\nonumber\end{aligned}\ ] ] note that according to theorem 3.1 of jacod and protter , under the weaker condition we have only the uniform convergence in probability , namely the property following the notation of jacod and protter , we rewrite diffusion ( [ 1 ] ) as follows : where is the column of the matrix , for , and . then the continuous euler approximation with time step becomes \delta.\ ] ]let denotes the euler scheme with time step for , where .noting that the multilevel method is to estimate independently by the monte carlo method each of the expectations on the right - hand side of the above relation .hence , we approximate by here , it is important to point out that all these monte carlo estimators have to be based on different , independent samples . for each samples are independent copies of whose components denote the euler schemes with time steps and and simulated with the same brownian path . 
concerning the first empirical mean , the samples are independent copies of .the following result gives us a first description of the asymptotic behavior of the variance in the multilevel monte carlo euler method .[ p1 ] assume that and satisfy condition ( [ hbsigma ] ) .for a lipschitz continuous function , we have we have {\mathrm{lip } } \sum_{\ell=1}^{l } n_{\ell}^{-1}\mathbbe \bigl [ \sup_{0 \leq t \leq t } \bigl{\vert}x^{m^{\ell } } _ t - x_t\bigr{\vert}^2 + \sup_{0\leq t \leq t}\bigl{\vert}x^{m^{\ell-1}}_t - x_t \bigr{\vert}^2 \bigr],\end{aligned}\ ] ] where {\mathrm{lip}}:=\sup_{u\neq v}\frac{|f(u)-f(v)|}{|u - v|} ] and we have - 1}\int_{(mk+\ell ) \delta /m}^{(mk+\ell+1)\delta / m } \bigl(\eta_{m n}(s)-\eta_n(s)\bigr ) \,ds+o\biggl ( \frac { 1}{n^2}\biggr ) \nonumber \\ & = & \sum_{\ell=0}^{m-1}\sum _{ k=0}^{[t/\delta]-1}\frac{\delta ^2}{m } \biggl ( \frac{mk+\ell}{m}-k \biggr)+o\biggl(\frac{1}{n^2}\biggr ) \\& = & \frac{(m-1)\delta ^2}{2m}[t/\delta ] + o\biggl(\frac{1}{n^2}\biggr ) \nonumber \\ & = & \frac{(m -1)t}{2 m n } t + o\biggl(\frac{1}{n^2}\biggr).\nonumber\end{aligned}\ ] ] having disposed of this preliminary evaluations , we can now study the stable convergence of . by virtue of theorem 2.1 in , we need to study the asymptotic behavior of both brackets and , for all ] , * for and , , for all ] . for the first point , we consider the convergence with \\[-8pt ] & & { } -\eta_n(s)\wedge \eta_{m n}(u)+ \eta_n(s)\wedge\eta_n(u).\nonumber\end{aligned}\ ] ] it is worthy to note that hence , , for , , for , and this yields the desired result . concerning the second point ,the norm is given by with the same function given in relation ( [ eq - defg ] ) .using the properties of function developed above , we have in the same manner which proves our claim . for the last point , that is the essential one , we use the development of given by relation ( [ eqn - expectation ] ) to get \\[-8pt]\nonumber & & \qquad = n^2 \mathbb e\bigl\langle m^{n , i , j } \bigr\rangle_t^2- \frac{(m -1)^2t^2}{4m^2 } t^2+o\biggl(\frac{1}{n}\biggr).\end{aligned}\ ] ] otherwise, we have with on one hand , for , using property ( [ eq - prop ] ) together with the independence of the increments and , yields on the other hand , in relation ( [ eq - defh ] ) we use the cauchy schwarz inequality to get and this yields now , noting that , relation ( [ eqn2-expectation ] ) becomes once again thanks to the development of given by relation ( [ eqn - expectation ] ) , we deduce that by ( [ eq - dev1 ] ) and ( [ eq - dev2 ] ) , we deduce the convergence in of toward . by theorem 2.1 in jacod , converges in law stably to a standard -dimensional brownian motion independent of .consequently , by lemma [ lemma ] and theorem [ th - conv - int ] , we obtain finally , we complete the proof using relations ( [ eq - doleansdade ] ) , ( [ eq1biserreur ] ) , lemma [ lem ] and once again lemma [ lemma ] to obtain proof of lemma [ lem ] at first , we prove the uniform convergence in probability toward zero of the normalized rest terms for .the convergence of is a straightforward consequence of the previous one .the main part of these rest terms can be represented as integrals with respect to three types of supermartingales that can be classified through the following three cases : where and ] has the following expression : so , the process converges to and is tight . 
in the second case , for ,the supermartingale is also of finite variation and its total variation on the interval ] .in fact , we have when , we have and by independence of the brownian motion increments , we deduce that the integrand term is equal to . otherwise , when , we apply the cauchy schwarz inequality to get it follows from all these that . in the last case , for , the process is a square integrable martingale and its bracket has the following expression : it is clear that , so we deduce the tightness of the process and the convergence .now thanks to property ( ) and relation ( [ eq - doleansdade ] ) , it is easy to check that the integrand processes and , introduced in relation ( [ eq1erreur ] ) , converge uniformly in probability to their respective limits and , where and . therefore , by theorem [ th - conv - int ] we deduce that the integral processes given by vanish .consequently , we conclude using relation ( [ eq - doleansdade ] ) that for .we now proceed to prove that .the convergence of the process toward is obviously obtained from the previous one .the main part of this rest term can be represented as a stochastic integral with respect to the martingale process given by with .it was proven in jacod and protter that where is a standard -dimensional brownian motion defined on an extension probability space of , which is independent of .thanks to property ( ) and relation ( [ eq - doleansdade ] ) , the integrand process and once again by theorem [ th - conv - int ] we deduce that the integral processes given by vanish. all this allows us to conclude using relation ( [ eq - doleansdade ] ) .let us recall that the multilevel monte carlo method uses information from a sequence of computations with decreasing step sizes and approximates the quantity by in the same way as in the case of a crude monte carlo estimation , let us assume that the discretization error is of order for any ] we have then , for the choice of , given by equation ( [ eq - size ] ) , we have with and denotes a normal distribution .the global lipschitz condition ( [ hbsigma ] ) seems to be essential to establish our result , since it ensures property ( [ pbold ] ) .otherwise , hutzenthaler , jentzen and kloeden prove that under weaker conditions on and the multilevel monte carlo euler method may diverges whereas the crude monte carlo method converges .proof of theorem [ cltsr ] to simplify our notation , we give the proof for , the case is a straightforward deduction . combining relations ( [ eq - biais ] ) and ( [ eq - estimateur ] ) together, we get where using assumption ( [ hen ] ) , we obviously obtain the term in the limit .taking , we can apply the classical central limit theorem to .then we have . finally , we have only to study the convergence of and we will conclude by establishing to do so, we plan to use theorem [ lindeberg ] with the lyapunov condition and we set \\[-8pt ] z^{m^{\ell},m^{\ell -1}}_{t , k}&:= & f\bigl ( x^{\ell , m^{\ell}}_{t , k } \bigr)-f\bigl ( x^{\ell , m^{\ell-1}}_{t , k}\bigr)-\mathbb e \bigl(f\bigl ( x^{\ell , m^{\ell}}_{t , k}\bigr)-f\bigl ( x^{\ell , m^{\ell-1}}_{t , k } \bigr ) \bigr).\nonumber\end{aligned}\ ] ] in other words , we will check the following conditions : * . *( lyapunov condition ) there exists such that . for the first one , we have otherwise , since ,applying the taylor expansion theorem twice we get the function is given by the taylor young expansion , so it satisfies and . 
by property ( [ pbold ] ) , we get the tightness of and and then we deduce so , according to lemma [ lemma ] and theorem [ th - acc ] we conclude that \\[-10pt ] \eqntext{\mbox{as } \ell\rightarrow\infty.}\end{aligned}\ ] ] using ( [ hf ] ) it follows from property ( [ pbold ] ) that we deduce using relation ( [ cle - euler ] ) that consequently , hence , combining this result together with relation ( [ tcl - lindebergfeller ] ) , we obtain the first condition using toeplitz lemma . concerning the second one , by burkhlder s inequality and elementary computations , we get for where is a numerical constant depending only on . otherwise , property ( [ pbold ] )ensures the existence of a constant such that therefore , \\[-8pt ] & \leq & \frac{\widetilde c_p } { ( \sum_{\ell=1}^{l}a_\ell ) ^{p/2}}\sum_{\ell = 1}^{l}a_\ell^{p/2 } \mathop{\longrightarrow}_{n\rightarrow\infty } 0.\nonumber\end{aligned}\ ] ] this completes the proof .[ rem ] from theorem 2 , page 544 in , we prove a berry esseen - type bound on our central limit theorem .this improves the relevance of the above result . indeed , take as in the proof , for and given by relation ( [ xnl - berryessen ] ) , with , put and denote by the distribution function of .then for all and where is the distribution function of a standard gaussian random variable . if we interpret the output of the above inequality as sum of independent individual path simulation , we get according to the above proof , it is clear that behaves like a constant but getting lower bounds for seems not to be a common result to our knowledge .concerning , taking in both inequalities ( [ eq - momentinequality ] ) and ( [ eq - momentinequalitybis ] ) gives us an upper bound .in fact , when is lipschitz , there exists a positive constant depending on , , and such that for the optimal choice , given in the below subsection , the obtained berry esseen - type bound is of order .[ rem - keb ] note that the above proof differs from the ones in kebaier .in fact , here our proof is based on the central limit theorem for triangular array which is adapted to the form of the multilevel estimator , whereas kebaier used another approach based on studying the associated characteristic function .further , this latter approach needs a control on the third moment , whereas we only need to control a moment strictly greater than two .also , it is worth to note that the limit variance in theorem [ cltsr ] is smaller than the limit variance in theorem 3.2 obtained by kebaier in . from a complexity analysis point of view , we can interpret theorem [ cltsr ] as follows .for a total error of order , the computational effort necessary to run the multilevel monte carlo euler method is given by the sequence of sample sizes specified by relation ( [ eq - size ] ) . the associated time complexityis given by the minimum of the second term of this complexity is reached for the choice of weights , , since the cauchy schwarz inequality ensures that , and the optimal complexity for the multilevel monte carlo euler method is given by it turns out that for a given discretization error to be achieved the complexity is given by . note that this optimal choice , , with taking corresponds to the sample sizes given by hence , our optimal choice is consistent with that proposed by giles . 
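the complexity comparison above can be made concrete with the standard cost models: for the euler scheme, a crude monte carlo estimator needs work of order eps^(-3) to reach a root mean square error eps, while the multilevel euler method needs work of order eps^(-2) (log eps)^2 (orders as in giles, 2008). the snippet below only evaluates these textbook orders; the constants are arbitrary and the exact expressions appearing in the paper are not reproduced here.

```python
import math

def cost_crude_mc(eps):
    """standard cost model for crude monte carlo with the euler scheme:
    the O(1/n) bias forces n ~ 1/eps and the O(1/N) variance forces
    N ~ 1/eps**2, so work ~ n * N ~ eps**-3 (constants arbitrary)."""
    return eps ** -3

def cost_mlmc_euler(eps):
    """standard cost model for the multilevel monte carlo euler method:
    work ~ eps**-2 * (log eps)**2 (constants arbitrary)."""
    return eps ** -2 * math.log(eps) ** 2

for eps in (1e-2, 1e-3, 1e-4):
    ratio = cost_crude_mc(eps) / cost_mlmc_euler(eps)
    print(f"eps = {eps:.0e}: crude/multilevel cost ratio ~ {ratio:.1f}")
```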
nevertheless , unlike the parameters obtained by giles for the same setting[ see relation ( [ gilesparam ] ) ] , our optimal choice of the sample sizes does not depend on any given constant , since our approach is based on proving a central limit theorem and not on getting upper bounds for the variance .otherwise , for the same error of order the optimal complexity of a monte carlo method is given by which is clearly larger than .so , we deduce that the multilevel method is more efficient .also , note that the optimal choice of the parameter is obtained for .otherwise , any choice , , leads to the same result .some numerical tests comparing original giles work with the one of us show that both error rates are in line . here in figure[ fig1 ] , we make a simple log log scale plot of cpu time with respect to the root mean square error , for european call and with .it is worth to note that the advantage of the central limit theorem is to construct a more accurate confidence interval .in fact , for a given root mean square error rmse , the radius of the -confidence interval constructed by the central limit theorem is .however , without this latter result , one can only use chebyshev s inequality which yields a radius equal to .finally , note that , taking still gives the optimal rate and allows us to cancel the bias in the central limit theorem due to the euler discretization .the multilevel monte carlo algorithm is a method that can be used in a general framework : as soon as we use a discretization scheme in order to compute quantities such as , we can implement the statistical multilevel algorithm . andthis is worth because it is an efficient method according to the original work by giles .the central limit theorems derived in this paper fill the gap in literature and confirm superiority of the multilevel method over the classical monte carlo approach .the authors are greatly indebted to the associate editor and the referees for their many suggestions and interesting comments which improve the paper .
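to illustrate the confidence-interval remark above, the short snippet below compares the interval radius obtained from the central limit theorem with the one obtained from chebyshev's inequality for a given rmse. the 95% level and the numerical rmse are assumptions chosen only for the example, since the actual level used in the text is not reproduced in this extract.

```python
from math import sqrt
from statistics import NormalDist

alpha = 0.05     # assumed 95% confidence level (illustrative choice)
rmse = 1e-3      # assumed root mean square error of the estimator

radius_clt = NormalDist().inv_cdf(1 - alpha / 2) * rmse   # ~1.96 * rmse
radius_chebyshev = rmse / sqrt(alpha)                      # ~4.47 * rmse

print(f"clt radius       : {radius_clt:.3e}")
print(f"chebyshev radius : {radius_chebyshev:.3e}")
```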
this paper focuses on studying the multilevel monte carlo method recently introduced by giles [ _ oper . res . _ * 56 * ( 2008 ) 607 - 617 ] , which is significantly more efficient than the classical monte carlo one . our aim is to prove a central limit theorem of lindeberg - feller type for the multilevel monte carlo method associated with the euler discretization scheme . to do so , we first prove a stable law convergence theorem , in the spirit of jacod and protter [ _ ann . probab . _ * 26 * ( 1998 ) 267 - 307 ] , for the euler scheme error on two consecutive levels of the algorithm . this leads to an accurate description of the optimal choice of parameters and to an explicit characterization of the limiting variance in the central limit theorem of the algorithm . a complexity analysis of the multilevel monte carlo algorithm is carried out .
terrestrial planets and cores of giant planets are generally considered to have formed through accretion of many small bodies called planetesimals . to simulate accretion processes of planets ,two methods , which are complementary to each other , have been applied .the first one is -body simulations in which orbits of all bodies are numerically integrated and gravitational accelerations due to other bodies are calculated in every time step ( e.g. , kokubo and ida , 1996 ; chambers and wetherill , 1998 ; richardson et al . , 2000 ;morishima et al . , 2010 ) .-body simulations are accurate and can automatically handle any complicated phenomena , such as resonant interactions and spatially non - uniform distributions of planetesimals .gravity calculations are accelerated by such as tree - methods ( richardson et al . , 2000 ) or special hardwares ( kokubo and ida , 1996 ; grimm and stadel , 2014 ) , and a large time step can be used with sophisticated integrators , such as mixed variable symplectic ( mvs ) or wisdom - holman integrators ( duncan et al . , 1998 ; chambers , 1999 ) .even with these novel techniques , -body simulations are computationally intense and the number of particles or the number of time steps in a simulation is severely limited .the second approach is statistical calculations in which planetesimals are placed in two dimensional ( distance and mass ) eulerian grids , and the time evolutions of the number and the mean velocity of an ensemble of planetesimals in each grid are calculated using the phase - averaged collision and stirring rates ( e.g. , greenberg et al . , 1978 ; wetherill and stewart , 1989 , 1993 ; inaba et al . , 2001 ; morbidelli et al . , 2009 ; kobayashi et al . ,while this approach does not directly follow orbits of individual particles , it can handle many particles , even numerous collisional fragments .largest bodies , called planetary embryos , are handled in a different manner than small bodies , taking into account their orbital isolation . the mutual orbital separation between neighboring embryos is usually assumed to be 10 mutual hill radii .the last assumption is not always guaranteed , particularly in the late stage of planetary accretion ( e.g. , chambers and wetherill , 1998 ) . to handle orbital evolution of embryos more accurately , eulerian hybrid codes - body calculations . ]have been developed ( spaute et al . , 1991 ; weidenschilling et al ., 1997 ; bromley and kenyon , 2006 ; glaschke et al . , 2014 ) , in which small planetesimals are still handled by the eulerian approach whereas orbital evolutions of largest embryos are individually followed by such as -body integrations .gravitational and collisional interactions between embryos and small planetesimals are handled using analytic prescriptions .glaschke et al .( 2014 ) took into account radial diffusion of planetesimals due to gravitational scattering of embryos and their code can approximately handle gap opening around embryos orbits .a lagrangian hybrid method has also been introduced by levison et al .( 2012 ) ( ldt12 hereafter ) . in their lipad code , a large number of planetesimalsare represented by a small number of lagrangian tracers .this type of approach is called a super - particle approximation and is also employed in modeling of debris disks ( kral et al . , 2013 ; nesvold et al . , 2013 ) and planetesimal formation ( johansen et al. , 2007 ; rein et al . 
, 2010 ) .orbits of individual tracers are directly followed by numerical integrations , and interactions between planetesimals ( stirring and collisions ) in tracers are handled by a statistical routine .embryos are represented by single particles and the accelerations of any bodies due to gravity of embryos are handled in the -body routine .lagrangian hybrid methods have several advantages than eulerian hybrid methods .for example , lagrangian methods can accurately handle spatial inhomogeneity in a planetesimal disk ( e.g. , gap opening by embryos ) , planetesimal - driven migration , resonant interactions between embryos and small planetesimals , and eccentric ringlets .a drawback of lagrangian methods would be computational cost as orbits of all tracers need to be directly integrated .therefore , it is desirable that lagrangian hybrid methods can handle various problems accurately even with a moderately small number of tracers . in this paper , we develop a new lagrangian hybrid code for planet formation . while we follow the basic concept of particle classes introduced by ldt12 , recipes used in our code are different from those used in the lipad code in many places .the largest difference appears in the methods to handle viscous stirring and dynamical friction .the lipad code solves pair - wise interactions while our method gives the accelerations of tracers using the phase - averaged stirring and dynamical friction rates .while the lipad code conducts a series of three body integrations in the stirring routine ( in the shear dominant regime ) and in the routine of sub - embryo migration during a simulation , our code avoids them by using unique routines .the complete list of comparison with the lipad code turns out to be rather long and is given in appenedix g. in section 2 , we explain our method . in section 3 ,we show various tests of the new hybrid code and compare them with analytic estimates and pure -body simulations .the computational speed and limitations of our code are discussed in section 4 .the summary is given in section 5 .for the sake of clarity , specific derivations are deferred to appendices .. schematic illustration of our hybrid code . accelerations due to gravity of embryos are handled by the -body routine .tracer - tracer interactions and back reaction of tracers on sub - embryos including collision are handled by the statistical routine .the particle classes in our hybrid code are the same as those introduced by ldt12 .the code has three classes of particles : tracers , sub - embryos , and full - embryos ( fig . 1 ) . in the present paper, we do not consider fragmentation of planetesimals and dust - tracers are not introduced .a tracer represents a large number of equal - mass planetesimals on roughly the same orbits .the mass of a planetesimal and the number of planetesimals in the tracer ( for the index of the tracer ) are defined to be and .therefore , the tracer mass is given by . through collisional growth , increases and decreases while remains close to its original value .we allow mass exchanges between tracers through collisions so the tracer mass , , is not necessarily fixed but has the upper and lower limits , and ( ) .we employ and in this paper , where is the minimum mass of a sub - embryo .the mass is usually the initial mass of a tracer , which is the same for all tracers at the beginning of a simulation . if and , the tracer is promoted to a sub - embryo . 
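a minimal data-structure sketch of the particle classes described above is given below. the exact mass limits and promotion criteria are elided in this extract, so the thresholds and conditions used here (a single planetesimal above an assumed minimum sub-embryo mass, and the factor of 100 quoted in the text for full-embryo promotion) are placeholders rather than the paper's actual rules.

```python
from dataclasses import dataclass

M_SUB_MIN = 1.0e23        # assumed minimum sub-embryo mass [kg] (placeholder)
FULL_EMBRYO_FACTOR = 100  # factor quoted in the text (after ldt12)

@dataclass
class Tracer:
    m_planetesimal: float   # mass of one of the k identical planetesimals
    k: int                  # number of planetesimals represented by the tracer
    kind: str = "tracer"    # "tracer", "sub-embryo" or "full-embryo"

    @property
    def mass(self):
        # total tracer mass: number of planetesimals times planetesimal mass
        return self.k * self.m_planetesimal

    def update_class(self):
        # hypothetical promotion logic; the actual criteria are not visible
        # in this extract, so treat both conditions as assumptions
        if self.kind == "tracer" and self.k == 1 and self.m_planetesimal >= M_SUB_MIN:
            self.kind = "sub-embryo"
        if self.kind == "sub-embryo" and self.m_planetesimal >= FULL_EMBRYO_FACTOR * M_SUB_MIN:
            self.kind = "full-embryo"

t = Tracer(m_planetesimal=2.0e21, k=60)
print(t.mass, t.kind)
```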
if , the sub - embryo is further promoted to a full - embryo , where we employ the numerical factor of 100 , as recommended by ldt12 .the number is an integer in our model .orbits of any types of particles are directly integrated .we use a mixed variable symplectic integrator known as symba ( duncan et al . , 1998 ) , which can handle close encounters between particles .this integrator is also used by ldt12 .the collisional and gravitational interactions between full - embryos with tracers are directly handled in every time step of orbital integrations , as is the case of pure -body simulations .on the other hand , tracer - tracer interactions are handled in a statistical routine which is described in subsequent sections in great detail . while the time step for the orbital integration is of the orbital period , the time step for the statistical routine can be taken to be much larger as long as is sufficiently smaller than the stirring and collisional timescales . in this paper , we employ for all test simulations .we confirmed that almost the same outcome is obtained even with a smaller for all the test simulations . in interactions between a tracer and a full - embryo in the -body routine ,the tracer is assumed to be a single particle with the mass equivalent to the total mass of planetesimals in the tracer .this is not a problem as long as the embryo is sufficiently massive compared with tracers .however , if the embryo mass is similar to tracer masses as is the case immediately after its promotion , and if embryo - tracer interactions are handled by the direct -body routine , the embryo suffers artificially strong kicks from tracers . to avoid this issue , sub - embryos are introduced .accelerations of sub - embryos due to gravitational interactions with tracers are handled by the statistical routine whereas accelerations of tracers due to gravitational interactions with sub - embryos are handled by the direct -body routine ( fig . 1 ) .collisions of planetesimals in tracers with sub - embryos are also handled in the statistical routine to avoid artificially large mass jumps in sub - embryos , contrary to ldt12 ( see appendix g for more discussion ) .fig . 2 .schematic illustration of neighboring tracer search .the filled circle is the target tracer and open circles are interloping tracers ( interlopers ) .the surface number density of planetesimals in an interloper is simply given by the number of planetesimal in the interloper divided by the area of the region between two arcs .our statistical routine uses the surface number density of nearby tracers .we first describe how we determine the surface number density for tracer - tracer interactions .the surface number density of tracers for interactions between a sub - embryo and tracers is derived in a similar way , but slightly modified so that close and distant encounters are treated differently ( section 2.5 ) .consider a target tracer , surrounded by nearby interloping tracers .the cylindrical co - ordinate system is introduced and we consider a curved region , called the region , located at and centered at the position of the tracer ( fig . 2 ) . the radial half width and the angular half width in the direction of the region are given by and , respectively .the area of this region is given by for neighboring tracer search , tracers are sorted into 2d grids and only nearby cells of the target tracer are checked whether other tracers are in the designated region . 
if the interloper is in the region , the surface number density of planetesimals in the interloper around the target is simply expressed as the density is an instantaneous density . at other times , the interloper may not be in the region but other interlopers having planetesimal masses similar to may be in the region .after averaging over time , the encounter rate with planetesimals in a certain mass range is expected to converge regardless of the size of .a small is favorable to capture a small radial structure accurately .however , it is not reasonable to use smaller than a typical radial length of gravitational influence of a planetesimal , which is usually about . here, is the hill radius given by where is the semi - major axis of the tracer and is the mass of the central star .since the mass of a possible largest planetesimal is , we employ for tracer - tracer interactions . for interactions between a sub - embryo and tracers ,a larger needs to be used and we employ 10 hill radii of a possible largest sub - embryo with the mass of . an advantage of the choice of given by eq .( [ eq : drt0 ] ) is that we can omit the procedure for orbital isolation of planetesimals .orbital isolation of the largest bodies occurs if the sum of orbital spaces of these bodies is less than the width of the region of interest ( wetherill and stewart , 1993 ) . if a planetesimal is assumed to occupy an orbital space of 10 hill radii ( kokubo and ida , 1998 ) and if the tracer masses are , the sum of the orbital spaces of planetesimals in a pair of tracers are always larger than .since our model allows a tracer mass lower than , isolation can potentially occur if the tracer masses and the numbers of planetesimals in the tracers are both small .such cases are rather rare and we can ignore them .therefore , orbital isolation can occur only for embryos and that is handled by the -body routine .an additional discussion is given in appendix g. a small is favorable to handle non - axisymmetric structures , in which the surface density is not azimuthally uniform .such structures usually appear if external massive bodies exist ( kortenkamp and wetherill , 2000 ; nagasawa et al ., 2005 ; levison and morbidelli , 2007 ; queck et al . , 2007 ) .a small can also be chosen , if we want to suppress the number of interlopers to save computational costs ( section 4.1 ) . in this paper , we usually adopt , but dependency on is also examined . even if the tracer is in the region ,the tracer is occasionally outside the region used in neighboring search around the tracer . 
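the geometry of the neighboring search can be summarized in a few lines of code: the standard hill radius, the area of the annular sector of radial half-width delta r and angular half-width delta theta sketched in fig. 2, and the resulting surface number density of planetesimals carried by an interloper found inside the region. the reference mass, the 10-hill-radii half-width and the angular half-width below are illustrative assumptions, since the specific values are elided in this extract.

```python
M_SUN = 1.989e30   # kg

def hill_radius(a, m, m_star=M_SUN):
    """standard hill radius r_h = a * (m / (3 m_star))**(1/3)."""
    return a * (m / (3.0 * m_star)) ** (1.0 / 3.0)

def region_area(a, dr, dtheta):
    """area of the curved search region of fig. 2: an annular sector of
    radial half-width dr and angular half-width dtheta centred at radius a,
    i.e. 2*dtheta * ((a + dr)**2 - (a - dr)**2) / 2 = 4 * a * dr * dtheta."""
    return 2.0 * dtheta * ((a + dr) ** 2 - (a - dr) ** 2) / 2.0

def surface_number_density(k_j, area):
    """instantaneous surface number density of the planetesimals carried by
    an interloper found inside the region: number divided by the area."""
    return k_j / area

# illustrative numbers (assumptions): target at 1 au, reference mass 1e23 kg,
# radial half-width of 10 hill radii, angular half-width of 0.3 rad
a = 1.496e11
dr = 10.0 * hill_radius(a, 1.0e23)
dtheta = 0.3
s = region_area(a, dr, dtheta)
print(surface_number_density(k_j=5.0e6, area=s), "planetesimals per m^2")
```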
to make sure that the tracer and the tracer are mutually counted ( or excluded ) as interlopers, the number density is evaluated using eq .( [ eq : nj ] ) only if or for .otherwise and only if the tracer is in the region , is given by , where is the area of the region .although we only considered encounters with interlopers , planetesimals in the target may encounter with other planetesimals in the same target if .appendix a describes how we correct this effect .this effect is found to be too small to be identified .let the semimajor axis , the orbital eccentricity , the orbital inclination , the longitudes of pericenter and ascending node , and the mean longitude of the target be , , , , , and , and those for the interloper be , , , , , and .the difference between the semimajor axes is defined as .when the target encounters with the interloper , they may collide with each other at a certain probability .their orbital elements are also modified by mutual gravitational scattering . to handle these processes in the statistical routine in a simple manner , we make following assumptions . 1. we use the collision probability and the mean change rates of orbital elements derived by previous studies which usually solved hill s equations .the equations of motion are reduced to hill s equations if hill s approximation is applied ( hill , 1878 ; nakazawa et al . , 1989 ) .the conditions to apply hill s approximation are 1 . , , , , 2 . , , and 3 .2 . the semimajor axis differences and the phases of the interlopers , , , are uniformly distributed .namely , if a certain interloper in the region is found , we assume that other interlopers with the same , , but different , , , and are distributed uniformly over the region .3 . the longitudes , and , of the target change randomly and large enough on the evolution timescales of and .this approximation is unnecessary if approximation 2 is valid and either or , because the collision and stirring rates do not depend on and in such a situation .assumption 1 ( hill s approximation ) seems to work well in most of cases of planetary accretion .greenzweig and lissauer ( 1990 ) showed that the collision probability of a planet with planetesimals on circular and co - planar orbits agrees with that based on hill s approximation within as long as .our test simulations for late stage planetary accretion ( section 3.5 ) indicate that hill s approximation also works well even at moderately high orbital eccentricities ( ) .assumption 2 is acceptable if many planetesimals mutually interact via gravitational scattering and collisions without orbital isolation .these interactions lead to randomization of spatial positions and phases .we extensively adopt these uniformities even if the target is a sub - embryo isolated from other embryos , but with some approximated cares ( section 2.6 ) .assumption 3 is likely to be reasonable since the timescale of phase change is comparable to the stirring timescale in general ( e.g , tanaka and ida , 1996 ) . with assumptions 2 and 3, we can assume that the time averaged stirring and collision rates of an individual target are equivalent to the stirring and collision rates averaged over phase space .while this equivalence resembles the so - called ergodic hypothesis , the difference is that we assume the equivalence on a short timescale .although assumption 3 may not be strictly correct , it greatly simplifies the problem and it is worthwhile to examine its validity or usefulness .fig . 
3 .definitions of the eccentricity vectors .the inclination vectors are defined in a similar manner .integrations over distributions of orbital elements of interlopers can be transformed to integrations over relative orbital elements of interlopers with respect to the target ( nakazawa et al . , 1989 ) .the eccentricity and inclination vectors of the target are given as the same characters but with the subscript represent the eccentricity and inclination vectors for the interloper .the relative eccentricity and inclination vectors are given as ( fig .3 ) where and are the longitudes for the relative vectors .nakazawa et al .( 1989 ) adopted assumptions 1 - 3 and derived the number of planetesimals colliding with the target during a time interval as [ their eq .( 49 ) ] where and are the mean squares of and of interlopers , is the surface number density of interlopers , is the mean semi - major axis , is the orbital frequency at , is the distribution function of and normalized as , and is the non - dimensional collision rate averaged over , , and ( appendix b.1 ) . in the above , the reduced mutual hill radius , , is given as the expression of for the case of the rayleigh distributions of and is given by eq .( 35 ) of nakazawa et al .( 1989 ) . in our method , on the other hand , for a certain interloper is given by the dirac delta function .thus , the expected number of planetesimals in the interloper colliding with the target during is given as where is given by eq .( [ eq : nj ] ) .the stirring rates can be derived in a similar manner using the non - dimensional stirring rates ( section 2.5 ) . in our method , the averaging over distributions of eccentricities and inclinations of interlopers will be made by averaging over multiple interlopers .although it is not necessary to assume the distribution functions of eccentricities and inclinations of interlopers in calculations of the collision and stirring rates , our stirring routine naturally reproduces rayleigh distributions ( section 3.2 ) . if the system is perturbed by a massive external body and the forced eccentricity is much larger than free eccentricities , the phases , and , may not be uniformly distributed .even in such a case , our approach may still be applied by replacing the eccentricity and inclination vectors by the free eccentricity and inclination vectors whose directions are uniformly distributed , provided that the forced eccentricity vectors are roughly the same between the target and interlopers .thus , while uniformities of and are assumed , we do not necessarily assume uniformities of the relative longitudes and . the averaging over and is done by averaging over interlopers [ . while this relaxing of the assumption does not alter eq .( [ eq : dndt ] ) , it is important for dynamical friction ( section 2.5 ). the forced eccentricity vectors are not necessarily the same between the target and interlopers , for example , if the mass - dependent dissipative force works .in such a situation , our approach is not valid , although it is still worthwhile to examine how inaccurate it is .the expected change rate of the mass of a planetesimal in the tracer due to collisions with planetesimals in the interloper is simply given by multiplying the planetesimal mass to eq .( [ eq : dndt ] ) ( ida and nakazawa , 1989 ; greenzweig and lissauer , 1990 ; inaba et al . 
,2001 ) to avoid double counting , we only consider a case for or for .using the mass change rate , the probability that the mass of a planetesimal in the target increases by due to a collision with a planetesimal in the interloper during the time step is given as .if this probability is larger than a random number which takes between 0 and 1 , we assume that all planetesimals in the target collide with planetesimals in the interloper .the same procedure is applied to all other interlopers in the region to check whether collisions occur between planetesimals in the target and those in interlopers .if a collision occurs and mutually colliding bodies merge , the changes in the masses and the numbers of planetesimals in the tracers are given as follows : where the characters with the additional subscript mean those after the merging .note that the total mass is conserved in the collisions : . before determining whether a collision occurs or not , we need to choose an appropriate .the mass is usually the mass of the interloping planetesimal , but can take different values for convenience as follows : & \mbox{case ( a ) ; } , \\ k_j m_{j}/k_i & \mbox{case ( b ) ; } , \end{array}\right .\label{eq : delm}\ ] ] where int ( ) means the decimals are dropped off for the number in the parentheses .the form in the upper row of eq .( [ eq : delm ] ) means that if is smaller than the threshold mass , we do not consider individual collisions but the accumulated mass change of the target planetesimal .we adopt the threshold mass of .we first estimate the post - collision mass of the interloper using eq .( [ eq : inte ] ) and for case ( a ) of eq .( [ eq : delm ] ) for any interlopers .if the resulting mass , , is found to be smaller than the lower limit of the tracer mass , , we use of case ( b ) given by the lower row in eq .( [ eq : delm ] ) instead of that of case ( a ) .if a collision occurs for case ( b ) , all the planetesimals in the interloper merge with those in the target and the interloper is deleted .therefore , the mass of any tracer is kept to be lager than .while we adopt , this value can be arbitrarily chosen : a small lower limit may be able to give accurate mass evolution while increase of the number of tracers can be suppressed with a large lower limit .the procedure to change masses and orbital elements of planetesimals due to collisional merging is schematically summarized in fig . 4 . for case ( a ) in eq .( [ eq : delm ] ) , in which and , only some of planetesimals in the interloper collide with planetesimals in the target . in this case, we split the interloper into two tracers : the interloper and the impactor .the number of planetesimals in the interloper after the split is given by in eq .( [ eq : inte ] ) and the position and velocity of the post - split interloper are unchanged during the collision between the target and the impactor . for case( b ) , all the planetesimal in the interloper are involved in the collision and we call the interloper the impactor . 
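the stochastic accretion step described above can be sketched as follows. the acceptance probability is the expected accreted mass over the statistical time step divided by the mass quantum delta m, and the bookkeeping conserves the total mass carried by the two tracers; since the explicit update equations are elided in this extract, the update below is a plausible reconstruction rather than the paper's exact prescription.

```python
import numpy as np

rng = np.random.default_rng(0)

def maybe_merge(m_i, k_i, m_j, k_j, dmdt, dt_stat, delta_m):
    """stochastic accretion for a target tracer (m_i, k_i) and an interloper
    (m_j, k_j).  the collision is accepted with probability
    p = dmdt * dt_stat / delta_m, where dmdt is the expected mass accretion
    rate of one target planetesimal.  the bookkeeping below conserves
    k_i*m_i + k_j*m_j; it is a reconstruction, not the paper's equations."""
    p = dmdt * dt_stat / delta_m
    if rng.random() >= p:
        return m_i, k_i, m_j, k_j                 # no collision this step
    if delta_m >= k_j * m_j / k_i:                # case (b): whole interloper accreted
        return m_i + k_j * m_j / k_i, k_i, 0.0, 0
    # case (a): every target planetesimal accretes one lump of mass delta_m
    k_removed = int(round(k_i * delta_m / m_j))
    return m_i + delta_m, k_i, m_j, k_j - k_removed

print(maybe_merge(m_i=1e20, k_i=100, m_j=5e19, k_j=400,
                  dmdt=1.5e12, dt_stat=3e7, delta_m=5e19))
```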
in both cases , the planetesimal mass and the number of planetesimals in the impactorare given by and , respectively , and the collision process is almost the same except some small differences .schematic illustration of the procedure to handle collisional merging between planetesimals in two tracers ( the target and the interloper ) .the figure shows how the masses of planetesimal ( and ) , the numbers of planetesimals in the tracers ( and ) , the semimajor axes ( and ) , and the orbital eccentricities ( and ) change during the procedure .when the target and the impactor are judged to collide , they do not physically contact each other .even their stream lines do not usually cross each other . since we assume uniformity in semimajor axis of an interloper , it is possible to make two stream lines cross each other by changing the semi - major axis of the impactor without changing other orbital elements .the crossing point of the stream lines is the location where the impact takes place .this impact point can be found as follows .first , and of the target and the impactor are aligned by changing the true anomaly of the target by and that for the impactor by .the angular changes , and , are derived by solving the following set of equations : where is the longitude of the impacting position measured from the reference direction .( [ eq : xx0 ] ) and ( [ eq : xx ] ) give two solutions for mutually shifted by , and we choose one with a smaller absolute value .then , the true anomalies are changed by moving the colliding pair along their keplerian orbits .second , the semimajor axis of the impactor is scaled by .this is done by scaling the impactor position by and the velocity by .the positions of the target and the impactor become almost identical through these procedures without changing their eccentricity and inclination vectors .then , the merging between the target and the impactor takes place . the position and velocity of the target after the merging with the impactorare given by the mass weighted mean values ( i.e. , the mass center for the position ) .after that , the semi - major axis of the target is adjusted to by scaling the position and the velocity so that the -component of the total orbital angular momentum of the target and the interloper is conserved during the merging : where and are the orbital eccentricity and inclination of the target after the merging . if the target tracer mass is equal to or exceeds the upper limit , , we split it into two tracers .if and , we also split the tracer to avoid simultaneous emerging of two sub - embryos .the numbers of planetesimals in the split tracers are given by if is even , and if is odd .we give weak velocity changes , symmetrically in momenta , to the two new tracers so that their relative eccentricity and inclination are about .this will avoid an encounter velocity artificially too low between them in the next encounter .the semi - major axes of the two tracers are adjusted so that the orbital separation between two tracers is randomly chosen between 0 to but the -component of the total orbital angular momentum is conserved . for case( b ) , we set the semi - major axis of one of the new tracers be the one for the deleted interloper instead of choosing the orbital separation randomly . finally ,the true anomalies of the two tracers ( or one if no splitting is made ) are changed by and . 
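the splitting of an over-massive tracer might be sketched as below: the planetesimal number is divided as evenly as possible, and the two new tracers receive weak velocity kicks that are symmetric in momentum, so that their relative velocity is nonzero while the total momentum is unchanged. the even/odd split and the kick magnitude are reconstructions, since the exact expressions are elided in this extract.

```python
import numpy as np

rng = np.random.default_rng(1)

def split_counts(k):
    """split the k planetesimals of an over-massive tracer into two tracers;
    for odd k the natural choice is (k+1)//2 and k//2 (the exact expressions
    are elided in the extracted text)."""
    return (k + 1) // 2, k // 2

def symmetric_kick(v1, v2, m1, m2, dv_scale):
    """give the two new tracers weak velocity changes that are symmetric in
    momentum (total momentum unchanged); dv_scale sets the magnitude of the
    induced relative velocity and is an assumed parameter here."""
    dv = dv_scale * rng.standard_normal(3)
    return v1 + m2 / (m1 + m2) * dv, v2 - m1 / (m1 + m2) * dv

print(split_counts(101))
v1, v2 = symmetric_kick(np.zeros(3), np.zeros(3), 1e21, 1e21, dv_scale=1.0)
print(v1 + v2)   # equal masses: the kicks cancel and total momentum is zero
```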
through the processes described above , tracer masses remain between and .the number of tracers remains unchanged or increases for case ( a ) of eq .( [ eq : delm ] ) whereas it remains unchanged or decreases for case ( b ) . as a result , the total number of tracers remains similar to the initial value even after many collisions between planetesimals .our several experiments show that increase of the tracer number from the initial value is suppressed less than 30 in most cases .our current code also has an option of hit - and - run collisions and the procedure is described in detail in appendix e. in this case , neither the number of planetesimals nor that of tracers changes .the orbital elements of planetesimals evolve due to gravitational interactions between them .the interactions are separated into viscous stirring ( vs ) and dynamical friction ( df ) .viscous stirring increases eccentricities and inclinations in expense of the tidal potential ( i.e. , radial diffusion ) and dynamical friction leads to energy equipartition between particles ( ida , 1990 , stewart and ida , 2000 ; ohtsuki et al . , 2002 ) .dynamical friction can be separated into stirring and damping parts ( dfs and dfd ) and sometimes the latter part only is called dynamical friction ( binney and tremaine , 1987 ; rafikov , 2003 ; ldt12 ) .we handle the stirring and damping parts of dynamical friction separately as shown below .the evolution of eccentricity and inclination of the target due to viscous stirring and dynamical friction caused by interactions with the interloper is given as ( ida , 1990 ; tanaka and ida , 1996 ) where , , , , and are the non - dimensional rates of viscous stirring and dynamical friction .the expressions of these rates are given in appendix b.2 .since we do not assume uniformities of and to handle eccentric and/or inclined ringlets , as discussed in section 2.3 , the terms with these relative angles remain in the expressions for dynamical friction .the way to split dynamical friction into the two terms is explained in detail in appendix c. the evolutions of due to viscous stirring and dynamical friction are then given as similar expressions can be obtained for .mutual encounters between planetesimals causes random walk in their semimajor axes .the radial diffusion coefficient for the target is approximately given by ( appendix d ) .\label{eq : di0}\ ] ] ignoring the effect of curvature , the probability function of the change in after the time step is given by where the function is normalized as .we choose at random with a weight given by . if , we set .this is rare but can occur if .the stirring and damping rates of orbital elements need to be converted into the acceleration of the tracer . since the time step for the statistical routine is usually much smaller than the stirring time , , the absolute values of the changes in the orbital elements during are much smaller than the magnitudes of the orbital elements themselves .consider a situation that encounters between planetesimals occur randomly in directions of velocities .a certain acceleration occurs to a planetesimal in an encounter , and in another encounter an acceleration with the same magnitude to the first encounter but in the opposite direction can occur . in first order ,the changes of orbital elements cancel out in these two encounters . 
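the random walk in semimajor axis can be sampled directly from the diffusion coefficient, as in the sketch below; the gaussian form, the normalization convention with variance D*dt, and the clipping of rare over-large kicks are assumptions filling in symbols that are elided in this extract.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_radial_kick(D, dt_stat, da_max=None):
    """draw the semimajor-axis change of a tracer over one statistical time
    step, assuming the random walk described in the text is gaussian with
    variance D * dt_stat (the normalization convention and the clipping
    threshold are not visible in this extract, so both are assumptions).
    if da_max is given, over-large kicks are clipped, mimicking the safeguard
    mentioned for rare large excursions."""
    da = rng.normal(0.0, np.sqrt(D * dt_stat))
    if da_max is not None and abs(da) > da_max:
        da = np.sign(da) * da_max
    return da

# illustrative numbers: D in au^2 / yr, one statistical step of 10 yr
print(sample_radial_kick(D=1e-8, dt_stat=10.0, da_max=0.01))
```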
however , in second order , the orbital eccentricity and inclination increase a little on average , because the orbital elements are slightly changed after the first encounter .this is the way how viscous stirring works , and to handle it , we introduce second order gauss planetary equations .the stirring part of dynamical friction is also handled by these equations .on the other hand , the damping part of dynamical friction is handled by the standard gauss planetary equations of first order ( e.g. , murray and dermott , 1999 ) .if we use the standard gauss planetary equations for the stirring part of dynamical friction , the standard deviations of orbital eccentricity and inclination become unphysically small for equal - mass planetesimals or mass - dependent segregation of these orbital elements occurs for a case of a mass distribution .we assume that and in the following formulation .note that these assumptions are already used in hill s approximation . in this subsectiononly , we drop the subscript for the tracer to avoid long , confusing subscripts .the expected changes of orbital elements during the time step are , , , , and , where , , , and similar expressions are obtained for the inclination .the change of the velocity vector is split into three components as where and are the components of the velocity change in the radial and tangential directions in the orbital plane , is the velocity change normal to the orbital plane , , , and are unit vectors .the velocity vector of the tracer itself is given as , where and are the radial and tangential components .all the components of the velocity change are further split into the stirring and damping terms with subscripts s and d , respectively : in our stirring routine , all the components of the stirring part are chosen using random numbers , but their averaged functional forms follow gaussian distributions with the zero mean values : , where the angle brackets in this subsection mean averaging over multiple velocity changes . the acceleration due to an individual encounter may not necessarily follow gaussian . on the other hand , since we handle the time averaged accelerations of tracers due to multiple encounters that occur in a random fashion , their distributions are limited to gaussian .first we consider the velocity changes in the orbital plane .the energy conservation gives the following equation , if we take up to the second order terms where .if we ignore the second order terms and the first term in the r.h.s . of eq .( [ eq : enec ] ) as , this gives as note again that is given by eq .( [ eq : pda ] ) and thus is gaussian .the angular momentum conservation gives where is the change of and is the distance from the central star .inserting eq .( [ eq : angc ] ) into eq .( [ eq : enec ] ) , we have averaging eq .( [ eq : angc2 ] ) over time , the first order terms vanish and we have only the second order terms . using , we have where we used and [ ; eq .( [ eq : di0 ] ) or eq .( [ eq : db2 ] ) ] , and ignored the term proportional to as it is small enough as compared with other terms for . inserting into eq .( [ eq : de2 ] ) , we have we give assuming it is gaussian .if the r.h.s side of eq . ( [ eq : vrs ] ) is negative , we set , add the r.h.s divided by to , and re - calculate , which will be handled as shown below .this occurs in the shear dominant regime [ . 
to give the velocity changes and for the damping parts of dynamical friction , we use the standard first order gauss planetary equation for ( murray and dermott , 1999 ) that can also be derived from eq .( [ eq : angc2 ] ) : , \label{eq : dedf}\ ] ] where and are the true and eccentric anomalies .we employ the following forms setting in eq .( [ eq : dedf ] ) , we have where we assumed .next , we consider the velocity change perpendicular to the orbital plane .the specific angular momentum vector after the velocity change in the inertial cartesian coordinate system are given as where , , , and are the rotation matrices given by eqs .( 2.119 ) and ( 2.120 ) of murray and dermott ( 1999 ) .using , the -component of eq .( [ eq : ang4 ] ) is given as where is the change of and is the argument of pericenter .this equation leads to the standard gauss planetary equation for ( murray and dermott , 1999 ) , if we ignore the second order terms .averaging eq .( [ eq : ang5 ] ) over time , the first term in the r.h.s . vanishes . setting the mean change of as and using , we have we give , assuming it is gaussian . for the damping part of dynamical friction, we adopt the form ignoring the second order terms in eq .( [ eq : ang5 ] ) , and setting , we have where we took averaging over the orbital period and used .if the planetesimal mass is equal to or exceeds , the tracer is promoted to a sub - embryo .the acceleration of a sub - embryo due to gravitational interactions with a single planetesimal in a tracer is taken into account in the -body routine to hold the correct mutual hill radius .on the other hand , the acceleration of the sub - embryo due to interactions with other planetesimals in the tracer is handled by the statistical routine .collisions of planetesimals with sub - embryos are also handled in the statistical routine .a large embryo tends to form a gap around in its feeding zone ( tanaka and ida , 1997 ) , although other nearby embryos , if they exit , tend to fill the gap .the surface density in the feeding zone usually decreases through this process .we take into account a possible difference of surface densities inside and outside of the feeding zone whereas spatial variation of the surface density inside of the feeding zone is not considered .this assumption allows us to still use the stirring and collision rates averaged over phases and semi - major axis for interactions between a sub - embryo and tracers .a collision between the sub - embryo and the tracer occurs only if , where is the jacobi integral of hill s equations given by where and is the half width of the feeding zone given by even if , collisions do not occur with trojans or if is too small ( see fig . 2 of ohtsuki and ida , 1998 ) .ida and nakazawa ( 1989 ) showed that collisions occur only if ^{1/2}a_ih_{ij},\ ] ] where is the minimum orbital separation for occurrence of collision .we do not consider interactions with trojans in the statistical routine .planetesimals in the feeding zone ( and ) can potentially enter the hill sphere of the embryo and orbits of planetesimals are strongly deflected by close encounters with the embryo . 
for tracer - tracer interactions ,the number density is calculated by eqs .( [ eq : ss ] ) and ( [ eq : nj ] ) .if a sub - embryo is the target , however , we search for planetesimals in the feeding zone .their surface number density is given by we miss some of tracers in the feeding zone if they are radially outside of the region of neighboring search .the factor in eq .( [ eq : nj2 ] ) is introduced to correct this effect , where is the orbital period of the tracer , and is the period in which the tracer is radially in the region during provided that the tracer is azimuthally in the range of the region .the expression of is given in appendix f. in the stirring routine , we use in eq .( [ eq : nj2 ] ) instead of as the back reaction from a single planetesimal is taken into account in the -body routine .using eq .( [ eq : nj2 ] ) , collisional growth and velocity changes of sub - embryos are handled in the statistical routine in the same manner as tracer - tracer interactions .if there is no spatial density variation between the inside and outside of the feeding zone , the same collision and stirring rates are derived whether the target is a tracer or a sub - embryo .even if the mutual distance between the centers of a tracer and a sub - embryo is less than the sum of the radii of the constituent planetesimal and the sub - embryo , orbital integrations of both bodies are continued in the -body routine using softened forces that avoid singularity .we employ the k1 kernel of dehnen ( 2001 ) for the softening .gravitational interactions occur even if due to distant encounters ( weidenschilling 1989 , hasegawa and nakazawa , 1990 ) .we evaluate the change rates of the orbital elements of the sub - embryo due to individual distant encounters without assuming uniformity in .let the changes of the squared relative orbital eccentricity and inclination due to a single distant encounter be and . hasegawa and nakazawa ( 1990 ) derived the changes and averaged over and [ their eqs .( 38 ) and ( 39 ) ] .the averaged changes of and of the sub - embryo during a single encounter with a planetesimal in the interloper are given by and ( ida , 1990 ) .using the keplerian shear velocity given by ( bromley and kenyon , 2011 ) the frequency of encounters is given as , where we used eq .( [ eq : nj ] ) and in this case is the radial extent of planetesimals in the tracer and is assumed to be infinitesimally small .the change rates of and due to distant encounters with the interloper are then given as where and .the rates given by eqs .( [ eq : dedis ] ) and ( [ eq : didis ] ) are added to eqs .( [ eq : devs ] ) and ( [ eq : divs ] ) if .dynamical friction due to distant encounters is negligible ( ida , 1990 ) .an embryo gravitationally scatters surrounding planetesimals .back reaction on the embryo causes its radial migration .individual gravitational encounters usually cause a random walk of the embryo .however , if the accumulated torque is not cancelled out , the embryo migrates over a long distance .this is called planetesimal - driven migration ( ida et al . , 2000 ;kirsh et al . ,planetesimal - driven migration of full - embryos is automatically taken into account in the -body routine .we developed a unique method to handle planetesimal - driven migration of sub - embryos explained in the following . in our code , the angular momentum change of the sub - embryo due to gravitational interactions with tracers during is stored in the -body routine . 
if is released instantaneously by giving the tangential acceleration , the sub - embryo may migrate unrealistically too fast due to strong kicks from tracers that have masses similar to the sub - embryo .the direction of migration may change rapidly in such a situation .this problem can be avoided by limiting the angular momentum released during to the theoretical prediction , , derived from the statistical routine . if , is fully released .if , on the other hand , the remaining portion is added to stored during the next time step interval .what is done by the scheme is basically time averaging of the torque on the sub - embryo over a timescale much longer than so that strong spikes of the torque due to close encounters are smoothed out .more explanations are given in section 3.3 , together with test results .the predicted angular momentum change , , is determined from the surface density and velocities of planetesimals in neighboring tracers .bromley and kenyon ( 2011 , 2013 ) and ormel et al . ( 2012 ) showed three different modes of planetesimal - driven migration , referred to as type i iii migration analogous to gas - driven migration .type i and iii migration occurs when an embryo interacts with planetesimals in close encounters .type i migration is considered to occur when the velocity dispersion of planetesimals is large , and its migration rate is very slow as is determined by the difference between torques from outer and inner disks . on the other hand , type iii migration is fast self - sustained migration that occurs when the velocity dispersion of planetesimals is relatively small , and the embryo migrates due to the torque from the one side ( inner or outer side ) of the disk , leaving a relatively enhanced disk in the other side ( ida et al . , 2000 ;kirsh et al ., 2009 ; levison et al . , 2010 ; capobianco et al . , 2011 ) .we ignore type i migration in the evaluation below , as the migration rate is well represented by the formula for type iii migration even at large velocity dispersions ( kirsh et al ., 2009 ; bromley and kenyon , 2011 ) .type ii migration occurs when an embryo opens up a gap in the planetesimal disk so the torque is primarily from planetesimals in distant encounters .this torque is particularly important if the asymmetries of the surface density and the velocity dispersion between the inner and outer disks are large .such a situation occurs if an embryo in a gaseous disk shepherds outer small planetesimals that rapidly spiral toward the central star without the embryo .the radial migration rate of type iii planetesimal - driven migration due to an interaction with the tracer is ( ida et al . , 2000 ) where the sign is positive if and vice versa , and is the non - dimensional migration rate given in appendix b.3 . if stored in the -body routine is positive / negative , we count only tracers with semimajor axes larger / smaller than because planetesimals in close encounters cause attractive forces . in the evaluation of in eq .( [ eq : dat3j ] ) , we use instead of used in eq .( [ eq : nj ] ) .the type iii migration rate is then given as where is the factor that represents slowing down for massive embryos .the slowing down occurs when the embryo can not migrate over the feeding zone during the synodic period for that width , as cancellation of angular momentum exchange occurs in the second encounter ( ida et al . , 2000 , kirsh et al . 
, 2009 ) .bromley and kenyon ( 2011 ) showed that the slowing down factor is given by a migration length during the synodic period for the half - width of the feeding zone relative to , if this ratio is less than unity : .\label{eq : slow}\ ] ] we employ after bromley and kenyon ( 2011 ) .this length is somewhat shorter than the actual half - length of the feeding zone , [ eq . ( [ eq : de0 ] ) ] , and the reason for this value being applied is because the strong attractive force comes from a relatively narrow region . if the embryo mass is small enough , ; in this regime , the migration rate is independent of the embryo mass . one may think that the migration rate can be simply estimated by summing up the torques from both sides in eqs .( [ eq : dat3j ] ) and ( [ eq : dat3 ] ) rather than summing up for one side and that calculated in the -body routine is unnecessary .however , we find that such an approach can work only if the embryo is massive enough so that . in this case, the all tracers in the feeding zone strongly interact with the embryo and this produces large velocity asymmetry between the unperturbed side and the other side that the embryo migrated through . on the other hand ,if the embryo is small , it migrates over the feeding zone through interactions with a small fraction of tracers in the feeding zone , and only small velocity asymmetry is produced .in such a situation , the evaluated torques from both sides roughly cancel out each other and the fast migration can not be reproduced .if we evaluate the torques from tracers in an azimuthally limited region , where velocity asymmetry exists , summation of both sides may work .however , an appropriate choice of the azimuthal width is unclear to us and the number of tracers in a narrow region is too few to give a reliable migration rate .this is why we evaluate the migration rate due to the one - sided torque as an upper limit , while the actual angular momentum change is evaluated in the -body routine . for type ii migration, we evaluate the torques from tracers in both sides of the disk as done for the stirring rate due to distant encounters . the change in the semimajor axis difference , , during a single encounter for given as ( bromley and kenyon , 2011 ) where and .this formula is expected to be independent of and after averaged over and ( hasegawa and nakazawa , 1990 ) . in a similar manner of eq .( [ eq : dedis ] ) , the change rate of the semimajor axis of the embryo is given as the type ii migration rate is then given as if the surface density is symmetric for the inner and outer disk , the first term in the parentheses in the r.h.s .( [ eq : dat20 ] ) vanishes after summation . in this case, the embryo slowly migrate inward .if density asymmetry exists , the first term remains and relatively fast migration takes place in either inward or outward direction .the migration rate of the embryo suggested from the statistical routine is given as the predicted angular momentum change during is given as where is the angular momentum of the sub - embryo . to produce smooth type iii migration , at least a few , preferably , tracers need to exist constantly in the feeding zone in the region . if we choose a very small for the region , that makes sub - embryo migration noisy .we find that a large sometimes remains unreleased even when an embryo reaches a disk edge .this usually causes the embryo migrate artificially further away from the disk edge . 
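the limiter on sub-embryo migration described above reduces, in code, to comparing the angular momentum change accumulated in the n-body routine with the value predicted by the statistical routine, and carrying any excess over to the next interval. the sketch below is a reconstruction of that rule (the symbols in the text are elided), not the code's actual implementation.

```python
def release_angular_momentum(dL_stored, dL_predicted):
    """limit the angular momentum released to a sub-embryo during one
    statistical step to the magnitude predicted by the statistical routine.
    returns (released, carried_over); the remainder is added to the amount
    stored for the next interval.  treat the exact comparison as an
    assumption reconstructed from the description in the text."""
    if abs(dL_stored) <= abs(dL_predicted):
        return dL_stored, 0.0                      # release everything
    released = abs(dL_predicted) if dL_stored > 0 else -abs(dL_predicted)
    return released, dL_stored - released          # carry the rest over

print(release_angular_momentum(dL_stored=-5.0e33, dL_predicted=2.0e33))
# -> releases -2.0e33 and stores -3.0e33 for the next step
```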
to avoid this problem ,we remove quickly on the timescale of orbital periods at the disk edge , although this violates conservation of the total angular momentum of the system .we regard the embryo being at the disk edge if the total mass of tracers in the feeding zone ahead of the embryo in the migrating direction is less than the embryo mass . for a small embryo ,sometimes no tracer exists in the feeding zone due to statistical fluctuations even if the embryo is not at the disk edge . to avoid this issue , we also count tracers within ahead of the embryo even if they are not in the feeding zone .the global gravitational forces of tracers are added to the equations of motion for tracers and sub - embryos . to calculate the forces , we assume that the disk is axisymmetric and the vertical distribution is given by gaussian for tracers . for simplicity , at each radial location, we consider only one scale height of tracers with various planetesimal masses whereas in reality different mass groups have different scale heights .the scale height of planetesimals in a radial grid at is given by where the summation is done for each radial bin , and the total mass of the tracers in the grid is the surface density of tracers are given by , where is the radial grid width .the spatial density of tracers is given as the and components of the tracers global gravitational force are given by ( nagasawa et al ., 2000 ) dr'dz ' \nonumber \\ f_{{\rm tr } , z}(r , z ) & = & -4 g \int^{\infty}_{-\infty } \int_{r ' } \frac{\rho_{\rm tr}(r',z')r'(z - z')}{\sqrt{(r+r')^2 + ( z - z')^2 } } \nonumber \\ & & \times \left[\frac{e(\xi)}{(r - r')^2 + ( z - z')^2}\right]dr'dz',\end{aligned}\ ] ] where is the gravitational constant , and are the first and second kind of elliptic integrals , and is given by these components of the force are tabulated in ( ) grids and interpolated for use in simulations . for the accelerations on sub - embryos, is used instead of in eqs .( [ eq : htr ] ) and ( [ eq : mtr ] ) .it is not necessary to calculate the global gravitational force components frequently and we usually update them every times the shortest orbital period of particles .the global gravitational force modifies the precession rates of the longitudes of pericenter and ascending node ; it usually accelerates the absolute precession rates .the precession rates do not usually matter to dynamics of tracers in terms of such as encounter velocities .these rates are , however , likely to be important for stability of the system , when multiple sub - embryos appear in the system .the embryos mutually interact through secular resonances which cause oscillations in eccentricities and inclinations of embryos .the global gravitational force reduces these oscillation periods . as a result ,the maximum eccentricities and inclinations of sub - embryos during the oscillations are suppressed to relatively small values .we identify this effect in simulations in section 3.5 ( results without the global gravitational force are not shown there , though ) .although its effect is much smaller than dynamical friction , it does not seem to be negligible .[ cols="^,^,^,^,^,^,^ " , ] table 1 .parameters used for hybrid simulations .theme is a very brief expression of what experiments are performed . is the initial total mass of planetesimals .region represents the initial inner and outer edges of the planetesimal disk . is the power - law exponent for the surface density of planetesimals ( ) . is the initial number of planetesimals . 
is the initial total number of tracers . is the initial number of embryos ( means that no embryo is introduced in the initial state but embryos appear as a result of collisional growth ) . is the initial root - mean - square eccentricity of tracers . is the time step for orbital integrations . is the angular half - width of the region used in neighboring search ( fig . 2 ) .collision represents collisional outcome .gas disk represents whether damping effects due to a gas disk are taken into account . in all the simulations , the physical density of any body is 2.0 g .the initial distributions of and are given by the rayleigh distributions with = 2 .we show six different types of tests of the hybrid code : collisional damping and stirring ( section 3.1 ) , gravitational scattering and radial diffusion ( section 3.2 ) , planetesimal - driven migration of sub - embryos ( section 3.3 ) , runaway to oligarchic growth ( section 3.4 ) , accretion of terrestrial planets from narrow annuli ( section 3.5 ) , and accretion of cores of jovian planets ( section 3.6 ) .the parameters used in the test simulations are summarized in table 1 .time evolution of and of planetesimals due to mutual collisions only for ( a ) = 0 and ( b ) = 0.8 .all effects related to mutual gravity are ignored . the total number of planetesimals is represented by 2000 ( solid curves ) and 500 ( dotted curves ) tracers .planetesimals are placed between 0.95 au and 1.05 au and the total mass is 0.1 . in the hybrid simulations, we only use tracers between 0.96 and 1.04 au in calculations of and to avoid the effect due to radial expansion .evolution curves calculated using the analytic formula ( ohtsuki , 1999 ) are also shown by dashed curves for comparison . in the panel ( b ) , = 0.72 is used for the analytic curves . the first test we present is velocity evolution of tracers due to mutual collisions only to check whether the collision probability and the velocity changes at collisions are calculated accurately .all mutual - gravity related effects are ignored in the test : gravitational scattering , gravitational focusing in the collisional probability [ the second term in the bracket of the r.h.s of eq .( [ eq : pcol ] ) ] , and enhancement of the impact velocity ( appendix e.2 ) .collisions are assumed to result in inelastic re - bounds ( hit - and - run collisions ) , and post collision velocities are characterized by the normal and tangential restitution coefficients , and .appendix e.2 explains how to handle hit - and - run collisions between planetesimals in two tracers , with assumptions of and [ see eq .( [ eq : vnd ] ) for more detail ] .these coefficients , however , can be modified to any values . in the tests , we assume no friction ( i.e. , ) . collisional velocity evolution is a well - studied problem for planetary rings and several authors have developed analytical formulae ( goldreich and tremaine , 1978 ; hmeen - antilla , 1978 ; ohtsuki , 1992 , 1999 ) .we compare our simulation results with analytic estimates of ohtsuki ( 1999 ) because of the easiness of use .figure 5 shows the time evolution of and of planetesimals for ( a ) = 0 and ( b ) = 0.8 .the radius of all planetesimals is 0.4 km .the planetesimal disk extends from 0.95 to 1.05 au and its total mass is 0.1 .these parameters give the disk normal optical depth of .two simulations are performed to see resolution effects for both fig .5a and 5b : one with and and another one with and , where is the total number of tracers . 
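the initial eccentricities and inclinations quoted in table 1 are drawn from rayleigh distributions ; a minimal python sketch of such an initialization , including the residual ( non - zero ) mean eccentricity vector discussed below , is given here . the function name and the random phase choices are illustrative assumptions .

```python
import numpy as np

def init_rayleigh_elements(n, e_rms, i_rms, seed=0):
    """Draw eccentricities and inclinations from Rayleigh distributions with the
    given rms values, and assign random orbital phases."""
    rng = np.random.default_rng(seed)
    # for a Rayleigh distribution the rms equals sqrt(2) times the scale parameter
    e = rng.rayleigh(scale=e_rms / np.sqrt(2.0), size=n)
    inc = rng.rayleigh(scale=i_rms / np.sqrt(2.0), size=n)
    pomega = rng.uniform(0.0, 2.0 * np.pi, size=n)   # longitudes of pericenter
    node = rng.uniform(0.0, 2.0 * np.pi, size=n)     # longitudes of ascending node
    # residual eccentricity: magnitude of the mean eccentricity vector, ~ e_rms/sqrt(n);
    # it does not vanish exactly for a finite number of tracers
    e_res = np.hypot(np.mean(e * np.cos(pomega)), np.mean(e * np.sin(pomega)))
    return e, inc, pomega, node, e_res
```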
the total angular momentum , , is well conserved for all test simulations shown in fig . 5 ; . for ( fig . 5a ) , we find excellent agreement between our results and the analytic estimates of ohtsuki ( 1999 ) , in particular for the case of . however , the decreases in and stall at certain values in the hybrid simulations for ( 500 ) whereas in the analytic curves , and decrease down to , where is the particle radius . in the hybrid simulations , eccentric ringlets form , in which the longitudes of pericenter of all particles are nearly aligned . this issue is probably related to the initial condition . in the creation of the initial positions and velocities , the longitudes of pericenter are randomly chosen , and the sum of the eccentricity vectors of all tracers is not exactly zero . the eccentricities of the ringlets in the end state are , in fact , close to the expected values of the residual eccentricity , . since the residual eccentricity decreases with an increasing number of tracers , the eccentricity of the ringlet in the end state decreases . even if the residual eccentricity is fixed to be zero , the same problem is likely to occur unless all particles effectively interact with each other . the ringlets are not only eccentric but also inclined , because a similar discussion applies to inclination . the evolution curves of and also weakly depend on ; a small gives a better agreement with the analytic curves . it is found that relative velocities are slightly underestimated if is large and radial excursions of particles are larger than . this leads to a slightly lower damping rate . collisions cause not only damping but also stirring , as the collisional viscosity converts the tidal potential to kinetic energy . if , where is the critical restitution coefficient , the stirring rate exceeds the damping rate and and increase with collisions ( goldreich and tremaine , 1978 ; ohtsuki , 1999 ) . we find that our method is slightly more dissipative than the analytic estimate and gives , for reasons that we were not able to figure out . figure 5b shows the same as fig . 5a but for the case of . it is found that our result for and ( 500 ) agrees well with the analytic estimate with ( 0.70 ) . overall , our method can reproduce the analytic results using a higher . in simulations of planetary accretion , we use for hit - and - run collisions and the offset in is not a problem . note that orbital alignments do not occur for the case of fig . 5b , because collisional stirring randomizes orbital phases . fig . 6 . time evolution of and of planetesimals due to gravitational interactions : ( a ) , , , ( b ) , , , and ( c ) , , ( see table 1 for the characters ) . the panel ( c ) is the case of the bimodal size distribution with , , , and , where the characters with the subscripts 1 and 2 are those for small and large planetesimals , and the mass ratio of a large planetesimal to a small one is 5 . in all cases , planetesimals are placed between 0.95 au and 1.05 au , and and are calculated using tracers between 0.96 au and 1.04 au . evolution curves calculated using the analytic formula ( ohtsuki et al . , 2002 ) and from the pure -body simulations ( only for the panels ( a ) and ( c ) ) are also shown for comparison . fig . 7 . snapshots on the plane for the simulations shown in fig . 6a : ( a ) the hybrid simulation and ( b ) the pure -body simulation . fig . 8 . time evolutions of the standard deviation of for the simulations shown in fig .
7 .results for six individual hybrid simulations are shown by colored dotted curves , whereas their mean value is shown by a black solid curve .the result for the hybrid simulation shown in fig .7a is a black dotted curve . for comparison , the result from the pure n - body simulation ( fig .7b ) is shown by a black dashed curve .the next test we present is velocity evolution of tracers due to mutual gravity only in the absence of any collisional effects .figure 6 shows evolution of and for a planetesimal disk extending from 0.95 and 1.05 au .the panels ( a ) and ( b ) are for mono - sized planetesimals represented by 200 tracers and the numbers of planetesimals are and , respectively .the panel ( c ) is for a bimodal size distribution case ; 1500 small planetesimals and 200 large planetesimals are placed in the disk .the mass ratio of a large planetesimal to a small one is 5 .the number of tracers for small planetesimals is 150 and that for large planetesimals is 100 .the results are compared with analytic estimates of ohtsuki et al .for the panels ( a ) and ( c ) , we also performed pure -body simulations using the parallel - tree code pkdgrav2 ( morishima et al . , 2010 ) with the same initial conditions .the agreements between the analytic estimates and pure -body simulation results are excellent , so results from both methods are trustable as benchmarks .overall , we find good agreements between the results from the hybrid code and the analytic estimates , although our model gives slightly lower and than the analytic estimates , except fig .6b shows a good agreement .we also perform the same hybrid simulations with different values of and , and the absolute levels of and at a certain time are found to be almost independent of these parameters , unless and are too small .the underestimates of and of our calculations is probably caused by the underestimates of and that we employ .the analytic calculations of tanaka and ida ( 1996 ) give slightly lower and for and than those from direct three - body orbital integrations ( ida , 1990 , rafikov and slepian , 2010 ) .the influence of this difference is less important for large and and that is why the hybrid simulation in fig .6b shows an excellent agreement with the analytic result , contrary to fig .figure 7 shows snapshots on the plane for the simulation of fig .snapshots of the corresponding pure -body simulation are also shown for comparison .the hybrid code can reproduce not only and but also the distributions of and .ida and makino ( 1992 ) carried out -body simulations and found that the distributions of and can be well approximated by a rayleigh distribution .we perform the kolmogorov - smirnov ( ks ) test to check whether the distributions of and in the hybrid simulations resemble a rayleigh distribution .the test gives the significance level , ( ) ( press et al . ,a large means that the likelihood that two functions are the same is high .the time - averaged ( yr ) values of for the distributions of and are 0.44 and 0.57 , respectively .we also perform the same ks test using the outputs of the pure -body simulation , and the values of for and are 0.62 and 0.19 .thus , the distributions of and from our hybrid simulations resemble a rayleigh distribution at a level similar to those from pure -body simulations .figure 7 also shows that a planetesimal disk radially expands with time due to viscous diffusion at a rate similar to that for the pure -body simulation . 
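the kolmogorov - smirnov comparison against a rayleigh distribution described above can be reproduced with standard tools ; a hedged sketch is given below ( the scale parameter is estimated from the sample itself , which makes the returned significance level slightly optimistic ) .

```python
import numpy as np
from scipy import stats

def ks_significance_rayleigh(x):
    """KS test of whether a sample of eccentricities (or inclinations) is consistent
    with a Rayleigh distribution; returns the significance level p (a large p means
    the two distributions are likely the same)."""
    x = np.asarray(x)
    scale = np.sqrt(0.5 * np.mean(x**2))      # Rayleigh scale from the sample rms
    _, p = stats.kstest(x, 'rayleigh', args=(0.0, scale))
    return p
```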
to compare more quantitatively , we plot the time evolution of the standard deviation of ( the normalized orbital energy ) of the disk in fig . 8 . since the radial diffusion in a low resolution hybrid simulation is found to be noisy , we take the average over several simulations . the offsets between simulations at are caused by the random numbers that are used for choosing the initial semimajor axes of tracers . we find good agreement in the growth rates of the standard deviations of between the hybrid simulations and the pure -body simulation . this comparison indicates that the diffusion coefficient given by eq . ( [ eq : di0 ] ) is a good approximation . contrary to the collisional velocity evolution shown in section 3.1 , conservation of angular momentum is violated by gravitational scattering , as we do not explicitly solve pair - wise gravitational interactions in the statistical routine . provided that the radial migration distance of a tracer due to random walk is about , the error in the total angular momentum is estimated as , which we found consistent with the actual errors in the test simulations . fig . 9 . self - sustained ( type iii ) migration of a sub - embryo . we employ ( where is the embryo mass ) , , and . the embryo is displayed as a red filled circle with a horizontal bar with a half length of 10 hill radii . fig . 10 . time evolution of the semi - major axis of a sub - embryo due to self - sustained migration : ( a ) and , ( b ) and , and ( c ) and . for all cases , and . analytic migration rates ( bromley and kenyon , 2011 ) are shown by dashed lines . in section 2.6.2 , we described how we handle migration of sub - embryos . to test the routine of sub - embryo migration , we perform a suite of hybrid simulations starting with initial conditions similar to those in kirsh et al . ( 2009 ) and bromley and kenyon ( 2011 ) . the initial planetesimal disk extends from 14.5 au to 35.5 au and its total mass is 230 represented by 230 tracers ( thus ) . a sub - embryo with a mass of is initially placed at 25 au and growth of the embryo is turned off . in this subsection , the index is given to a sub - embryo . each tracer has 10000 planetesimals , and tracer - tracer interactions are quite small compared with the gravitational effects of the embryo . an example of migration of an earth - mass embryo ( ) is shown in fig . 9 using four snapshots on the - plane . for this example , the migration is in the mass independent regime [ ; eq . ( [ eq : slow ] ) ] . the embryo first has some strong encounters with tracers and a stirred region is produced . the embryo tends to migrate away from the stirred region due to the strong torques exerted by tracers in the unperturbed region . this process triggers fast self - sustained migration . during the migration , the embryo continuously encounters unperturbed tracers on the leading side while leaving the stirred tracers on the trailing side . the fast migration continues until the embryo reaches the disk edge . embryo migrations for three cases with different embryo masses and initial velocity dispersions are shown in fig . 10 .
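the radial - diffusion diagnostic used in fig . 8 and the order - of - magnitude estimate of the angular momentum error can be written compactly ; the normalization of the orbital energy and the 1 / sqrt( ) cancellation of random - walk steps below are our assumptions for illustration , not the paper's exact expressions .

```python
import numpy as np

def energy_dispersion(a):
    """Standard deviation of the normalized specific orbital energy, taken here as
    -1/(2a) with GM = 1, used as a measure of radial spreading of the tracer disk."""
    return np.std(-0.5 / np.asarray(a))

def relative_dL_error(da_rms, a, n_tracers):
    """Rough estimate of the relative error in the total angular momentum from
    uncompensated random-walk steps: L scales as sqrt(a), so one step changes L by
    ~ da/(2a) in relative terms, and ~n_tracers independent steps partially cancel."""
    return (da_rms / (2.0 * a)) / np.sqrt(n_tracers)
```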
in each case ,six simulations are performed , using different random number seeds to generate initial orbital elements of tracers .the migration rates suggested by bromley and kenyon ( 2011 ) are shown by dotted lines .they confirmed good agreements between their formula and -body simulations .overall , our method can reproduce migration rates suggested by bromley and kenyon ( 2011 ) .however , our method gives almost the same probabilities of inward and outward migrations if , whereas -body simulations show that inward migration is predominant ( kirsh et al ., 2009 ; bromley and kenyon , 2011 ) .if the embryo is small , the direction of migration is determined by first few strong encounters , and once migration starts it is self - sustained regardless of its direction .in realistic planetary accretion , even if inward migration is predominant , the embryo encounters with inner massive embryos and then turns its migration direction outward ( minton and levison , 2014 ) . therefore , we believe that the artificially high occurrence of outward migration is not a problem . for a large sub - embryo ,inward migration is clearly predominant even in our hybrid simulations ( fig .10c ) . to understand these trends and why our method works , we consider conditions necessary to produce smooth self - sustained migration. if planetesimals are small enough compared with the embryo , smooth self - sustained migration of the embryo takes place ( ordered migration ) .if planetesimals are large , on the other hand , the embryo suffers strong kicks and the direction of its migration may change rapidly in a random fashion ( stochastic migration ) . whether migration is ordered or stochastic is determined by the size distribution of planetesimals .this argument is similar to that discussed for planetary spins ( dones and tremaine , 1993 ) .the relative contributions of the ordered and stochastic components are estimated as follows .consider that an embryo migrates over a distance through encounters with surrounding planetesimals : where again and is the change of $ ] during a single encounter . in the above ,the angle brackets mean the averaging over encountered planetesimals .the expected value of is given as , \label{eq : stoch } \end{aligned}\ ] ] where we assume that and that there is no correlation between and to transform to the second row .if the first term in the r.h.s of eq .( [ eq : stoch ] ) is larger than the second term , the migration is stochastic rather than ordered .this means that the embryo is likely to migrate over via random walk rather than self - sustained migration .if we assume that the ordered torque exerted on the embryo is the one - sided ( type iii ) torque , is determined by the non - dimensional migration rate ( appendices b.3 ) . the value of is estimated using the non - dimensional viscous stirring rates , and ( appendix d ) , and we consider contribution of planetesimals in one side assuming that large velocity asymmetry exists .using these non - dimensional rates , we have .this ratio is for the shear dominant regime [ and for the dispersion dominant regime ( we assume ) .therefore , even with planetesimals as massive as the embryo ( ) , the migration is likely to be self - sustained migration rather than random walk if the embryo migrates longer than .this length is comparable to the feeding zone width . 
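the ordered - versus - stochastic argument above reduces to comparing a systematic drift with a random walk over the same number of encounters ; a schematic python version follows ( the per - encounter statistics are inputs here , not computed from the non - dimensional rates ) .

```python
import numpy as np

def migration_is_ordered(db_mean, db_rms, n_enc):
    """Return True if migration over n_enc encounters is expected to be ordered
    (self-sustained) rather than stochastic: the ordered displacement grows as
    n_enc*<db>, while the random-walk displacement grows as sqrt(n_enc)*db_rms,
    where db_mean and db_rms are the mean and rms change of the orbital separation
    per encounter."""
    ordered = n_enc * abs(db_mean)
    stochastic = np.sqrt(n_enc) * db_rms
    return ordered > stochastic
```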
in the above discussion , the planetesimal mass can be replaced by the tracer mass to discuss the migration calculated in our -body routine . therefore , our -body routine can reproduce self - sustained migration of the embryo even with tracers as massive as the embryo . this is true only if the ordered torque exerted on the embryo is as strong as the one - sided torque . this is not always the case . the torque becomes nearly one - sided if a strong velocity asymmetry of planetesimals exists around the embryo . if the embryo jumps over its feeding zone due to a strong kick from a tracer that is as massive as the embryo , the embryo can not see the velocity asymmetry at that radial location . thus , the condition to apply the one - sided torque is that the radial migration distance of the embryo due to a single encounter is much smaller than its feeding zone . this condition generally turns out to be . it is likely that the condition or less recommended by various authors ( kirsh et al . , 2009 ; ldt12 ; minton and levison , 2014 ) comes from this criterion . in our method , alternatively , instantaneous large changes of of the sub - embryo are suppressed by limiting the migration distance of the sub - embryo to the theoretical prediction , which is derived assuming that planetesimals are much less massive than the embryo . therefore , our method can reproduce long - distance self - sustained migration . the condition to reproduce predominant inward migration is more severe , because we need to consider migration of an embryo due to torques from both sides of the disk . in an unperturbed state , the total torque from both sides is of order of the one - sided torque in the shear dominant regime ( bromley and kenyon , 2011 ) and of order of the one - sided torque in the dispersion dominant regime ( ormel et al . , 2012 ) . thus , the second term in the r.h.s . of eq . ( [ eq : stoch ] ) is multiplied by ( ) . then , the condition that the migration is ordered when the embryo migrates over the feeding zone is roughly given by . the initial condition of fig . 10c barely satisfies this criterion provided that is the tracer mass instead of the planetesimal mass . our method indeed reproduces predominant inward migration . once the embryo migrates over the feeding zone , the velocity asymmetry is obvious and self - sustained migration sets in . fig . 11 . snapshots on the plane for the hybrid simulations ; ( a ) = 400 and ( b ) = 4000 . tracers are displayed as colored circles and their colors represent the mass of a planetesimal as shown by the color bar . sub - embryos are displayed as red filled circles with a horizontal bar with a half length of 10 hill radii [ for the panel ( b ) , only sub - embryos more massive than have a horizontal bar for consistency with the panel ( a ) ] . a radius of a circle is proportional to . fig . 12 . snapshots of the cumulative number and the horizontal velocity ( , where is the keplerian velocity at 1 au ) for ( a ) = 400 and ( b ) = 4000 . the diagonal dashed line is the number of planetesimals represented by a single tracer with a mass of for each run . the vertical dotted line represents . for comparison , the results from ( c ) wetherill and stewart ( 1993 ) and ( d ) inaba et al . ( 2001 ) are shown ; both from fig . 10 of inaba et al . ( 2001 ) . fig . 12 . continued . most of the authors who developed statistical planetary accretion codes ( weidenschilling et al . , 1997 ; kenyon and luu , 1998 ; inaba et al . , 2001 ; morbidelli et al . , 2009 ; ormel et al . , 2010a ; ldt12 ; amaro - seoane et al .
, 2014 ) tested their codes using fig .12 of wetherill and stewart ( 1993 ) as a benchmark .we also perform simulations starting with the same condition of wetherill and stewart ( 1993 ) .inaba et al .( 2001 ) performed the same statistical simulations using their own coagulation code and the code of wetherill and stewart ( 1993 ) after modifying settings of these two codes as close as possible .both results are shown in fig .10 of inaba et al .( 2001 ) and we compare them with our results . as the initial condition , planetesimals are placed between 0.915 au and 1.085 au .individual planetesimals have an equal - mass of g. the nebular gas is included and the gas / solid ratio is 44.4 .see table 1 of wetherill and stewart ( 1993 ) for more details .we take into account aerodynamic gas drag ( adachi et el . , 1976 ) instead of 2.0 used in morishima et al .( 2010 ) . ] and tidal damping ( papaloizou and larwood , 2000 ) , but not type i gas - driven migration .the gas disk model is described in morishima et al .( 2010 ) in detail . in our simulations , the planetesimal disk radially expands with time , while the solid surface density remains constant in the previous statistical simulations .however , the effect of radial expansion is small on the timescale of the simulation ( yr ) .figure 11 shows snapshots on the plane for the two hybrid simulations with different initial numbers of tracers , and 4000 . in the early stage of evolution ( = 10,000 yr ) ,large embryos grow very quickly and is enhanced locally where large embryos exist .this stage is called the runaway growth stage .once large embryos dominantly control the velocity of the entire planetesimal disk ( yr ) , growth of embryos slows down ( ida and makino , 1993 ) .this stage is called the oligarchic growth stage . even though large embryos tend to open up gaps around their orbits , the gaps are not cleared out because the embryos migrate quickly due to occasional short - distance planetesimal - driven migrations and embryo - embryo interactions and because other embryos tend to fill gaps . during the oligarchic growth stage ,the orbital separation between neighboring embryos is typically 10 hill radii as found by kokubo and ida ( 1998 , 2000 ) .tanaka and ida ( 1997 ) showed that gaps open up only if the orbital separation of neighboring embryos is larger than 20 hill radii .thus , formation of clear gaps is unlikely to occur in this stage .figure 12 shows the mass and velocity distributions at different times for two hybrid simulations shown in fig .the mass spectra of the hybrid simulations are smooth at the transition mass at which a tracer is promoted to a sub - embryo ( see also appendix g ) . for comparison , the results from wetherill and stewart ( 1993 ) and inaba et al .( 2001 ) are also shown .the hybrid simulations agree well with these statistical simulations , although some small differences are seen . 
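the benchmark above includes aerodynamic gas drag ( adachi et al . , 1976 ) ; a minimal sketch of such a drag acceleration is given below . the default drag coefficient , the gas velocity , and the unit system are assumptions of this sketch , not the values used in the simulations .

```python
import numpy as np

def gas_drag_acceleration(v, v_gas, s, rho_p, rho_gas, C_D=2.0):
    """Aerodynamic drag acceleration on a planetesimal of radius s and internal
    density rho_p moving with velocity v through gas of density rho_gas and velocity
    v_gas (consistent units assumed):
    a = -(C_D/2) * pi * s^2 * rho_gas * |v_rel| * v_rel / m."""
    m = (4.0 / 3.0) * np.pi * s**3 * rho_p           # planetesimal mass
    v_rel = np.asarray(v, dtype=float) - np.asarray(v_gas, dtype=float)
    return -0.5 * C_D * np.pi * s**2 * rho_gas * np.linalg.norm(v_rel) * v_rel / m
```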
in the hybrid simulations ,the number of planetesimals represented by tracers has a lower limit shown by a dashed line in fig .thus , the tail of the mass spectrum in the massive side during the runaway growth is not correctly described .this causes a systematic delay in growth of large bodies , as already discussed by ldt12 .the delay is particularly clear in the low - resolution hybrid simulation at 10,000 yr , although the delay is small .this resolution effect should be seen even in the high resolution hybrid simulation while we see a good agreement with the statistical simulations at 10,000 yr .this is probably because our stirring routine slightly underestimates eccentricities and inclinations ( section 3.2 ) and this effect cancels out with the resolution effect to some degree .once the growth mode reaches oligarchic growth ( 25,000 yr ) , the masses of the largest bodies in the low resolution simulation almost catch up with those in the high resolution simulation and those in the statistical simulations .13 . snapshots on the plane for the simulations starting from equal - mass 2000 planetesimals in a narrow annulus between 0.7 au and 1.0 au : ( a ) the hybrid simulation starting with 200 tracers and ( b ) the pure -body simulation . in the panel ( a ), tracers are displayed as colored circles , and the colors represent the mass of planetesimal .sub - embryos are displayed as red filled circles .sub - embryos more massive than have a horizontal blue bar with a half length of 10 hill radii .the same color code is used in the pure -body simulation .a radius of a circle is proportional to ( for the pure -body simulation , for all particles ) .fig . 14 . fig . 14 .time evolutions of various quantities for three hybrid simulations ( orange , brown , and red solid curves ) and three pure -body simulations ( blue , purple , and black dashed curves ) : ( a ) the mass - weighted root - mean - square eccentricity and inclination , and , for embryos and planetesimals , ( b ) the total number of all bodies , the total number of embryos , the total number of tracers , ( c ) the total mass of embryos , the effective mass of the system , and the mass of the largest body .the last section focused only on the early stages of planetary accretion .further , the damping effects due to the gas disk make it difficult to judge whether the code works appropriately . herewe test our code for the entire stages of accretion without the gas disk .we perform hybrid simulations of accretion of terrestrial planets starting from a narrow annulus , similarly to those adopted in morishima et al .( 2008 ) and hansen ( 2009 ) , until completion of accretion ( myr ) . as the initial condition , 2000 equal - mass planetesimals ( )are placed between 0.7 au to 1.0 au , and the total disk mass is .the initial number of tracers is 200 . for comparison, we also perform pure -body simulations with the same initial condition using pkdgrav2 .a hybrid simulation takes about three days using a single core whereas a pure -body simulation takes a few weeks using eight cores . as found in morishima et al .( 2008 ) , the evolution is almost deterministic due to efficient dynamical friction if the initial number of planetesimals is large enough ( ) .if the initial is small while keeping the same total mass , the evolution becomes rather stochastic and the final systems have relatively large diversity ( hansen , 2009 ) .an example of the hybrid simulations is shown in fig .13 together with an example of the pure -body simulations . 
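the diagnostics plotted in fig . 14 are simple mass - weighted statistics ; sketches are given below . the definition of the effective mass as sum( m^2 ) / sum( m ) is an assumption here ( the paper's symbol is not reproduced in the text ) , chosen because such a quantity is dominated by the largest bodies , as described .

```python
import numpy as np

def mass_weighted_rms(m, x):
    """Mass-weighted root-mean-square of eccentricity or inclination x."""
    m = np.asarray(m); x = np.asarray(x)
    return np.sqrt(np.sum(m * x**2) / np.sum(m))

def effective_mass(m):
    """Effective mass of the system, assumed here to be sum(m^2)/sum(m), which is
    sensitive to the masses of the largest embryos."""
    m = np.asarray(m)
    return np.sum(m**2) / np.sum(m)
```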
through the runaway growth stage , tens of embryos form ( myr ) . with increasing masses of embryos , mutual orbital crossings and impacts occur , and a few distinctively large embryos emerge ( = 1 myr ) . in the late stage , the largest embryos sweep up small remnant planetesimals ( myr ) . at the end of accretion , the largest three embryos have orbital eccentricities and inclinations as small as those of the present - day earth and venus due to dynamical friction from remnant bodies ( myr ) . in the hybrid simulation , all the largest embryos are sub - embryos until the end of the simulation . we perform three hybrid simulations and three pure -body simulations in total , and all simulations show similar results , except that one hybrid simulation produces only two distinctively large embryos . to see the accretion history in more detail , we plot various quantities as a function of time in fig . 14 . overall agreement between the hybrid simulations and the pure -body simulations is excellent . the good agreement between the hybrid and pure -body simulations suggests that our statistical routine works well even in the late stages of planetary accretion . figure 14a shows the evolution of the mass - weighted root - mean - square eccentricity and inclination , and , for embryos and planetesimals . with increasing time , and for both embryos and planetesimals increase , primarily due to viscous stirring by the largest bodies . after 10 myr , and of embryos start to decrease because small embryos with large and collide with large embryos with low and . figure 14b shows the evolution of the total number of all bodies and the total number of embryos . the number has a peak around 1 myr both in the hybrid and pure -body simulations . the number of bodies whose orbits are numerically integrated , , increases by at maximum , and the maximum value is reached around 1 myr ( fig . 14b ) . figure 14c shows the total mass of embryos , , the effective mass of the system , which is sensitive to the masses of the largest embryos , and the mass of the largest body , . it is found that the values of and in the hybrid simulations are slightly lower than those in the pure -body simulations at myr . during the giant impact stage , roughly between = 0.2 myr and a few myr , the contribution of embryo - embryo impacts to the mass gain of embryos is significant , comparable to that of embryo - planetesimal impacts . the embryo - embryo collision rate sensitively depends on and of embryos . the hybrid simulations give slightly higher and ( fig . 14a ) and thus lower growth rates of embryos than the pure -body simulations during the giant impact stage . this is likely to be a resolution effect . in the low resolution hybrid simulations , the number of nearby tracers around embryos fluctuates and that makes dynamical friction somewhat noisy . we also perform the same simulations but starting with a larger number of tracers ( = 400 ) and find better agreement in with the pure -body simulations . fig . 15 . snapshots on the plane for the hybrid simulation in the jovian planet region . the simulation starts with equal - mass planetesimals . tracers are displayed as colored circles and colors represent the mass of a planetesimal . sub - embryos ( ) are displayed as red filled circles and full - embryos ( ) are displayed as blue filled circles . a radius of a circle is proportional to , but enhanced by a factor of 6 for embryos for good visibility . fig .
16 .time evolutions of ( a ) semimajor axes and ( b ) masses of embryos that are more massive than 0.4 at 2 myr .the same color is used for the same embryo in both panels . the embryos represented by blue , red , and purple curvesare called embryo 1 , 2 , and 3 , respectively , in the main text .the final test we present is accretion of cores of jovian planets starting with a wide planetesimal disk .levison et al . ( 2010 ) and capobianco et al . ( 2011 ) showed importance of self - sustained outward migration of embryos , as they can grow rapidly via collisions with little - perturbed outer bodies .however , these authors placed large embryos in the disk of small planetesimals in the initial conditions , and it is not clarified if outward migration plays a significant role in more realistic accretion scenarios .we set the initial condition similar to that employed by levison et al .( 2010 ) , but without any embryos .we place equal - mass planetesimals between 4 au and 16 au represented by 10000 tracers .the initial planetesimal mass is g and the radius is 2.4 km .the total mass of planetesimals is 200 and this gives the solid surface density about seven times the minimum - mass solar nebula ( hayashi , 1981 ) .the simulation is performed up to 2 myr and it takes roughly five days to complete using a single core .the nebula gas effects are included in the same manner as the simulations in section 3.4 , and the gas - to - solid ratio is 56.7 , which is the same as the minimum - mass solar nebula beyond the snow line ( hayashi , 1981 ) . we do not consider enhanced cross sections of embryos due to their atmospheres ( inaba and ikoma , 2003 ) . the nebular gas is assumed to dissipate exponentially on the timescale of 2 myr .capobianco et al .( 2011 ) showed that outward migration of an embryo is usually triggered in the nebular gas , if the planetesimal radius is in a certain range .the planetesimal radius we employ is in this range in the region inside of au [ see their eq . ( 21 ) ] .note that , however , planetesimal radii increase with time in our simulation unlike those in levison et al .( 2010 ) and capobianco et al .( 2011 ) .the snapshots on the plane are shown in fig .15 and the time evolutions of the masses and the semimajor axes of six largest embryos at the end of the simulation are shown in fig .several embryos appear near the inner edge of the disk after 0.1 myr .after some embryo - embryo interactions , one of the embryos ( embryo 1 ) starts self - sustained outward migration . 
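before continuing with the migration history , the disk set - up described above ( roughly seven times the minimum - mass solar nebula in solids , a gas - to - solid ratio of 56.7 , and exponential gas decay on a 2 myr timescale ) can be summarized in a short sketch ; the 30 g cm^-2 coefficient and the -3/2 radial slope are the standard hayashi ( 1981 ) values assumed here and may differ in detail from the actual simulation set - up .

```python
import numpy as np

def surface_densities(a_au, f_enhance=7.0, gas_to_solid=56.7, t_yr=0.0, tau_gas_yr=2.0e6):
    """Solid and gas surface densities in g cm^-2 at a_au (beyond the snow line):
    an f_enhance-times MMSN solid component, a fixed gas-to-solid ratio, and
    exponential gas dissipation on tau_gas_yr years."""
    sigma_solid = f_enhance * 30.0 * a_au**-1.5          # Hayashi (1981) icy solids
    sigma_gas = gas_to_solid * sigma_solid * np.exp(-t_yr / tau_gas_yr)
    return sigma_solid, sigma_gas
```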
since embryo 1 encounters with unperturbed planetesimals on its way , it grows very rapidly from 0.1 to 3 during only 50,000 yr .after embryo 1 reaches the outer edge , it turns its migration direction inward .however , it encounters with another smaller embryo ( embryo 2 ) and inward migration is halted .there , embryo 1 opens up a gap and starts slow outward migration due to distant encounters with inner planetesimals .embryo 1 is promoted to a full - embryo at 0.2 myr during its inward migration .embryo 2 and another embryo ( embryo 3 ) also migrate outward during the outward migration of embryo 1 , and they also grow more rapidly than inner embryos , but not as rapidly as embryo 1 .they eventually change their migration directions inward and come back to near the inner edge .embryo 2 occasionally attempts to migrate outward even after that , although its outward migration is not sustained .two outward migrations at 0.55 and 0.65 myr are halted by encounters with another embryo at 7 au , which merges with embryo 2 at 0.65 myr .embryo 2 is promoted to a full - embryo at this time .we do not clearly understand what prevents embryo 2 from subsequent outward migrations .the radial profile of the surface density of planetesimals is no longer smooth due to occasional gap formation by embryos , and the outward migrations of embryo 2 seem to be halted at local low density regions .another possibility is that migrations of full - embryos are halted by stochastic effects since they are not massive enough as compared with tracers .this possibility may be tested using different transition masses from a sub - embryo to a full embryo .the masses of embryo 2 and embryo 3 respectively reach 3.6 and 2.5 at 2 myr .their growth is slow at this time as they open up a gap in the planetesimal disk .obviously , we need to perform more simulations with different parameter sets and conduct more careful analysis .nevertheless , the example we showed here is a good demonstration that our code can produce rapid growth of embryos during their outward migrations .ngo ( 2012 ) performed simulations of accretion of cores of jovian planets using the lipad code .the initial conditions of his simulations are similar to ours . in his simulations without fragmentation ( his figs .5.14 and 5.15 ) , sub - embryos are almost radially stationary even though the routine of sub - embryo migration is included .once sub - embryos are promoted to full - embryos , they suddenly start rapid radial migrations .it is unclear to us why such large differences between sub - embryos and full - embryos are seen in his simulations . in spite of the large difference of embryo migration betweenhis and our simulations , the final masses of the largest embryos are close to each other .the cpu time per timestep of orbital integration as a function of the number of embryos and the number of tracers is roughly given as where , , and are the cpu times for a single orbital integration ( a kepler drift ) , for a single direct gravity calculation of a pair in the -body routine , and for a single set of calculations for a pair in the statistical routine , and is the typical number of interlopers in the neighboring search region ( fig . 2 ) .for a standard latest computer , s. from some of simulations , we find that and . as seen in eq .( [ eq : tcpu ] ) , the load on the statistical routine is controllable by modifying and . 
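the cost scaling sketched above can be written as a rough budget per orbital - integration step ; the relative timing constants used below are placeholders , since the actual values are machine dependent and not quoted here .

```python
def cpu_cost_per_step(n_tracer, n_embryo, n_interloper,
                      t_kep=1.0, t_grav=0.3, t_stat=2.0, f_stat=1.0 / 30.0):
    """Rough cost model for one step: every body needs a Kepler drift, every
    embryo-body pair a direct gravity evaluation, and the statistical routine
    (called every 1/f_stat steps) loops over tracers times the typical number of
    interlopers in the neighboring-search region.  All timing constants are
    hypothetical relative values."""
    n_total = n_tracer + n_embryo
    cost = n_total * t_kep                               # orbital integration
    cost += n_embryo * n_total * t_grav                  # direct gravity (N-body routine)
    cost += f_stat * n_tracer * n_interloper * t_stat    # statistical routine
    return cost
```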
since the radial width of the neighboring search region is given by eq . ( [ eq : drt0 ] ) , for a fixed surface density . if we choose an azimuthal width as , becomes independent of and the cpu time for the statistical routine can be scaled linearly with . practically , one may choose a so that the load on the statistical routine does not exceed the load on the -body routine . the actual computational load on the statistical routine in some of our simulations is as follows . recall that in the present paper we call the statistical routine every 30 steps of orbital integration ( ) . if the time step of the statistical routine is halved , its load is doubled . in the hybrid simulations shown in fig . 5 ( and ) , the load on the statistical routine is only 10 % of the total load . the remaining 90 % is on the -body routine , and more than half of it is occupied by the kepler equation solver . there is no direct gravity calculation in these simulations , as no embryo is included . in the hybrid simulation shown in fig . 15 ( and ) , the load on the statistical routine is about 60 % before embryos appear . if the number of embryos is larger than about 10 , the load on the direct gravity calculation exceeds that of the kepler equation solver , and the relative load on the statistical routine decreases . at 1.6 myr in fig . 15 , when 21 embryos exist , the load on the statistical routine is about 45 % , that on the direct gravity calculation is 35 % , and that on the orbital integration is 20 % . overall , the load on the statistical routine is comparable to or less than that on the -body routine . we also comment on parallelization . the current version of our code is serial . however , the direct -body routine is already parallelized , and we test how much simulations are accelerated by parallelization in the case without the statistical routine . in the tests , we fix the number of embryos to be 10 and vary the number of tracers . we find that simulations are accelerated only if the number of tracers is more than . in contrast , the pure -body simulations using the tree code , pkdgrav2 , are accelerated by parallelization if ( morishima et al . , 2010 ) . the difference comes from the cpu times per step . to benefit from parallelization , the cpu time per step needs to be long enough that latency can be ignored . the computational cost per step for hybrid simulations is as small as that for -body simulations without particle - particle interactions . that ironically makes parallelization inefficient . we consider only random walk of tracers in tracer - tracer interactions . thus , self - sustained migration occurs only for embryos in our code , although small bodies ( ) may migrate in reality . to examine if migrations of small bodies play an important role in the overall accretion history of embryos , high resolution simulations ( with smaller ) are necessary . minton and levison ( 2014 ) showed that self - sustained migration occurs only if the embryo is much more massive than surrounding planetesimals and sufficiently separated from other embryos . such conditions are usually fulfilled near the transition from the runaway growth stage to the oligarchic growth stage . the mass of the largest embryo at the transition is roughly given by ( ormel et al . , 2010b ; morishima et al . , 2013 ) , where is the isolation mass and is the initial mass of planetesimals . therefore , if is very small ( e.g.
, weidenschilling , 2011 ) , a very large number of tracers is necessary to represent an embryo at the transition by a single particle in our code . however , minton and levison ( 2014 ) also pointed out that the growth time scale needs to be longer than the migration time for an embryo to avoid encounters with other embryos during migration . if is very small , this criterion is generally not fulfilled for the largest embryo at the transition due to its very short growth time scale . thus , the masses of candidate embryos for self - sustained migration are probably not too small to be practically resolved by our code even if is very small . the argument here , of course , depends on disk parameters and radial location and needs more careful estimates . our code can probably handle eccentric / inclined ringlets that are usually produced by external massive bodies ( see section 2.3 ) . however , we have not tested such simulations in this paper , and they remain for future work . a potential issue of our code in handling eccentric / inclined ringlets is the accuracy of the global gravity forces ( section 2.7 ) . only for this calculation , we assume that a ring is axisymmetric and symmetric with respect to the invariant plane . these assumptions may not be appropriate if the radial / vertical thickness of the ringlet is much smaller than the radial / vertical excursions of individual particles . this is the case for some of the narrow ringlets seen around the solar system giant planets . if the global self - gravity plays an important role in the structural evolution of narrow ringlets , we need a more advanced method to calculate it . the current code includes an option of hit - and - run bouncing ( appendix e ) but not fragmentation . we plan to implement collisional fragmentation in our code in subsequent work . since gas drag damps and of small fragments effectively , growth of embryos is accelerated by fragmentation of planetesimals , particularly in the oligarchic growth stage . on the other hand , since the total solid mass available for planetary accretion decreases due to rapid inward migration of fragments in the gaseous disk , the final masses of planets also decrease , unless planetary atmospheres effectively capture fragments . these conclusions were derived by simulations using statistical accretion codes ( e.g. , inaba et al . , 2003 ; chambers , 2008 ; kobayashi et al . , 2010 , 2011 ; amaro - seoane et al . , 2014 ) . in these studies , the torques on embryos exerted by small fragments were not taken into account . levison et al . ( 2010 ) showed that small fragments shepherded by embryos push back the embryos toward the central star , leading to rapid inward migration of the embryos .
on the other hand , kobayashi et al .( 2010 ) pointed out that shepherded fragments are quickly ground down due to mutual collisions and very small ground fragments collide with or migrate past the embryos before pushing back the embryos .however , this has not been confirmed by direct lagrangian type simulations and is of interest in future work .we developed a new particle - based hybrid code for planetary accretion .the code uses an -body routine for interactions with embryos while it can handle a large number of planetesimals using a super - particle approximation , in which a large number of small planetesimals are represented by a small number of tracers .tracer - tracer interactions are handled by a statistical routine .if the embryo mass is similar to tracer masses and if embryo - tracer interactions are handled by the direct -body routine , the embryo suffers artificially strong kicks from tracers . to avoid this issue ,sub - embryos are introduced .accelerations of sub - embryos due to gravitational interactions with tracers are handled by the statistical routine whereas accelerations of tracers due to gravitational interactions with sub - embryos are handled by the direct -body routine .our statistical routine first calculates the surface number densities and the orbital elements of interloping planetesimals around each target tracer ( section 2.2 ) . using the phase - averaged collision probability , whether a collision between the interloper and the target occurs is determined ( section 2.4 ) .if a collision occurs , the velocity changes are accurately calculated by matching the positions of the interloper and the target without changing their eccentricity and inclination vectors . using the phase - averaged stirring and dynamical friction rates ,the change rates of the orbital elements of the tracers are calculated ( section 2.5 ) .these rates are converted into the accelerations of the tracers using both first and second order gauss planetary equations .planetesimal - driven migration of sub - embryos is basically handled by the -body routine but the migration rate is limited to the theoretical prediction derived by the statistical routine ( section 2.6 ) .this unique routine can reproduce smooth , long - distance migrations of sub - embryos .we performed various tests using the new hybrid code : velocity evolution due to collisions , and that due to gravitational stirring and dynamical friction , self - sustained migration of sub - embryos , formation of terrestrial planets both in the presence and absence of the gaseous disk , and formation of cores of jovian planets .all the test simulations showed good agreements with analytic predictions and/or pure -body simulations for all cases , except that the last test did not have a robust benchmark .the computational load on the portion of the statistical routine is comparable to or less than that for the -body routine .the current code includes an option of hit - and - run bouncing but not fragmentation that remains for future work .we are grateful to the reviewers for their many valuable comments , which greatly improved the manuscript .we also thank satoshi inaba for kindly giving us his simulation data .this research was carried out in part at the jet propulsion laboratory , california institute of technology , under contract with nasa .government sponsorship acknowledged .simulations were performed using jpl supercomputers , aurora and halo .in section 2.2 , we only consider encounters with interlopers .however , 
planetesimals in the target may encounter with other planetesimals in the same target if . to take into account this effect, we modify apparent planetesimal numbers in interlopers , if masses of interloping planetesimals are similar to those in the target tracer . for this purpose only , we assume axisymmetric distribution of planetesimals and introduce the mass bin for planetesimals .we also use the radial bin used in the neighboring search .we first calculate the total mass of tracers , , in the radial and mass bins where the target locates .the summation is done for tracers in any azimuthal locations , excluding the target . since the actual total mass seen from an individual planetesimal in the target traceris , we use a corrected for the interloper , if it is in the same radial and mass bins with the target . if there is no other tracer in the same radial and mass bins , the correction is not applied .we find that this self - encounter effect is very small at all as we do not identify any difference in simulations with and without the correction .the effect can also be checked by simulations with a same total number of planetesimals but using different numbers of tracers .although a simulation with a small number of tracers has larger statistical fluctuations in various physical quantities , such as the mean eccentricity , any differences in time - averaged quantities are not well identified .thus , we can omit this correction , and mass bins are unnecessary in our code .with hill s approximation , the equations of motion for the relative motion between the target and the interloper are reduced to hill s equations ( hill 1878 ; nakazawa et al . , 1989 ) .the time and the distance of hill s equations can be normalized by the inverse of orbital frequency , , and , respectively , where is the mean semimajor axis and is the reduced mutual hill radius [ eq .( [ eq : rhill ] ) ] .the solution of the relative motion in the absence of mutual gravity is expressed by non - dimensional relative orbital elements , , , , and .the elements , and are given as where is the difference of the semimajor axes , the relative eccentricity and the relative inclination are defined in eq .( [ eq : relei ] ) together with the phases and .the non - dimensional collision rate is defined as where is unity for collisional orbits and zero otherwise . in the dispersion dominant regime [ , given as ( greenzweig and lissauer , 1990 ; inaba et al ., 2001 ) where with the physical radii and , and are the complete elliptic integrals of the first and second kinds , respectively , and . in the shear dominant regime [ ,if the inclination is small enough ( ; see goldreich et al .( 2004 ) for this criterion ) all pairs of particles entering the mutual hill sphere collide , and becomes independent of and as ( ida and nakazawa , 1989 ) in the shear dominant regime with moderate inclinations ( ) , depends only on as ( ida and nakazawa , 1989 ) following inaba et al .( 2001 ) , we connect the collision rates in different regimes as .\ ] ] the collision rate given by this expression well agrees with that derived by numerical integrations ( ida and nakazawa , 1989 ; greenzweig and lissauer , 1990 ) . the non - dimensional stirring and dynamical friction rates are defined as where , , and are the changes of , , and [ eq . 
( [ eq : relei ] ) ] during a single encounter .the non - dimensional dynamical friction rate for inclination is obtained by replacing by in eq .( [ eq : pdfd ] ) and is known to be the same with ( tanaka and ida , 1996 ) .the rates for the shear dominant regime are ( ida 1990 , ohtsuki et al . , 2002 ) for , we adopt a functional form that is similar to the one with the rayleigh distribution ( ohtsuki et al . ,2002 ) but the factors are modified .the rates for the dispersion dominant regime are given as ( tanaka and ida , 1996 ) \nonumber \\ & & \times \ln{(\lambda^2 + 1)},\\ q_{\rm vs , high } & = & \frac{36}{\pi \tilde{i}_{\rm r } \sqrt{\tilde{e}_{\rm r}^2 + \tilde{i}_{\rm r}^2 } } \left[k(\zeta)- \frac{12\tilde{i}_{\rm r}^2}{\tilde{e}_{\rm r}^2 + 4\tilde{i}_{\rm r}^2}e(\zeta)\right ] \nonumber \\ & & \times \ln{(\lambda^2 + 1)},\\ p_{\rm df , high } & = & \frac{288}{\pi \tilde{i}_{\rm r } ( \tilde{e}_{\rm r}^2 + 4\tilde{i}_{\rm r}^2 ) \sqrt{\tilde{e}_{\rm r}^2 + \tilde{i}_{\rm r}^2 } } e(\zeta ) \ln{(\lambda^2 + 1)},\end{aligned}\ ] ] where in and , we omit the term as it is negligible compared with in the dispersion dominant regime .we connect the stirring and dynamical friction rate at the shear and dispersion dominant regimes in a similar manner of ohtsuki et al .( 2002 ) : the coefficients are the approximated formulae well agree with the rates derived by three - body orbital integrations [ fig . 8 of ida ( 1990 ) and fig . 1 of ohtsuki et al .( 2002 ) ] , except for the region with and ( see section 3.2 ) .the non - dimensional migration rate for one - sided torque is defined as where is the change of during a single encounter and the negative sign represents the back reaction on the embryo . at , [ ida et al . , 2000; their eq .it is expected that in the shear dominant regime is about the same value as the migration rate is almost constant in this regime ( kirsh et al . 
, 2009 ) : for the dispersion dominant regime , ida et al .( 2000 ) derived the migration rate averaged over and as [ their eq .( 18 ) ] inserting this equation into eq .( [ eq : rmig ] ) , we have where the integration range of is limited between and , and is approximately treated as a constant in the integration .analogous to the stirring rates , we connect the migration rates at the shear and dispersion dominant regimes but the value is assumed not to exceed .\ ] ] while this expression must be correct in the low and high velocity limits , its accuracy at the intermediate regime needs to be confirmed by comparing with the rate derived from three - body orbital integrations .when the non - dimensional collisional and stirring rates are derived , uniformities of and are assumed .however , we do not assume uniformities of the relative longitudes , and .alignments of the longitudes occur , for example , if the system is perturbed by a giant planet and the forced eccentricity is much larger than the free eccentricities .ida ( 1990 ) showed that the eccentricity evolution of a particle due to dynamical friction of particles is {\rm df } \label{eq : dfj0 } \\ & = & \frac{c_1}{\nu_{j}h_{ij}^2}\left[\nu_{j}{e}_{j}^2 - \nu_{i}e^2_{i } + ( \nu_{i}-\nu_{j})e_{i}e_{j}\cos{\varpi_{ij } } \right]p_{\rm df } , \label{eq : dfj1 } \end{aligned}\ ] ] where the characters appeared in these equations are defined around eqs .( [ eq : dedfs ] ) and ( [ eq : dedfd ] ) .if s of interlopers are randomly distributed and the distribution of is the rayleigh distribution , the term with becomes small enough after averaging over interlopers ( ida , 1990 ) .the equilibrium state for this case leads to the well - known energy equipartitioning ( ) with dynamical friction only . with viscous stirring ,however , the energy equipartitioning is achieved only if the mass distribution is very steep ( rafikov , 2003 ) . if the distribution of is not random , as may be the case of a forced system , the term with in eq .( [ eq : dfj1 ] ) can not be ignored . in a forced system ,the eccentricity vectors may be split as and , where is the forced eccentricity vector , and are the free eccentricity vectors , respectively ( murray and dermott , 1999 ) . inserting these forms into eq .( [ eq : dfj0 ] ) and setting due to the uniformity of , we can find an equation similar to eq .( [ eq : dfj1 ] ) but and replaced by the free eccentricities and and replaced by the relative longitude of the free eccentricities . since there is no correlation between the directions of and , the third term in the square bracket of the r.h.s of eq .( [ eq : dfj1 ] ) can be neglected .thus , dynamical friction tends to lead the energy equipartitioning of the free eccentricities , if is shared for all particles . a similar conclusion can be derived for inclinations .now we consider splitting eq .( [ eq : dfj1 ] ) into two terms : a stirring term that is preferably larger than the viscous stirring term [ as required in eq .( [ eq : vrs ] ) ] and a damping term that decreases the free eccentricity .we find the forms that can fulfill such conditions as the same equations are given in eqs .( [ eq : dedfs ] ) and ( [ eq : dedfd ] ) . in a forced system , the terms in the parenthesis of the r.h.s . of eq .( [ eq : dedfdap ] ) are reduced to using and due to the uniformity of . the l.h.s . of eq .( [ eq : dedfdap ] ) is also reduced to because the orientation of changes randomly due to gravitational interactions with other tracers [ . 
thus , the damping term [ eq .( [ eq : dedfdap ] ) ] decreases the free eccentricity , as required . usually in astrophysics, only this damping term is called dynamical friction ( binney and tremaine , 1987 ) . a similar splitting is possible for inclinations , and the split termsare given by eqs .( [ eq : didfs ] ) and ( [ eq : didfd ] ) .the classical theory of random walk shows that the diffusion coefficient for the one dimensional problem is , where is the change in the semimajor axis for the tracer during the time step and the angle brackets mean averaging over multiple changes .the change of due to a single encounter with other planetesimal is defined to be , where is again the change of during an encounter .thus , the diffusion coefficient for the tracer due to encounters with planetesimals in the tracer is given as where is the change of during and the subscript for the angle brackets means averaging over planetesimals in the tracer .petit and hnon ( 1987 ) performed three - body orbital integrations and showed that for .although it is unclear if this factor is applicable for , we use the relationship for any cases .this gives /2 , using .the averaged change is given by where we used that is derived from the jacobi integral of hill s equations [ eq .( [ eq : jaco ] ) ] , and and are defined in eqs .( [ eq : pvs3 ] ) and ( [ eq : qvs3 ] ) . inserting eq .( [ eq : db2 ] ) into eq .( [ eq : did ] ) , the diffusion coefficient is given as finally , the diffusion coefficient due to encounters with all interlopers is given by .ohtsuki and tanaka ( 2003 ) derived the viscosity for the equal - mass planetesimals disk and their viscosity is .the diffusion coefficients derived by ormel et al .( 2012 ) and glaschke et al .( 2014 ) are similar to ours .in section 2.4 , we show how the total number of tracers changes due to merging between planetesimals . on the other hand , in hit - and - run collisions, there should be no change in the number . before describing how to handle hit - and - run collisions between planetesimals in two tracers ,we first show how to handle a hit - and - run collision between embryos or between a full embryo and a tracer .consider an impact between the target with mass and the impactor with mass ( ) .let the impact velocity and the impact angle be and ( for a head - on collision ) .if , where is the critical velocity , the collision results in escape of the projectile after bouncing on the target . based on thousands of sph simulations , genda et al . ( 2012 ) derived the formula of as where is the mutual escape velocity , , and .the coefficients are , , , , and . if the impact is judged as a hit - and - run collision ( ) , the post - impact velocities of the projectile and the target are determined as follows .we do not consider any mass exchanges between the target and the impactor for simplicity following kokubo and genda ( 2010 ) .the impact velocity vector is decomposed to the normal and tangential components ( and ) relative to the vector pointing toward the center of the target from the center of the impactor .the tangential component of the post - impact relative velocity is assumed to be the same as .the normal component of the post - impact relative velocity is given as the velocity change described here is similar to but slightly different from that of kokubo and genda ( 2010 ) , who always set and adjust .the normal and restitution coefficients , and , are defined as and . 
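the velocity update for a bounce can be illustrated as follows ; the sign conventions and the momentum split between the two bodies are our own choices for a self - contained sketch , with the tangential component kept ( as in the text ) and the normal component reversed and scaled by the normal restitution coefficient .

```python
import numpy as np

def bounce_velocity_changes(v_rel, n_hat, m_target, m_impactor, eps_n=1.0, eps_t=1.0):
    """Hit-and-run bounce: decompose the impact velocity v_rel (impactor relative to
    target) into components normal and tangential to the unit vector n_hat joining
    the two centers, reverse and scale the normal part by eps_n, scale the tangential
    part by eps_t, and distribute the change so that total momentum is conserved.
    Returns the velocity changes of the target and the impactor."""
    v_rel = np.asarray(v_rel, dtype=float)
    n_hat = np.asarray(n_hat, dtype=float)
    v_n = np.dot(v_rel, n_hat) * n_hat          # normal component
    v_t = v_rel - v_n                           # tangential component
    dv_rel = (eps_t * v_t - eps_n * v_n) - v_rel
    m_tot = m_target + m_impactor
    dv_target = -m_impactor / m_tot * dv_rel
    dv_impactor = +m_target / m_tot * dv_rel
    return dv_target, dv_impactor
```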
while the above case basically assumes and , these coefficientscan be changed to any values in simulations ( in section 3.1 , we use and 0.8 ) .now we consider hit - and - run collisions between planetesimals in two tracers .we will handle them as consistent as possible with those between embryos .if a collision is judged to occur in the statistical routine , the target and the impactor are moved to the impact position without changing their eccentricity and inclination vectors , as described in section 2.4.2 .the relative velocity , , at infinity ( without acceleration due to mutual gravity ) is calculated at this impact position . if , only some fraction of planetesimals in the tracer are involved in the collision ( fig .the impact velocity is given as .we ignore deflection of the relative orbit during acceleration due to mutual gravity so that the direction of the impact velocity vector is identical to that of the relative velocity vector .the problem is that we can not directly use the relative position vector of the tracers because their mutual separation is not exactly the sum of the radii of a colliding pair after matching their positions . instead of using the actual relative position vector, we make a fictitious relative position vector . the fictitious impact angle is given by .assuming there is no correlation between and , the probability that the impact angle is between and is given as ( e.g. , shoemaker and wolfe , 1982 ) .we choose an arbitrary between 0 and 90 degrees following this distribution and also choose the tangential direction of relative to at random .then , we judge whether the collision results in merging or bouncing using eq .( [ eq : gen ] ) in which is replaced by . if it is merging , we follow the process described in section 2.4.2 , using as the impact velocity vector .if it turns out to be a hit - and - run collision , we calculate the post impact velocities using and from eq .( [ eq : vnd ] ) .then their velocities are modified so that their relative velocity at infinity is given as , again ignoring deflection of the orbits due to mutual gravity . if the interloper is split into two tracers [ case ( a ) in eq .( [ eq : delm ] ) ] before the collision , we merge them by summing their momenta after matching their positions .the merged tracer has the same and as those of the original interloper before splitting .the semimajor axis of the interloper is adjusted so that the -component of the total angular momentum of is conserved .finally , the mean longitudes of the target and interloper are placed to the original values .here we explain how to derive the correction factor appeared in eq .( [ eq : nj2 ] ) .let us define the inner and outer radii of the region for neighboring search around the embryo be and ( fig . 2 ) .the interloper has the semimajor axis , the eccentricity , and the inclination .the maximum and minimum distances of the interloper , and , from the central star projected on the invariant plane ( ) are given by if , we define the eccentric anomaly ( ) at as and the mean anomaly is also defined as . if , we set . in a similar manner ,if , we define the eccentric anomaly ( ) at as and the mean anomaly . if , we set . then , the period in which the interloper is radially in the region during the orbital period is given as we summarize the differences between the lipad code ( ldt12 ) and our new code . 
in the lipad code , to derive the spatial density of planetesimals ,tracers are placed in mass and radial grids and the vertical distribution is assumed to be gaussian .our code has neither these fixed grids nor the assumption for the vertical distribution ( except for the global gravity calculation described in section 2.7 ) .thus , our code has lagrangian natures more than the lipad code , although these differences are probably not fundamental for the test simulations shown in this paper .our scheme is more beneficial for systems with non - axisymmetric and/or inclined structures .the advantage of the assumption of the vertical distribution in the lipad code is that changes in collision and stirring rates at different are explicitly taken into account .changes in collision rates with z are taken into account in our model too , although implicitly , as we move both the target and the interloper to the collisional point ( section 2.4 ) ; collisions occur more often near the mid - plane than at high . in the stirring routine ( section 2.5 ), our method ignores this effect and this probably introduces some inaccuracies in orbital evolutions of individual particles .nevertheless , the velocity evolution of the system as a whole is reproduced reasonably well by our method as shown in the test simulations ( see more discussion for viscous stirring and dynamical friction given below ) .the number of planetesimals in a tracer , , is an integer in our model ( section 2.1 ) , contrary to ldt12 .if a tracer with a non - integer number of planetesimals is promoted to a single sub - embryo , the extra mass , ( if ) , needs to be incorporated into the sub - embryo or transferred to other tracers which may have very different planetesimal masses . both are not accurate although it is not clear to us how it is handled in the lipad code . in our code , orbital isolation between planetesimalsdoes not occur as we choose a narrow radial half width of the neighboring search region given by eq .( [ eq : drt0 ] ) .if we adopt a larger , orbital isolation can potentially occur . in this casewe may need to consider how isolated bodies interact through distant perturbations .in general , however , the radial resolution is sacrificed with a large . in our current scheme ,interactions between runaway bodies are approximately represented by occasional close encounters rather than distant perturbations .the lipad code explicitly handles orbital isolation and distant perturbations ( e.g. , their fig .this means that they use the radial grid size larger than ours .even with a large , the exitisting lagrangian codes can not handle orbital isolation of planetesimals that are much less massive than , because a small number of runaway bodies can not be represented by tracers with nearly fixed masses .collisions between tracers and sub - embryos are handled in the -body routine in the lipad code .thus , after the promotion the first impact of a tracer doubles the mass of a sub - embryo .this makes artificial kinks in mass spectra at the transition mass , .the influence of this effect on evolution of the overall system is small as discussed by ldt12 , particularly if the masses of the largest embryos are much larger than . 
in our method ,collisions of planetesimals with sub - embryos are handled by the statistical routine ( secs .2.4 and 2.6 ) .the mass spectra are smooth at ( see fig .12 ) , because only some of planetesimals in a tracer collide with a sub - embryo .note that we assume uniformities in phase space ( section 2.3 ) that seem to be valid at least in test simulations shown in this paper , while no assumption for phase - space is necessary for the -body routine .the impact velocity between tracers is calculated more accurately in our code than the lipad code .as described in section 2.4 , we estimate the impact velocity between tracers by matching their positions without changing their eccentricity and inclination vectors . in the lipad code , only the interloper s longitude is aligned to that of the target by rotating the interloper s coordinates , not letting the interloper move along its keplerian orbit .this changes the longitude of pericenter of the interloper by the rotation angle .ldt12 managed to minimize this issue by choosing the azimuthally closest interloper for the collision with the target , as originally done in levison and morbidelli ( 2007 ) . on the other hand , our method allows any interlopers to collide with the target .the timescale of collisional damping shown in fig .2 of ldt12 is clearly too short , probably because they described the initial condition of a wrong simulation .the largest difference between our code and the lipad code probably appears in the methods to handle viscous stirring and dynamical friction . in the lipad code ,the probability that an interloper enters within a mutual hill radius of the target is calculated .if an encounter occurs , the acceleration of a smaller body of the pair is given by solving a pair - wise gravitational interaction .the acceleration of a lager body of the pair is not calculated in this routine , but the damping forces are given by the chandrasekhar s formula of dynamical friction .instead of taking into account stirring on the larger body , they set lower limits of and of the larger body based on energy equipartitioning .some inaccuracies seem to be introduced in their approach .first , energy equipartitioning is not usually achieved except with a very steep mass spectrum ( rafikov , 2003 ) .second , the chandrasekhar s formula of dynamical friction is not applicable to the shear - dominant regime ( e.g. , ida , 1990 ; ohtsuki et al . , 2002 ) .in our method , we do not solve individual pair - wise gravitational interactions .instead , we calculate the time averaged changes of the orbital elements due to multiple encounters with tracers , using the phase - averaged stirring rates ( section 2.5.1 ). these changes of orbital elements are then converted into the accelerations of tracers ( section 2.5.2 ) .this approach allows us to handle viscous stirring , the stirring and damping parts of dynamical friction due to interactions between all pairs in the same manners regardless of their mass ratios .our approach is based on uniformities of the phases ( section 2.3 ) , and methods which explicitly solves pair - wise gravitational interactions ( i.e. 
, the acceleration of a smaller body in the lipad code ) are probably more accurate than ours .we would like to point out , however , that the lipad code also partly assumes uniformities of the phases because it uses the phase - averaged collision / encounter probability of greenzweig and lissauer ( 1990 ) ; note that we use and in the lipad code are the same quantity in different formats .ldt12 included planetesimal - driven migration of sub - embryos by adding torques on them calculated by a series of three - body integrations ( sun , the target , and the interloper ) during lipad simulations .each interloper is randomly chosen from one in seven hill radii of the target .it has the same semimajor axis , orbital eccentricity , and inclination as one of the tracers but its phases are modified . as far as we understand, their point is that the torque on the sub - embryo is smoothed by phase averaging. this may be true in the dispersion dominant regime ( ) . in the shear dominant regime, however , the phase dependence of the torque is weak and the averaged toque turns out to be as strong as the torque from a single interloping tracer at all [ see fig . 8 of ohtsuki and tanaka ( 2003 ) , for example ] .thus , sub - embryos should occasionally suffer strong kicks from tracers . although it is unclear to us whether the point described above is indeed an issue for the lipad code , sub - embryos in lipad simulations ( ngo , 2012 ) look radially much more stationary than those in our simulations ( section 3.6 ) . in our method ,the angular momentum change of a sub - embryo is stored in the -body routine but its release rate is limited to the theoretical prediction derived by the statistical routine ( section 2.6.2 ) .theoretical justifications of our method are discussed in section 3.3 .the lipad code performs a series of three - body integrations during a simulation for two cases .the first one is for the routine of sub - embryo migration .the another one is for the stirring routine in the shear dominant regime .our code does not need to perform three body integrations .migration of sub - embryos are handled as explained above . stirring in the shear dominant regimeis handled using the approximated formulae of the phase - averaged stirring rates ( section 2.5 and appendix b ) .adachi , i. , hayashi , c. , nakazawa , k. 1976 .the gas drag effect on the elliptical motion of a solid body in the primordial solar nebula .56 , 17561771 .kortenkamp , s.j . , wetherill g.w , 2000 . terrestrial planet and asteroid formation in the presence of giant planets .i. relative velocities of planetesimals subject to jupiter and saturn perturbations .icarus 143 , 6073 .morishima , r. , schmidt , m.w . ,stadel , j. , moore , b. 2008 . formation and accretion history of terrestrial planets from runaway growth through to late time : implications for orbital eccentricity .j. 685 , 12471261 .nagasawa , m. , lin , d.n.c . ,thommes , e.w . , 2005 .dynamical shake - up of planetary systems .i. embryo trapping and induced collisions by the sweeping secular resonance and embryo - disk tidal interaction .j. 635 , 578598 .
We introduce a new particle-based hybrid code for planetary accretion. The code uses an N-body routine for interactions with planetary embryos, while it can handle a large number of planetesimals through a super-particle approximation, in which many small planetesimals are represented by a small number of tracers. Tracer-tracer interactions are handled by a statistical routine which uses the phase-averaged stirring and collision rates. We compare hybrid simulations with analytic predictions and pure N-body simulations for various problems in detail and find good agreement in all cases. The computational load of the statistical routine is comparable to or less than that of the N-body routine. The present code includes an option for hit-and-run bouncing but not fragmentation, which remains for future work.
Accretion; planetary formation; planetary rings; planets, migration; origin, Solar System
ime - domain simulation is an important approach for power system dynamic analysis .however , the complete system model , or interchangeably the long - term stability model , typically includes different components where each component requires several differential and algebraic equations ( dae ) to represent , at the same time , these dynamics involve different time scales from millisecond to minute . as a result, the total number of dae of a real power system can be formidably large and complex such that time domain simulation over long time intervals is expensive .these constraints are even more stringent in the context of on - line stability assessment .intense efforts have been made to accelerate the simulation of long - term stability model .one approach is to use a larger time step size to filter out the fast dynamics or use automatic adjustment of step size according to system behavior in time - domain simulation from the aspect of numerical method .another approach is to implement the quasi steady - state ( qss ) model in long - term stability analysis from the aspect of model approximation .nevertheless , the qss model suffers from numerical difficulties when the model gets close to singularities which were addressed in - .moreover , the qss model can not provide correct approximations of the long - term stability model consistently as numerical examples shown in .in addition , sufficient conditions of the qss model were developed in which pointed to a direction to improve the qss model . as a result, the qss model requires improvements in both model development and numerical implementation .this paper contributes to the latter one . in this paper, we apply pseudo - transient continuation ( ) which is a theoretical - based numerical method in power system long - term stability analysis .pseudo - transient continuation method can be implemented directly in the long - term stability model to accelerate simulation speed compared with conventional implicit integration method . on the other hand, the method can also be applied in the qss model to overcome possible numerical difficulties due to good stability property .this paper is organized as follows .section [ sectiondyptc ] briefly reviews general pseudo - transient continuation method in dae system .section [ sectionptcinpowersystem ] includes a introduction about power system models followed by implementation of pseudo - transient continuation method in the long - term stability model and the qss model respectively .section [ sectionnumerical ] presents three numerical examples to show the feasibility of the method . andconclusions are stated in section [ sectionconclusion ] .pseudo - transient continuation is a physically - motivated method and can be used in temporal integration .the method follows the solution of dynamical system accurately in early stages until the steady state is approaching .the time step is thereafter increased by sacrificing temporal accuracy to gain rapid convergence to steady state .if only the steady state of a dynamical system instead of intermediate trajectories is of interest , pseudo - transient continuation method is a better choice than accurate step - by - step integration . 
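To make this idea concrete, below is a minimal sketch of the standard pseudo-transient continuation iteration for an ODE-type steady state, following the Kelley-Keyes formulation with the switched evolution relaxation (SER) step-size rule. The scaling matrix used for the semi-explicit index-one DAE case and the paper's exact step-length update may differ from this simplified version; it is given only to show the character of the method, which behaves like an implicit time step when the pseudo-time step is small and like a Newton iteration when it is large.

```python
import numpy as np

def ptc_steady_state(F, J, x0, delta0=1e-3, tol=1e-8, max_iter=200, delta_max=1e6):
    """Pseudo-transient continuation for a steady state F(x) = 0.

    At each step solve (I/delta_k + J(x_k)) s = -F(x_k), update x, and grow
    delta with the SER rule delta_{k+1} = delta_k * ||F(x_{k-1})|| / ||F(x_k)||.
    Small delta follows the transient accurately; large delta approaches a
    pure Newton step for the steady state."""
    x = np.asarray(x0, dtype=float)
    delta = delta0
    res_old = np.linalg.norm(F(x))
    for _ in range(max_iter):
        A = np.eye(x.size) / delta + J(x)
        s = np.linalg.solve(A, -F(x))
        x = x + s
        res = np.linalg.norm(F(x))
        if res < tol:
            break
        delta = min(delta * res_old / max(res, 1e-300), delta_max)  # SER update
        res_old = res
    return x
```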
on the other hand ,compared with methods that solve nonlinear equations for steady state such as line - search and trust region methods , pseudo - transient continuation method can avoid converging to nonphysical solutions or stagnating when the jacobian matrix is singular .this is particularly the case when the system has complex features such as discontinuities which exist in power system models .therefore , method can be regarded as a middle ground between integrating accurately and calculating the steady state directly . method can help reach the steady state quickly while maintain good accuracy for the intermittent trajectories .for ode dynamics , sufficient conditions for convergence of were given in .the results were further extended the semi - explicit index - one dae system in .we recall the basic algorithm here .we consider the following semi - explicit index - one dae system : with initial value . here , , ^t \in \re^{n_1+n_2} ] , ^t\in\re^{p+m+n} ] , ^t\in\re^{p+m+n}$ ] , , where is the identity matrix of size .in order to make the initial condition of consistent , we switch back to implicit integration method whenever discrete variables jump and set the step length to be . besides , the qss model is implemented at short - term dynamics settle down after the contingency .usually , can be set as 30s .the proposed algorithm is shown as below .run the long - term stability model up to by implicit integration method . set the value at as the initial condition of the qss model , and set .start to run the qss model .if the qss model has a numerical difficulty by using implicit integration method , then go to step 3 , otherwise , continue with the qss model . while is too large .if discrete variables jump at update according to ( [ coupleqss1 ] ) .set , .solve the newton step .set .evaluate .otherwise set .solve the newton step .set .evaluate .update according to ( [ delta ] ) . similarly , and depend on the specific integration method used .if implicit trapezoidal method is used , then this section , three examples are to be presented .the first two examples were the same 145-bus system in which the qss model met numerical difficulties during simulation while the long - term stability model was stable in long - term time scale .firstly , method was implemented in the long - term stability model and the speed was more than 7 times faster than the trapezoidal method . secondly ,when the qss model by trapezoidal method met difficulty , method was implemented in the qss model and provided correct approximations and the speed was still more than 5 times faster than the long - term stability model by trapezoidal method . and in the last example which was a 14-bus system , the long - term stability model was unstable and method successfully captured the instability which was signaled by reaching the bound of maximum iteration in the newton step .all simulations were done in psat-2.1.6 .the system was a 145-bus system .there were exciters and power system stabilizers for each of generator 1 - 20 .and there were turbine governors for each of generator 30 - 40 .besides , there were three load tap changers at lines between bus 73 - 74 , bus 73 - 81 and bus 90 - 92 respectively .the contingency was a line loss between bus 1 - 6 . 
in this case , the post - fault system was stable after the contingency in the long - term time scale and it took 122.39s for the time domain simulation of the long - term stability model by implicit trapezoidal method .however , method only took 16.12s for the simulation of the long - term stability model which was about of the time consumed by trapezoidal method .[ tdsptc ] shows that the trajectories by method followed closely to the trajectories by trapezoidal method , and finally both converged to the same long - term stable equilibrium point .thus method provided correct approximations for the long - term stability model in terms of trajectories and stability assessment . method . method provided correct approximations.,width=172 ] method . method provided correct approximations.,width=172 ] method . method provided correct approximations.,width=172 ] method . method provided correct approximations.,width=172 ] in this example , the system was the same as the last case . however , the qss model met numerical difficulties at 40s when implicit trapezoidal method was used , thus method was implemented in the qss model starting from 40s .[ tdsptcqss ] shows that the trajectories by method converged to the long - term stable equilibrium point which the long - term stability model converged to , and also provided good accuracy for the intermittent trajectories .it took 21.75s for method which was about of the time consumed by the long - term stability model using implicit trapezoidal method . method . method overcame numerical difficulties and provided correct approximations.,width=172 ] method . method overcame numerical difficulties and provided correct approximations.,width=172 ] method . method overcame numerical difficulties and provided correct approximations.,width=172 ] method . method overcame numerical difficulties and provided correct approximations.,width=172 ] in this case , the 14-bus system was long - term unstable due to wild oscillations of fast variables .the system was modified based on the 14-bus test system in psat-2.1.6 .there were three exponential recovery loads at bus 9 , 10 and 14 respectively and two turbine governors at generator 1 and 3 .besides , there were over excitation limiters at all generators and three load tap changers at lines between bus 4 - 9 , bus 12 - 13 and bus 2 - 4 .the system suffered from long - term instabilities and simulation by implicit trapezoidal method could not continue after 101.22s . method also stopped at 103.34s when the bound on the total number of iterations for the newton step was reached .thus method was able to capture instabilities of the long - term stability model . method . method was able to capture instabilities of the long - term stability model.,width=172 ] method . method was able to capture instabilities of the long - term stability model.,width=172 ] method . method was able to capture instabilities of the long - term stability model.,width=172 ] method . method was able to capture instabilities of the long - term stability model.,width=172 ]in this paper , modified methods for the long - term stability model and the qss model are given for power system long - term stability analysis with illustrative numerical examples .we make use of the fast asymptotic convergence of method in the long - term stability model to achieve fast simulation speed . 
on the other hand , we take advantage of good stability property of method in the qss model to overcome numerical difficulties .numerical examples show that can successfully provide correct stability assessment for the long - term stability model and overcome numerical difficulties in the qss model , as well as offer good accuracy for the intermediate trajectories . can be regarded as a good in - between method with respect to integration and steady state calculation , thus serves as an alternative method in power system long - term stability analysis .this work was supported by the consortium for electric reliability technology solutions provided by u.s .department no .de - fc26 - 09nt43321 .kurita , a. , h. okubo , k. oki , s. agematsu , d. b. klapper , n. w. miller , w. w. price jr , j. j. sanchez - gasca , k. a. wirgau , and t. d. younkins , _ multiple time - scale power system dynamic simulation _ , ieee transactions on power systems , vol216 - 223 , 1993 .t. v. cutsem , m. e. grenier , d. lefebvre , _ combined detailed and quasi steady - state time simulations for large - disturbance analysis_. international journal of electrical power and energy systems , vol .28 , issue 9 , pp .634 - 642 , november 2006 .x. z. wang , h. d. chiang , _ numerical investigations on quasi steady - state model for voltage stability : limitations and nonlinear analysis_. submitted to international transactions on electrical energy systems .x. z. wang , h. d. chiang , _ analytical studies of quasi steady - state model in power system long - term stability analysis_. to appear in ieee transactions on circuits and systems i : regular papers .doi 10.1109/tcsi.2013.2284171 . c. t. kelley , d. e. keyes _ convergence analysis of pseudo - transient continuation_. siam journal on numerical analysis , vol .508 - 523 , april 1998 .v. vittal , d. martin , r. chu , j. fish , j. c. giri , c. k. tang , f. eugenio villaseca , and r. g. farmer ._ transient stability test systems for direct stability methods_. ieee transactions on power systems , vol . 7 , no .37 - 43 , feb . 1992
In this paper, the pseudo-transient continuation method has been modified and implemented in power system long-term stability analysis. This method is a middle ground between integration and steady-state calculation, and thus offers a good compromise between accuracy and efficiency. The pseudo-transient continuation method can be applied directly in the long-term stability model to accelerate the simulation and can also be implemented in the QSS model to overcome numerical difficulties. Numerical examples show that the method provides correct approximations for the long-term stability model in terms of both trajectories and stability assessment.
Pseudo-transient continuation, long-term stability model, quasi steady-state model, long-term stability.
about ten years ago , a peculiar dynamical phenomenon was discovered in populations of identical phase oscillators : under nonlocal symmetric coupling , the coexistence of coherent ( synchronized ) and incoherent oscillators was observed .this highly counterintuitive phenomenon was given the name chimera state after the greek mythological creature made up of different animals . sincethen the study of chimera states has been the focus of extensive research in a wide number of models , from kuramoto phase oscillators to periodic and chaotic maps , as well as stuart - landau oscillators .the first experimental evidence of chimera states was found in populations of coupled chemical oscillators as well as in optical coupled - map lattices realized by liquid - crystal light modulators .recently , moreover , martens and coauthors showed that chimeras emerge naturally from a competition between two antagonistic synchronization patterns in a mechanical experiment involving two subpopulations of identical metronomes coupled in a hierarchical network . in the context of neuroscience, a similar effort has been undertaken by several groups , since it is believed that chimera states might explain the phenomenon of unihemispheric sleep observed in many birds and dolphins which sleep with one eye open , meaning that one hemisphere of the brain is synchronouns whereas the other is asynchronous .the purpose of this paper is to make a contribution in this direction , by identifying for the first time a variety of single and multi - chimera states in networks of non - locally coupled neurons represented by hindmarsh rose oscillators .recently , multi - chimera states were discovered on a ring of nonlocally coupled fitzhugh - nagumo ( fhn ) oscillators .the fhn model is a 2dimensional ( 2d ) simplification of the physiologically realistic hodgkin - huxley model and is therefore computationally a lot simpler to handle .however , it fails to reproduce several important dynamical behaviors shown by real neurons , like rapid firing or regular and chaotic bursting .this can be overcome by replacing the fhn with another well - known more realistic model for single neuron dynamics , the hindmarsh - rose ( hr ) model , which we will be used throughout this work both in its 2d and 3d versions . 
in section 2 of this paper, we first treat the case of 2d - hr oscillators represented by two first order ordinary differential equations ( odes ) describing the interaction of a membrane potential and a single variable related to ionic currents across the membrane under periodic boundary conditions .we review the dynamics in the 2d plane in terms of its fixed points and limit cycles , coupling each of the oscillators to nearest neighbors symmetrically on both sides , in the manner adopted in , through which chimeras were discovered in fhn oscillators .we identify parameter values for which chimeras appear in this setting and note the variety of oscillating patterns that are possible due to the bistability features of the 2d model .in particular , we identify a new `` mixed oscillatory state '' ( mos ) , in which the desynchronized neurons are uniformly distributed among those attracted by a stable stationary state .furthermore , we also discover chimera like patterns in the more natural setting where only the membrane potential variables are coupled with of the same type .next , we turn in section 3 to the more realistic 3d - hr model where a third variable is added representing a slowly varying current , through which the system can also exhibit bursting modes . here , we choose a different type of coupling where the two first variables are coupled symmetrically to of their own kind and observe only states of complete synchronization as well as mos in which desynchronized oscillators are interspersed among neurons that oscillate in synchrony . however , when coupling is allowed only among the first ( membrane potential ) variables chimera states are discovered in cases where spiking occurs within sufficiently long bursting intervals .finally , the paper ends with our conclusions in section 4 .following the particular type of setting proposed in we consider nonlocally coupled hindmarsh - rose oscillators , where the interconnections between neurons exist with nearest neighbors only on either side as follows : \label{eq:01 } \\\dot y_k & = & 1 - 5x_k^2-y_k+ \frac{\sigma_y}{2r}\sum_{j = k - r}^{j = k+r } [ b_{yx}(x_j - x_k)+b_{yy}(y_j - y_k ) ] .\label{eq:02 } \end{aligned}\ ] ] in the above equations is the membrane potential of the -th neuron , represents various physical quantities related to electrical conductances of the relevant ion currents across the membrane , , and are constant parameters , and is the external stimulus current . each oscillator is coupled with its nearest neighbors on both sides with coupling strengths .this induces nonlocality in the form of a ring topology established by considering periodic boundary conditions for our systems of odes . as in ,our system contains not only direct and coupling , but also cross - coupling between variables and .this feature is modeled by a rotational coupling matrix : depending on a coupling phase . inwhat follows , we study the collective behavior of the above system and investigate , in particular , the existence of chimera states in relation to the values of all network parameters : , , , and .more specifically , we consider two cases : ( ) direct and cross - coupling of equal strength in both variables ( ) and ( ) direct coupling in the variable only ( , ) . similarly to we shall use initial conditions randomly distributed on the unit circle ( ) . at of eqs .( [ eq:01]),([eq:02 ] ) ( left ) and selected time series ( right ) for .( a ) , ( b ) and ( c ) . 
and .,scaledwidth=70.0% ] typical spatial patterns for case ( ) are shown on the left panel of fig .[ fig : fig1 ] , where the variable is plotted over the index number at a time snapshot chosen after a sufficiently long simulation of the system . in this figurethe effect of different values of the phase is demonstrated while the number of oscillators and their nearest neighbors , as well as the coupling strengths are kept constant . for example , for ( fig .[ fig : fig1](a ) ) an interesting novel type of dynamics is observed that we shall call `` mixed oscillatory state '' ( mos ) , whereby nearly half of the are attracted to a fixed point ( at this snapshot they are all at a value near ) , while the other half are oscillating interspersed among the stationary ones . from the respective time series ( fig . [fig : fig1](a ) , right ) it is clear that the former correspond to spiking neurons whereas the latter to quiescent ones .this interesting mos phenomenon is due to the fact that , in the standard parameter values we have chosen , the uncoupled hr oscillators are characterized by the property of _bistability_. clearly , as shown in the phase portrait of fig .[ fig : fig2 ] , each oscillator possesses three fixed points : the leftmost fixed point is a stable node corresponding to the resting state of the neuron while the other two correspond to a saddle point and an unstable node and are therefore repelling . for ( which is the case here )a stable limit cycle also exists which attracts many of the neurons into oscillatory motion , rendering the system bistable and producing the dynamics observed in fig .[ fig : fig1](a ) .now , when a positive current is applied , the -nullcline is lowered such that the saddle point and the stable node collide and finally vanish . in this casethe full system enters a stable limit cycle associated with typical spiking behaviour .similar complex patterns including mos and chimeras have been observed in this regime as well . .three fixed points coexist with a stable limit cycle.,scaledwidth=40.0% ] for , on the other hand , there is a `` shift '' in the dynamics of the individual neurons into the spiking regime , as seen in fig . [fig : fig1](b ) ( right ) .the corresponding spatial pattern has a wave - like form of period 2 .then , for a classical chimera state with two incoherent domains is observed ( ( fig .[ fig : fig1](c ) , left ) . diagonal coupling ( , ) is , therefore , identified as the necessary condition to achieve chimera states . by contrast , it is interesting to note that in nonlocally coupled fitz - hugh nagumo oscillators it has been shown both analytically and numerically that chimera states occur for _ off - diagonal _ coupling . by decreasing the range of coupling and increasing the system size , chimera states occur with multiple domains of incoherence and coherence for ( fig .[ fig : fig3](c , d ) ) , and , accordingly , periodic spatial patterns with larger wave numbers arise for , as seen in fig .[ fig : fig3](a , b ) .this is in agreement with previous works reported in .next we consider the case ( ) where the coupling term is restricted to the variable .this case is important since incorporating the coupling in the voltage membrane ( ) alone is more realistic from a biophysiological point of view . 
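For reference, a minimal Python sketch of the nonlocally coupled ring of eqs. ([eq:01])-([eq:02]) is given below; setting sigma_y = 0 corresponds to the x-only coupling of case (ii) just introduced. The fast-variable equation dx/dt = y - a x^3 + b x^2 + I with a = 1, b = 3 and the rotational form of the coupling matrix are assumptions based on the standard 2D Hindmarsh-Rose model and the FHN-type coupling scheme cited above; the parameter values are illustrative, not those used for the figures.

```python
import numpy as np

def hr2d_ring_rhs(state, N, R, sigma_x, sigma_y, phi, I_ext=0.0, a=1.0, b=3.0):
    """Right-hand side of N nonlocally coupled 2D Hindmarsh-Rose oscillators
    on a ring, each coupled to its R nearest neighbours on either side
    through the rotational matrix B(phi)."""
    x, y = state[:N], state[N:]
    bxx, bxy = np.cos(phi), np.sin(phi)
    byx, byy = -np.sin(phi), np.cos(phi)
    cx = np.zeros(N)
    cy = np.zeros(N)
    for offset in range(-R, R + 1):            # periodic boundary conditions
        if offset == 0:
            continue
        xj, yj = np.roll(x, -offset), np.roll(y, -offset)
        cx += bxx * (xj - x) + bxy * (yj - y)
        cy += byx * (xj - x) + byy * (yj - y)
    dx = y - a * x**3 + b * x**2 + I_ext + sigma_x / (2.0 * R) * cx
    dy = 1.0 - 5.0 * x**2 - y + sigma_y / (2.0 * R) * cy
    return np.concatenate([dx, dy])
```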
in fig .[ fig : fig4 ] spatial plots ( left ) and the corresponding -plane ( right ) for increasing coupling strength are shown .chimera states ( fig .[ fig : fig4 ] ( b , c ) ) are observed as an intermediate pattern between desynchronization ( fig .[ fig : fig4 ] ( a ) ) and complete synchronization ( fig .[ fig : fig4 ] ( d ) ) . of eqs .( [ eq:01]),([eq:02 ] ) at for . ( top ) and ( bottom ) whereas ( left ) and ( right ) . in ( a ) and ( b ) , in ( c ) , and in ( d ) .,scaledwidth=50.0% ] ( left ) and in the -plane ( right ) of eqs . ( [ eq:01]),([eq:02 ] ) at for , and . (a ) , ( b ) , ( c ) and ( d ) .,scaledwidth=40.0% ]in order to complete our study of the hindmarsh - rose model we shall consider , in this section , its three - dimensional version . the corresponding equations read : the extra variable represents a slowly varying current , which changes the applied current to and guarantees firing frequency adaptation ( governed by the parameter ) as well as the ability to produce typical bursting modes , which the 2d model can not reproduce .parameter controls the transition between spiking and bursting , parameter determines the spiking frequency ( in the spiking regime ) and the number of spikes per bursting ( in the bursting regime ) , while sets the resting potential of the system .the parameters of the fast system are set to the same values used in the two - dimensional version ( , ) and typical values are used for the parameters of the -equation ( , , ) .the 3d hindmarsh - rose model exhibits a rich variety of bifurcation scenarios in the parameter plane .thus , we prepare all nodes in the spiking regime ( with corresponding parameter values and ) and , as in section 2 , we use initial conditions randomly distributed on the unit sphere ( ) . first let us consider direct coupling in both variables and and vary the value of the equal coupling strengths , while and are kept constant .naturally , the interaction of spiking neurons will lead to a change in their dynamics , as we discuss in what follows . for low values of observe the occurrence of a type of mos where nearly half of the neurons spike regularly in a synchronous fashion , while the rest are unsynchronized and spike in an irregular fashion .this is illustrated in the respective time series in the right panel of fig .[ fig : fig5](a ) at higher values of the coupling strength the the system is fully synchronized ( fig .[ fig : fig5](b ) ) . of eqs .( [ eq:03])-([eq:05 ] ) at ( left ) and selected time series ( right ) for : ( a ) and ( b ) . and .,scaledwidth=50.0% ]next we check the case of coupling in the variable alone ( , ) .figure [ fig : fig6 ] displays characteristic synchronization patterns obtained when we increase ( left panel ) and selected time series ( right panel ) . at low values of the coupling strengthall neurons remain in the regular spiking regime and desynchronization alternates with complete synchronization as increases ( fig .[ fig : fig6](a , b , c ) ) . 
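Before discussing the intermediate- and strong-coupling behaviour below, we note that the single-neuron dynamics underlying eqs. ([eq:03])-([eq:05]) can be summarized by the short sketch that follows. The form of the slow-variable equation and the quoted parameter values (mu, s, x_rest, I_ext) are the textbook defaults of the 3D Hindmarsh-Rose model and are offered only as assumed placeholders, not as the values used in the simulations above.

```python
def hr3d_rhs(x, y, z, I_ext=3.25, a=1.0, b=3.0, mu=0.01, s=4.0, x_rest=-1.6):
    """Single 3D Hindmarsh-Rose neuron: the slow adaptation current z is
    subtracted from the applied current I_ext and produces regular spiking
    or bursting depending on the parameters."""
    dx = y - a * x**3 + b * x**2 + I_ext - z
    dy = 1.0 - 5.0 * x**2 - y
    dz = mu * (s * (x - x_rest) - z)
    return dx, dy, dz
```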
for intermediate values of the coupling strength ,chimera states with one incoherent domain are to be observed .these are associated with a change in the dynamics of the individual neurons , which now produce irregular bursts ( fig .[ fig : fig6](d ) ) .the number of spikes in each burst increases at higher values and the system is again fully synchronized ( fig .[ fig : fig6](e ) ) .extensive simulations show that the chimera states disappear and reappear again by varying which is most likely due to the system s multistability and sensitive dependence on initial conditions . of eqs .( [ eq:03])-([eq:05 ] ) at ( left ) and selected time series for , and . ( a ) , ( b ) , , ( d ) , and ( e ) . ,in this paper , we have identified the occurrence of chimera states for various coupling schemes in networks of 2d and 3d hindmarsh - rose models .this , together with recent reports on multiple chimera states in nonlocally coupled fitzhugh - nagumo oscillators , provide strong evidence that this counterintuitive phenomenon is very relevant as far as neuroscience applications are concerned .chimera states are strongly related to the phenomenon of synchronization . during the last years, synchronization phenomena have been intensely studied in the framework of complex systems . moreover , it is a well - established fact the key ingredient for the occurrence of chimera states is nonlocal coupling .the human brain is an excellent example of a complex system where nonlocal connectivity is compatible with reality .therefore , the study of chimera states in systems modelling neuron dynamics is both significant and relevant as far as applications are concerned .moreover , the present work is also important from a theoretical point of view , since it verifies the existence of chimera states in coupled bistable elements , while , up to now , it was known to arise in oscillator models possessing a single attracting state of the limit cycle type .finally , we have identified a novel type of mixed oscillatory state ( mos ) , in which desynchronized neurons are interspersed among those that are either stationary or oscillate in synchrony . as a continuation of this work , it is very interesting to see whether chimeras and mos states appear also in networks of _ populations _ of hindmarsh - rose oscillators , which are currently under investigation .the authors acknowledge support by the european union ( european social fund esf ) and greek national funds through the operational program `` education and lifelong learning '' of the national strategic reference framework ( nsrf ) - research funding program : thales . investing in knowledge society through the european social fund .funding was also provided by ninds r01 - 40596 .hagerstrom , a. m. , thomas , e. , roy , r. , hvel , p. , omelchenko , i. & schll , e. [ 2012 ] `` chimeras in coupled - map lattices : experiments on a liquid crystal spatial light modulator system '' , _ nature physics _ * 8 * , 658 .omelchenko , i. , omelchenko , o. e. , hvel , p. & schll , e. [ 2013 ] `` when nonlocal coupling between oscillators becomes stronger : patched synchrony or multichimera states '' , _ phys .lett . _ * 110 * , 224101 .rattenborg , n. c. , amlaner , c. j. & lim , s. l. [ 2000 ] `` behavioral , neurophysiological and evolutionary perspectives on unihemispheric sleep , '' _ neuroscience and biobehavioral reviews _ * 24 * , pp . 817-842 .
We have identified the occurrence of chimera states for various coupling schemes in networks of two-dimensional and three-dimensional Hindmarsh-Rose oscillators, which represent realistic models of neuronal ensembles. This result, together with recent studies on multiple chimera states in nonlocally coupled FitzHugh-Nagumo oscillators, provides strong evidence that the phenomenon of chimeras may indeed be relevant in neuroscience applications. Moreover, our work verifies the existence of chimera states in coupled bistable elements, whereas to date chimeras were known to arise only in models possessing a single stable limit cycle. Finally, we have identified an interesting class of mixed oscillatory states, in which desynchronized neurons are uniformly interspersed among the remaining ones, which are either stationary or oscillate in synchronized motion.
in the quest for better wireless connectivity and higher data rates , the cellular network is becoming heterogeneous , featuring multiple types of base stations ( bss ) with different cell size .heterogeneity implies that the traditional strategies in cell planning , deployment and communication should be significantly revised .since the number of bss becomes comparable to the number of devices and the deployment pattern of the bss is rather irregular , there are multiple bss from which a device can select one to associate with . the key issue in a wireless heterogeneous setting is the way in which a device selects an access point ( ap ) . the authors in and indicate that the ap selected for downlink ( dl ) , termed downlink ap ( dlap ) , is not necessarily the same as the uplink ap ( ulap ) .the current cellular networks use a criterion applicable to the dl for association in both directions , i.e. a device selects the bs that offers maximal signal - to - interference - plus - noise ratio ( sinr ) in the dl and then uses the same bs for ul transmission .when dlap , we say that the device has a _ decoupled access_. there are two main drivers for decoupled access : ( 1 ) the difference in signal power and interference in dl as compared to ul ; and ( 2 ) the difference in congestion between bss .decoupled dl / ul access has been considered in , where the authors devise separate criteria for selection of dlap and ulap , respectively , and demonstrate the throughput benefits by using real - world data from planning tools of a mobile operator . another related work that considers different associations in ul and dl is , where coverage probability and throughput are analyzed for dynamic tdd networks enhanced with device - to - device ( d2d ) links .this letter focuses on the analytical characterization of the decoupled access by using the framework of stochastic geometry .we use the same association criteria as in .we perform a joint analysis of the dl and ul association , using the same realization of the random process that describes spatial deployment of the bss and devices .the analysis is performed for a two - tier cellular network , consisting of macro bss ( mbss ) and femto bss ( fbss ) .this is used to obtain the central result of the paper , which is the set of association probabilities for different dl / ul configurations .the analytical results are closely matching the simulations and provide interesting insights about the decoupled access in terms of e.g. fairness regarding the ul throughput .combining novel results from this letter with already available results in the literature , we provide an analytical justification of the phenomenon of decoupled access compared to current dl - based association in heterogeneous networks .the letter is organized as follows .section ii describes the system model . in section iii , we derive the association probabilities and the average throughput .section iv gives the numerical results and section v concludes the paper .we model a two - tier heterogeneous cellular network . the locations of bssare modeled with independent homogeneous poisson point processes ( ppps ) .we use to denote the set of points obtained through a ppp with intensity , where for mbss , for fbss and for the devices .similarly , we use with to denote the transmission power of the node . the variables denote the two - dimensional coordinate of mbs and fbs , respectively . the analysis is performed for a typical device located at the origin , which is the spatial point . 
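The two-tier deployment and the typical-device viewpoint described above lend themselves to a simple Monte Carlo check. The sketch below samples the distances to the nearest MBS and FBS from the exact nearest-neighbour law of a homogeneous 2D PPP and applies the association rules formalized in the next section (DL: largest average received power; UL: smallest path loss). The density and power values in the usage example are illustrative assumptions only.

```python
import numpy as np

def association_fractions(lam_m, lam_f, P_m, P_f, alpha=4.0,
                          n_trials=200_000, seed=0):
    """Empirical probabilities of the four DL/UL association cases for the
    typical device at the origin of a two-tier PPP network.  The distance to
    the nearest point of a 2D PPP with intensity lam has CDF
    1 - exp(-lam*pi*r^2), so r = sqrt(-ln(U) / (pi*lam)) for U ~ U(0, 1)."""
    rng = np.random.default_rng(seed)
    r_m = np.sqrt(-np.log(rng.uniform(size=n_trials)) / (np.pi * lam_m))
    r_f = np.sqrt(-np.log(rng.uniform(size=n_trials)) / (np.pi * lam_f))
    dl_mbs = P_m * r_m**(-alpha) > P_f * r_f**(-alpha)   # DL: max average power
    ul_mbs = r_m < r_f                                   # UL: min path loss
    return {
        "DL=MBS, UL=MBS": np.mean(dl_mbs & ul_mbs),
        "DL=MBS, UL=FBS (decoupled)": np.mean(dl_mbs & ~ul_mbs),
        "DL=FBS, UL=MBS": np.mean(~dl_mbs & ul_mbs),     # analytically zero
        "DL=FBS, UL=FBS": np.mean(~dl_mbs & ~ul_mbs),
    }

# Example (assumed values): 46 dBm macro vs 20 dBm femto transmit power,
# femto tier ten times denser than the macro tier.
# probs = association_fractions(lam_m=1e-6, lam_f=1e-5,
#                               P_m=10**4.6, P_f=10**2.0)
```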
by slivnyaks theorem , the distribution of a point process in is unaffected by addition of a node at the origin . the power received by a typical device in dl from a bs located at , where is denoted by .the power received by a bs from the typical device in ul is denoted by .these powers are given by : where and are distances from the points and to the origin , respectively , and is the path loss exponent ( ) . is independent exponentially distributed random variable with unit mean , representing rayleigh fading at the point .each receiver in the system has a constant noise power of .the dl sinr when the device is associated to is : where and . with the notion of typical point located at the origin , ul sinr is calculated at the location of ulapthis involves calculation of distances between the interfering devices and ulap , which complicates the analysis because none of them is located at the origin .the problem is solved by using the translation - invariance property of stationary point processes , by which the processes and have the same distribution for all .thus , translation of the points for the same value of preserves the process properties .we use this to shift the points for the distance between the typical device and ulap such that the ulap becomes located at the origin .the interfering devices are modeled by thinning the ppp in order to take into account that only one device per bs acts as an interferer , using the same resource as the typical device . by thinning ,we randomly select fraction of points from the original point process with probability .the thinned process is denoted as with density .the presence of a device in a voronoi cell of a bs forbids the presence of other devices and introduces dependence among the active devices .however , this dependence is weak , as shown in , and it is justified to assume independent ppp for the active devices .the ul sinr at is defined as : analysis is divided into two mutually related parts .we first derive the association probabilities for dl / ul and afterward use them to evaluate the average throughput . in dl, the device is associated to the bs from which it receives the highest average power . inul it is associated to bs to which it transmits with the highest average power .the average power is obtained by averaging over the received signals given by ( [ dlul_signals ] ) with respect to the fading .this is justified as the fading - induced variations can lead to ping - pong effects in the association process .the average received signal powers in dl and ul are : = p_v \left\|x_v\right\|^{-\alpha } \label{dl_signals_ave } ; \mathbb{e}_h\left[\textrm{s}_{v , u}\right ] = p_d \left\|x_v\right\|^{-\alpha } \label{dlul_signals_ave}\end{aligned}\ ] ] ulap will always be the closest bs at distance , where .the dl case is more complicated since is also variable .let be the closest point to the origin from the set , with .the device is associated to otherwise , the device is associated to fbs .let .the distribution of follows from the null probability of 2d ppp , the probability that there is no point in the circle with radius , i.e. . 
the pdf of is : for two - tier heterogeneous network , there are four possible combinations for choosing dlap and ulap : the probability that a device will be associated to mbs both in dl and ul is : assuming , it follows that .therefore , the intersection of the events is the region defined by , denoted as region 1 on fig .[ allregions ] .= 46 dbm , =20 dbm , =4).,width=230 ] the association probability of case 1 is calculated as : the derivation of the remaining cases follows the same procedure and we thus only provide the final results .case 2 defines decoupled access since dlap .the association probability is defined as : the domain that satisfies both events is and is denoted as region 2 on fig . [ allregions ] .the association probability for case 2 is equal to : the association probability for case 3 should satisfy the following conditions : the intersection ( [ case3 ] ) is an empty set and therefore the probability of dlap and ulap is .the probability for associating to fbs in both dl and ul is defined as : since , the intersection of the events is , denoted as region 4 on fig .[ allregions ] .the association probability for case 4 is equal to : the average throughput for devices associated to with in direction using association rules , with for dl and for ul , is calculated as : where is target sinr , is the probability that the instantaneous sinr is greater than and is the average number of associated devices on and is equal to , with being the association probability for using association rules . using the association probabilities derived in section [ sec : assocprob ] , is expressed as : in order to calculate the average throughput in a two - tier network , we first need to calculate the distribution of the distance to the serving bs , which depends on the association process . for dl association rules , given by ( [ dl_assocrule ] ), the pdf of the distance to the serving bs is derived in and is given by : for ul association rules , given by ( [ ul_assocrule ] ) , the pdf of the distance to the serving bs is given by : the average throughput in the dl is calculated as : where and are expressed by the general formula given by ( [ throughputdef ] ) .using the approach derived in , we derive the final expression for the average throughput in dl on : where . the key point in the evaluationis the following observation : if the device is associated to mbs located at the interfering mbss are at a distance greater than and the interfering fbss are at a distance greater than ; if the device is associated to fbs located at , the interfering fbss are at a distance greater that and the interfering mbss are at a distance greater than .the average downlink throughput can be also calculated in a more elegant way by using the following : ( equivalent downlink model ) _ a two - tier heterogeneous model is equivalent to a novel homogeneous model with bss deployed by ppp with intensity .then , the average throughput in dl can be calculated as : _ where .the dl signal power at the typical point can be represented as , where z is a discrete random variable with two possible values , and , with probabilities and , respectively .the points are from a ppp with density . 
by equivalence theorem ,the spatial points form new ppp with density $ ] .the average throughput in ul without decoupled access is calculated using dl association rules given by ( [ dl_assocrule ] ) : where and are evaluated as : it can be observed that the distribution to the serving bs and the association probabilities are from dl association rules .the average throughput in ul with decoupled access is : where and are evaluated as : * remark 1 * ( equivalent uplink model ) _ a two - tier hetnet model with homogeneous devices is represented by an equivalent homogeneous model with bss deployed by ppp with intensity .this is a consequence of the ul association rule , which is based on path - loss only .then , the average throughput in ul can be elegantly calculated as : _ the throughput gain for case , , is defined as the ratio between the throughput achieved with and without decoupling and is denoted as .the average throughput gain is calculated as : .= 46 dbm ; =20 dbm ; =4).,width=302 ] the association probabilities for each of the cases are equal to the percentage of devices that will be associated with the particular case .[ jap ] shows the association probabilities for different densities of fbss and it gives an important information about dl / ul decoupling .the percentage of devices that choose decoupled access of case 2 ( dl through mbs and ul through fbs ) increases rapidly by increasing the density of fbss .as the density of fbss increases further , the probability for decoupled access starts to decrease slowly at the expense of increased probability for case 4 .there is a region of interest for with a high percentage of devices for which the decoupled access is optimal . as , the probability of decoupled access will go to zero .[ ulthroughput ] shows the throughput gain for the devices associated to ( m / f)bss and the average gain . there is a difference between , on one side , the accurate simulation of the devices with ppp , and , on the other side , its approximation by simulation of independent ppp for the active devices only .while the ul coverage probability with dl / ul decoupling is strictly superior , the congestion of the bss affects the throughput in a different manner. 
basically , fbss have significantly small dl coverage and therefore associate very small number of devices compared to mbss , but each device gets higher throughput .it is visible that for =2 db the throughput achieved on mbss is 40 times higher with decoupling , while the throughput achieved on fbss is 5 times lower with decoupling .the average throughput gain is always positive .it can be concluded that by dl / ul decoupling the devices with low sinr ( located in regions 1 and 2 ) achieve significant improvement , at the expense of marginal decrease in the ul throughput of the devices in region 4 .this suggests that decoupled access can be used as a tool towards achieving fairness among the accessing devices .= 46dbm ; =20dbm ; =20dbm ; =4 ; =10 ; =).,width=302 ]this letter considers the problem of device association in a heterogeneous wireless environment .the analysis is done using models based on stochastic geometry .the main result is that , as the density of the femto bss ( fbss ) increases compared to the density of the macro bss ( mbss ) , a large fraction of devices chooses to receive from a mbs in the downlink ( dl ) and transmit to a fbs in the uplink ( ul ) .this is the concept of _ decoupled access _ and challenges the common approach in which both dl and ul transmission are associated to the same bs .it is shown that the decoupling of dl and ul can be used as a tool to improve the fairness in the ul throughput .part of our future work refers to the architecture for decoupled access , which includes signaling and radio access protocols .h. elsawy , e. hossain , and m. haenggi , `` stochastic geometry for modeling , analysis , and design of multi - tier and cognitive cellular wireless networks : a survey , '' _ ieee comm .surveys & tutorials _ , vol .15 , no . 3 , 2013 h.- s jo , y. j. sang , p. xia and j. g. andrews , `` heterogeneous cellular networks with flexible cell selection : a comprehensive downlink sinr analysis , '' _ ieee trans . on wireless comm .11 , no . 10 , 2012
wireless cellular networks evolve towards a heterogeneous infrastructure, featuring multiple types of base stations (bss), such as femto bss (fbss) and macro bss (mbss). a wireless device observes multiple points (bss) through which it can access the infrastructure, and it may choose to receive the downlink (dl) traffic from one bs and send the uplink (ul) traffic through another bs. such a situation is referred to as _ decoupled dl/ul access_. using the framework of stochastic geometry, we derive the dl and ul association probabilities, with dl association based on the maximum average received power and ul association based on the minimum path loss. as the relative density of fbss initially increases, a large fraction of devices chooses decoupled access, i.e. it receives from a mbs in the dl and transmits through a fbs in the ul. we analyze the impact that this type of association has on the average throughput in the system. keywords: heterogeneous networks, decoupled downlink/uplink, average throughput.
the signals received by the antennas obey the stationary stochastic process and then ergodic process .the ergodic theory can be applied to the auto - correlation function for a spectrometer and the cross - correlation function for radio interferometer . under such conditions , weinreb ( 1963 )developed the first digital spectrometer .this digital spectrometer is called the xf correlator in which the correlation is calculated before fourier transform . meanwhile , chikada et al .( 1987 ) developed the first the fx correlator of an another design , in which fourier transform is performed before cross multiplication .although there is a difference of property between two basic designs , the obtained astronomical spectra of them were confirmed to be identical .determining the number of correlation lags in the xf scheme or of fourier transform points in the fx scheme is essential for the realization of high - dispersion and wideband observation , because the frequency resolution is derived as where is the sampling period , is the number of correlation lags or fourier transform points , and the bandwidth of b is equal to .the material size and cost of the correlator strongly depend on the sampling period , , and the number of correlation lags or fourier transform points , . the new xf architecture with the digital tunable filter bank that is designed with the finite impulse response ( fir ) has been proposed and developed for the next generation radio interferometers , the expanded very large array ( evla ) and the atacama large millimeter / submillimeter array ( alma ) ( , ) .this is called the `` fxf correlator '' .the architecture of the fxf scheme can make the material size smaller in comparison with that of the conventional xf scheme . since the digital filter allows a variety of observation modes [ scientific and observational availabilitywere shown in iguchi et al .( 2004 ) ] , the fxf scheme will provide us with the most appropriate specifications which meet the scientific requirements .this will lower the risk of over - engineering of the correlator .the improved fx architecture with dft filterbank was developed by bunton ( 2000 ) . the use of polyphase filter banks allows arbitrary filter responses to be implemented in the fx scheme ( bunton 2003 ) .this is called the `` polyphase fx correlator '' .this scheme has a possibility to achieve the spectral leakage of about -120 db . in particular , this performance is significant to suppress the leakage from the spurious lines mixed in receiving , down - converting or digitizing . the ffx correlator is a new algorithm for correlation process in radio astronomy .the ffx scheme consists of 2-stage fourier transform blocks , which perform the 1st - stage fourier transform as a digital filter , and the 2nd - stage fourier transform to achieve higher dispersion .the first f of the ffx is the initial letter of the word `` filter '' . 
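as a quick numerical check of the resolution relation quoted above, the sketch below (a toy calculation, not part of the correlator design) assumes the usual nyquist relations b = 1/(2*ts) and delta_f = b/n; with a 4096 msps sampling rate and 512 points it reproduces the 2048 mhz bandwidth / 4 mhz resolution entry of the operation-mode table given later.

....
# Toy illustration of the resolution/bandwidth relations, assuming the usual
# Nyquist relations B = 1/(2*Ts) and delta_f = B / N.
Ts <- 1 / 4096e6                  # sampling period for 4096 Msps (illustrative)
B  <- 1 / (2 * Ts)                # processed bandwidth: 2048 MHz
for (N in c(512, 2048, 8192)) {
  df <- B / N                     # frequency resolution
  cat(sprintf("N = %5d  ->  delta_f = %10.3f kHz\n", N, df / 1e3))
}
....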
in this paper, we present a new ffx architecture .the principle of the ffx scheme in section 2 , the properties of the ffx scheme in section 3 , the algorithm verification and performance evaluation with the developed ffx correlator in sections 4 and 5 , and the summary of this paper in section 6 are presented .this section shows the algorithm and the data flow diagram of the signal processing in the fourier transform of the ffx scheme ( see figure [ fig : ffx ] ) .suppose that are the digital waveforms at the correlator input from the astronomical radio signals that are received by the telescope .the inputs , , are real digital signals at sampling period of , and obey the zero - mean gaussian random variable .the suffix is an integer for time .fig1 ( 160mm,200mm)fig1.eps [ step 1 ] the correlator receives the time - domain digital sampling signals from the analog - to - digital converter ( adc ) , and accumulate them up to points .[ step 2 ] the time - domain -point data are transferred to the frequency - domain by using the -point discrete complex fourier transform as follows : where is the spectrum after the 1st fourier transform , the suffix is an integer for frequency , and is equal to at the bandwidth of .the is the minimum frequency resolution of the 1st fourier transform , which is equal to .[ step 3 ] the extraction of the points from the frequency domain -point data after the 1st fourier transform is conducted as if filter and frequency conversion are performed simultaneously : where is the minimum frequency channel in the extraction , and the suffix is an integer for frequency .[ step 4 ] the -point data after inverse fourier transform is written by ,\ ] ] where is the time - domain signal after inverse fourier transform , the suffix is an integer for time , and is the sampling period after filtering at the bandwidth of .[ step 5 ] by repeating the procedure from step 1 to step 4 , the data are gathered up to points as follows ; where is , and is the number of repeating times of the procedure from step 1 to step 4 .[ step 6 ] the time - domain -point data after gathering are transferred to the frequency - domain by using the -point discrete complex fourier transform as follows : where is the spectrum after the 2nd fourier transform , and the suffix is an integer for frequency .the is the minimum frequency resolution after the 2nd fourier transform , which is equal to ( = ) ..definition of functions . [ cols="<,<",options="header " , ] [ table : fxopmode ] llllll stage & bandwidth & spectral & spectral & velocity & correlation + & & points & resolution & resolution & + & & & & at 1 mm & + 1 & 4096 mhz & 512 & 8 mhz & 8.0 km s & 2ac1cc + 2 & 64 mhz & 2048 & 31.25 khz & 0.031 km s & 2ac1cc + + 1 & 4096 mhz & 512 & 8 mhz & 8.0 km s & 2ac1cc + 2 & 128 mhz & 4096 & 31.25 khz & 0.031 km s & 1ac + + 1 & 2048 mhz & 512 & 4 mhz & 4.0 km s & 4ac2cc + 2 & 32 mhz & 2048 & 15.625 khz & 0.016 km s & 4ac2cc + + 1 & 2048 mhz & 512 & 4 mhz & 4.0 km s & 2ac1cc + 2 & 64 mhz & 4096 & 15.625 khz & 0.016 km s & 2ac1cc + + 1 & 2048 mhz & 512 & 4 mhz & 4.0 km s & 2ac1cc + 2 & 128 mhz & 8192 & 15.625 khz & 0.016 km s & 1ac + + 1 & 2048 mhz & 512 & 4 mhz & 4.0 km s & 2ac1cc + 2 & 96 mhz & 6144 & 15.625 khz & 0.016 km s & 1ac + 2 & 32 mhz & 2048 & 15.625 khz & 0.016 km s & 1ac + + + + [ table : ffxopmode ] correction is performed based on the baseline . 
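steps 1-6 above can be condensed into a short script. the r sketch below is one functional reading of the description and not the hardware implementation: quantization, windowing, segment overlap and the exact channel-indexing conventions of the correlator are ignored, and N1, k0, n_e and M are arbitrary illustrative values.

....
# Minimal sketch of the two-stage ("FFX") Fourier transform of steps 1-6.
ffx_spectrum <- function(x, N1 = 1024, k0 = 129, n_e = 64, M = 32) {
  stopifnot(length(x) >= N1 * M)
  y <- complex(0)
  for (m in seq_len(M)) {
    seg <- x[((m - 1) * N1 + 1):(m * N1)]        # step 1: gather an N1-point block
    S1  <- fft(seg)                              # step 2: 1st-stage FFT
    sub <- S1[k0:(k0 + n_e - 1)]                 # step 3: extract n_e channels (filter + mix)
    y   <- c(y, fft(sub, inverse = TRUE) / n_e)  # step 4: back to the time domain, low rate
  }                                              # step 5: concatenate M * n_e samples
  S2 <- fft(y)                                   # step 6: 2nd-stage FFT (fine resolution)
  Mod(S2)^2 / length(y)                          # power spectrum of the zoomed band
}

# usage: noise plus a weak tone at 0.15 cycles/sample, which falls inside the
# extracted sub-band (channels k0 .. k0+n_e-1 of the first FFT)
set.seed(3)
n <- 0:(1024 * 32 - 1)
x <- rnorm(length(n)) + 0.5 * cos(2 * pi * 0.15 * n)
spec <- ffx_spectrum(x)
which.max(spec)      # bin of the tone within the zoomed band
....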
coefficient is generated within fpga .firstly , , the gradient of to time change , is evaluated and then the value of each frequency channel is calculated ( see figure [ fig : deltaw ] ) .circuit diagram of the line graphs above is shown in figure [ fig : deltawcir ] .for every 1 millisecond , the initial value ( init ) is provided by the monitor control computer , and read as a initial gradient [ .in the border between segments , the previous value is multiplied by the gradient ( grad ) data to generate a gradient of a new segment [ ] . in the arbitrary time , initial value ( the value of dc ) is set to 0 . is set for every 128 channel in the full bandwidth of 8 channel , which means the full bandwidth ( 8 channel ) is corrected with 64 steps .the initial value of can be set for every 1 millisecond , however , a given value is set within 128 channel .the gradient of is the gradient variation of per one segment length for fft .the monitor control computer specifies a set of the initial value and gradient approximately every 1 second .( 80mm,120mm)fig17.eps ( 130mm,120mm)fig18.eps final correlation value is calculated by adding the output of 16 correlators .the composition of the long - term accumulation / output board ( ltaob ) is shown in figure [ fig : lta ] .when the correlation result is output , the data is converted into the format of ieee single - precision floating point . in the fx processing ,the number of the output frequency channels is normally 4 ( = 4096 ) .thus the output data size per one correlation is where is the number of single - precision floating bits , is the complex , and is the number of correlations ; auto - correlations of and , and a cross - correlation between them . assuming the minimum integration time is 0.1 second , the estimated output speed is approximately 7.7 mbps . in the ffx processing ,the number of the output frequency channels is 512 + 2 , thus the data size per one correlation is 480 bits .consequently , the estimated output speed with 0.1-second integration time is approximately 4.8 mbps .correlation results are sent to the monitor control computer using tcp / ip protocol of 100 baset - ether .the ffx correlator are divided into four f parts in principle ( not physically ) .each f part in a fx mode and the first fft stage of ffx mode are operated at 2048 mhz , and each f part at the second fft stage of ffx mode are operated at 32 mhz .also , the digital signals input from eib are distributed to arbitrary f parts by using the command ( `` fchsel '' ) from the monitor control computer . in fx mode and the first fft stage of ffx mode ,the digital signals of 8192 msps are operated by combining two f parts (= 2 mhz ) , while the signals of 4096 msps are operated by using one parts ( = 1 mhz ) . at the second fft stage of ffx mode ,the digital signals of 8192 msps are operated by combining two f parts ( = 2 mhz ) , while the signals of 4096 msps are operated using only one parts ( = 1 mhz ) . 
by using the command ( `` fchsel '' ) , the effective bandwidth after the second fft stage can be changed .all of the operation modes available in this ffx correlator are listed in table [ table : fxopmode ] and [ table : ffxopmode ] .the hardware of the fx correlator is shown in figure [ fig : farm ] .the dts - r module consists of two electrical input interface boards ( eibs ) , two delay correction and data configuration boards ( dcdcbs ) , and one dts - r monitor control boards ( drmcbs ) .the correlation module consists of eight correlation boards ( corbs ) , long - term accumulation / output board ( ltaob ) , and one correlation monitor control board(cormcb ) .each module is connected to the independent power module .the power consumption of the filter module is 400 w , while that of the output module is 600 w. the total ac power is 750 w at 1-phase 100 - 220 vac 50/60 hz ( 100 v or 220 v ) .the total weight is 71.3 kg .( 80mm,65mm)fig19.eps figure [ fig : measure ] shows a block diagram of the measurement setup of the frequency response of the ffx correlator . to investigate the frequency response , it is useful to use the cw signal , which can measure the folding effects by sweeping the frequency range of 0 to 4096 mhz .the white noise is important to measure the frequency response , because the astronomical signals obey the gaussian random variable . to obtain the input signals that are approximated to the zero - mean gaussian probability ,mixing of the cw signal with the white noise is necessary .( 147mm,100mm)fig20.eps the frequency response of the ffx correlator when cw is included is written as \nonumber \\ & & \cdot h_{1}(f ) h_{2}^{\ast}(f ) \cdot |h_{\mathrm{d}}(f)|^2 , \label{eq : oncor}\end{aligned}\ ] ] where is the frequency response of the cw signal , is the frequency response of the white noise from the aste analog backend subsystem , and are the frequency response by different transmission paths , in which and ( see figure [ fig : measure ] ) , and is the frequency response of the ffx correlator , including the effects of requantization and the folding noise after downsampling . the bandpass calibration is essential for estimating the cw power accurately , because the bandpass response becomes a time - variable due to outdoor air temperature .the frequency response without the cw signal is written as the adcs work as 1-bit performance ( ) . in that case, it is important to adjust the power as precisely as possible so as to avoid the high - order spurious effects .the frequency responses of and depend on the relative power of the cw signal to the white noise , and also the threshold levels in quantization . to correct these effects , we need to calibrate the bandpass by sensitively adjusting the continuum floor level of to that of in the data analysis .these values are and . 
from equations ( [ eq : oncor ] ) and( [ eq : coroff ] ) , the frequency response in a ffx mode is written as \cdot h_{1}(f ) h_{2}^{\ast}(f ) \cdot |h^{\mathrm{f}}_{\mathrm{d}}(f)|^2 , \label{eq : ffxon}\end{aligned}\ ] ] while the frequency response without the cw signal can be written as and then the cw frequency response including the response of the measurement system is derived as similarly , the frequency response in a fx mode is written as \cdot h_{1}(f ) h_{2}^{\ast}(f ) \cdot |h^{\mathrm{n}}_{\mathrm{d}}(f)|^2 , \label{eq : fxon}\end{aligned}\ ] ] while the frequency response without the cw signal can be written as and then the cw frequency response including the response of the measurement system is derived as from equations ( [ eq : ffxpwr ] ) and ( [ eq : fxpwr ] ) , we can derive the frequency response in a ffx mode from the correlated spectra obtained in ffx and fx modes as since the frequency response of in the fx mode is well known ( ) , the frequency response in a ffx mode can be derived finally .figure [ fig : response ] shows the frequency response of the ffx correlator at a bandwidth of 64 mhz in the range of 2048 to 2112 mhz , and that the lower limits of about -33 db to -40 db are successfully measured with this method .the measurement frequency resolution was 31.25 khz .the measurement results show that the effective bandwidth is about 59.28 mhz , which was obtained by the passband responses of about db at 2046.40625 and 2106.28125 mhz , db at 2045.3125 and 2107.28125 mhz , and by a stopband response with the first sidelobe of about db . the measurement results are well consistent with the theoretical curve in the passband , both bandedges ( sharpness ) , and first sidelobe levels . in the stopband response except these responses, however , it is shown that there are differences between theoretical curve and measured results .this problem is probably due to the non - linear response of 1-bit adc and the precision of the data reduction process in this measurement method .the cross - modulation distortion is strongly generated in digitizing the cw signals at 1 bit .this character complicates the data reduction method , and will reduce the measurement precision .if the adcs with 3 bits or more are feasible , this problem will be relaxed .finally , the measurement results show that the theory of the ffx scheme can be confirmed , and the development of the ffx correlator was successfully realized .there are two basic designs of a digital correlator : the xf - type in which the cross - correlation is calculated before fourier transformation , and the fx - type in which fourier transformation is performed before cross multiplication . 
to improve the fx - type correlator , we established a new algorithm for correlation process , that is called the ffx scheme .the ffx scheme demonstrates that the realization of a stopband response with first and second sidelobes of db and higher - order sidelobes of db is technically feasible .the ffx scheme consists of 2-stage fourier transform blocks , which perform the 1st - stage fourier transform as a digital filter , and the 2nd - stage fourier transform to achieve higher dispersion .the ffx scheme provides flexibility in the setting of bandwidth within the sampling frequency .the input data rate of the developed ffx correlator is about 48 giga bit per second ( gbps ) with 3-bit quantization at the sampling frequency of 8192 or 4096 msps , which is 8192 msps x 3 bits x 2 if or 4096 msps x 3 bits x 4 ifs .we have successfully evaluated the feasibilities of the ffx correlator hardware . also , this hardware will be installed and operated as a new spectrometer for aste .we successfully developed the ffx correlator , measured its performances , and demonstrated the capability of a wide - frequency coverage and high - frequency resolution of the correlation systems .our development and measurement results will also be useful and helpful in designing and developing the next generation correlator .the authors would like to acknowledge yoshihiro chikada for his helpful technical discussions .the author would like to express gratitude to brent carlson who provided constructive comments and suggestions on this paper .this research was partially supported by the ministry of education , culture , sports , science and technology , grant - in - aid for young scientists ( b ) , 17740114 , 2005 .bunton , j , 2000 , alma memo 342 , ( charlottesville : nrao ) bunton , j , 2003 , alma memo 447 , ( charlottesville : nrao ) carlson , b. 2001 , nrc - evla memo 014 chikada , y. , ishiguro , m. , hirabayashi , h. , morimoto , m. , morita , k. , kanzawa , t. , iwashita , h. , nakazima , k. , et al .1987 , proc .ieee , 75 , 1203 escoffier , r. p. , comoretto , g. webber , j. c. , baudry , a. , broadwell , c. m. , greenberg , j. h. , r. r. treacy , r. r. , cais , p. , et al .2007 , a 462 , 801 ezawa , h. , kawabe , r. , kohno , k. , yamamoto , s. 2004 , spie 5489 , 763 iguchi , s. , okuramu , s. k. , okiura , m. , momose , m. , chikada , y. ursi general assembly ( j6:recent scientific developments ) , maastricht , 2002 .iguchi , s. , kurayama , t. , kawaguchi , n. , kawakami , k. 2005 , pasj 57 , 259 narayanan , d. , groppi , c. e. , kulesa , c. a. , & walker , c. k. 2005 , , 630 , 269 okuda , t. , iguchi , s. 2008 , pasj 60 , 315 okumura , s. k. , chikada , y. , momose , m. , iguchi , s. 2001 , alma memo no.350 ( charlottesville : nrao ) rabiner , l. r. , schafer , r. w. 1971 , ieee trans .audio electroacoust . , au-19 , 200 thompson , a. r. , moran , j. m. , swenson , g. w.jr .2001 , interferometry and synthesis in radio astronomy , 2nd ed . ,( new york : john wiley sons ) , 289 weinreb , s. 1963 , digital spectral analysis technique and its application to radio astronomy , ( r. l. e. , mit , cambridge , mass . ) , tech . rep .
we established a new algorithm for the correlation process in radio astronomy. the scheme consists of a 1st-stage fourier transform acting as a filter and a 2nd-stage fourier transform used for spectroscopy. the "ffx" correlator stands for the filter-and-fx architecture, since the 1st-stage fourier transform is performed as a digital filter and the 2nd-stage fourier transform is performed as a conventional fx scheme. we developed the ffx correlator hardware not only to verify the ffx algorithm but also to apply it to the atacama submillimeter telescope experiment (aste) for high-dispersion, wideband radio observation at submillimeter wavelengths. in this paper, we present the principle of the ffx correlator and its properties, as well as the evaluation results obtained with the production version.
the multiple sequence alignment ( msa ) problem constitutes one of the fundamental research areas in bioinformatics . while at first sight it may seem a simple extension of the two - string alignment problem _ two strings good , four strings better _, for biologists , the multiple alignment of proteins or dna is crucial in deducing their common properties . quoting arthur lensk : _ one or two homologous sequences whisper ... a full multiple alignment shouts out loud_. in general , the sequences consist of a linear array of symbols from an alphabet of -letters ( for dna and for proteins ) . given sequences to determine a good multiple sequence alignment is a relative task .usually one defines a score function that depends on the distances between the letters of the alphabet , and assumes that the better alignment is the one that minimizes this score function .it is a common use to define the msa score in terms of the scores of the pairwise global alignments of the sequences ( sum of pairs score) . given two sequences and let be a cost of the mutation of into and the cost of inserting or deleting of the letter . extending so that and and considering that a null ( - ) symbol isolated from others ( - ) pays an extra cost we may define the score of a pairwise alignment for sequences and of size as : where is the number of isolated ( - ) .then , the score for the multiple alignment is given by : the multiple sequence alignment has at least three important applications in biology : classification of protein families and superfamilies , the identification and representation of conserved sequences features of both dna or proteins that correlate structure and/or function and the deduction of the evolutionary history of the sequences studied .unfortunately the problem is known to be _ np - complete _ and no complete algorithm exist to solve real or random instances .therefore , many heuristic algorithms have been proposed to solve this problem .the algorithm of carrillo - lipman ( which is complete ) , is a dynamic programming algorithm able to find the multiple alignment of 3 sequences , and with some heuristic added , to find the alignments , in reasonable time , of up to 6 sequences .however , its computational cost scales very fast with the number of sequences and is of little utility for more ambitious tasks . in the first 90 sthe problem was approached using ideas coming from physics , j. kim and collaborators and m. ishikawa and collaborators used different version of the simulated annealing technique with some success , but their algorithms were unable to change the number of gaps in the alignment .this means that once they started with a given initial configuration ( usually taken from some heuristics ) , any motion of segments in the sequences conserved the number of gaps . 
to extend these programs allowing the number of gaps to changewill cause the appearance of global moves in the algorithm that are very expensive from the computational point of view .probably the must successful attempt to solve this problem has been the clustal project , a progressive algorithm that first organizes the sequences according to their distances and then aligns the sequences in a progressive way , starting with the most related ones .moreover , it uses a lot of biological information , some motifs of residues rarely accept gaps , sub - sequences of residues associated with structural sub - units are preferred to stay together during the alignment , etc .these features , and a platform easy to use and integrated with other standard bioinformatic tools , have made clustal the favorite multiple sequence alignment program for biologists and people doing bionformatics in general .however , it also has important drawbacks .once the first sequences are aligned , the inclusion of a new sequence would not change the previous alignment , the gap penalties are the same independently on how many sequences have been already aligned or their properties , and being a progressive method the global minimum obtained is strongly biased by those sequences which are more similar .another , recent and also successful approach uses the concepts of hidden markov models . while some of the previous drawback associated to clustal disappear , because for example , the sequences do not need to be organized a priori , one most start assuming a known model of protein ( or dna ) organization , which is usually obtained after training the program in a subset of sequences . then, one must be aware that the results usually depend on the training set , specially if it is not too large .moreover , if we are dealing with sequences of unknown family , or difficult to be characterized this approach does not guarantee good alignments .therefore we decided to propose a new simulated annealing ( sa ) algorithm that avoids the main difficulty of the previous attempts .our algorithm allows for the insertion and deletion of gaps in a dynamic way using only local moves .it makes use of the mathematical mapping between the multiple sequence alignment and a directed polymer in a random media ( dprm ) pointed out some years ago by hwa et al .in such a way , it should be also possible to extrapolate all the computer facilities and techniques , developed in the field of polymers to this biological problem .the rest of the paper is organized in the following way . in the next sectionwe make a short review of the theoretical foundations of our algorithm .then in section [ sec : alg ] we explain the implementation details to discuss the results in section [ sec : res ] .finally the conclusions are presented including and outlook for future improvements of this program .usually , multiple sequence alignments are studied and visualized writing one sequence on the top of the other , miming a table , ( see figure [ fig : usual_alignment ] ) and all the probabilistic algorithms devised so far use the simplicity of this representation to generate the moves . 
instead of that , we will use the well known fact that the alignment of sequences may be represented in a dimensional lattice ( see figure [ fig : square ] for ) .the cells of the -dimensional lattice are labeled by the indexes .the bonds encode the adjacency of letters : a diagonal bond in a dimensional space represents the -pairing .the insertion of gaps are represented by bonds without components in the sequences where the gaps were inserted .for example , a -pairing is represented by a bond whose projection on the third sequence is zero , and the -pairing is represented by a bond whose projection on the second and third sequences are zero .then , any alignment maps onto a lattice path that is directed along the diagonal of the -dimensional hypercube .this lattice path may be interpreted as a directed polymer and the random media in the problem is provided by the structure of the sequences to be aligned and by the distance between the residues in the different sequences .this mapping was already fruitfully used by hwa and co. to prove that the similarities between two sequences can be detected only if their amount exceeds a threshold value and for proposing a dynamic way to determine the optimal parameters for a good alignment of two sequences . here , our main focus will be to optimize the directed polymer ( lattice path ) under the constraints imposed by the sequences and their interactions in dimensions larger than 2 . to use a simulated annealing algorithm we extend the usual representation of computer science of determining the ground state of the problem to a finite temperature description .then , a finite - temperature alignment is a probability distribution over all possible conformations of the polymer and where is given by equation [ eq : cost_function ] , and is the partition function of the alignment .the temperature ( ) controls the relative weight of alignments with different scores ( different conformations of the polymer ) while the length of the polymer and the frequency of the gaps . in physical terms, defines and ensemble at temperature with line tension and chemical potential . simulated annealing ( sa ) was introduced many years ago by kirkpatrick et al to find a global minimum of a function in combinatorial optimization problems .sa is a probabilistic approach , that in general needs a state space ( the different configurations of the directed path ) and a cost or energy function ( [ eq : cost_function ] ) to be minimize ( eq . [ eq : cost_function ] ) .simulated annealing generates new alignments from a current alignment by applying transition rules of acceptance . the criteria for acceptanceare the following : * if , accept the new alignment * if accept it with probability the parameter controls the probability to accept a new configuration .initially , one starts at low values of ( high temperatures ) and then increases it applying an annealing schedule .if the temperature is lowered slowly enough , it can be proved that the system reaches a global minimum .unfortunately it will require infinitely computational time and one usually selects the best scheduling that is possible to afford with the computational facilities at hand .then sa is run over many initial conditions , and one assumes that the output of minimum energy is ( or is close ) to the global minimum . in the multiple sequence alignmentthis is also the case , but differently to what happens in other combinatorial optimization problems , here the average solution , i.e. 
that obtained after averaging over all the local minima s may be interesting by itself .in fact , researchers are often interested not in the particular details of the alignment , but in its robust properties , and comparing all the outputs of the sa is a way to get this information . from the technical point of view, once a cost function is defined , one needs to select the moves to be associated to the transition rates . our description of the multiple sequence alignment problem as a directed polymer in a random media allows us to define three types of moves , insertion , deletion and motion of gaps .all these moves are represented in figure ( [ fig : local_moves ] ) in a two - dimensional grid .the extension to -dimensional systems is straightforward . in this waywe get an algorithm that allows for the creation of gaps , which means a search space larger than the usually proved by similar methods . at the same time the algorithm is quadratic in the number of sequences .in fact , the computational cost of any move is limited by the square of the number of sequences to be aligned . in this work, we did not follow any heuristic strategy of optimization .our intention was to prove the potentiality of this strategy and we kept things as simple as possible .for example , if we start too far from the global minimum , the selection of local moves alone will make the algorithm to converge very slowly to it .this drawback may be overcome using very different initial conditions or trying , every some time steps , global moves that change radically the conformation of the polymer .we did not take care of this . during the simulationthe three moves were chosen with probability .the only biological information inserted was given by the cost matrix used to align the protein sequences .we avoid the use of important and well know biological information , fixed residues , phylogenetic tree of the sequences , etc , and the program was designed without the use programming optimization tricks .all the results presented in this section reflects the alignments of proteins from the kinase family , but qualitatively similar results were obtained for the gpcrs ( g protein - coupled receptors ) and crp ( camp receptor protein ) families .the simulations started with and every montecarlo steps was increased by a multiplicative factor of 1.01 until .we took care and in all cases the system reached the equilibrium .the different initial conditions were chosen inserting gaps randomly in all the sequences but the larger one , such that considering these gaps at , all the sequences were of equal length . to define the distance between the letters of the alphabet we use the pam matrix . in figure[ fig : relax3d ] are shown the approach to equilibrium of the multiple sequence alignment of 3 sequences averaged over 100 initial conditions for and . comparing with the result of the carrillo - lipman algorithm it is evident that for the algorithm is very close to the global minimum .however , one should take care that differently to what is usually obtained in other algorithms for the multiple sequence alignment problem , these figures reflects averages values of the multiple sequence alignment . in fact , one is often interested , rather than on the average , in the minimum of all the alignments . 
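the pieces described above (the sum-of-pairs energy, the metropolis acceptance rule, the three local gap moves and the multiplicative schedule for the inverse temperature) can be put together in a few lines. the r sketch below is not the authors' implementation: it uses a toy 0/1 mismatch cost and a flat gap penalty in place of the pam matrix, pads rows with trailing gaps when their lengths differ, and all numerical settings (temperature range, sweep length, penalties) are illustrative.

....
# Sketch of the SA scheme: alignment rows are character vectors with '-' gaps,
# the energy is a toy sum-of-pairs cost (stand-in for the PAM-based score), and
# the three local moves are gap insertion, deletion and shift, chosen at random.
pair_score <- function(a, b, mismatch = 1, gap = 1.5) {
  L <- max(length(a), length(b))
  a <- c(a, rep("-", L - length(a))); b <- c(b, rep("-", L - length(b)))
  ga <- a == "-"; gb <- b == "-"
  sum(ifelse(ga & gb, 0, ifelse(ga | gb, gap, ifelse(a == b, 0, mismatch))))
}
energy_with <- function(aln, i, row)          # pairs involving the modified row only
  sum(sapply(seq_along(aln)[-i], function(j) pair_score(row, aln[[j]])))

sa_align <- function(seqs, beta = 0.1, beta_max = 20, cool = 1.01, sweep = 100) {
  aln <- lapply(seqs, function(s) strsplit(s, "")[[1]])
  while (beta < beta_max) {
    for (step in 1:sweep) {
      i <- sample(length(aln), 1); row <- aln[[i]]; new <- row
      mv <- sample(3, 1)
      if (mv == 1) {                                   # insert a gap
        p <- sample(length(row) + 1, 1)
        new <- append(row, "-", after = p - 1)
      } else if (mv == 2 && any(row == "-")) {         # delete a gap
        gaps <- which(row == "-")
        new  <- row[-gaps[sample(length(gaps), 1)]]
      } else if (mv == 3 && any(row == "-")) {         # shift a gap by one site
        gaps <- which(row == "-")
        g    <- gaps[sample(length(gaps), 1)]
        nb   <- if (g == length(row)) g - 1 else g + 1
        new[c(g, nb)] <- row[c(nb, g)]
      }
      dE <- energy_with(aln, i, new) - energy_with(aln, i, row)
      if (dE <= 0 || runif(1) < exp(-beta * dE)) aln[[i]] <- new   # Metropolis rule
    }
    beta <- beta * cool                                # geometric schedule (factor 1.01)
  }
  aln
}

# usage on three short toy sequences
out <- sa_align(c("MKTAYIAK", "MKTAYAK", "MTAYIAK"))
sapply(out, paste, collapse = "")
....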
in figure[ fig : hist0 - 3d-123 ] we represent an histogram in energy for the alignment , over 1000 initial conditions for different values of , of the same three proteins of the kinase family presented in figure [ fig : relax3d ] .note again that for we obtain exactly the carillo - lipman result with probability larger than 0 .moreover , looking to the structure of the histogram for , one can also conjecture that if the average multiple sequence alignment were calculated only with those alignments concentrated in the peak of lower energies , the result presented in figure [ fig : relax3d ] will be closer to the one of the carrillo - lipman algorithm .another symptom suggesting that the average over the realization must be taken with care comes from the analysis of figure [ fig : hist-3d-1 - 9 ] .there we present again histograms for but using three different samples of the kinase family .note that while sample 1 and sample 3 are very well behaved and the results compare very well with the carillo - lipman method , the situation for sample 2 is different . to get good results in this case , it is clearly necessary to go beyond 1000 montecarlo steps . in the same spirit of figure [ fig : relax3d ] ,figures [ fig : relax9d ] and [ fig : relax18d ] reflect results suggesting that also for higher dimensions , if is large enough the algorithm should produce good alignments .in fact , also in these cases the energy decreases linearly with . if the number of sequences is higher , the correlations between the sequences increases , and the algorithm should find better results .this fact may be clearly seen in figure [ fig : equilibritation - e - time ] where the equilibration time of the algorithm ( in m.c.s ) is shown as a function of the number of sequences .the equilibration time , measured as the time necessary to reduce the energy by a factor , decreases linearly with the number of sequences to be aligned . of course the results may change if very different sequences are aligned , but in this case all other known algorithms fail to predict good alignments .then , we may say that for the most common cases , of correlated sequences , we present an algorithm whose convergence time decreases with , and whose moves , only increase quadratically with . with these results at hand , we follow to study the structure of the space of the solutions as a function of the number of aligned sequences .we define a distance ( ) between two alignments , and in the following way .given two solutions , and ( where the index stands for the sequence and for the position of the symbol in the sequence ) we , aligned one by one of the sequences of each solution using a dynamic programming algorithm reminiscent of the needleman - wunsh algorithm with the following score function : and express as : in this way identical alignments will be at distance from each other , and the insertion of gaps to obtain good alignments is penalized , such that the original alignments are altered minimally during the calculation of .we calculated with and and the results were the same ( apart a constant shift in ) .below we present figures for .we study how different are , from the solution of minimum score , the other solutions obtained aligning sequences . 
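the exact pairwise score used in the definition of the distance d is not reproduced here, but the dynamic-programming step it relies on is the standard needleman-wunsch recursion; the r sketch below shows that recursion with placeholder match/mismatch/gap values.

....
# Generic Needleman-Wunsch global-alignment DP, illustrating the pairwise step
# used when comparing two solutions; scoring values are placeholders.
nw_score <- function(a, b, match = 1, mismatch = -1, gap = -2) {
  a <- strsplit(a, "")[[1]]; b <- strsplit(b, "")[[1]]
  n <- length(a); m <- length(b)
  D <- matrix(0, n + 1, m + 1)
  D[, 1] <- gap * (0:n)                        # prefix of a against gaps
  D[1, ] <- gap * (0:m)                        # prefix of b against gaps
  for (i in 1:n) for (j in 1:m) {
    s <- if (a[i] == b[j]) match else mismatch
    D[i + 1, j + 1] <- max(D[i, j] + s,        # align a[i] with b[j]
                           D[i, j + 1] + gap,  # gap in b
                           D[i + 1, j] + gap)  # gap in a
  }
  D[n + 1, m + 1]
}
nw_score("MKTA-YIAK", "MKTAY-IAK")   # e.g. compare one row of two different alignments
....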
for every value of we use 1000 initial conditions and a .note that the sequences used were always the same .this mean , we first aligned three sequences from the kinase family .then , to align 4 sequences we just add a new one to the previous three and started the alignment from scratch. the procedure was repeated for every new value of .the results appear in figures [ fig : clustering - result - down ] and [ fig : clustering - result - up ] where is plotted as a function of . from the figures it becomes evident that the space of solution strongly depends on the number of sequences aligned .for example , while for three sequences the distance between the alignments is correlated with the difference in score , for more than 4 sequences it is not true anymore .moreover , while for the distance between the solutions decreases , it increases for and remains constant for .surely , these results reflect the correlation between the proteins aligned .for example , we may speculate that for , any new protein added contributed to find more similar alignments , i.e. added relevant information to the system .however , for the sequences contributed with new and uncorrelated information that produce more distant alignments and for to add new proteins that belong to the kinase family will not change the relevant characteristics of the alignment .these results are relevant , considering the limitations of progressive algorithms to change the previous alignment when new sequences are added .figures [ fig : clustering - result - down ] and [ fig : clustering - result - up ] , clearly suggest that the inclusion of one single sequence may dramatically change the character of the solutions .in this work we presented a new probabilistic algorithm , to perform the multiple alignment of proteins .the algorithm is based on the mapping between the dprm and the multiple sequence alignment problem . at variance with other probabilistic algorithmsour algorithm permits the variation of the number of gaps in the alignment without the necessity of expensive global moves .it is proved that for small number of sequences it reproduces the results of a complete algorithm .moreover , we show that for practical purposes the equilibration time is almost independent on the number of sequences aligned and in the worst case , it scale linearly with . finally , we study the space of solutions for different number of aligned sequences , and find a very rich structure that indicates the importance of just one sequence in the multiple alignment .of course the algorithm is still far from being competitive with other approaches like hmm and clustal .we are already working in implementing a similar work but using parallel tempering instead of simulated annealing .it is know that parallel tempering is more useful that s.a . when dealing with very hard problems , like spin glasses .moreover , it is very suitable to parallelization . also a direction of current work is the introduction of biological information relevant for the alignment .this may impose important constraints in the possible alignments , that may in turn strongly reduce the space of possible solutions . andlast , but not least , important programing optimization are necessary to make competitive this program from the computing time point of view .d. gunsfield , _ algorithms on strings , trees and sequences _ ( 1999 ) cambridge university press .t. j. p. hubbard , a. m. lensk and a. 
tramontano , nature structural biology * 4 * , 313 ( 1996 ) m.s .waterman , _ introduction to computational biology _ ( 200 ) , chapm & hall .p. pevzner , _ computational molecular biology _ ( 2000 ) mit press . h.carrillo and d. lipman , siam j. appl .math , * 48 * , 1073 , ( 1988 ) j. kim , s. pramanik and m. j. chung , cabios * 10 * 419 ( 1994 ) m. ishikawa , t. toya , m. hoshida , k. nitta , a. ogiwara and m. kanehisa , cabios * 9 * 267 , ( 1993 ) s. kirkpatrick , c.d .gelatt and m. p. vecchi , science * 220 * , 671 ( 1983 ) j. d. thompson , d. g. higgings and t. j. gibson , nucleid acids res . * 22 * , 4673 ( 1994 ) we are not aware of any statistical study proving this , but we believe that this statment is well accepted within the colleagues we contacted .r. durbin , s. r. eddy , a. krogh and g. mitchison , _ biological sequence analysis _( 1998 ) cambridge university press .t. hwa and michael lassig , phys .* 76 * , 2591 ( 1996 ) m. q. zhang and t. g. marr , j. theor. biol . * 174 * , 119 ( 1995 ) m. kschischo and m. l , preprint s. geman , s. geman , ieee transictions on pattern analysis and machine inteligence , * 6*,721 ( 1984 ) s.b . needleman and c. d. wunsch , j. mol48 * , 444 ( 1981 )
we proposed a probabilistic algorithm to solve the multiple sequence alignment problem. the algorithm is a simulated annealing (sa) scheme that exploits the representation of the multiple alignment between sequences as a directed polymer in a multidimensional lattice. within this representation we can easily track the evolution of the alignment in configuration space through local moves of low computational cost. at variance with other probabilistic algorithms proposed to solve this problem, our approach allows for the creation and deletion of gaps without extra computational cost. the algorithm was tested by aligning proteins from the kinase family. for a small number of sequences the results are consistent with those obtained using a complete algorithm, and for larger numbers of sequences, where the complete algorithm fails, we show that our algorithm still converges to reasonable alignments. moreover, we study the space of solutions obtained and show that, depending on the number of sequences aligned, the solutions are organized in different ways, suggesting a possible source of errors for progressive algorithms.
modulation level classification ( mlc ) is a process which detects the transmitter s digital modulation level from a received signal , using a priori knowledge of the modulation class and signal characteristics needed for downconversion and sampling . among many modulation classification methods ,a cumulant ( cm ) based classification is one of the most widespread for its ability to identify both the modulation class and level .however , differentiating among cumulants of the same modulation class , but with different levels , i.e. 16qam vs. 64qam , requires a large number of samples .a recently proposed method based on a goodness - of - fit ( gof ) test using kolmogorov - smirnov ( ks ) statistic has been suggested as an alternative to the cm - based level classification which require lower number of samples to achieve accurate mlc . in this letter, we propose a novel mlc method based on distribution distance functions , namely kuiper ( k ) ( * ? ? ?3.1 ) and ks distances , which is a significant simplification of methods based on gof .we show that using a classifier based only on k - distance achieves better classification than the ks - based gof classifier . at the same time, our method requires only additions in contrast to additions for the ks - based gof test , where is the number of distinct modulation levels , is the sample size and is the number of test points used by our method .following , we assume a sequence of discrete , complex , i.i.d . and sampled baseband symbols , ] , where , and .the task of the modulation classifier is to find , from which was drawn , given .without loss of generality , we consider unit power constellations and define snr as .the proposed method modifies mlc technique based on gof testing using the ks statistic .since the ks statistic , which computes the minimum distance between theoretical and empirical cumulative distribution function ( ecdf ) , requires _ all _ cdf points , we postulate that similarly accurate classification can be obtained by evaluating this distance using a smaller set of points in the cdf .let =f(\mathbf{r}) ] denote the set of test points , , sorted in ascending order . for notational consistency, we also define the following points , and . given that these points are distinct , they partition into regions. an individual sample , , can be in region , such that , with a given probability , determined by . assuming are independent of each other, we can conclude that given , the number of samples that fall into each of the regions , ] , and is the probability of an individual sample being in region . given that is drawn from , , for .now , with particular , the ecdf at all the test points is ,\quad f_n(t_l ) = { \frac{1}{n } } \sum\limits_{i=1}^l n_i.\ ] ] therefore , we can analytically find the probability of classification to each of the classes as for the rck classifier .a similar expression can be applied to rcks , replacing with in ( [ eq : probabilityofclassification ] ) .given that the theoretical cdfs change with snr , we store distinct cdfs for snr values for each modulation level ( impact of the selection of on the accuracy is discussed further in section [ sec : detection_samples ] . )further , we store theoretical cdfs of length each . for the non - reduced complexity classifiers that require sorting samples ,we use a sorting algorithm whose complexity is . from table [table : complexity ] , we see that for rck / rcks tests use less addition operations than k / ks - based methods and cm - based classification . 
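to make the reduced-complexity procedure concrete, the r sketch below evaluates the empirical cdf of the received-signal magnitude at a handful of test points and assigns the record to the modulation level whose reference cdf is closest in the kuiper (or ks) sense. it is only an illustration of the idea: the reference cdfs are estimated from large calibration samples rather than computed analytically, and the choice of test points is arbitrary.

....
# Sketch of the reduced-complexity (rcK / rcKS) classifier on the magnitude feature.
qam_symbols <- function(M) {                        # unit-power square M-QAM alphabet
  m <- sqrt(M); pam <- seq(-(m - 1), m - 1, by = 2)
  s <- as.vector(outer(pam, 1i * pam, "+"))
  s / sqrt(mean(Mod(s)^2))
}
rx_mag <- function(M, n, snr_db) {                  # magnitudes of noisy M-QAM samples
  s <- qam_symbols(M)[sample.int(M, n, replace = TRUE)]
  sigma <- sqrt(10^(-snr_db / 10) / 2)
  Mod(s + complex(real = rnorm(n, 0, sigma), imaginary = rnorm(n, 0, sigma)))
}
classify_rck <- function(r, refs, tpts, kuiper = TRUE) {
  Fn <- ecdf(r)(tpts)                               # empirical CDF at the test points
  d  <- sapply(refs, function(F0) {
    e <- Fn - F0(tpts)
    if (kuiper) max(e) + max(-e) else max(abs(e))   # Kuiper vs KS distance
  })
  names(refs)[which.min(d)]
}

# reference CDFs and test points (illustrative choices)
set.seed(7)
mod_levels <- c("16QAM" = 16, "64QAM" = 64)
snr_db <- 15
refs <- lapply(mod_levels, function(M) ecdf(rx_mag(M, 2e5, snr_db)))
tpts <- quantile(rx_mag(64, 2e5, snr_db), probs = seq(0.1, 0.9, by = 0.2))
classify_rck(rx_mag(64, 50, snr_db), refs, tpts)    # classify one N = 50 record
....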
for ,the rck method is more computationally efficient when implemented in asic / fpga , and is comparable to cm in complexity when implemented on a cpu .in addition , the processing time would be shorter for an asic / fpga implementation , which is an important requirement for cognitive radio applications .furthermore , their memory requirements are also smaller since has to be large for a smooth cdf .it is worth mentioning that the authors in used the theoretical cdf , but used as the number of samples to generate the cdf in their complexity figures .the same observation favoring the proposed rck / rcks methods holds for the magnitude - based ( mag ) classifiers ( * ? ? ?* sec iii - a ) ..number of operations and memory usage [ cols="<,<,<,<",options="header " , ] [ table : complexity ]as an example , we assume that the classification task is to distinguish between m - qam , where . for comparison we also present classification result based on maximum likelihood estimation ( ml ) .in the first set of experiments we evaluate the performance of the proposed classification method for different values of snr .the results are presented in fig .[ fig : vary_snr ] .we assume fixed sample size of , in contrast to ( * ? ? ?1 ) to evaluate classification accuracy for a smaller sample size . we confirm that even for small sample size , as shown in ( * ? ? ?1 ) , cm has unsatisfying classification accuracy at high snr . in( 10,17)db region rck clearly outperforms all detection techniques , while as snr exceeds all classification methods ( except cm ) converge to one . in low snr region , ( 0,10)db , ks , rcks , rck perform equally well , with cm having comparable performance .the same observation holds for larger sample sizes , not shown here due to space constraints .note that the analytical performance metric developed in section [ sec : analysis ] for rck and rcks matches perfectly with the simulations .for the remaining results , we set , unless otherwise stated .= 50 ; ( an . ) analytical result using ( [ eq : probabilityofclassification ] ) . ] in the second set of experiments , we evaluate the performance of the proposed classification method as a function of sample size .the result is presented in fig .[ fig : vary_n ] . as observed in fig .[ fig : vary_snr ] , also here cm has the worst classification accuracy , e.g. 5% below upper bound at .the rck method performs best at small sample sizes , . with ,the accuracy of rck and ks is equal .classification based on rcks method consistently falls slightly below rck and ks methods .in general , rcks , rck and ks converge to one at the same rate .db . ] in the third set of experiments we evaluate the performance of the proposed classification method as a function of snr mismatch and phase jitter .the result is presented in fig .[ fig : mismatch ] . in case of snr mismatch ,[ fig : vary_gamma ] , our results show the same trends as in ( * ? ? ? * fig .4 ) ; that is , all classification methods are relatively immune to snr mismatch , i.e. the difference between actual and maximum snr mismatch is less than 10% in the considered range of snr values .this justifies the selection of the limited set of snr values for complexity evaluation used in section [ sec : complexity ] .as expected , ml shows very high sensitivity to snr mismatch . noteagain the perfect match of analytical result presented in section [ sec : analysis ] with the simulations . 
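the accuracy-versus-snr behaviour can be mimicked with a small monte carlo loop around the sketch above (it reuses rx_mag, classify_rck and mod_levels defined there); sample size, number of trials and test points are again arbitrary illustrative settings.

....
# Small Monte Carlo estimate of classification accuracy vs SNR for the sketch
# classifier above (illustrative settings: N = 50 samples, 500 trials per point).
accuracy_vs_snr <- function(snrs, N = 50, trials = 500) {
  sapply(snrs, function(snr) {
    refs <- lapply(mod_levels, function(M) ecdf(rx_mag(M, 2e5, snr)))
    tpts <- quantile(rx_mag(64, 2e5, snr), probs = seq(0.1, 0.9, by = 0.2))
    hits <- replicate(trials, {
      truth <- sample(names(mod_levels), 1)
      classify_rck(rx_mag(mod_levels[[truth]], N, snr), refs, tpts) == truth
    })
    mean(hits)
  })
}
accuracy_vs_snr(c(5, 10, 15, 20))
....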
in the case of phase jitter caused by imperfect downconversion , we present results in fig .[ fig : vary_phase ] for as in , in contrast to used earlier , for comparison purposes .we observe that our method using the magnitude feature , rck / rcks ( mag ) , as well as the cm method , are invariant to phase jitter .rck and rcks perform almost equally well , while cm is worse than the other three methods by % .as expected , the ml performs better than all other methods .quadrature - based classifiers , as expected , are highly sensitive to phase jitter .note that in the small phase jitter , , quadrature - based classifiers perform better than others , since the sample size is twice as large as in the former case .in this letter we presented a novel , computationally efficient method for modulation level classification based on distribution distance functions .specifically , we proposed to use a metric based on kolmogorov - smirnov and kuiper distances which exploits the distance properties between cdfs corresponding to different modulation levels .the proposed method results in faster mlc than the cumulant - based method , by reducing the number of samples needed .it also results in lower computational complexity than the ks - gof method , by eliminating the need for a sorting operation and using only a limited set of test points , instead of the entire cdf .o. a. dobre , a. abdi , y. bar - ness , and w. su , `` survey of automatic modulation classification techniques : classical approaches and new trends , '' _ iet communications _ , vol . 1 , no . 2 , pp .137156 , apr .2007 .g. a. p. cirrone , s. donadio , s. guatelli , a. mantero , b. masciliano , s. parlati , m. g. pia , a. pfeiffer , a. ribon , and p. viarengo , `` a goodness of fit statistical toolkit , '' _ ieee trans ._ , vol .51 , no . 5 ,pp . 20562063 , oct .
we present a novel modulation level classification ( mlc ) method based on probability distribution distance functions . the proposed method uses modified kuiper and kolmogorov - smirnov distances to achieve low computational complexity and outperforms the state of the art methods based on cumulants and goodness - of - fit tests . we derive the theoretical performance of the proposed mlc method and verify it via simulations . the best classification accuracy , under awgn with snr mismatch and phase jitter , is achieved with the proposed mlc method using kuiper distances .
optimization is a challenging problem in economic analysis and risk management , which dates back to the seminal work of markowitz [ 1 ] .the main assumption is that the return of any financial asset is described by a random variable , whose expected mean and variance are assumed to be reliably estimated from historical data . the expected mean and varianceare interpreted as the reward , and respectively the risk of the investment .the portfolio optimization problem can be formulated as following : given a set of financial assets , characterized by their expected mean and their covariances , find the optimal weight of each asset , such that the overall portfolio provides the smallest risk for a given overall return [ 1 - 5 ] .therefore , the problem reduces to finding the `` efficient frontier '' , which is the set of all achievable portfolios that offer the highest rate of return for a given level of risk . using the quadratic optimization mathematical frameworkit can be shown that for each level of risk there is exactly one achievable portfolio offering the highest rate of return . here , we consider the standard risk - return portfolio optimization model , when both long buying and short selling of a relatively large number of assets is allowed .we derive the analytical expression of the efficient frontier for a portfolio of risky assets , and for the case when a risk - free asset is added to the model .also , we provide an r implementation for both cases , and we discuss in detail a numerical example of a portfolio of several risky common stocks .a portfolio is an investment made in assets , with the returns , , using some amount of wealth .let denote the amount invested in the -th asset .negative values of can be interpreted as short selling .since the total wealth is we have : it is convenient to describe the investments in terms of relative values such that : and to characterize the portfolio we consider the expected return : where is the expected return of each asset , .also , we use the covariance matrix of the portfolio : ,\ ] ] where in order to quantify the deviation from the expected return , and to capture the risk of the investment .the variance of the portfolio is then given by : where ^{t} ] and ^{t} ] , . 
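the quantities just defined translate directly into a few lines of r. the sketch below computes the portfolio expected return w'mu and risk sqrt(w' sigma w) from a matrix of returns, together with the textbook closed form of the global minimum-variance portfolio (weights summing to one, short selling allowed); the returns are simulated here, whereas the paper uses historical stock prices.

....
# Portfolio expected return, risk and the global minimum-variance portfolio
# (simulated returns; replace R with the matrix of historical asset returns).
set.seed(42)
n_assets <- 4; n_days <- 250
R  <- matrix(rnorm(n_days * n_assets, mean = 5e-4, sd = 0.01), n_days, n_assets)
mu <- colMeans(R)                       # expected (mean) returns
S  <- cov(R)                            # covariance matrix

portfolio_stats <- function(w, mu, S) {
  c(ret = sum(w * mu), risk = sqrt(drop(t(w) %*% S %*% w)))
}
portfolio_stats(rep(1 / n_assets, n_assets), mu, S)        # equally weighted portfolio

one   <- rep(1, n_assets)
w_mvp <- solve(S, one) / drop(t(one) %*% solve(S, one))    # minimum-variance weights
portfolio_stats(w_mvp, mu, S)
....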
for each eigen - portfolio ,the weight of a given asset is inversely proportional to its volatility .also , the eigen - portfolios are pairwise orthogonal , and therefore completely decorrelated , since : thus , any portfolio can be represented as a linear combination of the eigen - portfolios , since they are orthogonal and form a basis in the asset space .it is also important to emphasize that the first eigen - portfolio , corresponding to the largest eigenvalue , typically has positive weights , corresponding to long - only positions .this is a consequence of the classical perron - frobenius theorem , which states that a sufficient condition for the existence of a dominant eigen - portfolio with positive entries is that all the pairwise correlations are positive .one can always get a dominant eigen - portfolio with positive weights using a shrinkage estimate [ 8 - 9 ] , which is a convex combination of the covariance matrix and a shrinkage target matrix : where the shrinkage matrix is a diagonal matrix : in this case , there exists a such that the shrinkage estimator has a dominant eigen - portfolio ( dep ) with all weights positive .this portfolio is of interest since it provides a long - only investment solution , which may be desirable for investors who would like to avoid short positions and high risk .let us now assume that one can also invest in a risk - free asset .a risk - free asset is an asset with a low return , but with no risk at all , i.e. zero variance . the risk - free asset is also uncorrelated with the risky assets , such that for all risky assets .the investor can both lend and borrow at the risk - free rate .lending means a positive amount is invested in the risk - free asset , borrowing implies that a negative amount is invested in the risk - free asset . in this case , we consider the following quadratic optimization problem [ 1 - 3 ] : subject to : the lagrangian of the problem is given by : .\ ] ] the critical point of the lagrangian is the solution of the system of equations : from the first equation we have : therefore , the second equation becomes : and from here we obtain : where the weights of the risky assets are therefore given by : and the corresponding amount that is invested in the risk - free asset is : also , the standard deviation of the risky assets is : or equivalently : this is the efficient frontier when the risk - free asset is added , or the capital market line ( cml ) , and it is a straight line in the return - risk ( ) space .obviously , cml intersects the return axis for , at , which is the return when the whole capital is invested in the risk - free asset .the tangency point of intersection between the efficient frontier and the cml corresponds to the `` market portfolio '' .this is the portfolio on the cml where nothing is invested in the risk - free asset .if the investor goes on the left side of the market portfolio , then he invests a proportion in the risk - free asset .if he chooses the right side of the market portfolio , he borrows at the risk - free rate .the market portfolio can be easily calculated from the equality condition : the solution of the above equation provides the coordinates of the market portfolio : and the weights of the market portfolio are then given by : code for portfolio optimization was written in r , which is a free software environment for statistical computing and graphics [ 10 ] . 
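continuing the sketch above (it reuses mu, S, one and portfolio_stats), the tangency ("market") portfolio with a risk-free asset can be written down with the standard closed form: weights proportional to sigma^{-1}(mu - r_f 1), rescaled so they sum to one; r_f = 0.0003 is the daily risk-free return used in the example run script below.

....
# Tangency (market) portfolio and the slope of the capital market line,
# reusing mu, S, one and portfolio_stats from the previous sketch.
r_f    <- 3e-4                                  # daily risk-free return (as in the example run)
excess <- mu - r_f * one
w_mkt  <- solve(S, excess) / drop(t(one) %*% solve(S, excess))   # market portfolio weights
mkt    <- portfolio_stats(w_mkt, mu, S)
mkt
(mkt["ret"] - r_f) / mkt["risk"]                # slope of the CML (Sharpe ratio)
....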
to exemplify the above analytical results, we consider a portfolio of common stocks .the raw data can be downloaded from yahoo finance [ 11 ] , and contains historical prices of each stock .the list of stocks to be extracted is given in a text file , as a comma delimited list .the raw data corresponding to each stock is downloaded and saved in a local `` data '' directory , using the `` data.r '' script ( appendix a ) , which has one input argument : the file containing stock symbols .once the raw data is downloaded the correct daily closing prices for each stock are extracted , and saved in another file , which is the main data input for the optimization program .the extraction is performed using the `` price.r '' script ( appendix b ) , which has three input arguments : the file containing stock symbols included in the portfolio , the number of trading days used in the model , the output file of the stock prices .the pseudo - code for the case with _n _ risky assets is presented in algorithm 1 . also , the r script performing the optimization and visualization for the _n _ risky assets case is `` optimization1.r '' ( appendix c ) .the script has three input arguments : the name of the data file , the number of portfolios on the efficient frontier to be calculated , and the maximum return considered on the `` efficient frontier '' ( this should be several ( 5 - 10 ) times higher than the maximum return of the individual assets ) .the pseudo - code for the case with _n _ risky assets and a risk - free asset is presented in algorithm 2 , and the r script performing the optimization and visualization for the _n _ risky assets case is `` optimization2.r '' ( appendix d ) .the script has four input arguments : the name of the data file , the number of portfolios on the cml to be calculated , the daily return of the risk free asset , and the maximum return considered on the `` efficient frontier '' . in order to exectute the code , on unix / linux platforms one can simply run the following script : .... 
# file containing the list of stock symbols stocks="stocks.txt " # number of trading days t=250 # file containing the daily stock prices prices="portfolio.txt " # number of portfolios on the frontier n=100 # daily return for the risk free asset r=0.0003 # maximum daily return value considered rmax=0.01 ./data.r stocks prices ./optimization1.r n prices r vec ; for(j in 1:j ) v[,j ] < - v[,j]/(u%*%v[,j ] ) ; sv < - rv < - rep(0 , j ) ; for(j in 1:j ) { rv[j ] < - t(v[,j])%*%p ; if(rv[j ] < 0 ) { rv[j ] < - -rv[j ] ; v[,j ] < - -v[,j ] ; } sv[j ] < - sqrt(t(v[,j])%*%s%*%v[,j ] ) ; } return(list(s , r , ss , p , minp , tanp , wminp , wtanp , w , v , sv , rv ) ) ; } plot_results<- function(data , returns , results ) { dat < - log(data[[1 ] ] ) ; m< - nrow(dat ) ; j < - ncol(dat ) ; ymax = max(dat ) ; ymin = min(dat ) mycolors < - rainbow(j+1 ) ; s < - results[[1 ] ] ; r < - results[[2 ] ] ; ss < - results[[3 ] ] ; p < - results[[4 ] ] ; minp < - results[[5 ] ] ; tanp < - results[[6 ] ] ; wminp < - results[[7 ] ] ; wtanp < - results[[8 ] ] ; f < - t(results[[9 ] ] ) ; v < - results[[10 ] ] ; sv < - results[[11 ] ] ; rv < - results[[12 ] ] ; postscript(file="./results1/fig1.eps " , onefile = false , horizontal = false , height=10 , width=5 ) ; par(mfrow = c(2,1 ) ) ; i d < - c(1:nrow(dat ) ) ; plot(id , rev(dat[,1 ] ) , ylim = c(ymin , ymax ) , type="l " , col = mycolors[1 ] , xlab="day " , ylab="log(price ) " , main = " asset prices " ) ; if(j >1 ) { for(j in 2:j ) { lines(id , rev(dat[,j ] ) , type="l " , col = mycolors[j ] ) ; } } legend("topleft " , names(dat ) , cex=0.5 , pch = rep(15 , j ) , col = mycolors ) ; ret < - returns[[1 ] ] ; ymax = max(ret ) ; ymin = min(ret ) ; i d < - c(1:nrow(ret ) ) ; plot(id , rev(ret[,1 ] ) , ylim = c(ymin , ymax ) , type="l " , col = mycolors[1 ] , xlab="day " , ylab="returns " , main = " asset returns " ) ; if(j >1 ) { for(j in 2:j ) { lines(id , rev(ret[,j]),type="l",col = mycolors[j ] ) ; } } legend("topleft " , returns[[2 ] ] , cex=0.5 , pch = rep(15 , j ) , col = mycolors ) ; postscript(file="./results1/fig2.eps " , onefile = false , horizontal = false , height=10 , width=5 ) ; par(mfrow = c(2,1 ) ) ; plot(s , r , xlim = c(0,max(s ) ) , ylim = c(min(r , p ) , max(r , p ) ) , type="l " , col="blue " , xlab="risk " , ylab="return " , main = " efficient frontier , mvp1 , tgp " ) ; points(ss , p , pch=19 , col = mycolors ) ; text(ss , p , pos=4 , cex=0.5 , names(p ) ) ; points(sv[1 ] , rv[1 ] , pch=15 , col="black " ) ; text(sv[1 ] , rv[1 ] , pos=4 , cex=0.5 , " dep " ) ; points(minp[1 ] , minp[2 ] , pch=19 , col="black " ) ; text(minp[1 ] , minp[2 ] , pos=2 , cex=0.5 , " mvp1 " ) ; points(tanp[1 ] , tanp[2 ] , pch=19 , col="black " ) ; text(tanp[1 ] , tanp[2 ] , pos=2 , cex=0.5 , " tgp " ) ; lines(c(0,max(s ) ) , c(0,max(s)*tanp[2]/tanp[1 ] ) , lty=3 ) ; abline(h=0 , lty=2 ) ; abline(v=0 , lty=2 ) ; plot(s , f[,1 ] , xlim = c(0,max(s ) ) , ylim = c(min(f),max(f ) ) , col = mycolors[1 ] , type="l " , xlab="risk " , ylab="portfolio weights " , main = " efficient portfolio weights " ) ; if(j >1 ) { for(j in 2:j ) { lines(s , f[,j ] , type="l " , col = mycolors[j ] ) ; } } abline(h=0 , lty=2 ) ; abline(v = minp[1 ] , lty=3 ) ; abline(v = tanp[1 ] , lty=3 ) ; text(minp[1 ] , min(f ) , pos=4 , cex=0.5 , " mvp1 " ) ; text(tanp[1 ] , min(f ) , pos=4 , cex=0.5 , " tgp " ) ; legend("topleft " , names(p ) , cex=0.5 , pch = rep(15 , j ) , col = mycolors ) ; postscript(file="./results1/fig3.eps " , onefile = false , horizontal = false , height=10 , width=5 ) ; 
par(mfrow = c(2,1 ) ) ; barplot(wminp , main="minimum variance portfolio 1 ( mvp1 ) " , xlab="assets " , ylab="weights " , col = mycolors , beside = true ) ; abline(h=0 , lty=1 ) ; legend("topleft " , names(p ) , cex=0.5 , pch = rep(15 , j ) , col = mycolors ) ; barplot(wtanp , main="tangency portfolio ( tgp ) " , xlab="assets " , ylab="weights " , col = mycolors , beside = true ) ; abline(h=0 , lty=1 ) ; legend("topleft " , names(p ) , cex=0.5 , pch = rep(15 , j ) , col = mycolors ) ; barplot(v[,1 ] , main="dominant eigen - portfolio ( dep ) " , xlab="assets " , ylab="weights " , col = mycolors , beside = true ) ; abline(h=0 , lty=1 ) ; legend("topleft " , names(p ) , cex=0.5 , pch = rep(15 , j ) , col = mycolors ) ; } read_data < - function(args ) { data < - read.table(paste("./data/ " , args[1 ] , sep= " " ) , header = true , sep= " , " ) return(list(data , as.integer(args[2 ] ) , as.double(args[3 ] ) , as.double(args[4 ] ) ) ) ; } returns < - function(data ) { dat < - data[[1 ] ] ; n < - nrow(dat ) - 1 ; j < - ncol(dat ) ; ret < - ( dat[1:n,1 ] - dat[2:(n+1),1])/dat[2:(n+1),1 ] ; if(j > 1 ) { for(j in 2:j ) { ret < - cbind(ret , ( dat[1:n , j ] - dat[2:(n+1),j])/ dat[2:(n+1),j ] ) ; } } return(list(ret , names(data[[1 ] ] ) , data[[2 ] ] , data[[3 ] ] , data[[4 ] ] ) ) ; } foptimization < - function(returns ) { p < - colmeans(returns[[1 ] ] ) ; names(p ) < - returns[[2 ] ] ; j < - ncol(returns[[1 ] ] ) ; m < - returns[[3 ] ] ; r < - returns[[4 ] ] ; rmax < - returns[[5 ] ] ; s < - cov(returns[[1 ] ] ) ; q < - solve(s ) ; u < - rep(1,j ) ; a < - matrix(rep(0,4),nrow=2 ) ; a[1,1 ] < - u%*%q%*%u ; a[1,2 ] < - a[2,1 ] < - u%*%q%*%p ; a[2,2 ] < - p%*%q%*%p ; d < - a[1,1]*a[2,2 ] - a[1,2]*a[1,2 ] ; r< - seq(r , rmax , length = m ) ; s < - sqrt ( a[1,1]*((r - a[1,2]/a[1,1])^2)/d + 1/a[1,1 ] ) ; ss < - sqrt(diag(s ) ) ; cml < - c(sqrt(a[1,1]*r*r - 2*a[1,2]*r + a[2,2 ] ) , r ) ; z < - ( r - r)/cml[1 ] ; f < - q%*%(p - r*u)/(cml[1]*cml[1 ] ) ; wcml < - matrix(rep(0,j*m ) , nrow = j ) ; wf < - rep(0,m ) ; for(m in 1:m ) { wcml[,m ] < - ( r[m ] - r)*f ; wf[m ] < - 1 - wcml[,m]%*%u ; } wcml < - rbind(wcml , t(wf ) ) ; mp < - c(cml[1]/(a[1,2 ] - a[1,1]*r ) , ( a[2,2 ] - a[1,2]*r)/(a[1,2 ] - a[1,1]*r ) ) ; wmp < - q%*%(p - r*u)/(a[1,2 ] - a[1,1]*r ) ; minp < - c(sqrt(1/a[1,1 ] ) , cml[1]*sqrt(1/a[1,1 ] ) + r ) ; wminp < - ( minp[2 ] - r)*f ; wfminp < - 1- t(wminp)%*%u ; wminp< - rbind(wminp , wfminp ) ; return(list(s , z , r , ss , p , cml , wcml , mp , wmp , minp , wminp ) ) ; } plot_results<- function(data , returns , results ) { dat < - log(data[[1 ] ] ) ; m < - nrow(dat ) ; j < - ncol(dat ) ; ymax = max(dat ) ; ymin = min(dat ) ; mycolors < - rainbow(j ) ; s < - results[[1 ] ] ; z < - results[[2 ] ] ; r < - results[[3 ] ] ; ss < - results[[4 ] ] ; p < - results[[5 ] ] ; cml < - results[[6 ] ] ; mp < - results[[8 ] ] ; wmp < - results[[9 ] ] ; minp < - results[[10 ] ] ; wminp < - results[[11 ] ] ; postscript(file="./results2/fig1.eps " , onefile = false , horizontal = false , height=10 , width=5 ) ; par(mfrow = c(2,1 ) ) ; i d < - c(1:nrow(dat ) ) ; plot(id , rev(dat[,1 ] ) , ylim = c(ymin , ymax ) , type="l " , col = mycolors[1 ] , xlab="day " , ylab="log(price ) " , main = " asset prices " ) ; if(j >1 ) { for(j in 2:j ) { lines(id , rev(dat[,j]),type="l",col = mycolors[j ] ) ; } } legend("topleft " , names(dat ) , cex=0.5 , pch = rep(15 , j ) , col = mycolors ) ; ret < - returns[[1 ] ] ; ymax = max(ret ) ; ymin = min(ret ) ; i d < - c(1:nrow(ret ) ) ; plot(id , rev(ret[,1]),ylim = c(ymin , 
ymax),type="l " , col = mycolors[1 ] , xlab="day " , ylab="returns " , main = " asset returns " ) ; if(j >1 ) { for(j in 2:j ) { lines(id , rev(ret[,j]),type="l",col = mycolors[j ] ) } } legend("topleft " , returns[[2 ] ] , cex=0.5 , pch = rep(15 , j ) , col = mycolors ) ; postscript(file="./results2/fig2.eps " , onefile = false , horizontal = false , height=10 , width=5 ) ; par(mfrow = c(2,1 ) ) ; mycolors < - rainbow(length(p)+1 ) ; plot(s , r , xlim = c(0 , max(s)),ylim = c(min(r , p),max(r , p ) ) , type="l " , col="blue " , xlab="risk " , ylab="return " , main = " capital market line , mvp2 , mp " ) ; points(ss , p , pch=19 , col = mycolors ) ; text(ss , p , pos=4 , cex=0.5 , names(p ) ) ; points(mp[1 ] , mp[2 ] , pch=19 , col="black " ) ; points(mp[1 ] , mp[2 ] , pch=19 , col="black " ) ; text(mp[1 ] , mp[2 ] , pos=2 , cex=0.6 , " mp " ) ; points(minp[1 ] , minp[2 ] , pch=19 , col="black " ) ; text(minp[1 ] , minp[2 ] , pos=2 , cex=0.6 , " mvp2 " ) ; text(0 , cml[2 ] , pos=4 , cex=0.5 , " rfa " ) ; lines(c(0 , max(s ) ) , c(cml[2 ] , max(s)*cml[1 ] + cml[2 ] ) , lty=3 ) ; abline(h=0 , lty=2 ) ; abline(v=0 , lty=2 ) ; f < - t(results[[7 ] ] ) ; mycolors < - rainbow(j+1 ) ; plot(z , f[,1 ] , xlim = c(0,max(z ) ) , ylim = c(min(f),max(f ) ) , type="l " , col = mycolors[1 ] , xlab="risk " , ylab="portfolio weights " , main="cml portfolio weights " ) ; if(j >1 ) { for(j in 2:j+1 ) { lines(z , f[,j],type="l",col = mycolors[j ] ) ; } } abline(h=0 , lty=2 ) ; abline(v = mp[1 ] , lty=3 ) ; text(mp[1 ] , min(f ) , pos=4 , cex=0.5 , " mp " ) ; abline(v = minp[1 ] , lty=3 ) ; text(minp[1 ] , min(f ) , pos=4 , cex=0.5 , " mvp2 " ) ; legend("topleft " , c(names(p ) , " rfa " ) , cex=0.5 , pch = rep(15 , j+1 ) , col = mycolors ) ; postscript(file="./results2/fig3.eps " , onefile = false , horizontal = false , height=10 , width=5 ) ; par(mfrow = c(2,1 ) ) ; barplot(wminp , main="minimum variance portfolio 2 " , xlab="assets " , ylab="weights " , col = mycolors , beside = true ) ; abline(h=0 , lty=1 ) ; legend("topleft " , c(names(p),"rfa " ) , cex=0.5 , pch = rep(15 , j+1 ) , col = mycolors ) ; barplot(wmp , main="market portfolio " , xlab="assets " , ylab="weights " , col = mycolors , beside = true ) ; abline(h=0 , lty=1 ) ; legend("topleft " , names(p ) , cex=0.5 , pch = rep(15 , j ) , col = mycolors ) ; } s. k. mishra , g. panda , b. majhi , r. majhi , `` improved portfolio optimization combining multiobjective evolutionary computing algorithm and prediction strategy , '' _ proceedings of the world congress on engineering 2012 vol i , wce 2012 , july 4 - 6 , 2012 , london , u.k._.
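A usage note on the listings above: the two scripts differ only in whether the risk-free asset enters the optimization, so a typical session downloads the data once and then runs either script on the same price file (as in the shell fragment shown earlier); the plotted outputs land in the `results1` and `results2` directories referenced by the `postscript` calls.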
we consider the problem of finding the efficient frontier associated with the risk - return portfolio optimization model . we derive the analytical expression of the efficient frontier for a portfolio of risky assets , and for the case when a risk - free asset is added to the model . also , we provide an r implementation , and we discuss in detail a numerical example of a portfolio of several risky common stocks .

keywords : portfolio optimization , efficient frontier , r .
the study of changes in the natural world , dynamics , is divided among several distinct disciplines .thermodynamics , for example , considers changes between special states , the so - called states of equilibrium , and addresses the question of which final states can be reached from any given initial state .mechanics studies the changes we call motion , chemistry deals with chemical reactions , quantum mechanics with transitions between quantum states , and the list goes on . in all of these examples we want to predict or explain the observed changes on the basis of information that is codified in a variety of ways into what we call the states . in some casesthe final state can be predicted with certainty , in others the information available is incomplete and we can , at best , only assign probabilities .the theory of thermodynamics holds a very special place among all these forms of dynamics . with the development of statistical mechanics by maxwell , boltzmann , gibbs and others , and eventually culminating in the work of jaynes , thermodynamics became the first clear example of a fundamental physical theory that could be derived from general principles of probable inference .the entire theory follows from a clear idea of the subject matter , that is , an appropriate choice of which states one is talking about , plus well - known principles of inference , namely , consistency , objectivity , universality and honesty .these principles are sufficiently constraining that they lead to a unique set of rules for processing information : these are the rules of probability theory and the method of maximum entropy . there are strong indications that a second example of a dynamics that can be deduced from principles of inference is afforded by quantum mechanics .many features of the theory , traditionally considered as postulates , follow from the correct identification of the subject matter plus general principles of inference . briefly, the goal of quantum mechanics is not to predict the behavior of microscopic particles , but rather to predict the outcomes of experiments performed with certain idealized setups .thus , the subject of quantum theory is not just the particles , but rather the experimental setups .the variables that encode the information relevant for prediction are the amplitudes or wave functions assigned to the setups .these ingredients plus a requirement of consistency ( namely , that if there are two ways to compute an amplitude , the two results should agree ) supplemented by entropic arguments are sufficient to derive most of the standard formalism including hilbert spaces , a time evolution that is linear and unitary , and the born probability rule .if quantum mechanics , deemed by many to be _ the _ fundamental theory , can be derived in this way , then it is possible , perhaps even likely , that other forms of dynamics might ultimately reflect laws of inference rather than laws of nature .should this turn out to be the case , then the fundamental equations of change , or motion , or evolution as the case might be , would follow from probabilistic and entropic arguments and the discovery of new dynamical laws would be reduced to the discovery of what is the necessary information for carrying out correct inferences . unfortunately , this search for the right variables has always been and remains to this day the major stumbling block in the understanding of new phenomena . 
the purpose of this paper is to explore this possible connection between the fundamental laws of physics and the theory of probable inference : can dynamics be derived from inference ? rather than starting with a known dynamical theory and attempting to derive it , i proceed in the opposite direction and ask : what sort of dynamics can one derive from well - established rules of inference ? in section 2i establish the notation , define the space of states , and briefly review how the introduction of a natural quantitative measure of the change involved in going from one state to another turns the space of states into a metric space .( such metric structures have been found useful in statistical inference , where the subject is known as information geometry , and in physics , to study both equilibrium and nonequilibrium thermodynamics . ) typically , once the kinematics appropriate to a certain motion has been selected , one proceeds to define the dynamics by additional postulates .this is precisely the option i want to avoid : in the dynamics developed here there are no such postulates .the equations of motion follow from an assumption about what information is relevant and sufficient to predict the motion . in a previous paper tackled a similar problem .there i answered the question : q1 : : : given the initial state and that the system evolves to other states , what trajectory is the system expected to follow ?this question implicitly assumes that there is a trajectory and that information about the initial state is sufficient to determine it .the dynamical law follows from the application of a principle of inference , the method of maximum entropy ( me ) , to the only information available , the initial state and the recognition that motion occurred .nothing else .the resulting ` entropic ' dynamics is very simple : the system moves continuously and _ irreversibly _ along the entropy gradient .thus , the honest , correct answer to the inference problem posed by question q1 has been given , but the equally important question ` will the system in fact follow the expected trajectory ? ' remained unanswered . whether the actual trajectory isthe expected one depends on whether the information encoded in the initial state happened to be sufficient for prediction . indeed , for many systems , including those for which the dynamics is _ reversible _ , more information is needed . in section 3we answer the question : q2 : : : given the initial and the final states , what trajectory is the system expected to follow ?again , the question implicitly assumes that there is a trajectory , that in moving from one state to another the system will pass through a continuous set of intermediate states . andagain , the equation of motion is obtained from a principle of inference , the principle of maximum entropy , and not from a principle of physics .( for a brief account of the me method in a form that is convenient for our current purpose see . 
)the resulting ` entropic ' dynamics also turns out to be simple : the system moves along a geodesic in the space of states .this is simple but not trivial : the geometry of the space of states is curved and possibly quite complicated .important features of this entropic dynamics are explored in section 4 .we show that there are some remarkable formal similarities with the theory of general relativity ( gr ) .for example , just as in gr there is no reference to an external physical time .the only clock available is provided by the system itself .it turns out that there is a natural choice for an ` intrinsic ' or ` proper ' time .it is a derived , statistical time defined and measured by the change itself .intrinsic time is quantified change .this entropic dynamics can be derived from a jacobi - type principle of least action , which we explore both in lagrangian and hamiltonian form . just as in gr there is invariance under arbitrary reparametrizations a form of general covariance and the entropic dynamics is an example of what is called a constrained dynamics .in this section we briefly review how to quantify the notion of change ( for more details see ) .the idea is simple : since the larger the change involved in going from one state to another , the easier it is to distinguish between them , we claim that change can be measured by distinguishability .next , using the me method one assigns a probability distribution to each state .this transforms the problem of distinguishing between two states into the problem of distinguishing between the corresponding probability distributions .the solution is well - known : the extent to which one distribution can be distinguished from another is given by the distance between them as measured by the fisher - rao information metric .thus , change is measured by distinguishability which is measured by distance .let the microstates of a physical system be labelled by , and let be the number of microstates in the range .we assume that a state of the system ( _ i.e. _ , a macrostate ) is defined by the expected values of some appropriately chosen variables ( ) , a crucial assumption is that the selected variables codify all the information relevant to answering the particular questions in which we happen to be interested .this is a point that we have made before but must be emphasized again : there is no systematic procedure to choose the right variables . at present the selection of relevant variablesis made on the basis of intuition guided by experiment ; it is essentially a matter of trial and error .the variables should include those that can be controlled or observed experimentally , but there are cases where others must also be included .the success of equilibrium thermodynamics , for example , derives from the fact that a few variables are sufficient to describe a static situation , and being few , these variables are easy to identify . in fluid dynamics , on the other hand ,the selection is more dificult .one must include many more variables , such as the local densities of particles , momentum , and energy , that are neither controlled nor usually observed .the states form an -dimensional manifold with coordinates given by the numerical values . to each statewe can associate a probability distribution .the distribution that best reflects the prior information contained in updated by the information is obtained by maximizing the entropy =-\int \,dx\,p(x)\log \frac{p(x)}{m(x)}. 
\label{s[p]}\ ] ] subject to the constraints ( [ aalpha ] ) .the result is where the partition function and the lagrange multipliers are given by next , we argue that the change involved in going from state to the state can be measured by the extent to which the two distributions can be distinguished . as discussed in , except for an overall multiplicative constant , the measure of distinguishability we seek is given by the ` distance ' between and , where is the fisher - rao metric .it turns out that this metric is unique : it is the only riemannian metric that adequately reflects the fact that the states are not ` structureless points ' , but happen to be probability distributions . to summarize : the very act of assigning a probability distribution to each state , automatically provides the space of states with a metric structure .given the initial and the final states , what trajectory is the system expected to follow ?the key to answering this question lies in the implicit assumption that there exists a trajectory or , in other words , that large changes are the result of a continuous succession of very many small changes .thus , the difficult problem of studying large changes is reduced to the much simpler problem of studying small changes .let us therefore focus on small changes and assume that the change in going from the initial state to the final state is small enough that the distance between them is given by to find which states are expected to lie on the trajectory between and we reason as follows . in going from the initial to the final state the system must pass through a halfway point , that is , a state that is equidistant from and ( see fig.1a ) .the question is which halfway state should we choose ?an answer to this question would clearly determine the trajectory : first find the halfway point , and use it to determine ` quarter of the way ' points , and so on .next we notice that there is nothing special about halfway states .we could equally well have argued that in going from the initial to the final state the system must first traverse a third of the way , that is , it must pass through a state that is twice as distant from as it is from .in general , we can assert that the system must pass through intermediate states such that , having already moved a distance away from the initial , there remains a distance to be covered to reach the final .halfway states have , ` third of the way ' states have , and so on ( see fig.1b ) .it appears that each different value of provides a different criterion to select the trajectory . if there is a trajectory and there are several ways to determine it , consistency demands that all these ways should agree : in the end we must verify that the selected trajectory is independent of or else we have a problem .our basic dynamical question q2 can be rephrased as follows : q2 : : the system is initially in state and we are given the new information that the system has moved to one of the neighboring states in the family .which do we select ? phrased in this way it is clear that this is precisely the kind of problem to be tackled using the me method . recall : the me method is a method for processing information and change our minds .it allows us to go from an old set of beliefs , described by the prior probability distribution , to a new set of beliefs , described by the posterior distribution , when the information available is just a specification of the family of distributions from which the posterior must be selected . 
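The stripped equations referred to above have the following standard form; the notation (constraints f^alpha, multipliers lambda_alpha, coordinates A^alpha) is the usual convention for this construction and is not necessarily the author's.

```latex
% Canonical distribution obtained by maximizing S[p] subject to
% \langle f^{\alpha}\rangle = A^{\alpha}:
p(x|A) \;=\; \frac{m(x)}{Z(\lambda)}\,e^{-\lambda_{\alpha}f^{\alpha}(x)},
\qquad
Z(\lambda)=\int dx\, m(x)\,e^{-\lambda_{\alpha}f^{\alpha}(x)},
\qquad
A^{\alpha}=-\frac{\partial\log Z}{\partial\lambda_{\alpha}} .

% Fisher--Rao information metric measuring the distance between the
% neighbouring states A and A + dA:
d\ell^{2}=g_{\alpha\beta}\,dA^{\alpha}dA^{\beta},
\qquad
g_{\alpha\beta}=\int dx\; p(x|A)\,
\frac{\partial\log p(x|A)}{\partial A^{\alpha}}\,
\frac{\partial\log p(x|A)}{\partial A^{\beta}} .
```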
in the more traditional applications of the method this family of posteriorsis constrained or defined by the known expected values of some relevant variables , but this is not necessary , the constraints need not be linear functionals . herethe constraints are defined geometrically . an important question that should arise whenever one contemplates using the me method is which entropy should one maximize .since we want to select a distribution the entropies to be considered must be of the form =-\int \,dx\,p(x|a)\log \frac{p(x|a)}{q(x)}.\ ] ] this is the entropy of relative to the _ prior _ .the interpretation of as the prior follows from the logic behind the me method itself .recall : in the absence of new information there is no reason to change one s mind . when there are no constraints the selected posterior distribution should coincide with the prior distribution .since the distribution that maximizes ] subject to no constraints yields the posterior .the correct choice is .now we are ready to tackle the question q2 : the answer is obtained by maximizing the entropy =-\int \,dx\,p(x|a)\log \frac{p(x|a)}{p(x|a_i)},\ ] ] subject to the constraint .this presents no problems .it is convenient to write and so that ] under variations of subject to the constraint introduce a lagrange multiplier , we get the multiplier , or equivalently the quantity , is determined substituting back into the constraint ( [ w constr ] ) . from eq.([distances ] ) we get and , and therefore thus , the intermediate state selected by the maximum entropy method is such that the geometrical interpretation is obvious : the triangle defined by the points , , and ( fig.1 ) degenerates into a straight line .this is sufficient to determine a short segment of the trajectory : all intermediate states lie on the straight line between and .the generalization beyond short trajectories is immediate : if any three nearby points along a curve lie on a straight line the curve is a geodesic .note that this result is independent of the value the potential consistency problem we had identified earlier does not arise . to summarize , the answer to our question q2 is simple and elegant : ed : : the expected trajectory is the geodesic that passes through the given initial and final states .this is the main result of this paper .as promised , in entropic dynamics the motion is predicted on the basis of a ` principle of inference ' , the principle of maximum entropy , and not from a ` principle of physics ' .the dynamics ed was derived in an unusual way and one should expect some unusual features .indeed , they become evident as soon as one asks any question involving time . for example, ed determines the vector tangent to the trajectory , but not the actual ` velocity ' .the reason is not hard to find : nowhere in question q2 nor in any implicit background information is there any reference to an external time .additional information is required if one is to find a relation between the distance along the trajectory and the external time . in conventional forms of dynamicsthis information is implicitly supplied by a ` principle of physics ' , by a hamiltonian which fixes the evolution of a system relative to external clocks . 
butq2 makes no mention of any external universe ; the only clock available is the system itself , and our problem becomes one of deciding how this clock should be read .we could , for example , choose one of the variables , say , as our clock variable and arbitrarily call it intrinsic time .ultimately it is best _ to define intrinsic time so that motion looks simple_. a very natural definition consists in stipulating that the system moves with unit velocity , then the intrinsic time is given by the distance itself , . _intrinsic time is quantified change_. a peculiar consequence of this definition is that intervals between events along the trajectory are not a priori known , they are determined only after the equations of motion are solved and the actual trajectory is determined .this reminds us of the theory of general relativity ( gr ) . an important feature of gris the absence of references to an external time .given initial and final states , in this case the initial and final three - dimensional geometries of space , the proper time interval along any curve between them is only determined after solving the einstein equations of motion .the absence of an external time has been a serious impediment in understanding the classical theory because it is not clear which variables represent the true gravitational degrees of freedom and also in formulating a quantum theory because of difficulties in defining equal - time commutators . in the following section we rewrite the entropic dynamics in lagrangian and hamiltonian forms and we point out some further formal similarities between ed and gr .the entropic dynamics can be derived from an ` action ' principle .since the trajectory is a geodesic , the ` action ' is the length itself , =\int_{\eta _ i}^{\eta _ f}d\eta \,l(a,\dot{a } ) , \label{j[a]}\ ] ] where is an arbitrary parameter along the trajectory , the lagrangian is the action ] with respect to , , and yields the equations of motion , and eq.([h constraint ] ) .naturally there is no equation of motion for and it must be determined from the constraint .we obtain , which , using the supplementary condition eq.([suppl cond ] ) , implies the analogue of in gr is called the lapse function , it gives the increase of ` intrinsic ' time per unit increase of the unphysical parameter . in terms of the equations of motion become in reparametrization invariant or generally covariant theories there is no canonical hamiltonian ( it vanishes identically ) but there are constraints .it is the constraints that play the role of generators of evolution , of change .accordingly , the analogue of eq.([h constraint ] ) in gr is called the hamiltonian constraint .i have provided an answer to the question ` given the initial and final states , what is the trajectory followed by the system ? 
'the answer follows from established principles of inference without invoking additional ` physical ' postulates .the entropic dynamics thus derived turns out to be formally similar to other generally covariant theories : the dynamics is reversible ; the trajectories are geodesics ; the system supplies its own notion of an intrinsic time ; the motion can be derived from a variational principle that turns out to be of the form of jacobi s action principle rather than the more familiar principle of hamilton ; and the canonical hamiltonian formulation is an example of a dynamics driven by constraints .to conclude one should point out that a reasonable physical theory must satisfy two key requirements : the first is that it must provide us with a set of mathematical models , the second is that the theory must identify real physical systems to which the models might possibly apply .the entropic dynamics we propose in this paper satisfies the first requirement , but so far it fails with respect to the second ; it may be a reasonable theory but it is not yet ` physical ' .there are formal similarities with the general theory of relativity and one wonders : is this a coincidence ? whether gr will in the end turn out to be an example of ed is at this point no more than a speculation .a more definite answer hinges on the still unsettled problem of identifying those variables that describe the true degrees of freedom of the gravitational field .j. e. shore and r. w. johnson , ieee trans .theory * it-26 * , 26 ( 1980 ) ; y. tikochinsky , n. z. tishby and r. d. levine , phys . rev . lett .* 52 * , 1357 ( 1984 ) and phys. rev . * a30 * , 2638 ( 1984 ) ; j. skilling , ` the axioms of maximum entropy ' , in _ maximum - entropy and bayesian methods in science and engineering _ , ed . by g. j. erickson and c. r. smith ( kluwer , dordrecht , 1988 ) .a. caticha , phys . lett . *a244 * , 13 ( 1998 ) ; phys . rev . *a57 * , 1572 ( 1998 ) ; ` probability and entropy in quantum theory ' , in _ maximum entropy and bayesian methods _ , ed . by w. von der linden et al .( kluwer , dordrecht , 1999 ) ; found .phys . * 30 * , 227 ( 2000 ) .a. caticha , ` change , time and information geometry ' , in _ bayesian methods and maximum entropy in science and engineering _ , ed . by a. mohammad - djafari , aip conf. proc . * 568 * , 72 ( 2001 ) ( online at arxiv.org/abs/math-ph/0008018 ) .s. amari , _ differential - geometrical methods in statistics _( springer - verlag , 1985 ) ; c. c. rodrguez , ` the metrics generated by the kullback number ' , in _ maximum entropy and bayesian methods _ , ed . by j. skilling ( kluwer , dordrecht , 1989 ) ; ` objective bayesianism and geometry ' , in _ maximum entropy and bayesian methods _ , ed . by p. f. fougre ( kluwer , dordrecht , 1990 ) ; and ` bayesianrobustness : a new look from geometry ' , in _ maximum entropy and bayesian methods _ , ed .by g. r. heidbreder ( kluwer , dordrecht , 1996 ) .f. weinhold , j. chem .* 63 * , 2479 ( 1975 ) ; g. ruppeiner , phys . rev .a * 20 * , 1608 ( 1979 ) and * 27 * , 1116 ( 1983 ) ; l. disi and b. lukcs .a * 31 * , 3415 ( 1985 ) and phys112a * , 13 ( 1985 ) ; r. s. ingarden , tensor , n. s. * 30 * , 201 ( 1976 ) .a. caticha , ` maximum entropy , fluctuations and priors ' , in _ bayesian methods and maximum entropy in science and engineering _ , ed . by a. 
mohammad - djafari , aip conf .* 568 * , 94 ( 2001 ) ( online at arxiv.org/abs/math-ph/0008017 ) .the terms ` prior ' and ` posterior ' are normally used to refer to the distributions appearing in bayes theorem ; we retain the same terminology when using the me method because both bayes and me are concerned with the similar goal of processing new information to upgrade from a prior to a posterior .the difference lies in the nature of the information involved : for bayes theorem the information is in the form of data , for the me method it is a constraint on the family of allowed posteriors .in many important applications of the me method the prior happens to be uniform ( _ e.g. _ , the ` postulate ' of equal a priori probabilities in statistical mechanics ) and it is easy to overlook its presence and the important role it plays . j. w. york , ` kinematics and dynamics of general relativity ' , in _ sources of gravitational radiation _, ed . by l. smarr ( cambridge univ. press , 1979 ) ; a. ashtekar , ` quantum mechanics of geometry ' online as gr - qc/9901023 ; c. rovelli , ` strings , loops and the others : a critical survey on the present approaches to quantum gravity ' , in _ gravitation and relativity : at the turn of the millenium _ , ed . by n. dadhich and j. narlikar ( poona univ .press , 1998 ) ; j. barbour , class .* 11 * , 2853 and 2875 ( 1994 ) .k. kuchar , ` time and interpretations of quantum gravity ' , in _ proceedings of the 4thcanadian conference on general relativity and astrophysics _ , ed . by g. kunsatter , d. vincent , and j. williams ( world scientific , singapore , 1992 ) ; c. isham ` canonical quantum gravity and the problem of time ' , in _ integrable systems , quantum groups and quantum field theories _ , ed . by l. a. ibort and m. a. rodriguez( world scientific , singapore , 1993 ) .
i explore the possibility that the laws of physics might be laws of inference rather than laws of nature . what sort of dynamics can one derive from well - established rules of inference ? specifically , i ask : given relevant information codified in the initial and the final states , what trajectory is the system expected to follow ? the answer follows from a principle of inference , the principle of maximum entropy , and not from a principle of physics . the entropic dynamics derived this way exhibits some remarkable formal similarities with other generally covariant theories such as general relativity .
in this paper we scratch an intelligent variable neighbourhood search ( int - vns ) aimed to achieve further improvements of a successful vns for the minimum labelling spanning tree ( mlst ) and the -labelled spanning forest ( ) problems .this approach integrates the basic vns with other complementary intelligence tools and has been shown a promising strategy in for the mlst problem and in for the problem .the approach could be easily adapted to other optimization problems where the space solution consists of the subsets of a reference set ; like the feature subset selection or some location problems .first we introduced a local search mechanism that is inserted at top of the basic vns to get the complementary variable neighbourhood search ( co - vns ) .then we insert a probability - based constructive method and a reactive setting of the size of shaking process .a labelled graph consists of an undirected graph where is its set of nodes and is the set of edges that are labelled on the set of labels . in this paperwe consider two problems defined on a labelled graph : the mlst and the problems .the mlst problem consists on , given a labelled input graph , to get the spanning tree with the minimum number of labels ; i.e. , to find the labelled spanning tree of the input graph that minimizes the size of label set .the problem is defined as follows .given a labelled input graph and an integer positive value , to find a labelled spanning forest of the input graph having the minimum number of connected components with the upper bound for the number of labels to use , i.e. , the labelled subgraph may contain cycles , but they can arbitrarily break each of them by eliminating edges in polynomial time until a forest or a tree is obtained . therefore in both problems , the matter is to find the optimal set of labels . since a mlst solution would be a solution also to the problem if the obtained solution tree would not violate the limit on the used number of labels , it is easily deductable that the two problems are deeply correlated .the np - hardness of the mlst and problems was stated in and in respectively .therefore any practical solution approach to both problems requires heuristics .the first extension of the vns metaheuristic that we introduced for these problems is a local search mechanism that is inserted at top of the basic vns .the resulting local search method is referred to as _ complementary variable neighbourhood search _( co - vns ) .given a labelled graph with vertices , edges , and labels , co - vns replaces iteratively each incumbent solution with another solution selected from the _ complementary space _ of defined as the sets of labels that are not contained in ; .the iterative process of extraction of a complementary solution helps to escape the algorithm from possible traps in local minima , since the complementary solution lies in a very different zone of the search space with respect to the incumbent solution .this process yields an immediate peak of diversification of the whole local search procedure . to get a complementary solution , co- vns uses a greedy heuristic as constructive method in the complementary space of the current solution . 
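A sketch of the greedy constructive step applied in the complementary space is given below; for the MLST problem this is the MVCA heuristic named in the next passage, which repeatedly adds the label whose inclusion most reduces the number of connected components of the induced subgraph. The graph representation (an edge list with one label per edge) and all function names are ours, not taken from the paper.

```r
# Number of connected components of the subgraph induced by a set of labels,
# computed with a simple union-find over the vertices.
components_for_labels <- function(edges, n_vertices, labels) {
  parent <- seq_len(n_vertices)
  find <- function(i) { while (parent[i] != i) i <- parent[i]; i }
  for (k in seq_len(nrow(edges))) {
    if (edges$label[k] %in% labels) {
      a <- find(edges$from[k]); b <- find(edges$to[k])
      if (a != b) parent[a] <- b
    }
  }
  length(unique(sapply(seq_len(n_vertices), find)))
}

# MVCA-style greedy: add labels until the induced subgraph is connected.
mvca <- function(edges, n_vertices, all_labels) {
  selected <- c()
  while (components_for_labels(edges, n_vertices, selected) > 1) {
    candidates <- setdiff(all_labels, selected)
    comps <- sapply(candidates, function(l)
      components_for_labels(edges, n_vertices, c(selected, l)))
    selected <- c(selected, candidates[which.min(comps)])
  }
  selected
}

# Toy labelled graph: 4 vertices, labels in {1, 2, 3}.
edges <- data.frame(from = c(1, 2, 3, 1, 2), to = c(2, 3, 4, 3, 4),
                    label = c(1, 1, 2, 3, 3))
mvca(edges, n_vertices = 4, all_labels = 1:3)
# inside co-vns the same greedy would be run on setdiff(all_labels, current_solution)
```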
for the mlst and problemsthe greedy heuristic is the maximum vertex covering algorithm ( mvca ) applied to the subgraph of with labels in .note that co - vns stops if either the set of unused labels contained in the complementary space is empty ( ) or a final feasible solution is produced .successively , the basic vns is applied in order to improve the resulting solution . at the starting point of vns, it is required to define a suitable series of neighbourhood structures of size . in order to impose a neighbourhood structure on the solution space use the hamming distance between two solutions given by where consists of labels that are in one of the solutions but not in the other .vns starts from an initial solution with increasing iteratively from 1 up to the maximum neighborhood size , .the basic idea of vns to change the neighbourhood structure when the search is trapped at a local minimum , is implemented by the shaking phase .it consists of the random selection of another point in the neighbourhood of the current solution .given , we consider its neighbourhood comprised by sets having a hamming distance from equal to labels , where . in order to construct the neighbourhood of a solution , the algorithm proceeds with the deletion of labels from .the proposed intelligent metaheuristic ( int - vns ) is built from co - vns , with the insertion of a probability - based local search as constructive method to get the complementary space solutions .in particular , this local search is a modification of greedy heuristic , obtained by introducing a probabilistic choice on the next label to be added into incomplete solutions . by allowing worse components to be added to incomplete solutions ,this probabilistic constructive heuristic produces a further increase on the diversification of the optimization process .the construction criterion is as follows .the procedure starts from an initial solution and iteratively selects at random a candidate move .if this move leads to a solution having a better objective function value than the current solution , then this move is accepted unconditionally ; otherwise the move is accepted with a probability that depends on the deterioration , , of the objective function value .this construction criterion takes inspiration from simulated annealing ( sa ) .however , the probabilistic local search works with partial solutions which are iteratively extended with additional components until complete solutions emerge . in the probabilistic local search , the acceptance probability of a worse component into a partial solutionis evaluated according to the usual sa criterion by the boltzmann function , where the temperature parameter controls the dynamics of the search .initially the value of is large , so allowing many worse moves to be accepted , and is gradually reduced by the following geometric cooling law : , where and q_{max } \leftarrow min(q_{max}+1 ; 2 \cdot algorithm . in each case , the adaptive setting of is bounded to lie in the interval between and to avoid a lack of balance between intensification and diversification factors .the algorithm proceeds with the same procedure until the user termination conditions are satisfied , producing at the end the best solution to date , , as output .the achieved optimization strategy seems to be highly promising for both labelling graph problems . 
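The probabilistic construction criterion described above can be summarized in a few lines. The acceptance test and the geometric cooling law below follow the description in the text; the initial temperature and the cooling factor are illustrative values, not the parameters used in the paper.

```r
# Simulated-annealing-like acceptance rule for the probabilistic constructive
# heuristic: a candidate label that worsens the partial solution by `delta`
# is still accepted with probability exp(-delta / temperature).
accept_candidate <- function(delta, temperature) {
  if (delta <= 0) return(TRUE)              # improving (or neutral) additions: accept
  runif(1) < exp(-delta / temperature)      # worsening additions: Boltzmann criterion
}

temperature <- 1.0    # assumed initial temperature (large: many worse moves accepted)
alpha       <- 0.9    # assumed geometric cooling factor, 0 < alpha < 1

# after each construction step the temperature is reduced geometrically:
temperature <- alpha * temperature
```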
ongoing investigation consists of statistical comparisons of the resulting strategy against the best algorithms in the literature for these problems , in order to quantify and qualify the improvements obtained . further investigation will deal with the application of this strategy to other problems .
in this paper we describe an extension of the variable neighbourhood search ( vns ) which integrates the basic vns with other complementary approaches from machine learning , statistics and experimental algorithmics , in order to produce high - quality performance and to completely automate the resulting optimization strategy . the resulting intelligent vns has been successfully applied to a couple of optimization problems where the solution space consists of the subsets of a finite reference set . these problems are the labelled spanning tree and forest problems , which are formulated on an undirected labelled graph , that is , a graph where each edge has a label in a finite set of labels . the problems consist of selecting the subset of labels such that the subgraph generated by these labels has an optimal spanning tree or forest , respectively . these problems have several real - world applications where one aims to ensure connectivity by means of homogeneous connections .
urban vegetation is receiving a significant amount of attention from researchers in recent years .this interest stems from its impacts on the environment , affecting the pedestrian comfort and mitigating the negative health effects of the air pollution .microscale modelling using the computational fluid dynamics ( cfd ) proved to be an indispensable tool for assessing the impacts of the vegetation in the urban settings .some numerical studies focused only on the effects of the vegetation on the flow or on the thermal comfort of the pedestrian .others investigated pollutant dispersion in the presence of the vegetation , but without taking the deposition of the pollutant into account . when including the deposition , authors opted both for simplified model with constant deposition velocity and complex models expressing the dependence of the deposition velocity on various parameters such as the particle size or the wind speed .dispersion of the particles with a fixed size can be described by one scalar partial differential equation .when the behaviour of the particle size distribution is of interest , straightforward approach - so called _ sectional _ approach - is to divide the size range into a number of discrete bins and then model the appropriate number of scalar pdes , i.e. one for each bin .other option is to use the transport equation for the moments of the particle size distribution .such approach can reduce the number of pdes to be solved , and therefore reduce the computational demands .this class of methods , here referred to as the _ moment method _ , has been used for the simulation of the aerosol behaviour for a long time .usage of the moment method in the air quality models is also widespread ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?adapting the deposition velocity models to the moment method framework is not straightforward , since the mathematical formulation of the moment method requires all terms in the equation to be in the form of the power law of the particle size . simplified the problem by using the resistance model with brownian particle diffusivity and settling velocity averaged over the particle size range . 
developed a deposition velocity model based on the model proposed by .this model , however , only includes the processes of brownian diffusion , impaction and gravitational settling , and does not take into account the processes of interception and turbulent impaction , which play an important role in the dry deposition process .this study aims to fix this shortcoming by adapting the model by for the use in the moment method .the developed model is then used in the microscale cfd solver to solve the problems of a pollutant dispersion in the presence of a vegetation .to the authors best knowledge , the moment method has not yet been applied to the microscale urban vegetation problems .comparison of the obtained results with the results from the sectional model shows the applicability of such approach .the governing equation for the transport and the deposition of the aerosol particles of a diameter in the flow field given by the velocity can be formulated as where is the number concentration of the particles .diffusion coefficient is expressed as a fraction of the turbulent viscosity and the turbulent schmidt number .effects of the gravitational acceleration are captured in the terminal settling velocity of a particle , given by the stokes equation , where is the density of the particle , is the dynamic viscosity of air , and is the cunningham correction factor .the formula used for the correction factor is discussed in section [ sec : brown - dif ] .the removal of the particles via dry deposition is modelled by a product of three parameters : _ leaf area density _ ( lad ) , defined as a leaf area per unit volume ( ) , deposition velocity ( ) measuring the filtration efficiency of the vegetation under given conditions , and the particle concentration .its form is discussed in section [ sec : depvel ] .the moment method is based on the idea that in order to model the size distribution of the particles , we can investigate the behaviour of the moments of the distribution .moment of the distribution is defined as where is the order of the moment .some moments have straightforward physical interpretation : is the total number concentration , is proportionate to the surface area concentration and is proportionate to the volume concentration . assuming is sufficiently smooth in space and time , moment equations are obtained by multiplying eq .( [ eq : n - eq ] ) by , integrating over the whole size range and interchanging the derivatives and the integrals : now we are left with the evaluation of the integrals in ( [ eq : moment - eq ] ) .this can be done easily if the multiplicative terms are in a form of a polynomial function of .such is the case with the gravitational term , if we take into account that gravity plays significant role only for larger particles , where the cunningham correction factor in ( [ eq : us ] ) can be left out . 
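The Stokes settling velocity entering the gravitational term can be evaluated as below. The slip-correction parameterization shown is one common form from the literature and only stands in for the (stripped) expression discussed in the next passage; the physical constants are illustrative.

```r
# Terminal settling velocity from the Stokes equation with a Cunningham-type
# slip correction. dp: particle diameter [m], rho_p: particle density [kg/m^3],
# mu: dynamic viscosity of air [Pa s], lambda: mean free path [m].
settling_velocity <- function(dp, rho_p = 1000, mu = 1.8e-5,
                              lambda = 0.066e-6, g = 9.81) {
  kn <- 2 * lambda / dp                            # Knudsen number
  cc <- 1 + kn * (1.257 + 0.4 * exp(-1.1 / kn))    # one common slip-correction form
  rho_p * dp^2 * g * cc / (18 * mu)                # settling velocity [m/s]
}

settling_velocity(c(0.1e-6, 1e-6, 10e-6))          # vectorized over particle diameter
```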
using ( [ eq : us ] ) in the third term on the rhs of ( [ eq : moment - eq ] ) , the term can be rewritten as here we introduced a dependence on the moment of a higher order .that necessitates that we either solve a separate moment equation also for this higher order moment , or that this moment can be calculated from the moments that we solve for .the task of integrating the deposition term is more difficult and will be examined in the following section .variety of models describing the rate of particle transport from the air to the vegetation surface has been proposed .dry deposition schemes used in the large - scale air quality models are usually formulated in terms of a friction velocity or a wind speed at some height above the canopy , and do not explicitely model the behaviour inside the canopy .as such , they are not directly applicable in the microscale cfd models , but the same principles governing the deposition still apply . in this study we adopted the deposition velocity expression given by for needle - like obstacles . formulated the expressions for deposition velocities ( or collection velocities in the original terminology ) associated with each of the five main processes driving the dry deposition inside the canopy .the model then assumes that these processes , which are the brownian diffusion , interception , impaction , turbulent impaction , and sedimentation , are acting in parallel and independently .the deposition velocity thus can be written as a sum of the deposition velocities of all processes , contributions of each process to the deposition velocity for an exemplary set of parameters are shown on fig .[ fig : depvel - petroff - contribs ] . .see sections [ sec : brown - dif ] - [ sec : sedim ] for the description of the parameters).,scaledwidth=50.0% ] the assumption of the parallel and independent acting is advantageous for adapting the model to the moment method , since it allows us to split the rightmost integral in eq .( [ eq : moment - eq ] ) into integrals pertaining to the every physical process separately . in this section, we will describe each process in more detail and show how it can be adapted to the moment method .few simplifications were made to the original model to make the subsequent analysis simpler : we considered only plagiophile canopies and dirac distribution of the needle sizes .also note that here we focus only on needle - like obstacles . similar model for broadleaf canopies , given in , could be adapted to the moment method as well .brownian diffusion is the dominant process driving the deposition of the particles smaller then 0.1 .original model formulates the contribution to the deposition velocity due to the brownian diffusion as where is the magnitude of the wind velocity , is the schmidt number ( with being the kinematic viscosity of air and the brownian diffusion coefficient , ) , is the reynolds number , and is the needle diameter .assuming laminar boundary layer around the obstacles , the model constants hold the values and .cunningham correction factor is used in the original model , where is the mean free path of the particle in the air . 
in this whole studywe use simpler approximation .comparison of these expressions is on fig .[ fig : correction - factor ] , where it can be seen that their difference peaks to 12% for particle diameter around 0.2 ..,scaledwidth=80.0% ] furthermore , for the brownian diffusion we take into account only the size - dependent part of the correction factor , dominant in the particle size range where the diffusion is significant , .putting the expressions above into eq .( [ eq : vd - b ] ) , we obtain where and . using this formula ,the contribution to the moment equation can be written as interception denotes the process where the particle follows the streamline , but too close to the obstacle so that it is captured on the surface .the original model parameterizes its contribution to the deposition velocity as where is the ratio of the leaf surface projected on the plane perpendicular to the flow to the total leaf surface .since there is a linear dependence on the particle diameter , the expression can be integrated as is , resulting in with .inertial impaction occurs when particles do not follow the streamlines due to their inertia , resulting in the collision with the obstacle .the deposition velocity due to the inertial impaction is written as where is the impaction efficiency with the constant . to use this deposition velocity in the moment equations , we approximate the impaction efficiency as , where the coefficients and were obtained by minimizing the quadratic error of the original function and the approximation for ] , \ \si{\mm} ] , \ \si{\um}$ ] ) .each interval was discretized using 50 points .local friction velocity was set to 0 , as the turbulent impaction is implemented exactly and its contribution can only reduce the relative difference of the deposition velocities .the largest relative difference was found to be 4.56 for the parameter values at the end of the expected ranges ( , , ) and particle diameter , giving the deposition velocities and . while this difference is certainly significant , measured values shows even higher variability . considering this ,the costs of the alternative to this approximation - i.e. the numerical integration of the impaction term integral - does not outweigh the better fit to the original model . ).,scaledwidth=50.0% ] before we move on to the description of the implementation , it is necessary to provide some assumptions on the particle size distribution .size distributions of the atmospheric aerosols are often well fitted by a multimodal lognormal distribution .this is the distribution we will use from now on .we restrict ourselves only to the case of unimodal distribution , noting that the multimodal distribution can be modelled by a superposition of several unimodal distributions .unimodal lognormal distribution can be described by three parameters : total number concentration , geometric mean size and geometric standard deviation .its probability density function is knowing the three parameters , -th moment can be calculated using the formula from the three moments of order 0 , and the three parameters can be obtained using the relations where and . for the incomplete higher order moments following holds : where is the normal cumulative distribution function .now we turn our attention to the choice of the moments . 
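The lognormal relations just stated can be written out explicitly. The sketch below uses the k-th moment formula M_k = N d_g^k exp(k^2 ln^2(sigma_g) / 2) and inverts it for the order triple (0, 2, 3), which matches the choice the paper settles on below; the algebra is analogous for other triples, and the function names are ours.

```r
# k-th moment of a unimodal lognormal size distribution with total number
# concentration N, geometric mean diameter dg and geometric std. dev. sigma_g.
moment_k <- function(k, N, dg, sigma_g) {
  N * dg^k * exp(0.5 * k^2 * log(sigma_g)^2)
}

# Recover (N, dg, sigma_g) from the moments of order 0, 2 and 3.
lognormal_params <- function(M0, M2, M3) {
  a  <- log(M2 / M0)                      # = 2*log(dg) + 2*L, where L = ln(sigma_g)^2
  b  <- log(M3 / M0)                      # = 3*log(dg) + 4.5*L
  L  <- (2 / 3) * b - a
  dg <- exp(1.5 * a - (2 / 3) * b)
  list(N = M0, dg = dg, sigma_g = exp(sqrt(pmax(L, 0))))
}

# Round trip on an example distribution (dg = 0.3 um, sigma_g = 1.8).
m <- sapply(c(0, 2, 3), moment_k, N = 1e4, dg = 0.3e-6, sigma_g = 1.8)
lognormal_params(m[1], m[2], m[3])
```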
forwhich orders we decide to solve the moment equation ( [ eq : moment - eq ] ) is to a degree an arbitrary decision .when this problem is discussed in literature , cited reasons for a certain choice include the mathematical simplicity and ease of the formulation of the modelled processes or the physical interpretation of some moments .choices of the moments used in the field of atmospheric aerosol modelling in the selected literature are summarized in tab .[ tab : moments - choice ] ..choices of the moments in the selected literature [ cols="<,^,<",options="header " , ] the recurrent usage of zeroth order moment brings substantial advantage , as it is equal to the total number concentration , and it is the order we will use as well . on the choice of the other moments authors differ . to assess the influence of the choice of the moments , following numerical experiment was performed .we investigated the particle deposition in a 1d tube , spanning between 0 and 300 m. homogenous vegetation block of lad = 3 was placed between 100 a 150 m. velocity of the air in the whole tube was set to constant 1 , unaffected by the vegetation .source of the pollutant was placed at 50 m from the inlet with the intensity of number of particles 1 and the distribution parameters .the tube was discretized using 400 cells . beside the choices mentioned in tab .[ tab : moments - choice ] , we tested also a variant with a negative order moment : 0 , -1 , 1 .non integer choices of the orders would also be possible to use , but we saw no advantage that such choice could bring .transport and the deposition of the pollutant was calculated by the sectional model based on the eq .( [ eq : n - eq ] ) and by the moment method based on the eq .( [ eq : moment - eq ] ) ( see section [ sec : implementation ] for details on the implementation ) . to discard possible errors due to the inexact approximation of the deposition velocity , only the sedimentation contribution , adapted exactly , was taken into account .the numerical experiment is not meant to model any real - world situation , rather just demonstrate the behaviour of the moment method in a simple setting .number and volume concentration distribution behind the barrier ( at 150 m ) are shown on fig .[ fig : choice - dist ] . 
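Before looking at the results of that comparison, the setting itself can be caricatured in a few lines. The sketch below treats a single moment with a constant, assumed effective deposition velocity and a steady first-order upwind march; it only illustrates the geometry of the tube experiment (source at 50 m, vegetation block with LAD = 3 between 100 and 150 m, unit wind speed, 400 cells), not the coupled moment system or the sectional reference.

    import numpy as np

    L, nx, U = 300.0, 400, 1.0
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]
    lad = np.where((x >= 100.0) & (x <= 150.0), 3.0, 0.0)   # leaf area density
    v_dep = 1.0e-3      # assumed constant effective deposition velocity [m/s]

    M = np.zeros(nx)                        # one moment, e.g. M_0
    src = int(np.argmin(np.abs(x - 50.0)))
    M[src] = 1.0                            # normalized source value at 50 m

    # steady first-order upwind march of U dM/dx = -lad * v_dep * M
    for i in range(src + 1, nx):
        M[i] = M[i - 1] / (1.0 + lad[i] * v_dep * dx / U)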
as a reference , calculated distributions are complemented by the distribution for a case without the vegetation present .evolution of the zeroth moment ( equal to the number concentration ) and the third moment ( proportional to the volume concentration ) through the vegetation block are shown on fig .[ fig : choice - horiz ] .effect of the vegetation , while small in number concentration , is significant in volume concentration .only the variant using the moments of orders 0 , -1 , and 1 reproduces well the number concentration distribution , but overpredicts the peak of the volume concentration .variants using the orders 0 , 1 , 2 and 0 , 2 , 3 produce result closer to the sectional model in volume concentration , but with larger differences in number concentration .variant using the orders 0 , 3 , 6 shows no advantages over the other variants .choosing between the orders 0 , 1 , 2 and 0 , 2 , 3 , we opted for the latter variant , as the third moment is proportionate to the main quantity of interest - volume ( and mass ) concentration of the pollutant .both the sectional model and the moment model were implemented using the openfoam platform .second order upwind scheme was used for convective terms in equations ( [ eq : n - eq ] ) and ( [ eq : moment - eq ] ) and second order scheme based on the gauss theorem was used for the diffusive terms .residual levels of were used to test for convergence of the steady state solver . when using the moment method , we have to solve the discretized eq .( [ eq : moment - eq ] ) for the three selected moments .these equations are coupled through the gravitational settling term and the deposition term , which depends on the moments of a different order than the one solved by the equation .the coupling is dealt with the following way . in every iteration ,first the parameters of the lognormal distribution , and are calculated using the equations ( [ eq : n]-[eq : ln2sigmag ] ) from the values in the preceeding iteration .three moment equations are then solved one after another with the coupling terms resulting from the deposition being treated explicitly .fully explicit treatment of the gravitational settling term ( [ eq : mom - grav ] ) can result in numerical instability , unless low values of the relaxation factors are used .that would however lead to slower convergence , therefore we employed a semi - implicit treatment .moment in ( [ eq : mom - grav ] ) is rewritten as with and the term is then treated explicitly and implicitly .relaxation factors 0.95 were used both for the sectional equations and for the moment equations . for the first five iterations of the moment method the relaxation factors for the moment equationswere however set to lower value 0.8 , as the computations proved to be less stable at the beginning .calculation of the distribution parameters and via eq .( [ eq : dgn ] ) and ( [ eq : ln2sigmag ] ) includes the division of the moments , potentially very small far away from the source of pollutant . 
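For the chosen orders 0, 2 and 3 this back-transformation has an explicit form, and the divisions of moments just mentioned are visible directly in it. A minimal sketch, with our own variable names:

    import math

    def lognormal_parameters_from_M023(M0, M2, M3):
        """Recover (N, dg, sigma_g) from the moments of orders 0, 2 and 3 of a
        unimodal lognormal distribution.  The ratios M2/M0 and M3/M0 become
        ill-conditioned wherever the moments are very small."""
        a = math.log(M2 / M0)
        b = math.log(M3 / M0)
        ln_dg = 1.5 * a - (2.0 / 3.0) * b
        ln2_sigma = (2.0 / 3.0) * b - a          # equals ln(sigma_g)**2
        return M0, math.exp(ln_dg), math.exp(math.sqrt(max(ln2_sigma, 0.0)))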
to avoid this problem , small background concentration in the whole domainis set as an initial condition and used as a boundary condition where zero would be used otherwise .here we describe two example problems of microscale flows through and around the vegetation and assess the applicability of the developed moment method to the simulation of pollutant dispersion .two vegetation elements that could be encountered in the urban settings are investigated in this test : small patch of full grown trees and a dense hedgerow .the flow field in both cases was precomputed by an in - house finite volume cfd solver .the solver is based on the navier - stokes equations in the boussinesq approximation and utilizes turbulence model .inlet profiles of velocity and the turbulence quantities , as well as the wall functions , are prescribed by the analytical expressions given by .vegetation model for the momentum and equations described by is employed . for further detailswe refer to , where the solver is described in more detail .turbulent schmidt number was set to in both cases , based on the analysis by . in both cases presented below, we simulated the dispersion of a coarse mode particles from a point or a line source .the coarse mode is chosen as the mode that contains , together with the accumulation mode , majority of the volume of the particles in the urban environment , but is affected more strongly by the dry deposition than the accumulation mode .the number distribution at the source is assumed to be lognormal with the parameters and , typical for the urban environment .evaluation of the developed moment method was based on the comparison with the results obtained by the sectional model . in the sectional model , eq .( [ eq : n - eq ] ) is solved for 41 particle sizes distributed uniformly between and .the interval is chosen so that the behaviour of the number distribution as well as the volume distribution can be captured by the sectional model .first case investigates the filtering properties of a small patch of full grown conifer trees .a simplified 2d model is constructed as follows .the 30 meters wide and 15 meters high tree patch is represented as a horizontally homogeneous vegetation block .pollutant source is placed 15 meters upstream from the vegetation , 5 meters above the ground .lad profile of the vegetation is prescribed by a formula given by , where m is the height of the trees , is the maximum lad , chosen so that leaf area index , , is equal to 5 , and is the corresponding height of maximal lad .the sketch of the domain and the lad profile of the vegetation is shown on fig .[ fig : veg2d - domain ] . trees are modelled as generic conifers with .the drag coefficient is chosen as .intensity of the point source is set to a normalized value in terms of number of particles . since all terms in eq .( [ eq : n - eq ] ) and eq .( [ eq : moment - eq ] ) are linear with respect to the number concentration , results can be simply scaled to other value of the source intensity if needed .inlet wind profile is set as logarithmic with at height 20 m and m. for the number concentration in the sectional model and for all moments in the moment method the neumann boundary conditions are used on the ground , at the top and at the outlet .no resuspension of the particles is allowed , i.e. 
any particle that falls on the ground stays on the ground indefinitely .small value of the concentration and of the moments calculated from the lognormal distribution with the parameters is prescribed at the inlet .domain is discretized using a cartesian grid with 220 cells in horizontal direction and 100 cells in vertical direction , graded so that the grid is finer near the ground and around the tree patch .the near ground cells are 0.25 m high , and the vegetation block itself consists of 42 x 40 cells . flow field obtained by the cfd solveris shown on fig .[ fig : veg2d - streamlines ] . as visible, the vegetation block slows the wind down , but allows the air to pass through .results from the sectional and the moment model are compared in terms of the third moment of the particle size distribution , proportionate to the volume concentration of the particles . as we assume that the density is the same for particles of all sizes , third moment is also proportionate to the mass concentration of the particles . calculated field of the third moment by the moment methodis shown on fig .[ fig : veg2d - m3 ] ( left ) . the relative difference of the results obtained by the moment method , , and by the sectional model , ,is shown on the right panel of fig .[ fig : veg2d - m3 ] . of the third moment calculated by the moment method and the sectional approach.,title="fig:",scaledwidth=70.0% ] of the third moment calculated by the moment method and the sectional approach.,title="fig:",scaledwidth=70.0% ]the source of the largest discrepancies between the two methods is the vegetation block .the relative difference raises up to 4% at the downstream edge of the vegetation block , and then decreases with the increasing distance from the vegetation .the moment method overpredicts the deposition inside the vegetation due to the inexact approximation of the impaction efficiency described in section [ sec : vd - im ] .( right ) volume concentration at [ 200 ; 2 ] .discrete points calculated by the sectional method and the distribution calculated by the moment method are shown . for reference ,the distribution calculated without the size dependent deposition and gravitational settling terms is shown as well.,title="fig:",scaledwidth=40.0% ] ( right ) volume concentration at [ 200 ; 2 ] .discrete points calculated by the sectional method and the distribution calculated by the moment method are shown . for reference ,the distribution calculated without the size dependent deposition and gravitational settling terms is shown as well.,title="fig:",scaledwidth=40.0% ] further insights can be obtained from fig .[ fig : veg2d - dist ] .it shows the volume concentration distribution at the downstream edge of the tree patch , and at 120 m downstream from the tree patch , both at height 2 m above ground .the vegetation has negligible effect on the particles smaller than , but significantly reduces the mass of the particles above .this is captured well both by the sectional and the moment method . at the downstream edge of the treepatch the moment method predicts lower peak of the volume concentration distribution than the sectional approach .the difference is reduced by the mixing of the filtered air with the unfiltered air flowing above the vegetation further away from the tree patch . 
next we tested the method on a 3d model of a dense hedgerow placed near a line source of the pollutant .this case is a three dimensional extension of the 2d situation investigated in .the yew hedge is 10 m wide , 3.2 m deep and 2.4 m high .it is placed in the 40 m wide , 40 m long , and 20 m high computational domain .two meters upstream from the hedge is a line source at height 0.5 m above ground .intensity of the line source is set to a value in terms of number of particles , noting as in section [ sec : veg2d ] that the results can be scaled if other value is desired .sketch of the domain is shown on the left panel of fig .[ fig : veg3d - domain ] .right panel shows the lad profile of the hedge , taken from the original article .vegetation is further described by the needle diameter is , , and the vegetation drag coefficient which is set to as in .the computational mesh was created using the openfoam _ snappyhexmesh _ generator .the domain consist of 376 000 cells , refined near the ground and around the hedge .the near - ground cells are 0.07 m high and the hedge itself is discretized using 54 x 20 x 22 cells . wind profile at the inlet is set as logarithmic with at height 2.4 m and m. boundary conditions for the sectional solver and moment method solvers are set similarly as in section [ sec : veg2d ] : neumann boundary conditions are used at the ground , top , sides , and at the outlet .no resuspension of the particles fallen to the ground is allowed .again , small amount of the particles given by the lognormal distribution with the parameters is prescribed at the inlet .streamlines of the flow field calculated by the separate cfd solver are shown on fig .[ fig : veg3d - streamlines ] . as in the 2d simulation in ,recirculation zone is developed behind the dense hedge .unlike the 2d case , here we can observe part the of the flow to be deflected to the sides .third moment of the particle size distribution obtained by the moment method is shown on the left panels of fig .[ fig : veg3d - m3-horiz ] and fig .[ fig : veg3d - m3-vert ] . while a portion of the pollutant penetrates the barrier , part is deflected to the sides of the hedgerow , creating a zone with a reduced pollutant concentration behind it .m. quantities shown are as on fig .[ fig : veg2d - m3].,title="fig:",scaledwidth=70.0% ] m. quantities shown are as on fig .[ fig : veg2d - m3].,title="fig:",scaledwidth=70.0% ] m. quantities shown are as on fig .[ fig : veg2d - m3].,title="fig:",scaledwidth=70.0% ] m. quantities shown are as on fig .[ fig : veg2d - m3].,title="fig:",scaledwidth=70.0% ] relative difference between the solution obtained by the moment method and sectional approach is shown on the right panels of fig .[ fig : veg3d - m3-horiz ] and fig .[ fig : veg3d - m3-vert ] . as in the tree patch case ,the moment method overpredicts the deposition and consequently underestimates the volume concentration behind the barrier .the difference is below 10% , and decreases away from the barrier .effects of the coarser mesh in the upper part of the computational domain are visible on fig .[ fig : veg3d - m3-vert ] .however , it does not negatively affect the difference between the two methods .volume concentration distribution at two points - inside the vegetation and downstream from the vegetation - is shown on fig .[ fig : veg3d - dist ] . 
due to the smaller size of the vegetation than in the 2d tree patch case , the effect of the vegetation is less pronounced .the moment method is able to reproduce the shape of the distribution well , but again produces a lower peak than the sectional method .similarly as before , better fit can be observed further from the barrier due to the mixing with unfiltered air . .( right ) volume concentration at [ 30 ; 0 ; 2 ] . for reference ,the distribution calculated without the size dependent deposition and gravitational settling terms is shown.,title="fig:",scaledwidth=40.0% ] .( right ) volume concentration at [ 30 ; 0 ; 2 ] . for reference ,the distribution calculated without the size dependent deposition and gravitational settling terms is shown.,title="fig:",scaledwidth=40.0% ] to compare the computational performance of the developed model , we measured the runtime of the sectional approach and the moment method approach for the 3d case described in section [ sec : case3d ] .both solvers were run on a single core of an intel xeon x5365 processor . the sectional model , comprised of 41 scalar pdes , finished in 9120 seconds .the average runtime per each equation was thus 222 seconds .moment model , comprised of 3 coupled pdes , finished in 1128 seconds .that gives us eightfold acceleration compared to the sectional model .even though the high number of bins used in this study might not be necessary to obtain sufficiently accurate results , to get an equivalent workload as the moment method in this case , only 5 bins could be used in the sectional model .such number is insufficient to model the behaviour of the number distribution as well as the volume distribution well .two points can be made in favor of the sectional method though .first , the solution process of every equation is independent on the other equations , therefore the approach offers effortless parallelization for the number of cores up to the numbers of bins used .this is not especially advantageous in our implementation , as the openfoam solvers are already parallelizable , but it could be an important factor for other implementations .secondly , the relaxation factor 0.95 used for all simulations in the sectional approach was needed only for the bins representing the larger particles . 
using different values of this parameter for different bins can provide some reduction of the runtime .in this study , we introduced a formulation of a dry deposition model suitable for implementation in a moment method .as the original model by , our approximation includes five main processes of the dry deposition : brownian diffusion , interception , impaction , turbulent impaction , and sedimentation .the developed deposition velocity model was implemented in a microscale finite volume solver based on the openfoam platform .the solver employs the moment method to calculate the particle size distribution in the domain .the deposition model was tested on two example problems of microscale pollutant dispersion .comparison with the sectional method using the original dry deposition model revealed that the moment method is able to reproduce the shape of the particle size distribution well .the relative differences between the sectional and the moment method in terms of the third moment of the distribution were below 10 % .this difference was caused by the higher deposition velocity in our model , which in turn was caused by the inexact representation of the impaction process .the impaction term could be represented better by employing numerical integration , that would however result in a higher computational costs , and reduce the advantages of using the moment method .the moment method , described by three coupled pdes , proved to be more computationally efficient than the sectional model using 41 bins .the speedup was approximately eightfold , and a workload equivalent to the moment method would be achieved by running a sectional model that uses only 5 bins .this performance improvement together with the reliable results shows that the moment methods , often used in large scale atmospheric models , can be useful also for the microscale problems of pollutant dispersion in the urban environment . the developed methodas formulated here is applicable only when the particle size distribution can be approximated as a lognormal distribution .here we used only unimodal distribution , but the usage of multimodal distribution would be also possible by superposition of several unimodal ones . furthermore , the method could be reformulated for other distributions , provided that algebraic relations between the moments and distribution parameters are known .this work was supported by the grant sgs16/206/ohk2/3t/12 of the czech technical university in prague .authors would like to thank to jan halama and ji frst for helpful discussions .gromke , c. , blocken , b. , 2015 .influence of avenue - trees on air quality at the urban neighborhood scale .part i : quality assurance studies and turbulent schmidt number analysis for rans cfd simulations .196 , 214223 .mochida , a. , lun , i. y. , 2008 .prediction of wind environment and thermal comfort at pedestrian level in urban area .j. wind eng .96 ( 1011 ) , 14981527 , 4th international symposium on computational wind engineering ( cwe2006 ) .pirjola , l. , kulmala , m. , wilck , m. , bischoff , a. , stratmann , f. , otto , e. , 1999 .formation of sulphuric acid aerosols and cloud condensation nuclei : an expression for significant nucleation and model comparison . j. aerosol sci .30 ( 8) , 10791094 .raupach , m. , briggs , p. , ford , p. , leys , j. , muschal , m. , cooper , b. , edge , v. , 2001 .endosulfan transport ii . modeling airborne dispersal and deposition by spray and vapor .. qual . 30 ( 3 ) , 729740 .vranckx , s. , vos , p. , maiheu , b. 
, janssen , s. , 2015 . impact of trees on pollutant dispersion in street canyons : a numerical study of the annual average effects in antwerp , belgium . sci . total environ . 532 , 474 - 483 . šíp , v. , beneš , l. , 2016 . cfd optimization of a vegetation barrier . in : karasözen , b. , manguoğlu , m. , tezer - sezgin , m. , göktepe , s. , uğur , ö. ( eds . ) , numerical mathematics and advanced applications - enumath 2015 . springer international publishing , cham , to appear .
a dry deposition model suitable for use in the moment method has been developed . contributions from the five main processes driving the deposition - brownian diffusion , interception , impaction , turbulent impaction , and sedimentation - are included in the model . the deposition model was employed in a moment method solver implemented in the openfoam framework . the applicability of the developed expression and of the moment method solver was tested on two example problems of particle dispersion in the presence of vegetation at small scales : a flow through a tree patch in 2d and a flow through a hedgerow in 3d . comparison with the sectional method showed that the moment method using the developed deposition model reproduces the shape of the particle size distribution well . the relative difference in terms of the third moment of the distribution was below 10% in both tested cases and decreased away from the vegetation . the main source of this difference is a known overprediction of the impaction efficiency . on the 3d test case , the moment method achieved an approximately eightfold acceleration compared to the sectional method using 41 bins . dry deposition , vegetation , microscale modeling , moment method , particle dispersion
challenge when determining the performance of a block error - correcting code is to measure its bit - error rate ( ber ) , which quantifies the reliability of the system . in practice ,the ber estimation for a single code is simple , just send data and divide the errors committed among the total number of bits .however , it would be too costly and time - consuming if a comparative between several codes is required .mathematical software packages for encoding and decoding are very limited and restricted to specific codes and simulations would consume a huge amount of time when dealing with low bit - error rates .for this reason , a theoretical approach to the measurement of the ber is proposed by several authors in the literature , see for instance .all these papers follow this scheme of codification : let be a code of length and dimension over the field with elements , being .an -tuple is transmitted through a -ary symmetric discrete memoryless channel . in this step , there are two possibilities , the transmission is right or failed in some symbols . in a second step ,the code corrects the -tuple , detects an erroneous transmission but does not correct it , or asserts that the transmitted -tuple is a codeword .finally , there is a comparison between the encoded and decoded -tuples , see figure [ f1 ] . { encoded -tuple } ; \draw ( 5,0 ) node(dos ) [ draw , shape = rectangle , rounded corners , fill = gray!20 ] { transmitted -tuple } ; \draw ( 0,-1 ) node(tres ) [ draw , shape = rectangle , rounded corners , fill = gray!20 ] { decoded -tuple } ; \draw[decorate , decoration={expanding waves , angle=5 } ] ( uno ) to node[snake = snake , midway , sloped , above , pos=0.5 ] { \small transmission } ( dos ) ; \draw [ - > ] ( dos ) to node[midway , sloped , below , pos=0.5 ] { \small decoding } ( tres ) ; \draw [ < - > , dashed ] ( uno ) to node[midway , right ] { \small comparison } ( tres ) ; \end{tikzpicture}\ ] ] when we run over all the possibilities in each step ( of course , not all combinations are possible ) , this yields five disjoint cases : 1 . a correct transmission ( ct ) , i.e. , all the symbols are correctly received .2 . a right correction ( rc ) , i.e., some of the symbols are incorrectly received but the decoding algorithm corrects them .an error detection ( ed ) , i.e. , the number of errors exceeds the error - correction capability of the code , the block is not corrected and the bad transmission is detected .4 . a wrong correction ( wc ) , i.e., some errors occur ( beyond the error capability of the code ) , there is a code - correction , but nevertheless , the encoded block differs from the decoded block .5 . a false negative ( fn ) , i.e. , some symbols are incorrectly received but the whole block is a codeword , so , from the receiver s point of view , the block is correctly received .cases fn and wc are called undetected errors in , and it is proven that , for maximum - distance - separable ( mds ) codes , the probability of an undetected error decreases monotonically as the channel symbol error decreases , that is , mds codes are proper in the terminology of . hence , the performance of an mds code is characterized by the probability that an erroneous transmission will remain undetected . in , as a performance criterion , the probability of an ed is added and an exhaustive calculus of the word , symbol , and bit error rate of ed , wc , and fn is made . 
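To weight the five cases one first needs the probability that a given number of symbols in a block is corrupted. A small illustrative sketch in Python, assuming a q-ary symmetric channel whose symbol errors arise from independent bit errors (the block length, symbol size and channel BER below are arbitrary examples):

    from math import comb

    def p_symbol_error(ber, bits_per_symbol):
        """Symbol error probability when each of the bits of a symbol is flipped
        independently with probability ber."""
        return 1.0 - (1.0 - ber) ** bits_per_symbol

    def p_exactly_t_errors(n, t, ps):
        """Probability that exactly t of the n transmitted symbols are wrong."""
        return comb(n, t) * ps ** t * (1.0 - ps) ** (n - t)

    ps = p_symbol_error(ber=1e-3, bits_per_symbol=7)
    print(p_exactly_t_errors(n=127, t=0, ps=ps))   # probability of case CT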
in this paper , we propose a refinement in the calculi of the probability of an fn , a wc , and an ed .consequently , we get a better approximation of the ber for a -ary mds code . as in the above references , we consider a bounded distance reproducing decoder , i.e., it reproduces the received word whenever there are uncorrectable errors .the underlying idea consists in removing the symbol errors produced in the redundancy part of the decoded -tuple , that is , following the nomenclature of , unlike the aforementioned papers , we estimate the information bit error rate ( iber ) , sometimes also called post - decoding bit error rate .more formally , let us assume , without loss of generality , that the codification is systematic and the first symbols of the -tuples form an information set .hence , following the above scheme , after the comparison step , if there are errors , the symbol - error proportion is . nevertheless , some of these errors belong to the redundancy part and they will not spread in the final post - decoded -tuple . in other words , a new variable should be considered : the comparison between the original block and the final -tuple obtained after decoding , see figure [ f2 ] . { original -tuple } ; \draw ( 0,1 ) node(uno ) [ draw , shape = rectangle , rounded corners , fill = gray!20 ] { encoded -tuple } ; \draw ( 3,0 ) node(dos ) [ draw , shape = rectangle , rounded corners , fill = gray!20 ] { transmitted -tuple } ; \draw ( 0,-1 ) node(tres ) [ draw , shape = rectangle , rounded corners , fill = gray!20 ] { decoded -tuple } ; \draw ( -3,0 ) node(cuatro ) [ draw , shape = rectangle , rounded corners , fill = gray!20 ] { target -tuple } ; \draw [ decorate , decoration={expanding waves , angle=4 } ] ( uno ) to [ bend left ] node[midway , sloped , above , pos=0.5 ] { \small transmission } ( dos ) ; \draw [ - > ] ( dos ) to [ bend left ] node[midway , sloped , below , pos=0.4 ] { \small decoding } ( tres ) ; \draw [ < - > , dashed ] ( uno ) to node[midway ] { \small comparison } ( tres ) ; \draw [ - > ] ( cero ) to[bend left ] node[midway , sloped , above ] { \small encoding } ( uno ) ; \draw [ - > ] ( tres ) to[bend left ] node[midway , sloped , below ] { \small post - decoding } ( cuatro ) ; \draw [ < - > , dashed ] ( cero ) to node[midway ] { \small comparison } ( cuatro ) ; \end{tikzpicture}\ ] ] attending to this new variable , we may split the ed into two disjoint cases : 1 . a pure error detection ( ped ) , i.e., some errors affect the information set .2 . a false positive ( fp ) , i.e., all errors belong to the redundancy part .so , from the point of view of the receiver , there are uncorrectable errors but , indeed , the post - decoded block is right. then , a study of the ber should consider the probability of obtaining a false positive after the post - decoding process and , hence , the criterion for measuring the performance of the code should be the probability of undetected and ped errors . all along this paperwe shall assume that the code under consideration is a -ary ] mds code generated by a systematic matrix .the aim of this section is to compute , the number of codewords in with non - zero information symbols and non - zero redundancy symbols , where and .this is called the _ input - redundancy weight enumerator _ ( irwe ) in the literature ( see e.g. ) .in fact the weight enumerator of any partition on is computed in ( * ? ? ?* theorem 1 ) and ( * ? ? 
?* theorem 3.1 ) , hence can be obtained as a particular case of those theorems .we propose a new way to compute involving linear algebra techniques .we shall need the following lemmata .[ matrixmdscfr ] any square submatrix of is regular .let and let and be the sets of indices corresponding to the rows and columns of that form an submatrix , where .we call such submatrix .if , there exists a non - zero vector such that let be the vector in which is in the -th coordinate for all and zero otherwise .since , there are non - zero coordinates in the first coordinates of .now , for any by , so , in the last coordinates of , there are no more than non - zero coordinates .then , has weight at most .since and the minimum distance of is , we get a contradiction .consequently , is regular . given a system of linear equations , we say that a solution is totally non -zero if all the coordinates of the vector are other than zero .we say that a matrix has totally full rank if any submatrix has full rank . by lemma [ matrixmdscfr ], has totally full rank .let us denote by the number of totally non - zero solutions of a homogeneous linear system , over the field , with variables and equations whose coefficient matrix has totally full rank .[ fqijrecursiva ] for any integers , is given by the following recurrence : if , there are , at least , as many equations as variables .since its coefficient matrix has full rank , the zero vector is the only solution. then . if , since its coefficient matrix has full rank , the system is undetermined. specifically , whenever we fix coordinates , we will find a single unique solution .then , there are solutions whose first coordinates are non - zero . in order to calculate ,it is enough to subtract those solutions for which some of the remaining coordinates are zero .for any , the solutions with exactly coordinates being zero may be obtained choosing coordinates from , and calculating the number of totally non - zero solutions in a linear system of variables and equations , that is , .therefore , [ fqijdirecta ] for any integers , by lemma [ fqijrecursiva ] , it is enough to check that the expression satisfies .observe that if , then , since the sum runs over the empty set .suppose that .since for any , we obtain we substitute by in , and using , we get this expression is a polynomial in .let us now calculate the coefficients of for .we group those coefficients in which .if , then , and therefore the coefficient of is suppose that .the coefficient of is given by where the last equality comes from lemma [ diffin ] , since is a polynomial in of degree . because of , then .as a result the case when is left to be analyzed . in such case , the coefficient of is given by where , again , the last equality follows from lemma [ diffin ] since is a polynomial in of degree . then , by , so .we recall that is the number of codewords of with weight in the first coordinates ( the information set ) and , in the remaining coordinates ( the redundancy part ) .[ ankijviafqij ] let .if , then if , then , where is the kroneker s delta .the case is evident because the minimum distance of is .suppose that .a codeword with weight in the information set and in the redundancy part must be obtained as a linear combination of rows of with non zero coefficients such that exactly coordinates of the redundancy part become zero , i.e. it is a totally non - zero solution of a homogeneous linear system with equation and variables . by lemma [ matrixmdscfr ] , this number is given by . 
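The recurrence for the number of totally non-zero solutions is easy to evaluate with memoization, and for the special case of a single equation with all coefficients non-zero (which trivially has totally full rank) it can be cross-checked by brute force over a prime field. A sketch in Python; the test parameters are arbitrary small values:

    from functools import lru_cache
    from math import comb
    from itertools import product

    def F(q, i, j):
        """Totally non-zero solutions of a homogeneous system with i variables and
        j equations whose coefficient matrix has totally full rank (recurrence)."""
        @lru_cache(maxsize=None)
        def f(v, e):
            if v <= e:
                return 0
            return (q - 1) ** (v - e) - sum(comb(e, s) * f(v - s, e)
                                            for s in range(1, e + 1))
        return f(i, j)

    def brute_force_one_equation(p, i):
        """Count solutions of x_1 + ... + x_i = 0 over GF(p) with all x_m non-zero."""
        return sum(1 for x in product(range(1, p), repeat=i) if sum(x) % p == 0)

    assert F(3, 4, 1) == brute_force_one_equation(3, 4)   # both give 6
    assert F(5, 3, 1) == brute_force_one_equation(5, 3)   # both give 12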
however some of the solutions counted in can turn zero some of the remaining coordinates .hence , the lemma is obtained as a consequence of the inclusion exclusion principle .[ ankij=0 ] observe that if , then , so .this agrees with the fact that the minimum weight of a non - zero codeword must be .let us now give a neater description of . by mixing proposition [ fqijdirecta ] and lemma [ ankijviafqij ], we have we proceed analogously to the proof of proposition [ fqijdirecta ] .let us examine the coefficient of in , where . since , it is necessary that .we distinguish two cases . on the one hand , if , then , and then , the coefficient is given by where the last equality is a consequence of lemma [ chuvandermonde ] .on the other hand , if , the coefficient of is given by where is given by lemma [ chuvandermonde ] and lemma [ extbinom ] , and is a consequence of lemma [ prodbinom ] .that is , by and , we have proven the main result of this section : [ ankij ] recall from ( * ? ? ?* theorem 1 ) and ( * ? ? ?* theorem 3.1 ) that for any partition on , the partition weight enumerator is [ pwe = ankij ] let be the partition on associated to the information symbols and redundancy symbols .then for all and .the proof of proposition [ pwe = ankij ] is in appendix [ pf : pwe = ankij ] . as a consequence of theorem [ ankij ], we may recover the well known formula of the weight distribution of an mds code as it appears in .let denotes the weight distribution of .then since a codeword of weight distributes its weight between the first and the last coordinates .[ mdswd ] the weight distribution of an mds code is given by the following formula : \ ] ] for , and if .the case is evident .if , as we have pointed out in remark [ ankij=0 ] , .let us now suppose that . by theorem [ ankij ] , as if , we calculate . since , then and if , then for all , so , for any and , by lemma [ chuvandermonde ] , we obtain that so , and hence we drop and develop the rest of the previous formula , where we have split the case . by lemma [ binomsumprod ] for , , and , and this yields the independent term in of this expressioncan be rewritten as where the last equality is given by lemma [ binomsumprod ] with , , and . by lemma [ altbinom ] with , and have and consequently .\end{split}\ ] ] this finishes the proof . as a corollary of the above proof , we find a new description of the weight distribution of an mds code .for any , the number of weight codewords of an mds code is given by the formula now , we are in the position to describe the information bit and information symbol error rate of a false negative , concretely , the iber of a false negative can be compared for different codes in figure [ graphfn ] .observe that it increases monotonically as the channel ber increases .the iber of a fn is significantly smaller than the channel ber , at least for dimensions less or equal than . of length 127 and dimension .in this section , we shall make use of the calculus of the values in order to obtain the information errors of a decoding failure . 
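Before turning to the decoding-failure counts, the weight distribution recovered in the corollary above can be checked numerically: summing it must account for all q^k codewords. A sketch using the classical closed form with d = n - k + 1; the [7, 3] code over GF(8) in the check is only an example.

    from math import comb

    def mds_weight_distribution(n, k, q):
        """Weight distribution A_0, ..., A_n of an [n, k] MDS code over GF(q)."""
        d = n - k + 1
        A = [0] * (n + 1)
        A[0] = 1
        for w in range(d, n + 1):
            A[w] = comb(n, w) * sum((-1) ** j * comb(w, j) * (q ** (w - d + 1 - j) - 1)
                                    for j in range(w - d + 1))
        return A

    A = mds_weight_distribution(n=7, k=3, q=8)
    assert sum(A) == 8 ** 3          # the weights account for every codeword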
for simplicity, we may assume that the zero codeword is transmitted .suppose that we have received an erroneous transmission with non - zero coordinates in the information set and in the redundancy .all along the paper , we shall say that the weight of this error is .if the received word is corrected by the code , then it is at a maximum distance of a codeword .obviously , if , the word is properly corrected , so we may assume that .in this case , the correction is always wrong and we have a decoding failure .our aim now is to count these words , highlighting the number of wrong information symbols and the errors belonging to the redundancy , i.e. the words of weight that decode to a codeword of weight .the reasoning is as follows : for any codeword of such weight , we calculate , the number of words of weight which belong to its sphere of radius .this can be carried out by provoking up to changes in .our reasoning is analogous to the one in .firstly , there is a minimum number of symbols that must be changed , either to zero or to something non - zero depending on the sign of and in order to obtain the correct weight .if is large enough , we can use the remaining correction capacity to modify an additional number of symbols to zero , and the same number of symbols to a non - zero element of in order to keep the weight unchanged .finally , the remaining possible symbol modifications can be used to change some non - zero symbols into other non - zero symbols , without affecting the weight of the word .let where denotes the absolute value of .we may distinguish four cases : [ [ r_1-leq - c_1-and - r_2-leq - c_2 ] ] and + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in the non - zero information coordinates , of them should be changed to zero .additionally , we also allow more . in the same way , on the - zero coordinates of the redundancy , we must change and of them . therefore, we have the following number of possibilities : now , we should give a non - zero value to coordinates between the remaining information symbols , and coordinates between the remaining redundancy symbols .thus , the number of possible changes is as follows : since the changes can not exceed , the admissible quantities for and satisfy and hence finally , we may change some of the remaining non - zero and coordinates to another non - zero symbol .if we change the and coordinates , respectively , we obtain changes , where and satisfy therefore , the total number of words is the following : if we denote and , by lemma [ prodbinom ] , we have [ [ r_1-leq - c_1-y - r_2-c_2 ] ] y + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we proceed as in the above case with the information symbols .so we can make changes in the information set . in the redundancy part, we must give a non - zero symbol to coordinates , and , therefore , change of coordinates to zero . 
in that way, we have possible changes , where and then finally , changing the value of and of the remaining and non - zero coordinates , we have changes , where thus , the total number of words is again , we denote and , and , by lemma [ prodbinom ] , we simplify the expression to if we change and we take in care that binomial coefficients of negative integers are , we get [ [ r_1-c_1-y - r_2-leq - c_2 ] ] y + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in this case , as before .if we make the change of variable we can proceed as in the previous case and we get the formula where and .[ [ r_1c_1-y - r_2c_2 ] ] y + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in this case , we proceed as in the two previous cases and we obtain the formula where , , and . [ nrc ] let .for any codeword of weight the number of words of weight which belong to its sphere of radius is where and .it is a direct consequence of equations , , and than by lemma [ prodbinom ] hence lemma [ chuvandermonde ] provides the result . in order to calculate the iber in a wrong correction , we follow again the ideology of .indeed , the probability of getting a wrong bit is different if it is due to the channel or due to the encoder . in the first case ,it is given by the channel ber whilst , in the second case , it is assumed to be .hence , we should calculate the rate of errors in the information set committed by the decoder , that we shall call .this calculation is similar to the one for numbers . concretely , if , for each , , , and , in equation , the number of changes due to the decoder is . in order to simplify the resulting expression , note that by lemmas [ extbinom ] and [ chuvandermonde ] , where .hence , number , for the case , is derived from equation if the number of changes due to the decoder is . in this casewe check as before that so and are as in theorem [ nrc ] .hence , the information bit and symbol error rate of a wrong correction is as follows . observe from figure [ graphwc ] that the iber of a wc also increases monotonically , so mds codes can be said `` proper '' with respect to the iber , see . of length 127 and dimension . ]as we pointed out in the introduction , there exists the possibility of occurrence of an fp . up to our knowledge , this has not been treated before in the literature . in this section ,we calculate the probability that a ped and an fp occur , finishing our estimation of the iber of an mds code .as we noticed above , without loss of generality , we may suppose that the zero word is transmitted and we want to analyze the behaviour of the received word .our purpose now is to count the words whose weight decomposes as , i.e. there are no non - zero information symbols , which are not corrected by the decoder . obviously ,if , the word shall be properly corrected , so we assume that .two disjoint cases can take place : the error is detected but not corrected , producing an fp , or the error is ( wrongly ) corrected to a codeword . 
since the total number of such words is given by it is enough to calculate the words corresponding to one of the two cases .we can make use of the calculations in section [ s3 ] and give an expression of the words belonging to the second case .indeed , the number of words of weight with that are corrected to a codeword is as follows : hence , the number of false positives of weight is given by and the probability of producing a false positive is given by the following formula it can be observed in figure [ graphfp ] that the probability of a fp has a maximum for each code .when the channel ber is high enough this probability increases as the error correction capability of the code increases , see figure [ graphfp2 ] . 0.45 of length 127 and dimension .,title="fig : " ] 0.45 of length 127 and dimension .,title="fig : " ] we may now give an estimation of the iber of a ped . indeed , when the received word has a weight greater than , the error - correcting capability of the code , three disjoint cases can take place : an undetected error , an fp , or a ped .hence , for a given weight , the number of words producing a ped is given by whenever and zero otherwise .therefore , the reader may observe from figures [ graphped0 ] that , for high channel ber s , the behaviour of the iber of a ped becomes almost linear . actually , the curves approximate to the line according the dimension of the code diminishes .0.45 of length 127 and dimension .,title="fig : " ] 0.45 of length 127 and dimension .,title="fig : " ]for the convenience of the reader and in order to make the paper self - contained , we add the combinatorial identities that have been referenced all along the paper . [ chuvandermonde ] see e.g. ( * ? ? ?* , equation ( 3 ) ) .[ prodbinom][extbinom ] it follows directly from the definition .[ altbinom ] an easy induction on using pascal s rule .[ binomsumprod ] ( * ? ? ?* pg.7 , equation ( 2 ) ) [ diffin ] let $ ] be a polynomial whose degree is less that . then direct consequence of the calculus of finite differences , see e.g. .let s develop the formula of theorem [ ankij ] : let s compute the coefficients of each degree in .let .the coefficient of is obtained when : where the last equality follows from lemma [ binomsumprod ] . if , the coefficient of is where we have applied lemma [ binomsumprod ] again in the equality .therefore we proceed with is a similar way .by where and are consequence of lemma [ chuvandermonde ] .making we are done .the authors wish to thank dr .pascal vontobel for pointing out the references and and for its comments of the manuscript .the authors also wish to thank dr .javier lpez for his assistance in performing the graphics .m. el - khamy and r.j .mceliece , `` the partition weight enumerator of mds codes and its applications , '' _ information theory , 2005 .isit 2005 . proceedings .international symposium on _ , pp .926 - 930 , 4 - 9 sept .
in this paper , a computation of the input - redundancy weight enumerator is presented . this is used to improve the theoretical approximation of the information bit and symbol error rate , in terms of the channel bit - error rate , in a block transmission through a discrete memoryless channel . since a bounded distance reproducing decoder is assumed , the computation of the here - called false positive ( a decoding failure with no information - symbol error ) is provided . as a consequence , a new performance analysis of an mds code is proposed . lobillo , navarro and gómez - torrecillas . mds code , bit - error rate ( ber ) , block error - correcting code , information bit error rate ( iber ) , false positive .
sex , which involves the alternation of meiosis and gamete fusion , is a rather inefficient means of self - propagation as compared to asexual reproduction , where offspring stem only from a mitotically produced cells .one of the most common reasons used to explain the origin and maintenance of sex is its ability to reduce the mutation load if consecutive mutations lead to an increasing decline in relative fitness , although it is not clear _ a priori _ that the heritable variance in fitness is significantly increased by sex . despite decades of developing theoretical models to explain why sex is such a widespread phenomenon and how sexual reproduction may confer advantages that outweigh its disadvantages , until now no such general clear advantage has been found .investigations of evolutionary problems by physicists have in fact boomed in the last few years . since computer simulations of natural systems can provide much insight into their fundamental mechanisms , they can be used to test theoretical ideas that could be otherwise viewed as too vague to deserve the status of scientific knowledge . in this way , many computer models in population dynamics have been proposed to investigate the evolution of sex and its justification , as well as the comparison between sexual and asexual reproduction , for instance , the redfield model , the penna bit - string model , a genomic bit - string model without aging and stauffer model . of particular interest here is the heumann - htzel model , which originally simulated the evolution of asexual population , composed of haploid individuals , without recombination .thus now we introduce the recombination in this model , in order to find out if the sexual reproduction ( with males and females ) can produce better results than the simple asexual reproduction . in the next section ,we describe the standard and the modified heumann - htzel model , in section 3 , we present our results and in section 4 , our conclusions .since it has been proposed by michael heumann and michael htzel in , the heumann - htzel model , which was an unsuccessful attempt to introduce more ages in the dasgupta model , has remained forgotten due to the fact that after many generations it reduces to the two - age model of partridge - barton .the dasgupta model consists in taking into account some modifications such as hereditary mutations and food restrictions in the partridge - barton model .in fact , the heumann - htzel paper , treats basically the computer simulations using the generalized dasgupta aging model proposed by michael htzel in his work , under the supervision of dietrich stauffer , in order to obtain the license to teach in german secondary school .michael heumann , who was another teacher s candidate , worked only on the inclusion of the `` dauer '' state in the dasgupta model .recently , the heumann - htzel model was reinvestigated and , according to the authors , with `` simple and minor change in the original model '' this incapacity to describe populations with many ages seems to be surmounted . in the original version of the heumann - htzel , the genome of each ( haploid ) individualis represented by one set of probabilities , where is the survival probability that an individual has to reach age from age . at every time step , are chosen randomly to have their survival probability altered by mutations to , where the age is also randomly chosen . 
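As a concrete illustration of the genome and of the mutation step just described, consider the following sketch; the maximum age, the initial survival value and the mutation bound EPS_MAX are placeholders, since the corresponding numerical values are not reproduced in the text above.

    import random

    MAXAGE = 32      # assumed maximum age reachable by an individual
    EPS_MAX = 0.1    # assumed bound of the random mutation increment

    def new_genome():
        """Asexual Heumann-Hoetzel genome: one survival probability per age."""
        return [0.5] * MAXAGE          # neutral starting value, for illustration only

    def mutate(genome):
        """Change the survival probability of one randomly chosen age by a random
        amount in [-EPS_MAX, +EPS_MAX]; negative increments are deleterious,
        positive ones beneficial."""
        a = random.randrange(MAXAGE)
        genome[a] = min(1.0, max(0.0, genome[a] + random.uniform(-EPS_MAX, EPS_MAX)))
        return genome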
is the size of the population at time and is the maximum age one individual can live , which is set up in the beginning of the simulation . the quantity is chosen randomly as any number between and and when it is negative ( positive ) it corresponds to a deleterious ( beneficial ) mutation .the effect of food and space restrictions is taken account by an age - independent verhulst factor , which gives to each individual a probability $ ] of staying alive ; represents the maximum possible size of the population .this mean - field probability of death for the computer simulations has the benefit of limiting the size of population to be dealt with .the passage of time is represented by the reading of a new locus in the genome of each individual in the population , and the increase of its age by . after taking account the natural selection and the action of verhulst dagger , at the completion of each period of life , each individual gives birth to one baby ( age=0 ) which inherits its set of probabilities ( ) . in the recent reinvestigation of this model ,individuals with age in the interval will generate offspring and the mutations are allowed only on a fraction ( ) of the babies .= 12,5 cm in the sexual version , each ( diploid ) individual of the population , which consists of males and females , is genetically represented now by two sets of survival probabilities , and , to be read in parallel . in this way, we have studied the following cases ( see below ) : * case ( a ) * - the effective survival probability in some age will be the arithmetic average of the values present in both sets at that age : * case ( b ) * - the effective survival probability in some age will be the maximum value between the values present in both sets at that age : , \max\left[p_1^{1},p_1^{2}\right ] , ... , \max\left[p_{{\rm maxage}-1}^{1},p_{{\rm maxage}-1}^{2}\right])\ ] ] = 12,5 cm = 12,5 cm if the female succeeds in surviving until the minimum reproduction age , it chooses , at random , an able male to mate ( ) and it generates , with probability , offspring every iteration until the maximum age of reproduction .the offspring inherits its set of survival probabilities from its parents in the following way : the two sets of survival probabilities of the male , for instance , are broken in the same random position , and the complementary pieces , belonging to different strings , are joined to form two male gametes .one of the gametes is then randomly chosen to be passed to the offspring .after that , random mutations are introduced into this gamete , and the final result corresponds to one string of the baby genome .the same process occurs with the female genome , generating the second string of the baby , with mutations . at the endthe offspring genome contains a total of mutations . 
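Continuing the sketch above, the two ways of combining the inherited strings into one effective set of survival probabilities, and the age-independent Verhulst test, can be written as:

    import random

    def effective_survival_a(p1, p2):
        """Case (a): arithmetic average of the two strings at every age."""
        return [(x + y) / 2.0 for x, y in zip(p1, p2)]

    def effective_survival_b(p1, p2):
        """Case (b): the larger of the two values at every age."""
        return [max(x, y) for x, y in zip(p1, p2)]

    def survives_verhulst(n_current, n_max):
        """Age-independent Verhulst factor: stay alive with probability 1 - N(t)/N_max."""
        return random.random() < 1.0 - n_current / n_max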
finally , the sex of the baby is randomly chosen , each one with probability .this procedure is repeated for each of the offspring .the simulation starts with individuals ( half for each sex ) and runs for time steps , at the end of which ( the last steps , when the population was stabilized ) averages are taken over the population .the parameters of the simulations are : initial population ; maximum population size ; probability to give birth ; birth rate ; mutation rate per gamete ; and ( the same values used in the original htzel model ) .our figures with are confirmed by larger simulations with , and also by larger simulations with time steps .= 12,5 cm from figure 1 and its inset we can see that the diploid sexual population is not only larger than the haploid asexual one , but also presents a higher survival probability .= 12,5 cm = 12,5 cm in figure 2 ( case ( a ) ) and figure 3 ( case ( b ) ) , we present the survival probability as a function of age for different period of reproduction ( ) . _ aging starts with reproduction _ :the survival rate decays as soon as reproduction age is reached .there are no individuals alive older than the maximum reproduction age .figure 4 corresponds to the case in which all the individuals of the population reproduces only once - the so - called catastrophic senescence effect . in this way, two rules of reproducing were adopted : 1 ) the reproduction age is the same for all individuals ( ) , 2 ) the reproduction age is randomly chosen between and .we can noticed that this effect is more pronounced for the former , as well that the responsible for that are both breeding once and breeding for all individuals at the same age .the explanation for these effects observed in figures 2 - 4 is based on the darwin theory : individuals must stay alive in order to reproduce and perpetuate the species .if they can no longer generate offspring but remain in the population , they are killed by the accumulation of deleterious mutations .in fact , real mutations can be divided into the common recessive ( almost of the real mutations are recessive ) and the rare dominant mutations . in this way , if among the many genes of a species , one of the father s genes differs from the corresponding one of the mother , then it adversely affects the child only if the mutation is dominant .recessive mutations affect the child only if both the father and mother have them . in order to take into account this aspect in the sexual version of htzel , at the beginning of the simulation we choose randomly dominant positions and keep them fixed during the whole simulation . the effective survival probability in the dominant positions ( ages ) will be the smallest value of the two located in the same position in both strands , and for recessives ones the effective survival probability will be the arithmetic average of them . in figure 5 , we can see that the inclusion of dominance does not alter the lifespan of the population , although it has been observed that population evolved without dominance is larger than the other without dominance due the deaths in the former being bigger than the latter , since in these dominant positions the effective survival probability is the minimum value between the values present in both sets at that age . 
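The recombination step described above (one random break point, joining of complementary pieces, a random choice between the two recombinants, then mutation) can be sketched as follows, reusing mutate() from the earlier illustration; the number of mutations per gamete is left as a parameter.

    import random

    def make_gamete(p1, p2, mutations=1):
        """Build one gamete from the two strings of a parent by a single crossover
        and mutate it."""
        cut = random.randrange(1, len(p1))
        recombinants = [p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]]
        gamete = random.choice(recombinants)
        for _ in range(mutations):
            mutate(gamete)               # mutate() as in the asexual sketch above
        return gamete

    def make_offspring(mother, father):
        """The baby inherits one gamete from each parent; its sex is drawn at random."""
        genome = (make_gamete(*mother), make_gamete(*father))
        return genome, random.choice(["female", "male"])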
in the particular simulation shown , for is noticed a decrease in the survival probability when the dominance is considered , since the ages , , , were dominant positions .figure 6 shows the time evolution of the total population of each age for sexual reproduction when the mutations are exclusively harmful . from this figurewe can notice that a stable population for ages is obtained , in contrast to the original model in which there are no individuals alive older than age , even if beneficial mutation and also a deleterious mutation rate times smaller have been assumed .the result obtained here ( fig .1 ) , introducing sex in the original model , was found with the asexual htzel model only when mutations were allowed on a very small fraction of the babies and also a minimum age of reproduction was considered . in our simulations , , and .we have shown that main problem related to the htzel model , which was its incapacity to treat populations with many age intervals , has been overcome by introducing recombination ( with males and females ) in this model , without any other assumptions . as well as , with the inclusion of sex in the model , the population meltdown observed in the asexual version , when only deleterious mutations are considered , has been avoided . moreover , in agreement with some earlier models , we have also obtained that the sexual reproduction ( with males and females ) produces better results than the asexual one .m. eigen ( 1971 ) naturwissenchaften * 58 * , 465 ; d. charlesworth , m.t .morgan and b. charlesworth ( 1992 ) genet .camb . * 59 * , 49 ; l.d .mueller and m.r . rose ( 1996 ) proc .usa * 93 * , 15249 ; b. charlesworth ( 2001 ) j. theor .biology * 210 * , 47 .redfield ( 1994 ) nature * 369 * , 145 ; s. siller , nature * 411 * , 689 ( 2001 ) ; r.s .howard and c.m .lively , nature ( london ) * 367 * , 554 ( 1994 ) ; c. zeyl , t. vanderford and m. carter , science * 299 * , 555 ( 2003 ) .
why sex evolved and why it prevails in nature remains one of the great puzzles of evolution . most biologists would explain that it promotes genetic variability ; however , this explanation suffers from several difficulties . what advantages might sex confer ? the present communication addresses this question : we introduce sexual recombination into the hötzel model ( with males and females ) and compare the results with those from asexual reproduction without recombination . * sex and recombination in the hötzel aging model * a.o . sousa _ institute for theoretical physics , cologne university , d-50937 köln , germany _ * keywords * : population dynamics ; aging ; monte carlo simulations ; evolution ; recombination
we investigate different effects contributing to the determination of the optimal angle of release at shot put .standard text - book wisdom tells us that the optimal angle is , while measurements of world - class athletes typically give values of below . in table[ tab1 ] we show the data from the olympic games in 1972 given by kuhlow ( 1975 ) with an average angle of release of about .the measurements of dessureault ( 1978 ) , mccoy et al .( 1984 ) , susanaka and stepanek ( 1988 ) , bartonietz and borgstrm ( 1995 ) , tsirakos et al .( 1995 ) and luhtanen et al .( 1997 ) give an average angle of release of about . +this obvious deviation triggered already considerable interest in the literature .most of these investigations obtained values below but still considerably above the measured values .e.g. in the classical work of lichtenberg and wills ( 1976 ) optimal release angles of about were found by including the effect of the height of an athlete . + we start by redoing the analysis of lichtenberg and wills ( 1976 ) .next we investigate the effect of air resistance . herewe find as expected that in the case of shot put air resistance gives a negligible contribution instead of ( diameter , radius ) and have therefore a force that is four times as large as the correct one . ] .if the initial velocity , the release height and the release angle are known , the results obtained up to that point are exact .we provide a computer program to determine graphically the trajectory of the shot for a given set of , and including air resistance and wind .+ coming back to the question of the optimal angle of release we give up the assumption of lichtenberg and wills ( 1976 ) , that the initial velocity , the release height and the release angle are uncorrelated .this was suggested earlier in the literature .we include three correlations : * the angle dependence of the release height ; this was discussed in detail by de luca ( 2005 ) .* the angle dependence of the force of the athlete ; this was suggested for javeline throw by red and zogaib ( 1977 ) . in particular a inverse proportionality between the initial velocity and the angle of release was found .this effect was discussed for the case of shot put in mcwatt ( 1982) , mccoy ( 1984) , gregor ( 1990) and linthorne ( 2001) . *the angle dependence of the initial velocity due to the effect of gravity during the period of release ; this was discussed e.g. in tricker and tricker ( 1967 ) , zatsiorski and matveev ( 1969 ) , hay ( 1973 ) and linthorne ( 2001) . to include these three correlationswe still need information about the angle dependence of the force of the athlete . in principlethis has to be obtained by measurements with each invididual athlete . to show the validity of our approach we use a simple model for the angle dependence of the force and obtain realistic values for the optimal angle of release . + our strategy is in parts similar to the nice and extensive work of linthorne ( 2001 ) .while linthorne s approach is based on experimental data on and our approach is more theoretical .we present some toy models that predict the relation found by red and zogaib ( 1977 ) .+ we do not discuss possible deviations between the flight distance of the shot and the official distance . 
herewere refer the interested reader to the work of linthorne ( 2001 ) .let us start with the simplest model for shot put .the shot is released from a horizontal plane with an initial velocity under the angle relative to the plane .we denote the horizontal distance with and the vertical distance with . the maximal height of the shot is denoted by ; the shot lands again on the plane after travelling the horizontal distance , see fig.[fig1 ] . solving the equations of motions with the initial condition one obtains the maximal horizontal distance is obtained by setting equal to zero from this resultwe can read off that the optimal angle is - this is the result that is obtained in many undergraduate textbooks .it is however considerably above the measured values of top athletes .moreover , eq.([xm0 ] ) shows that the maximal range at shot put depends quadratically on the initial velocity of the shot .next we take the height of the athlete into account , this was described first in lichtenberg and wills ( 1976 ) .( [ eom1 ] ) still holds for that case .we denote the height at which the shot is released with .the maximal horizontal distance is now obtained by setting equal to . this equation holds exactly if the parameters , and are known and if the air resistance is neglected . + assuming that the parameters , and are independent of each other we can determine the optimal angle of release by setting the derivative of with respect to to zero . the optimal angle is now always smaller than . with increasing the optimal angle is getting smaller , therefore taller athletes have to release the shot more flat .the dependence on the initial velocity is more complicated .larger values of favor larger values of .we show the optimal angle for three fixed values of m , m and m in dependence of in fig.[fig2 ] . with the average values from table [ tab1 ] for m and / s we obtain an optimal angle of , while the average of the measured angles from table [ tab1 ] is about .we conclude therefore that the effect of including the height of the athlete goes in the right direction , but the final values for the optimal angle are still larger than the measured ones . in our examplethe initial discrepancy between theory and measurement of is reduced to .these findings coincide with the ones from lichtenberg and wills ( 1976 ) .+ for m and m / s ( ) we can also expand the expression for the optimal angle = \frac{1}{\sqrt{2 } } \left [ 1 - \frac{1}{4 } \frac{e_{pot}}{e_{kin } } + { \cal o } \left ( \frac{hg}{v_0 ^ 2 } \right)^2 \right ] \ , , \ ] ] with the kinetic energy and the potential energy . denotes the mass of the shot . + by eliminating different variables from the problem , lichtenberg and wills ( 1976 ) derived several expressions for the maximum range at shot put : expanding the expression in eq.([xmthel ] ) in one gets here we can make several interesting conclusions * to zeroth order in the maximal horizontal distance is simply given by .this can also be read off from eq .( [ xm0 ] ) with . *to first order in the maximal horizontal distance is . releasing the shot from 10 cm more height results in a 10 cm longer horizontal distance . 
*the kinetic energy is two times more important than the potential energy .if an athlete has the additional energy at his disposal , it would be advantageous to put the complete amount in kinetic energy compared to potential energy .* effects of small deviations from the optimal angle are not large , since is stationary at the optimal angle .next we investigate the effect of the air resistance .this was considered in tutevich ( 1969 ) , lichtenberg and wills ( 1976 ) .the effect of the air resistance is described by the following force with the density of air , the drag coefficient of the sphere ( about ) , the radius of the sphere and the velocity of the shot .the maximum of in our calculations is about m / s which results in very small accelerations .in addition to the air resistance we included the wind velocity in our calculations . + we confirm the results of tutevich ( 1969 ) .as expected the effect of the air resistance turns out to be very small .tutevich stated that for headwind with a velocity of m / s the shot is about cm reduced for m / s compared to the value of without wind .he also stated that for tailwind one will find an increased value of cm at m / s compared to a windless environment .we could verify the calculations of tutevich ( 1969 ) and obtain some additional information as listed in table [ tab2 ] by incorporating these effects in a small computer program that can be downloaded from the internet , see rappl ( 2010) . an interesting fact is that headwind reduces the shot more than direct wind from above ( which could be seen as small factor corrections ) .if the values of , and are known ( measured ) precisely then the results of our program are exact .+ now one can try to find again the optimal angle of release .lichtenberg and wills ( 1978 ) find that the optimal angle is reduced compared to our previous determination by about for some typical values of and and by still assuming that , and are independent of each other .+ due to the smallness of this effect compared to the remaining discrepancy of about between the predicted optimal angle of release and the measurements we neglect air resistance in the following .next we give up the assumption that the parameters , and are independent variables .this was suggested e.g. in tricker and tricker ( 1967 ) , hay ( 1973 ) , dyson ( 1986 ) , hubbard ( 1988 ) , de mestre ( 1990 ) , de mestre et al .( 1998 ) , maheras ( 1998 ) , yeadon ( 1998 ) , linthorne ( 2001 ) and de luca ( 2005 ) .we will include three effects : the dependence of the height of release from the angle of release , the angle dependence of the force of the athlete and the effect of gravity during the delivery phase .the height of the point , where the shot is released depends obviously on the arm length and on the angle with the height of the shoulder and the length of the arm .clearly this effect will tend to enhance the value of the optimal angle of release , since a larger angle will give a larger value of and this will result in a larger value of .this effect was studied in detail e.g. in de luca ( 2005) .we redid that analysis and confirm the main result of that work .should have a different sign . ]the optimal angle can be expanded in as expected above we can read off from this formula that the optimal angle of release is now enhanced compared to the analysis of lichtenberg and wills ( 1976 ) , for typical values of , and de luca ( 2005) gets an increase of the optimal angle of release in the range of to . 
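the effect just described - a release height that depends on the release angle as - is easy to check numerically once the drag - free range formula is written down . the sketch below is only an illustration : it uses the standard exact range of a projectile released at height with speed , and the release speed , shoulder height and arm length are assumed round numbers , not the measured averages of table [ tab1 ] .

```python
import numpy as np

G = 9.81          # m / s^2

def shot_range(theta, v0, h):
    """exact drag-free range for release speed v0, release angle theta (radians)
    and release height h above the landing plane."""
    s, c = np.sin(theta), np.cos(theta)
    return (v0**2 * c / G) * (s + np.sqrt(s**2 + 2.0 * G * h / v0**2))

def optimal_angle(v0, h_of_theta):
    """brute-force search for the angle maximising the range; h_of_theta may be
    a constant release height or an angle-dependent one."""
    thetas = np.radians(np.linspace(20.0, 60.0, 4001))
    h = h_of_theta(thetas) if callable(h_of_theta) else h_of_theta
    ranges = shot_range(thetas, v0, h)
    return np.degrees(thetas[np.argmax(ranges)]), ranges.max()

# illustrative numbers only (assumed, not data from the paper):
v0, h_fixed = 13.5, 2.2                 # m/s, m
h_shoulder, arm = 1.6, 0.6              # m, m  ->  h(theta) = h_s + b sin(theta)

print(optimal_angle(v0, h_fixed))                                   # fixed release height
print(optimal_angle(v0, lambda t: h_shoulder + arm * np.sin(t)))    # angle-dependent height
```

comparing the two printed optima shows the expected shift : letting the release height grow with the angle pushes the optimal angle up , in line with the de luca ( 2005 ) analysis quoted above .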
with the inclusion of this effectthe problem of predicting the optimal angle of release has become even more severe .the world records in bench - pressing are considerably higher than the world records in clean and jerk .this hints to the fact that athletes have typically most power at the angle compared to larger values of .this effect that is also confirmed by experience in weight training , was suggested and investigated e.g. by mcwatt ( 1982) , mccoy ( 1984) , gregor ( 1990) and linthorne ( 2001) . the angle dependence of the force of the athlete can be measured and then be used as an input in the theoretical investigation . below we will use a very simple model for the dependence to explain the consequences .this effect now tends to favor smaller values for the optimal angle of release .finally one has to take into account the fact , that the energy the athlete can produce is split up in potential energy and in kinetic energy . where .hence , the higher the athlete throws the lower will be the velocity of the shot .since the achieved distance at shot put depends stronger on than on this effect will also tend to giver smaller values for the optimal angle of release .this was investigated e.g. in tricker and tricker ( 1967 ) , zatsiorski and matveev ( 1969 ) , hay ( 1973 ) and linthorne ( 2001) .now we put all effects together . from eq.([arm ] ) and eq.([energy ] ) we get the angle dependence of the force of the athlete will result in an angle dependence of the energy an athlete is able to transmit to the put the function can in principle be determined by measurements with individual athletes . from these two equations and from eq.([arm ] ) we get , \label{v0th } \\h ( \theta ) & = & h_s + b \sin \theta .\end{aligned}\ ] ] inserting these two -dependent functions in eq.([xm ] ) we get the full -dependence of the maximum distance at shot put the optimal angle of release is obtained as the root of the derivative of with respect to . to obtain numerical values for the optimal anglewe need to know . in principlethis function is different for different athletes and it can be determined from measurements with the athlete . to make some general statements we present two simple toy models for .we use the following two simple toy models for this choice results in for and for , which looks reasonable . at this stagewe want to remind the reader again : this ansatz is just supposed to be a toy model , a decisive analysis of the optimal angle of release will have to be done with the measured values for .we extract the normalization from measurements , \\ \frac{e_{2,0}}{m } & = & \frac{e_2(\theta)}{m f_2(\theta ) } = \frac{1}{1- \frac{2}{3 } \frac{\theta}{\pi } } \left [ g b \sin \theta + \frac{v_0 ^ 2}{2 } \right].\end{aligned}\ ] ] with the average values of table [ tab1 ] ( m , m / s and ) and m we get now all parameters in are known .looking for the maximum of we obtain the optimal angle of release to be which lie now perfectly in the measured range ! 
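as a hedged illustration of the full procedure just outlined - calibrating the available energy from one reference pair , splitting it into potential and kinetic parts , and maximising the resulting range over the release angle - the sketch below uses the linear weighting read off from the normalisation written above for toy model 2 . the calibration numbers are assumed stand - ins for the averages of table [ tab1 ] , so the printed angle is indicative only .

```python
import numpy as np

G = 9.81

def shot_range(theta, v0, h):
    s, c = np.sin(theta), np.cos(theta)
    return (v0**2 * c / G) * (s + np.sqrt(s**2 + 2.0 * G * h / v0**2))

def f_toy(theta):
    """toy angle dependence of the athlete's force/energy, read off from the
    normalisation used for toy model 2 in the text: f(theta) = 1 - (2/3) theta/pi."""
    return 1.0 - (2.0 / 3.0) * theta / np.pi

# assumed calibration point (round numbers standing in for the table averages):
theta_ref, v0_ref = np.radians(40.0), 13.5     # rad, m/s
h_shoulder, arm = 1.6, 0.6                     # m

# total energy per unit mass, fixed so that the toy model reproduces the reference point
E0_over_m = (G * arm * np.sin(theta_ref) + 0.5 * v0_ref**2) / f_toy(theta_ref)

thetas = np.radians(np.linspace(20.0, 55.0, 3501))
v0 = np.sqrt(2.0 * (E0_over_m * f_toy(thetas) - G * arm * np.sin(thetas)))
h = h_shoulder + arm * np.sin(thetas)
ranges = shot_range(thetas, v0, h)

print("optimal release angle:", np.degrees(thetas[np.argmax(ranges)]), "deg")
```

the same script , with the toy weighting replaced by a measured force - angle curve , is the practical recipe suggested in the text for an individual athlete .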
+ next we can also test the findings of maheras ( 1998) that decreases linearly with by plotting from eq.([v0th ] ) against .we find our toy model 1 gives an almost linear decrease , while the decrease of toy model looks exactly linear .+ our simple but reasonable toy models for the angle dependence of the force of the athlete give us values for optimal release angle of about , which coincide perfectly with the measured values .moreover they predict the linear decrease of with increasing as found by maheras ( 1998) this paper we have reinvestigated the biomechanics of shot put in order to determine the optimal angle of release .standard text - book wisdom tells us that the optimal angle is , while measurements of top athletes tend to give values around .including the effect of the height of the athlete reduces the theory prediction for the optimal angle to values of about ( lichtenberg and wills ( 1978 ) ) . as the next stepwe take the correlation between the initial velocity , the height of release and the angle of release into account .therefore we include three effects : 1 .the dependence of the height of release from the angle of release is a simple geometrical relation .it was investigated in detail by de luca ( 2005 ) .we confirm the result and correct a misprint in the final formula of luca ( 2005 ) .this effect favors larger values for the optimal angle of release .the energy the athlete can transmit to the shot is split up in a kinetic part and a potential energy part .this effect favors smaller values for the optimal angle of release .3 . the force the athlete can exert to the shot depends also on the angle of release .this effect favors smaller values for the optimal angle of release .the third effect depends on the individual athlete . to make decisive statements the angle dependence of the angular dependence of the force has to be measured first and then the formalism presented in this papercan be used to determine the optimal angle of release for an individual athlete .+ to make nethertheless some general statements we investigate two simple , reasonable toy models for the angle dependence of the force of an athlete . with these toy modelswe obtain theoretical predictions for the optimal angle of , which coincide exactly with the measured values . for our predictionswe do not need initial measurements of and over a wide range of release angles . in that respect our work represents a further developement of linthorne ( 2001) .moreover our simple toy models predict the linear decrease of with increasing as found by maheras ( 1998) .we thank philipp weishaupt for enlightening discussions , klaus wirth for providing us literature and jrgen rohrwild for pointing out a typo in one of our formulas .5 a. kuhlow , `` die technik des kugelstoens der mnner bei den olympischen spielen 1972 in mnchen , '' leistungssport * 2 * , ( 1975 ) j. dessureault , `` selected kinetic and kinematic factors involved in shot putting , '' in biomechanics vi - b ( editeb by e. asmussen and k. jrgensen ) , p 51 , baltimore , md : university park press .m. w. mccoy , r. j. gregor , w.c .whiting , r.c .rich and p. e. ward , `` kinematic analysis of elite shotputters , '' track technique * 90 * , 2868 ( 1984 ) .p. susanka and j. stepanek , `` biomechanical analysis of the shot put , '' scientific report on the second iaaf world chamionships in athletics , 2nd edn , i/1 ( 1988 ) , monaco : iaaf .k. bartonietz and a. 
borgstrm , `` the throwing events at the world championships in athletics 1995 , gteborg - technique of the world s best athletes .part 1 : shot put and hammer throw , '' new studies in athletics * 10 * ( 4 ) , 43 ( 1995 ) .d. k. tsirakos , r. m. bartlett and i. a. kollias , `` a comparative study of the release and temporal characteristics of shot put , '' journal of human movement studies * 28 * , 227 ( 1995 ) .p. luhtanen , m. blomquist and t. vnttinen , `` a comparison of two elite putters using the rotational technique , '' new studies in athletics * 12 * ( 4 ) , 25 ( 1997 ) .r. a. r. tricker and b. j. k. tricker , `` the science of movement , '' london : mills and boon ( 1967 ) .b. p. garfoot , `` analysis of the trajectory of the shot , '' track technique * 32 * , 1003 ( 1968 ) .v. m. zatsiorski and e. i. matveev , `` investigation of training level factor structure in throwing events , '' theory and practice of physical culture ( moscow ) * 10 * , 9 ( 1969 ) ( in russian ) .v. n. tutevich , `` teoria sportivnykh metanii , '' fizkultura i sport , moscow ( in russian ) 1969 . j. g. hay , `` biomechanics of sports techniques , '' englewood cliffs , nj : prentice - hall 1973 ( 4th edn 1993 ) .d. b. lichtenberg and j. g. wells , `` maximizing the range of the shot put , '' am .* 46(5 ) * , 546 ( 1978 ) . v. m. zatsiorsky , g. e. lanka and a. a. shalmanov , `` biomechanical analysis of shot putting technique , '' exercise sports sci* 9 * , 353 ( 1981 ) .b. mcwatt , `` angles of release in the shot put , '' modern athlete and coach * 20 * ( 4 ) , 17 ( 1982 ). m. s. townend , `` mathematics in sport , '' 1984 , ellis horwood ltd . ,chichester , isbn 0 - 853 - 12717 - 4 g. h. g. dyson , `` dyson s mechanics of athletics , '' london : hodder stoughton ( 8th edn .1986 ) .m. hubbard , `` the throwing events in track and field , '' in the biomechanics of sport ( edited by c. l. vaughan ) , 213 ( 1988 ) .boca raton , fl : crc press . n. j. de mestre , `` the mathematics of projectiles in sport , '' 1990 , cambridge university press , cambridge , isbn 0 - 521 - 39857 - 6 r. j. gregor , r. mccoy and w. c. whiting , `` biomechanics of the throwing events in athletics , '' proceedings of the first international conference on techniques in athletics , vol . 1 ( edited by g .-brggemann and j. k. ruhl ) , 100 ( 1990 ) .kln : deutsche sporthochschule .n. j. de mestre , m. hubbard and j. scott , `` optimizing the shot put , '' in proceedings of the fourth conference on mathematics and computers in sport ( edited by n. de mestre and k. kumar ) , 249 ( 1998 ) .gold coast , qld : bond university .a. v. maheras , `` shot - put : optimal angles of release , '' track fields coaches review , * 72 * ( 2 ) , 24 ( 1998 ) .f. r. yeadon , `` computer simulations in sports biomechanics . '' in proceedings of the xvi international symposium on biomechanics in sports ( ed . by h.j .riehle and m.m .vieten ) , ( 1998 ) 309 .kln : deutsche sporthochschule .n. p. linthorne , `` optimum release angle in the shot put , '' j. sports sci .* 19 * , 359 ( 2001 ) .m. hubbard , n. j. de mestre and j. scott , `` dependence of release variables in the shot put , '' j. biomech .* 34 * , 449 ( 2001 ) .m. bace , s. illijic and z. narancic , `` maximizing the range of a projectile , '' eur .* 23 * , 409 ( 2002 ) .f. mizera and g. horvth , `` influence of environmental factors on shot put and hammer throw range , '' j. biophys .* 35 * , 785 ( 2002 ) .r. cross , `` physics of overarm throwing , '' am .* 72(3 ) * , 305 ( 2004 ) .u. 
oswald and h. r. schneebeli , `` mathematische modelle zum kugelstossen , '' vsmp bulletin 94 ( 2004 ) . www.vsmp.ch/de/bulletins/no94/oswald.pdf r. de luca , `` shot - put kinematics , '' eur . j. phys . * 26 * , 1031 ( 2005 ) . nicholas p. linthorne , `` throwing and jumping for maximum horizontal range , '' arxiv : physics/0601148 . w. e. red and a. j. zogaib , `` javelin dynamics including body interaction , '' journal of applied mechanics * 44 * , 496 ( 1977 ) . table [ tab1 ] : compendium of some data measured during the summer olympic games 1972 from kuhlow .
we determine the optimal angle of release in shot put . the simplest model - mostly used in textbooks - gives a value of , while measurements of top athletes cluster around . simply including the height of the athlete brings the theory prediction down to about for typical parameters of top athletes . taking further into account the correlations between the initial velocity of the shot , the angle of release and the height of release , we predict values around , which coincide perfectly with the measurements . do - th 10/12
the future power system will be modernized with advanced metering infrastructure , bilateral information communication network , and intelligent monitoring and control system to enable a smarter operation .the transformation to the smart grid is expected to facilitate the deep integration of renewable energy , improve the reliability and stability of the power transmission and distribution system , and increase the efficiency of power generation and energy consumption .demand response program is a core subsystem of the smart grid , which can be employed as a resource option for system operators and planners to balance the power supply and demand .the demand side control of responsive loads has attracted considerable attention in recent years .an intelligent load control scheme should deliver a reliable power resource to the grid , while maintaining a satisfactory level of power usage to the end - user .one of the greatest technical challenges of engaging responsive loads to provide grid services is to develop control schemes that can balance the aforementioned two objectives .to achieve such an objective , a hierarchical load control structure via aggregators is suggested to better integrate the demand - side resources into the power system operation and control . in the hierarchical scheme, the aggregator performs as an interface between the loads and the system operator .it aggregates the flexibility of responsive loads and offers it to the system operator . in the meantime, it receives dispatch signals from the system operator , and execute appropriate control to the loads to track the dispatch signal .therefore , an aggregate flexibility model is fundamentally important to the design of a reliable and effective demand response program . it should be detailed enough to capture the individual constraints while simple enough to facilitate control and optimization tasks . among various modeling options for the adjustable loads such as thermostatically controlled loads ( tcls ), the average thermal battery model aims to quantify the aggregate flexibility , which is the set of the aggregate power profiles that are admissible to the load group .it offers a simple and compact model to the system operator for the provision of various ancillary services .apart from the adjustable loads , deferrable loads such as pools and plug - in vehicles ( pevs ) can also provide significant power flexibility by shifting their power demands to different time periods .however , different from the adjustable loads , it is more difficult to characterize the flexibility of deferrable loads due to the heterogeneity in their time constraints . in this paper , we focus on modeling the aggregate flexibility for control and planning of a large number of deferrable loads .there is an ongoing effort on the characterization of the aggregate flexibility of deferrable loads .an empirical model based on the statistics of the simulation results was proposed in .a necessary characterization was obtained in and further improved in . for a group of deferrable loads with homogeneous power , arrival time , and departure time ,a majorization type exact characterization was reported in . with heterogeneous departure times and energy requirements ,a tractable sufficient and necessary condition was obtained in , and was further utilized to implement the associated energy service market . despite these efforts , a sufficient characterization of the aggregate flexibility for general heterogeneous deferrable loads remains a challenge . 
to address this issue, we propose a novel geometric approach to extract the aggregate flexibility of heterogeneous deferrable loads .geometrically , the aggregate flexibility modeling amounts to computing the minkowski sum of multiple polytopes , of which each polytope represents the flexibility of individual load .however , calculating the minkowski sum of polytopes under facet representation is generally np - hard .interestingly , we are able to show that for a group of loads with general heterogeneity , the exact aggregate flexibility can be characterized analytically .but the problem remains in the sense that there are generally exponentially many inequalities with respect to the number of loads and the length of the time horizons , which can be intractable when the load population size or the number of steps in the considered time horizon is large .therefore , a tractable characterization of the aggregate flexibility is desired . for deferrable loads with heterogeneous arrival and departure times , the constraint sets are polytopes that are contained in different subspaces .alternative to the original definition of the minkowski sum , we find it beneficial to regard it as a projection operation . from the latter perspective, the aggregate flexibility is considered as the projection of a higher dimensional polytope to the subspace representing the aggregate power of the deferrable loads .therefore , instead of approximating the minkowski sum directly by its definition , we turn to approximating the associated projection operation . to this end , we formulate an optimization problem which approximates the projection of a full dimensional polytope via finding the maximum homothet of a given polytope , i.e. , the dilation and translate of that polytope .the optimization problem can be solved very efficiently by solving an equivalent linear program .furthermore , we propose a `` divide and conquer '' strategy which enables efficient and parallel computation of the aggregate flexibility of the load group .the scheduling policy for each individual load is derived simultaneously along the aggregation process .finally , we apply our model to the pev energy arbitrage problem , where given predicted day - ahead energy prices , the optimal power profile consumed by the load group is calculated to minimize the total energy cost .the simulation results demonstrate that our approach is very effective at characterizing the feasible aggregate power flexibility set , and facilitating finding the optimal power profile .there are several closely - related literature on characterizing flexibility of flexible loads . in our previous work , a geometric approach was proposed to optimally extract the aggregate flexibility of heterogeneous tcls based on the given individual physical models .the simulation demonstrated accurate characterization of the aggregate flexibility which was very close to the exact one .however , this approach can not be applied to the deferrable loads directly .similar to which sought a special class of polytopes to facilitate fast calculation of minkowski sum , the authors in proposed to characterize the power flexibility using zonotopes .different from , this method could deal with the time heterogeneity as appeared in the deferrable loads .in addition , both approaches extracted the flexibility of _ individual _ load . 
* * in comparison , the approach proposed in this paper is a _ batch _ processing method : it directly approximates the aggregate flexibility of a group of loads , which could mitigate the losses caused by the individual approximation as emphasized in .* notation * : the facet representation of a polytope is a bounded solution set of a system of finite linear inequalities : , where throughout this paper ( or , , ) means elementwise inequality . a polytope is called full dimensional , if it contains an interior point in .given a full dimensional polytope in , a scale factor , and a translate factor , the set is called a homothet of .we use to denote the minkowski sum of multiple sets , and of two sets .we use to represent the dimensional column vector of all ones , the dimensional identity matrix , and the indicator function of the set .the bold denotes the column vector of all s with appropriate dimension . for two column vectors and , we write for the column vector ^{t} ] , where is the state of charge ( soc ) with initial condition , and ] , where we assume that .the load is called deferrable if .a charging power profile ^{t} ] .we differentiate the pev ( ) by using a superscript on the variables introduced above .the charging task of the pev is determined by .let be the set of all the admissible power profiles of the load .it can be described as , ,\forall t\in\mathbb{a}^{i},\\ u_{t}^{i}=0,\forall i\in\mathrm{\mathbb{t}}\backslash\mathbb{\mathbb{a}}^{i}\mbox { and } { \bf 1}_{m}^{t}u^{i}\in[\underline{e}^{i},\bar{e}^{i } ] \end{array}\right\ } \label{eq : pi}\ ] ] it is straightforward to see that each is a convex polytope .in addition , we say is of codimension if its affine hull is a dimensional subspace of . in the smart grid ,the aggregator is responsible for procuring a generation profile from the whole market to service a group of loads .we define the generation profile that meets the charging requirements of all pevs as follows . a generation profile is called _ adequate _ if there exists a decomposition , such that is an admissible power profile for the load , i.e. , .we call the set of all the adequate generation profiles the _ aggregate flexibility _ of the load group. it can be defined as the minkowski sum of the admissible power sets of each load , it is straightforward to show that is also a convex polytope whose codimension is to be determined by the parameters of the deferrable loads .the numerical complexity of the existing algorithms for calculating the minkowski sum is rather expensive ( see for some numerical results ) . 
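before turning to that difficulty , note that the admissible set of a single deferrable load defined above is just a box with one extra pair of total - energy inequalities , so its facet representation is easy to write down explicitly . the sketch below builds such an for a toy charging task ( the time horizon , power limit and energy window are invented numbers ) and checks whether a candidate profile is admissible .

```python
import numpy as np

def pev_polytope(T, arrive, depart, p_max, e_min, e_max):
    """facet representation {u : A u <= b} of the admissible charging profiles of
    one deferrable load: per-slot power bounds inside the availability window
    [arrive, depart), zero power outside it, and a total-energy window."""
    A_rows, b_rows = [], []
    for t in range(T):
        e_t = np.zeros(T); e_t[t] = 1.0
        ub = p_max if arrive <= t < depart else 0.0
        A_rows += [e_t, -e_t]            #  u_t <= ub   and   -u_t <= 0
        b_rows += [ub, 0.0]
    ones = np.ones(T)
    A_rows += [ones, -ones]              #  e_min <= sum_t u_t <= e_max
    b_rows += [e_max, -e_min]
    return np.array(A_rows), np.array(b_rows)

def is_admissible(u, A, b, tol=1e-9):
    return bool(np.all(A @ u <= b + tol))

# toy example: 24 hourly slots, available 20:00-24:00, 3.3 kW, 6-10 kWh required
A, b = pev_polytope(24, arrive=20, depart=24, p_max=3.3, e_min=6.0, e_max=10.0)
u = np.zeros(24); u[20:24] = 2.0         # charge at 2 kW for four hours = 8 kWh
print(is_admissible(u, A, b))            # True
```

the aggregate flexibility is then the minkowski sum of many such polytopes , which is exactly the object that the rest of the section sets out to approximate .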
in general , calculating when and are polytopes specified by their facets is np - hard .however , for the particular problem of pev charging , it is possible to characterize the exact aggregate flexibility analytically .such characterization is built on the results from the matrix feasibility problem and from the network flow theory , both of which are intrinsically connected with the pev charging problem .[ thm : iff]consider a group of pevs or deferrable loads with heterogeneous parameters , .then the set of adequate generation profiles consists of those ^{t} ] , while the rest of the positions in this row are forbidden positions that can only be filled with s .moreover , , it has the column sum and the row sum in the interval ] is the lift of the vector in by setting the additional dimensions to .hence , for some implies the constraint in ( [ eq : opp1 ] ) .therefore , it is sufficient to pose a more restrictive constraint to obtain a suboptimal solution , and we have where the constraint can be expressed as \leq sc+b\tilde{r}\right\ } .\ ] ] by applying the farkas s lemma ( see the appendix [ subsec : farkaslemma ] ) , the above optimization problem can be transformed into the following linear programming problem , ,\\ & gh\leq b\left[\begin{array}{c } r\\ -\bar{u}_{0 } \end{array}\right]+sc .\end{array}\label{eq : opp3}\ ] ] before proceed , we illustrate the formulation ( [ eq : opp2 ] ) using a simple example borrowed from .[ exa : test]let be given by , and .the polytope is plotted in fig .[ fig : example ] .we solve the problem ( [ eq : opp3 ] ) to find a sufficient approximation of , and obtain , , .the corresponding scale factor is , and translate factor is . from these datawe can have that ^{t}\subset\tilde{\mathcal{p}} ] , which is exactly . in general, the problem ( app ) gives a suboptimal solution for the approximation of with respective to . a possible way to reducethe conservativeness is to employ the quadratic decision rule or other nonlinear decision rules as reported in . in this subsection, the polytopic projection approximation developed in the above section will be employed to aggregate the pevs flexibility .we will discuss several issues including the choice of the nominal model , the preprocessing of the charging constraints , and the strategy for parallel computation .finally , the explicit formulae for the flexibility model and the corresponding scheduling policy are derived .intuitively , one can choose the nominal polytope to be of the similar form of ( [ eq : pi ] ) , and the parameters can be taken as the mean values of the pev group . more generally , we can define the virtual battery model as follows .the set is called a -horizon discrete time virtual battery with parameters , if \}.\ ] ] is called a sufficient battery if .conceptually , the virtual battery model mimics the charging / discharging dynamics of a battery .we can regard as the power draw of the battery , and as its discharging / charging power limits , and and as the energy capacity limits .geometrically , it is a polytope in with facets , which is computationally very efficient when posed as the constraint in various optimization problems .note that the original high - dimensional polytope defined in ( [ eq : ptilde ] ) contains equality constraints , which is not full dimensional .therefore , first we have to remove the equalities by substituting the variables . 
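as a brief aside before the variable elimination spelled out next , adequacy of a given generation profile can always be tested directly from its definition : a linear program is asked to split the profile into individual admissible profiles . the sketch below does exactly that with scipy for two invented toy charging tasks ; it is a brute - force check whose size grows with the number of loads and the horizon length , which is precisely why the tractable approximation developed in this section is needed .

```python
import numpy as np
from scipy.optimize import linprog

def is_adequate(r, tasks):
    """lp feasibility test: can the aggregate profile r (length T) be split into
    admissible profiles u^i, one per task (arrive, depart, p_max, e_min, e_max)?
    this is the definition of adequacy used in the text."""
    T, N = len(r), len(tasks)
    # decision vector: u stacked as [u^1 ; u^2 ; ... ; u^N]
    bounds = []
    for (a, d, p_max, _, _) in tasks:
        bounds += [(0.0, p_max if a <= t < d else 0.0) for t in range(T)]
    # equality: for each t, sum_i u^i_t = r_t
    A_eq = np.zeros((T, N * T))
    for i in range(N):
        A_eq[np.arange(T), i * T + np.arange(T)] = 1.0
    # inequality: e_min_i <= sum_t u^i_t <= e_max_i
    A_ub, b_ub = [], []
    for i, (_, _, _, e_min, e_max) in enumerate(tasks):
        row = np.zeros(N * T); row[i * T:(i + 1) * T] = 1.0
        A_ub += [row, -row]; b_ub += [e_max, -e_min]
    res = linprog(c=np.zeros(N * T), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=np.asarray(r, float), bounds=bounds,
                  method="highs")
    return res.status == 0          # 0 = optimal, i.e. a feasible split exists

tasks = [(0, 4, 3.3, 4.0, 8.0), (2, 6, 3.3, 3.0, 6.0)]   # invented toy tasks
r = np.array([2.0, 2.0, 3.0, 3.0, 2.0, 2.0])             # candidate aggregate profile
print(is_adequate(r, tasks))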
for simplicity , assume that .more explicitly , let be the pev s charging profile at time , , and , be the generation profile at time .the overall charging constraints of the pev group can be written as follows which is a polytope in , and the coordinate is designated to be .we first need to eliminate the equality constraints containing , i.e. , the first line of ( [ eq : original ] ) .to standardize the elimination process , define , which is the index set of the pevs that can be charged at time . without loss of generality , assume that , , and we substitute , in ( [ eq : original ] ) , where is the first pev in the set .let be the set of time instants at which the substitution of is made .further , we remove the coordinate and obtain , where the new coordinate becomes with and {t\in\mathbb{a}^{i}\backslash s_{i}} ] mwh . from fig .[ fig : bounds ] , we can see that around the midnight ( the hour ) , the charging flexibility of the pevs are the largest in terms of the difference of the charging rate bounds . denoting the energy price by , and the planned energy by , the energy arbitrage problem can be formulated as a linear programming problem as follows , clearly , the above optimization problem can be solved much more efficiently than directly optimizing the power profiles subject to the constraints of 1000 pevs .we plot the obtained power profiles against the price changes in fig .[ fig : planing ] .it can be observed that most of the energy demand are consumed during to in the morning , when the prices are at its lowest .the same curve of the planned power is also plotted in fig . [fig : bounds ] ( the dotted line ) , where note we assume that the time discretization unit is hour .we see that the planned charging rate lies in the charging bounds of , and almost always matches the maximum / minimum bound . using this charging profile ,the total energy being charged to the pevs is mwh which lies in the interval .hence , it is adequate and the charging requirement of individual pev can be guaranteed by using the scheduling policy ( [ eq : schdplcy ] ) .we choose the immediate charging policy as the baseline and use it to compare with the obtained optimal charging profile in fig .[ fig : bounds ] . 
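the arbitrage problem itself is a small linear program once the aggregate flexibility has been condensed into a virtual - battery - style description . the sketch below is only schematic : the prices , the per - slot power bounds , the running - energy bounds and the total - energy window are invented stand - ins for the bounds produced by the aggregation step , and a one - hour time step is assumed as in the text .

```python
import numpy as np
from scipy.optimize import linprog

T = 24
price = 40 + 15 * np.sin(np.linspace(0, 2 * np.pi, T))   # invented day-ahead prices (eur/mwh)
p_lo, p_hi = np.zeros(T), np.full(T, 1.2)                 # per-slot power bounds (mw), invented
e_lo, e_hi = 0.0, 8.0                                     # running-energy bounds (mwh), invented
e_target_lo, e_target_hi = 6.0, 8.0                       # total energy that must be delivered

# running-energy constraints:  e_lo <= sum_{s<=t} u_s <= e_hi  for every t,
# plus the total-energy window on the full horizon
L = np.tril(np.ones((T, T)))
A_ub = np.vstack([L, -L, np.ones((1, T)), -np.ones((1, T))])
b_ub = np.concatenate([np.full(T, e_hi), np.full(T, -e_lo),
                       [e_target_hi], [-e_target_lo]])

res = linprog(price, A_ub=A_ub, b_ub=b_ub,
              bounds=list(zip(p_lo, p_hi)), method="highs")
u_opt = res.x
print("total cost:", price @ u_opt, "   energy delivered:", u_opt.sum())
```

as expected , the optimiser concentrates the charging in the cheapest hours while respecting the power and energy envelopes ; feeding in the actual virtual - battery bounds and predicted prices would reproduce the planning computation described above .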
to ensure a fair comparison, we impose an additional constraint that the total energies consumed by both profiles are the same .the total energy cost for the baseline charging profile is , while the cost for the optimal charging profile is , which reduces the baseline cost by about .this paper proposed a novel polytopic projection approximation method for extracting the aggregate flexibility of a group of heterogeneous deferrable loads .the aggregate flexibility of the entire load group could be extracted parallelly and in multiple stages by solving a number of linear programming problems .the scheduling policy for individual load was simultaneously derived from the aggregation process .finally , a pev energy arbitrage problem was solved to demonstrate the effectiveness of our approach at characterizing the feasible aggregate power flexibility set , and facilitating finding the optimal power profile .our future work includes studying the performances of using other decision rules such as the quadratic decision rule and the nonlinear decision rule , and as compared to the method using zonotopes .in addition , it is interesting to consider a probabilistic description of the aggregate flexibility as in practice the uncertainty of the loads parameters must be considered to reduce the risk of over - estimating or under - estimating the aggregated flexibility . for the sake of completeness , we restate the following version of farkas s lemma as used in , which will be used to derive our algorithm for approximating the polytopic projection . its proof can be found in .[ lem : farkas](farkas lemma ) suppose that the system of inequalities has a solution and that every solution satisfies .then there exists , , such that and .\(1 ) sufficient battery : since is the solution of the app problem , we have , where and .let and it can be shown that , where , and the last equality is derived by using lemma [ lem : mksksumscale ] . since see is sufficient .now suppose , then .denoting by its facet representation , we have , where ^{t},\\ h= & ( \bar{u},-\underline{u},\bar{e},-\underline{e}),\end{aligned}\ ] ] and then parameters of can be obtained from \(2 ) scheduling policy : given a generation profile in , we can decompose it into the individual admissible power profile through two steps .first , we decompose it into the generation profiles for each groups : , by part ( 1 ) we know and further more denoting the generation profile for the group by and hence , which is the aggregate flexibility extracted from the group .it can be further decomposed into each pevs in the group .now we need to use the linear decision rule in ( [ eq : decision ] ) .note that the decision rule ( [ eq : decision ] ) is applied in ( [ eq : oppapp ] ) which actually maps from to , while the decomposition mapping we need is actually from to .the mappings between these four polytopes form a commutative diagram ( see below ) .observe that the linear ratio of the decomposition mapping does not change , and only the translate vector needs to be calculated .therefore , assume that the decomposition takes the form where is the translate vector to be determined . from the above commutative diagram , we must have , solve the above equality and we will have and the overall scheduling policy ( [ eq : schdplcy ] ) follows from the composition of and .j. bhatt , v. shah , and o. 
jani , `` an instrumentation engineer s review on smart grid : critical applications and parameters , '' _ renewable and sustainable energy reviews _ , vol .12171239 , dec . 2014 .d. s. callaway , `` tapping the energy storage potential in electric loads to deliver load following and regulation with application to wind energy , '' _ energy conversion and management _ , vol .50 , no . 5 , pp .1389 1400 , may 2009 .m. henk , j. richter - gebert , and g. m. ziegler , `` basic properties of convex polytopes , '' in _ handbook of discrete and computational geometry _, j. e. goodman and j. orourke , eds.1em plus 0.5em minus 0.4emboca raton , fl , usa : crc press , inc . , 1997 , ch .243270 .d. hershkowitz , a. j. hoffman , and h. schneider , `` on the existence of sequences and matrices with prescribed partial sums of elements , '' _ linear algebra and its applications _ , vol .265 , pp . 7192 ,m. kamgarpour , c. ellen , s. soudjani , s. gerwinn , j. mathieu , n. mullner , a. abate , d. callaway , m. franzle , and j. lygeros , `` modeling options for demand side participation of thermostatically controlled loads , '' in _ bulk power system dynamics and control - ix optimization , security and control of the emerging power grid , irep symposium _ , aug .2013 , pp . 115 .s. koch , j. mathieu , and d. callaway , `` modeling and control of aggregated heterogeneous thermostatically controlled loads for ancillary services , '' in _17th power system computation conference _ , stockholm , sweden , august 2011 .j. liu , s. li , w. zhang , j. l. mathieu , and g. rizzoni , `` planning and control of electric vehicles using dynamic energy capacity models , '' in _ the 52nd ieee annual conference on decision and control _ , dec .2013 , pp . 379384 .a. nayyar , j. taylor , a. subramanian , k. poolla , and p. varaiya , `` aggregate flexibility of a collection of loads , '' in _ the 52nd ieee annual conference on decision and control _ , dec .2013 , pp . 56005607 .
aggregation of a large number of responsive loads offers substantial power flexibility for demand response . an effective control and coordination scheme for flexible loads requires an accurate and tractable model that captures their aggregate flexibility . this paper proposes a novel approach to extract the aggregate flexibility of deferrable loads with heterogeneous parameters using polytopic projection approximation . first , an exact characterization of their aggregate flexibility is derived analytically , which in general contains exponentially many inequality constraints with respect to the number of loads . in order to have a tractable solution , we develop a numerical algorithm that gives a sufficient approximation of the exact aggregate flexibility . geometrically , the flexibility of each individual load is a polytope , and their aggregation is the minkowski sum of these polytopes . our method originates from an alternative interpretation of the minkowski sum as a projection . the aggregate flexibility can be viewed as the projection of a high - dimensional polytope onto the subspace representing the aggregate power . we formulate a robust optimization problem to optimally approximate the polytopic projection with respect to the homothet of a given polytope . to enable efficient and parallel computation of the aggregate flexibility for a large number of loads , a multi - stage aggregation strategy is proposed . the scheduling policy for individual loads is also derived . finally , an energy arbitrage problem is solved to demonstrate the effectiveness of the proposed method .
atlas will be a particle physics experiment at the future large hadron collider ( lhc ) , which is being built at cern and is expected to start operation in 2007 .the pixel detector is the innermost component of the atlas inner tracker . in the barrelthe detector modules are mounted on staves , while the modules in the end caps are organized in disk sectors .the pixel detector consists of 1774 detector modules ( barrel : 1456 modules ; discs : 318 ) . + the most important components of a detector module are : * 46080 individual pixel sensors with size of * 16 front end read out chips * 1 module controller chipthe pixel stave is the module local support unit of the barrel section of the pixel detector .the components of a stave are the thermal management tile ( tmt ) , the stave carbon structure , and the stave cooling circuit .additionally , every stave has two geometrical reference marks ( ruby balls ) .the stave coordinate system is specified in figure [ fig : altube ] .the tmt itself consists of two parts .both parts have a shingled design with an angle of 1.1 degrees which are glued together . as material for the tmts , carbon - carbon ( c - c ) has been chosen .the reason for this is a thermal conductivity which is 10 - 100 times better than standard carbon fiber reinforced plastic ( cfrp ) , even in the transverse direction to the fibres .it has excellent mechanical properties , stability , and transparency to particles .the tmt is made of 1502 zv 22 c - c material from sgl ( augsburg , germany ) .the raw material is in the form of plates of about 6 mm thickness .the plates consist of 2-d roving fabric carbon fibres layers overlapped in a phenolic resin matrix , densified , and graphitized at high temperature to enhance the thermal properties .the raw tmts are machined to the final stepping shape with a cnc milling machine equipped with high speed spindle and diamond coated millers .the specific properties of the material are summarized in table [ tab : propertiesofsglcc1502zv22 ] ..properties of sgl cc 1502 zv 22 [ cols="<,<,<",options="header " , ] the stave cooling circuit is made of a thin aluminum tube ( see figure [ fig : altube ] ) , shaped to fit inside the inner cross section of the stave carbon structure .the material chosen is 6060 aluminum alloy .this material shows good extrusion properties at small thickness .the cooling system is based on the evaporation of fluorocarbon and provides an operating temperature of about . the stave carbon structure ( `` omega profile '' )is made of three layers of unidirectional ultra high modulus ( uhm ) carbon fibre reinforced cyanate ester resin pre - preg ( preimpregnated ) .the adopted lay - up ( 0 - 90 - 0 ) , 0.3 mm thick , has been optimised through an extensive design and test program . the choice of the pre - preg material and of the lay - up with a central cross - layer has been made in order to match the longitudinal cte to that of the impregnated c - c , thus minimizing distortions resulting during cool - down .the 13 modules are arranged one after the other along the stave axis and they are partially overlapped in order to achieve the required coverage in the axial direction .thus , the surface of the stave in contact with the modules is stepped and the modules are arranged in a shingled layout . 
however , c - c materials have two main technological drawbacks limiting their application range : porosity and the difficulty of achieving complex and accurate geometries due to the high temperature manufacturing process . to overcome the porosity of the c - c material , it was impregnated with resin such that infiltration by thermal greases and carbon dust release could be avoided . for the assembly of modules on staves a custom made module loading machine was developed and built in wuppertal . the requirements of this machine are : * handling the modules with minimal stress * positioning of modules on the stave with an accuracy better than 50 microns * regular glue deposition + to control the applied stress the bow of each module is measured before and after loading . the main components of the module loading machine are the granite base of and a flatness of from johan fischer . on this base several linear guideways from schneeberger , type monorail bm , are mounted to allow movement of the central measurement unit , the microscope m 420 from leica . the microscope itself is connected to owis micrometric tables and allows movements in all dimensions . the movements are controlled by heidenhain sealed linear encoders . heidenhain digital readouts type nd 760 are used for displaying and storing the position of the m 420 using a personal computer . the module mounting head is connected to several linear guides , goniometer and rotary tables to reach any position . the module is fixed by vacuum to the mounting head and its position is always monitored by the microscope . for the deposition of the glue a computer controlled dispenser from efd , type 1502 , is used . the assembly time of each module is about 1 hour , and the curing time of the glue is 2 hours . this leads to a production rate of 1 stave per week . the x , y , and z position of the glued modules of each stave is controlled with respect to the stave ruby balls . these provide the reference system for the module position and allow one to assess the accuracy of the loading procedure . as a typical result the z position measurement for 13 staves is provided in figure [ fig : z - deviation_modwise ] . for each stave the deviation from the nominal position is shown for each of the 13 modules . the stated module ids are equivalent to defined locations on the stave . the tolerances are and are indicated in the figure by thicker lines . the plot shows that the accuracy of the z positioning is always within the tolerances of . in figure [ fig : z - deviation_distribution ] , the distribution of the z - deviation is given and demonstrates that 50% of the modules are glued with an accuracy which is even better than . one can also see that there is a systematic shift of 6 - 7 .
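the survey numbers quoted above ( deviation per module , fraction better than a given accuracy , systematic shift ) are straightforward to extract from the measured positions . the sketch below shows one possible way to do it ; the 13 x 13 array of z deviations is randomly generated stand - in data , and the tolerance band is assumed to be the 50 micron requirement listed earlier , so the printed numbers are illustrative only .

```python
import numpy as np

# hypothetical survey data: z deviation from nominal (microns) for
# 13 modules on each of 13 staves -- stand-in numbers, not the real survey
rng = np.random.default_rng(1)
z_dev = rng.normal(loc=6.5, scale=15.0, size=(13, 13))   # assumed systematic shift of ~6-7 um

tolerance = 50.0                                          # assumed tolerance band (um)

within = np.abs(z_dev) <= tolerance
print("fraction within tolerance  :", within.mean())
print("systematic shift (mean)    :", z_dev.mean(), "um")
print("spread (standard deviation):", z_dev.std(ddof=1), "um")
print("modules better than 10 um  :", (np.abs(z_dev) <= 10.0).mean())
# per-module-position statistics, e.g. to spot a location-dependent bias
print("mean shift per module id   :", z_dev.mean(axis=0).round(1))
```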
as mentioned previously , the difference between the bow of the module before and after loading is an indicator as to whether stress has been applied during the loading procedure .figure [ fig : bow ] shows the mean bow difference ( mean value ) , averaged over all 13 modules of one stave , as well as the standard deviation ( sd ) and the average of the absolute values ( mean of amounts ) .one can see , that the bow difference is always better than .this is interpreted to show that a minimum of stress is being applied to the modules during the loading procedure .a module loading machine for the atlas pixel detector has been successfully realized in wuppertal .all requirements of the mounting precision are fulfilled .the position of the modules after loading are well within the tolerances .the applied stress during loading is negligible .this work has been supported by the _ bundes - ministerium fr bildung , wissenschaft , forschung und technologie _ ( bmbf ) under grant number05 ha4px1/9 . atlas technical proposal cern / lhcc/94 - 43 , 15 december 1994 .pixel detector technical design report cern / lhcc/98 - 13 .atlas detector and physics performance technical design report cern / lhhc/99 - 14 .b - layer ( atl - ip - cs-0009 ) , layer 1 and 2 specifications ( atl - ip - cs-0007 ) . pitch based carbon fibre ys80 ( data sheet ) , nippon graphite fiber ( ngf ) .available from : http://plaza6.mbn.or.jp/~gf/english/products/space/ysa.html .cyanate ester resin system ex1515 ( data sheet ) , bryte technologies inc .available from : http://www.brytetech.com/pdf-ds/ex-1515.pdf .
the barrel part of the atlas pixel detector will consist of 112 carbon - carbon structures called `` staves '' with 13 hybrid detector modules being glued on each stave . the demands on the glue joints are high , both in terms of mechanical precision and thermal contact . to achieve this precision a custom - made semi - automated mounting machine has been constructed in wuppertal , which provides a precision in the order of tens of microns . as this is the last stage of the detector assembly providing an opportunity for stringent tests , a detailed procedure has been defined for assessing both mechanical and electrical properties . this note gives an overview of the procedure for affixation and tests , and summarizes the first results of the production . , , , , , , , , pixel detector , stave , mounting machine
the tumor suppressor p53 plays a central role in cellular responses to various stress , including oxidative stress , hypoxia , telomere erosion and dna damage . in unstressed cells ,p53 is kept at low level via its negative regulator mdm2 . under stressed conditions , such as dna damage, p53 is stabilized and activated to induce the express of downstream genes , including p21/waf1/cip1 and gadd45 that are involved in cell cycle arrest , and puma , bax and pig3 that can induce apoptosis .the cell fate decisions after dna damage are closely related to the p53 dynamics that is regulated by p53-mdm2 interactions .oscillations of p53 level have been observed upon ir induced dna damage at the population level in several human cell lines and transgenic mice .more interestingly , pulses of p53 level were revealed in individual mcf7 cells , and it was suggested that the cell fate is governed by the number of p53 pulses , i.e. , few pulses promote cell survival , whereas sustained pulses induce apoptosis .fine control of p53 dynamics is crucial for proper cellular response .mutations and deregulation of p53 expression have been found to associate with various cancer types . programmed cell death 5 ( pdcd5 ) , formerly referred to as tfar19 ( tf-1 cell apoptosis - related gene 19 ) , is known to promote apoptosis in different cell types in response to various stimuli . decreased expression of pdcd5 has been detected in various human tumors , and restoration of pdcd5 with recombinant protein or an adenovirus expression vector can significantly sensitive different cancers to chemotherapies .pdcd5 is rapidly upregulated after dna damage , interacts with the apoptosis pathway , and translocates from the cytoplasm to nucleus during the early stages of apoptosis . recently, novel evidence indicates that pdcd5 is a p53 regulator during gene expression and cell cycle .it was shown that pdcd5 interacts with p53 by inhibiting the mdm2-mediated ubiquitination and accelerating the mdm2 degradation .hence , upon dna damage , pdcd5 can function as a co - activator of p53 to regulate cell cycle arrest and apoptosis .a series of computational models have been constructed to investigate the mechanism of p53-mediated cell - fate decision . in these modelsthe p53-mdm2 oscillation was considered to be crucial for cell - fate decision after dna damage . after dna damage , such as double strand breaks ( dsbs ) , active atm monomer ( atm * )become dominant . in the nucleus , atm * active p53 in two ways : atm * promotes the phosphorylation of p53 at ser-15 and accelerates the degradation of mdm2 in nucleus ( ) . in the cytoplasm, active p53 induces the production of , which in turn promotes the translation of _ p53 _ mrna to form a positive feedback .the p53-mdm2 oscillation is a consequence of the two coupled feedback loops : the negative - feedback between p53 and , and the positive - feedback between p53 and .recently , mathematical models of how pdcd5 interacts with the dna damage response pathway to regulate cell fate decisions have been developed . 
in our previous study , a model for the effect of pdcd5 to the p53 pathway has been established through a nonlinear dynamics model that is developed based on the module of p53-mdm2 oscillator in and experimental findings in .it was shown that the p53 activity can display different dynamics after dna damage depending on the pdcd5 level .the p53 protein shows low activity in case of pdcd5 deletion , sustain intermediate level for medial pdcd5 expression , and pulses when the pdcd5 level is upregulated .nevertheless , little is known about the global p53 dynamics upon pdcd5 interactions with changes in the expression levels of p53 and pdcd5 , which is often seen in tumors . here ,we investigate , based on the mathematical model proposed in , how the dynamics of p53 activity after dna damage depend on changes in the levels of p53 production and pdcd5 .a global bifurcation analysis shows that p53 activity can display various dynamics depending on the p53 production and pdcd5 levels , including monostability , bistability with two stable steady states , oscillations , and co - existence of a stable steady state and an oscillatory state .these dynamics are further investigated through the method of potential landscape .the stability of the oscillation states are characterized by the potential force and the probability flux .we also discuss the effect of pdcd5 efficiency on p53 dynamics .pdcd5 efficiency can induce p53 oscillation by hopf bifurcation , and the transition of p53 dynamics near the hopf bifurcation are charaterized by barrier height and energy dissipation .figure . [ fig:1 ] illustrates the model of p53-mdm2 oscillator with pdcd5 regulation studied in this paper . herewe summarize the model equations and refer for details . , which in turn promote the translation of _ p53 _ mrna . in the nucleus, active p53 is degraded by binding to , and the interaction is disrupted by pdcd5 . both active atm and pdcd5 are able to accelerate the degradation of .mdm2 in the nucleus and cytoplasm can shuttle between the two compartments .refer to the text and for details.,width=302 ] the model equations are composed of three components : active p53 in the nucleus ] and in cytoplasm ] and represents the right hand size of - .the above equation can be extended to include fluctuations by a probability approach : , where is the noise perturbation .the statistical nature of the noise is often assumed as gaussian ( large number theorem ) and white ( no memory ) : and , where is the correlation tensor ( matrix ) measuring the noise strength .the probability of system states evolves following the fokker - plank equation , where the flux vector is defined as . the flux measures the speed of the flow in the concentration space .at the stationary state , , then .there are two possibilities : one is , which implies detailed balance , and the other is so that the detailed balance is broken and the system is at the non - equilibrium state . at the stationary state ,the probability flux is defined by where is the probability density at stationary state .hence , here is defined as the non - equilibrium potential . from ,we have divided the force into two parts : the potential force and curl flux force . at detail balance , the curl flux force is zero . 
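the langevin / fokker - planck description above can be explored numerically by direct stochastic simulation : long trajectories of the noisy dynamics sample the stationary density , and then gives an estimate of the non - equilibrium potential . the sketch below uses a deliberately simplified two - variable p53 - mdm2 caricature with an illustrative pdcd5 - dependent mdm2 degradation term ; the equations , parameters and noise strength are assumptions made for illustration and are not the three - component model analysed in this paper .

```python
import numpy as np

rng = np.random.default_rng(0)

def force(x, pdcd5=1.0):
    """minimal caricature of a p53-mdm2 negative feedback: p53 (p) is produced and
    degraded by mdm2 (m); mdm2 is induced by p53 and degraded at a rate that grows
    with the pdcd5 level.  illustrative only, not the model of the paper."""
    p, m = x
    dp = 1.0 - 2.5 * m * p / (0.5 + p)
    dm = 1.5 * p**2 / (1.0 + p**2) - (0.8 + 0.6 * pdcd5) * m
    return np.array([dp, dm])

def euler_maruyama(x0, t_end=200.0, dt=0.01, noise=0.02, pdcd5=1.0):
    """integrate dx/dt = F(x) + zeta with additive gaussian white noise of strength
    `noise` (diagonal, constant diffusion matrix)."""
    n = int(t_end / dt)
    traj = np.empty((n, 2))
    x = np.array(x0, float)
    for k in range(n):
        x = x + force(x, pdcd5) * dt + np.sqrt(2 * noise * dt) * rng.standard_normal(2)
        x = np.maximum(x, 0.0)          # concentrations stay non-negative
        traj[k] = x
    return traj

traj = euler_maruyama([0.2, 0.2], pdcd5=1.0)
# the histogram of the long trajectory approximates the stationary density, from
# which the non-equilibrium potential is estimated as U = -ln(P_ss)
hist, _, _ = np.histogram2d(traj[:, 0], traj[:, 1], bins=60, density=True)
U = -np.log(hist + 1e-12)
```

the same recipe , applied to the actual model equations , is what underlies the landscape and flux plots discussed in the following section .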
for a non - equilibrium open system ,both potential landscape and the associated flux are essential in characterizing the global stationary state properties and the system dynamics .to investigate how pdcd5 interacts with p53 to regulate cell fate decision dynamics after dna damage , we performed codimension - two bifurcation analysis with respect to the two parameters and .the bifurcation diagrams were computed with auto incorporated in xppaut .the main bifurcation diagram is shown at fig . [fig : bif2 ] and detailed below , with fig .[ fig : bif2]b the enlarge of the dashed square region in fig .[ fig : bif2]a .the parameter plane is divided into ten regions labelled _ i__x _ each with different dynamical profiles and marked with different colors in fig .[ fig : bif2 ] ( refer fig . [fig : bif2]b for regions - ) .typical dynamics of each region is shown at fig .[ fig : dyn ] and described below : 1 .i _ corresponds to monostability with a single stable steady state of low p53 activity despite the expression level of pdcd5 .2 . in region _ ii _there is a stable steady state and two unstable steady states .i _ and _ ii _ are separated by a curve of saddle - node bifurcation , across which an unstable node and a saddle appear due to fold bifurcation .3 . region _ iii _is on the right of region _ii_. crossing the curve from region _ ii _ , the stable node and the saddle collide and disappear , accompanied by the emergence of a stable limit cycle that surrounds an unstable steady state .meanwhile , parameters can also change from region _i _ to _ iii _ crossing the boundary or , respectively ( fig . [fig : bif2]b ) .the curve represents supercritical hopf bifurcation at which the stable steady state in region_ i _ becomes unstable and a stable limit cycle appears .the curve represents subcritical hopf bifurcation . before crossing the curve , two limit cycles , stable and unstable ones ,come out due to fold bifurcation of limit cycles that is represented by the curve ( fig .[ fig : bif2]b ) .next , the unstable limit cycle disappear and the stable steady state lose its stability when at the curve .4 . regions _ iii _ and _ iv _ are separated by a subcritical hopf bifurcation curve by which an unstable limit cycle appears and locates between the stable limit cycle and the steady state , and the steady state that is unstable at region _iii _ becomes stable at region _region _ v _, similar to region _i _ , has a stable steady state which corresponds to a state of high p53 concentration .the two limit cycles at region _ iv _ collide and disappear when parameters cross the fold bifurcation curve that separates regions _ iv _ and _ v_. 6 .three steady states arise in region _vi _ , two of them are unstable and appear while crossing the fold bifurcation curve of equilibria between regions _ vi _ and _ v_. 7 .region _ vii_ gives bistability in p53 activity , including two stable steady states with low or high p53 concentrations , and an unstable steady state .there are two ways to reach this bistable region . from region _vi _ and crossing the subcritical hopf bifurcation , an unstable limit cycle emerges and the unstable steady state with low p53 level becomes stable , but the unstable limit cycle immediately disappear due to homoclinic bifurcation ( refer the curve in fig .[ fig : bif2]a which is very close to ) . from region _i _ and crossing the boundary , a pair of stable and unstable steady states appear due to fold bifurcation of equilibria . 
8 .there are three regions ( _ viii_-_x _ ) around which share the same phase diagram with a stable limit cycle and three steady states ( fig .[ fig : bif2]b ) . in region _the three steady states are unstable , two of them ( a saddle and an unstable focus ) come from the fold bifurcation ( ) that separates regions _ iii _ and _ viii_. in region _ix _ the steady state with high p53 level becomes stable and is surrounded by an unstable limit cycle .the change is originated from the subcritical hopf bifurcation from region _viii_. finally , the unstable limit cycle disappear in region _ x _ due to the homoclinic bifurcation between regions _ ix _ and _ x_. we note that in regions _ ix _ and _ x _ , there are bistable states of a stable steady state and a stable limit cycle , similar to the region _ iv_. in addition to the above ten regions , there are four codimension - two bifurcation points denoted by black dots in fig .[ fig : bif2 ] : two _ cusp _ points ( and ) , one generalized hopf bifurcation ( gh ) and one bogdanov - takens bifurcation ( bt ) .the _ cusp _point locates at where the fold bifurcation curve and the saddle - node homoclinic bifurcation curve meet tangentially .the other _ cusp _ point is given by by the two fold bifurcation curves and , where two fold bifurcations coalesce and disappear .the point gh at corresponds to the meeting point of and from where a fold bifurcation curve of the limit cycle ( ) occurs . at the bt point the fold bifurcation curve and meet tangentially to give a homoclinic bifurcation curve .the bt point separates into two segments and . crossing from left to right gives a stable equilibrium and a saddle , while crossing gives an unstable equilibrium and a saddle . in summary , by manipulating the expression level of pdcd5 and the maximum production rate of p53 ,the system displays four types of stable dynamics : a single stable steady state ( regions _ i _ , _ ii _ , _ v _ , and _ vi _ ) , two stable steady states ( region _ vii _ ) , a stable limit cycle ( regions _ iii _ and _ viii _ ) , and coexistence of a stable steady state and a stable limit cycle ( regions _ iv _ , _ ix _ , and _ x _ ) .the bifurcation diagram is zoomed out by codimension - one bifurcations with seven values marked by - in fig .[ fig : bif2 ] and are detailed at the next section . from fig .[ fig : bif2 ] , during dna damage and when the pdcd5 is upregulated ( ) , the system shows either low p53 activity , p53 oscillation , or sustained high p53 level with the increasing of p53 production rate .p53 oscillations have been known to be essential for cellular response to dna damage including dna repair and apoptosis .our analyses suggest that proper cell response to dna damage with p53 oscillation is possible only when pdcd5 is highly expressed and the p53 production rate takes proper value . 
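the hopf curves described above can be located, for any concrete realisation of the model, by following the equilibrium branch while sweeping one parameter and monitoring the eigenvalues of the jacobian; the paper's diagrams were computed with auto/xppaut, and the sketch below only illustrates the principle on a placeholder planar system with a single bifurcation parameter, not on the p53 model itself.

```python
import numpy as np
from scipy.optimize import fsolve

def rhs(v, I):
    x, y = v
    # placeholder planar system (fitzhugh-nagumo type) with bifurcation parameter I,
    # standing in for the p53 model and its two control parameters
    return [x - x ** 3 / 3.0 - y + I, 0.08 * (x + 0.7 - 0.8 * y)]

def jacobian(v):
    x, _ = v
    return np.array([[1.0 - x ** 2, -1.0], [0.08, -0.064]])

eq, prev_sign = np.array([-1.2, -0.6]), None
for I in np.linspace(0.0, 1.0, 2001):
    eq = fsolve(rhs, eq, args=(I,))            # continue the equilibrium branch in I
    eigs = np.linalg.eigvals(jacobian(eq))
    sign = np.sign(eigs.real.max())
    if prev_sign is not None and sign != prev_sign and np.iscomplex(eigs).all():
        print(f"complex pair crosses the imaginary axis near I = {I:.3f}  (hopf bifurcation)")
    prev_sign = sign
```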
to further investigate the oscillation dynamics , fig .[ fig:4 ] shows the period and amplitude of the stable limit cycles corresponding to regions _ iii _ and _ iv _ in fig .[ fig : bif2 ] .the results show that the oscillation periods decrease with the maximum p53 production rate ( ) , while the amplitudes increase with .this is consistent with the experimental observations .however both periods and amplitudes are insensitive with the pdcd5 level over a wide parameter range .these results indicates that upregulated pdcd5 and a proper p53 expression level are essential for producing proper p53 oscillation that regulates the cellular response to dna damage .when pdcd5 is downregulated ( ) , the system has either a single steady state with low p53 activity or bistability with either low or high p53 activity after dna damage depending on the maximum p53 production rate .experiments have shown that cells exposed to sustained p53 signaling frequently underwent senescence . ., width=340 ] to get a clear insight into the codimension - two bifurcation diagram in fig .[ fig : bif2 ] , we considered a codimension - one bifurcation of the concentration of p53 with respect to the p53 production rate . the bifurcation diagram is shown at fig .[ fig:5 ] , with panels a - g for given values , respectively , as marked at fig .[ fig : bif2 ] . diagrams for each value is detailed below . 1 . for low pdcd5 level ( ) ,the equilibrium show a s - shaped bifurcation diagram with bistability ( fig . [fig:5]a ) .there are three branches of the equilibrium depending on .the upper branch is composed of stable nodes , states at the middle branch are saddles , while the lower branch consists of stable nodes and foci which are separated near the subcritical hopf bifurcation point ( ) .there is a fold bifurcation of equilibria ( ) where a stable node and a saddle appear , so that bistability occurs with between and . with the increasing of ,an unstable limit cycle appears from a homoclinic bifurcation point at a saddle ( ) , and the unstable limit cycle disappears via the subcritical hopf bifurcation . crossing , stable focus at the lower branch change to unstable one .the saddle and the lower unstable focus collide and disappear via another fold bifurcation at .when increases ( ) ( fig .[ fig:5]b ) , the fold bifurcation becomes larger than the subcritical hopf bifurcation so that the bistability of equilibrium vanishes . with the increasing of ,an unstable limit cycle arises from the hopf bifurcation .the limit cycle becomes stable at the fold bifurcation , and then vanishes at the second fold bifurcation .different from the case , now the fold bifurcation gives a saddle ( the middle branch ) and an unstable focus ( the upper branch ) ( fig .[ fig:5]b inset ) .the unstable focus on the upper branch becomes stable and an unstable limit cycle appears via the subcritical hop bifurcation .the unstable limit cycle develops until it meet the saddle at the homoclinic bifurcation point .the stable focus and stable limit cycle coexist with between and , which gives the region _x _ in fig .[ fig : bif2 ] .3 . when increases to ( fig . [fig:5]c ) , the diagram is similar but the hopf bifurcation changes from subcritical to supercritical , so that stable limit cycles exist with taken values over a wide range from to .4 . when , becomes too close to so that the unstable limit cycle arise from does not collide with a saddle but ends up at the fold bifurcation point of limit cycles ( fig .[ fig:5]d ) . 
5 .for a larger ( fig .[ fig:5]e ) , the two fold bifurcation points and coalesce and disappear due to a codimension - two _ cusp _ point in fig .[ fig : bif2 ] . 6 . with the further increasing of ( fig .[ fig:5]f ) , another two fold bifurcation points ( and ) arise from the hopf bifurcation due to the _ cusp _ point in fig . [fig : bif2 ]. 7 . further increasing to higher level ( ) , the supercritical hopf bifurcation point moves to the left andcollides with to give a fold - hopf bifurcation point ( _ zh _ ) , and then changes to a saddle - node homoclinic bifurcation point _ snic _ , from which the saddle and the node collide and disappear to generate a stable limit cycle .the above bifurcation analyses present the global dynamics of the whole system , which is further explored below through the underlying potential landscape . to explore the global dynamics from the potential perspective , we projected the potential function to two independent variables , the p53 concentration ]. the potential landscapes for parameter values taken from the 10 typical regions are shown at fig .[ fig:6 ] .the potential landscapes show that when there is a single stable steady state ( _ i _ , _ ii _ , _ v _ , _ vi _ ) , the potential is funnelled towards a global minimum which corresponds to the global stable steady state . in region _vii _ , there are two stable steady states , and the potential has two local minimum , corresponding to high or low p53 levels , respectively .the landscape at low p53 state has wide attractive region and shallow slop , while at the high p53 state has small attractive region and deep slope .these suggest that a cell from randomly select initial state is more likely to response with low p53 state because of the wider attractive region . in other regions with p53/mdm2 oscillations ( _ iii _ , _ iv _ , _ viii _ , _ ix _ , _ x _ ), the potential landscapes show an irregular and inhomogeneous closed ring valley that corresponds to the deterministic stable limit cycle trajectory .it is obvious that the landscape is not uniformly distributed along the limit cycle path due to the inhomogeneous speed on the limit cycle .we also note that in the landscapes for regions _ iv _ , _ ix _ and _ x _ , in addition to the closed valley , there is a deep funnel towards a local minimum .these local minimum show the coexistence of the stable steady state ( marked by and the arrows in fig .[ fig:6 ] ) .the regions with p53 oscillations are the most interested because the oscillation dynamics are essential for cell fate decision in response to various stresses .to further analyze the potential landscape when the system display stable p53 oscillations , we examined the case at region _iii _ by calculating the potential force ( ) and the probability flux ( ) ( see fig . [ fig:7 ] ) . 
as shown in fig . [ fig:7 ] , the potential has a valley of local minima along the deterministic oscillation trajectory . the values of the potential are not uniformly distributed along this closed valley . there is a global minimum of the potential at a state around ( 1 , 0.02 ) , where the motion along the cycle is slowest and the system therefore spends most of its time .
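the period and amplitude trends of fig. [ fig:4 ], and the inhomogeneous occupation of the cycle discussed here, can be checked for any concrete model by integrating the deterministic equations in an oscillatory regime and measuring successive maxima and minima of the p53 variable; the sketch below shows that bookkeeping on a placeholder oscillator, since the paper's equations are not restated in this text.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks

def rhs(t, v, I=0.5):
    x, y = v
    # placeholder oscillator (fitzhugh-nagumo type) in its oscillatory regime,
    # standing in for the p53 module; only the bookkeeping below is the point
    return [x - x ** 3 / 3.0 - y + I, 0.08 * (x + 0.7 - 0.8 * y)]

sol = solve_ivp(rhs, (0.0, 2000.0), [0.0, 0.0], max_step=0.05)
t, x = sol.t, sol.y[0]
t, x = t[t > 500.0], x[t > 500.0]       # discard the transient

peaks, _ = find_peaks(x)
troughs, _ = find_peaks(-x)
period = np.diff(t[peaks]).mean()
amplitude = x[peaks].mean() - x[troughs].mean()
print(f"period ~ {period:.1f}, peak-to-trough amplitude ~ {amplitude:.2f}")
```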
the dynamics of p53 are known to play important roles in the regulation of cell fate decisions in response to various stresses , and pdcd5 functions as a co-activator of p53 that modulates the p53 dynamics . in the present paper , we investigate how p53 dynamics are modulated by pdcd5 during the dna damage response using the methods of bifurcation analysis and potential landscape . our results reveal that p53 activity can display rich dynamics under different pdcd5 levels , including monostability , bistability with two stable steady states , oscillations , and co-existence of a stable steady state and an oscillatory state . physical properties of the p53 oscillations are further revealed by the potential landscape , in which the potential force attracts the system state to the limit cycle attractor and the curl flux force drives the coherent oscillation along the cycle . we also investigate the effect of pdcd5 efficiency on inducing p53 oscillations . we show that a hopf bifurcation is induced by increasing the pdcd5 efficiency , and that the system dynamics show clear transition features in both barrier height and energy dissipation when the efficiency is close to the bifurcation point . this study provides a global picture of how pdcd5 regulates p53 dynamics via its interaction with the p53-mdm2 oscillator and can be helpful in understanding the more complicated p53 dynamics of the full p53 pathway .
in his seminal work , luneburg derived a spherical optical lens with radially varying refractive index that focused a beam of parallel rays to a point at the opposite side of the lens ; a two dimensional variant is straightforward to deduce .of course this relies upon the governing equation being the helmholtz equation , which the full ela in the model configuration presented here the elastic energy is primarily carried by rayleigh surface waves ; they are a particular solution of navier s equation for elastodynamics for a half - space bounded by a traction - free surface , e.g. the earth s surface .well known in seismology , for the idealised situation of isotropic and homogeneous media they are non - dispersive , elliptically polarized and in practical terms they have a velocity very close to that of shear waves : where is the shear modulus and the density so for simplicity we will simply use the shear wave speed in our analysis .shear horizontally polarized waves ( sh ) are also present in our numerical model , and they also propagate with wavespeed ; notably sh waves are governed by a helmholtz equation without any approximation .we do not consider love waves here , which can also be important is seismology , as they only exist for stratified layered media and we assume that our elastic half space is vertically homogeneous , that is , the material parameters do not vary with depth .in cartesian coordinates we take to be the depth coordinate and to be in the plane of the surface , then the rayleigh waves can be represented using a helmholtz equation on the surface and we consider a circular lens on the plane as in fig .1c , is characterized by a radially varying refraction profile .this lens , and the associated material variation , then extends downwards and the material is considered vertical homogeneous ; we distinguish the material outside the lens to have parameters with a subscript and that inside to have subscript .the refraction index between two media , say , material 0 and material 1 can be formulated in terms of the ratio of velocity contrast . for a luneburg lenswe require the refractive index , , to be : where is the radial coordinate and the outer radius of the lens ( fig .we tune the material velocity within the lens to reproduce the index given in [ eq : ref_lune ] so taking a continual material variation is perfect for theory , but from a practical perspective it is not possible to realize a circular structure 10 s of meters in depth and radius , whose soil properties change smoothly ( e.g. on the scale of fig .instead we create a composite soil made of bimaterial cells such that their effective material properties have the variation we desire , this provides a realistic lens using actual soil parameters that could be created using conventional geotechnical techniques . 
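for reference, the classical luneburg index profile (whose explicit formula did not survive in the text above) is n(r) = sqrt(2 - (r/R)^2), so the required effective shear velocity inside the lens is v(r) = v0 / n(r), ranging from v0/sqrt(2) at the centre to v0 at the rim. the short tabulation below uses the 400 m/s background velocity quoted later in the text; the lens radius and cell spacing are illustrative assumptions.

```python
import numpy as np

v0 = 400.0                       # background shear velocity (m/s), as quoted for the numerical model
R = 13.0                         # lens radius in metres -- illustrative assumption
r = np.linspace(0.0, R, 14)      # radial positions of cell centres (assumed spacing)

n = np.sqrt(2.0 - (r / R) ** 2)  # classical luneburg refractive-index profile
v = v0 / n                       # required effective shear velocity inside the lens

for ri, ni, vi in zip(r, n, v):
    print(f"r = {ri:5.1f} m    n = {ni:.3f}    v_eff = {vi:6.1f} m/s")
```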
in fig .1c the circular surface of the lens is discretized using equally spaced cells on a periodic square lattice .each cell contains an inclusion of softer material that , in our illustration , is represented by a pillar extending down into the soil ; the exponential decay of the rayleigh wave amplitude with depth means that for the computational model we can truncate this and a depth of 30 m is more than sufficient .the diameter of each pillar is determined using the effective velocity prescribed for each cell based upon its radial position ( ) from the center of the lens .assuming a square section cell of width on the plane the filling fraction is defined using the surface area occupied by the pillar in the cell . for cylindrical pillars with diameter ( fig .1c ) we have a geometrical filling fraction , , with .the maxwell - garnet formula , derived for composites , relates the filling fraction with the corresponding effective property : where is the effective shear velocity in the cell and is the shear velocity of the inclusion ( the pillar ) .we combine the geometrical definition of with ( [ eq : garnett ] ) to obtain the effective velocity as a function of inclusion size .hence , by tuning the pillar diameter we obtain the required velocity variation desired in eq .[ eq : vel_profi ] and use this to define the structure and variation for each of the luneburg lenses one of which is shown in ( fig .we now place four luneburg lenses as shown in fig .1b and use these to protect an object placed in between them .the idea is simply that a plane wave incident along either the or axes will be focussed by the lens to a single point , the point at which the cylinder touches its neighbour , which will then act as source into the next luneburg lens and the plane wave will then reemerge unscathed ; the building to be protected should , in this perfect scheme , be untouched .we are aiming to demonstrate the concept not in a perfect scenario , but using realistic parameters and a setting in which the effective medium approach provide a discrete velocity profile , yet the protection achieved is considerable . to construct the luneburg lenses , to reach the minimum prescribed in eq .[ fig:1 ] , needs be lower than 350 m / s . we choose a of 200 m / s which is a value that is realistic for poorly consolidated soil ( sand or water filled sediments ) . in the lens configuration depicted in figs .1b and c for each lens there are 26 elementary cells ( m ) along the radial axis of the lens and the diameter of the pillars increases towards the center of the lens as discussed earlier . in the frequency range we investigate ( 3 - 8 hz ) , the inclusion is deeply subwavelength and non - resonant .the only parameter of interest for the lens design using composite soil is the filling fraction , so there is no bound on the size of the elementary cell so long as it remains subwavelength to avoid bragg scattering phenomena ; in our simulations the minimum size of each cell is bounded for numerical reasons ( explained in the next section ) . 
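a sketch of the cell-by-cell design step: invert an effective-medium rule for the filling fraction needed to reach the prescribed velocity, then convert the filling fraction phi = pi d^2 / (4 a^2) into a pillar diameter d for a square cell of width a. the maxwell-garnett relation actually used in the paper is not restated above, so a simple linear mixing of slownesses is used here purely as a placeholder; a real design would substitute the proper formula.

```python
import numpy as np

v0, v_incl = 400.0, 200.0        # background and pillar shear velocities (m/s), from the text
a, R = 1.0, 13.0                 # cell width and lens radius in metres -- illustrative assumptions

def target_velocity(r):
    """effective shear velocity prescribed by the luneburg profile at radius r."""
    return v0 / np.sqrt(2.0 - (r / R) ** 2)

def pillar_diameter(r):
    # placeholder effective-medium rule (linear mixing of slownesses), standing in for the
    # maxwell-garnett relation used in the paper
    phi = (1.0 / target_velocity(r) - 1.0 / v0) / (1.0 / v_incl - 1.0 / v0)
    phi = np.clip(phi, 0.0, np.pi / 4.0)              # the pillar must fit inside its cell
    return a * np.sqrt(4.0 * phi / np.pi)

for r in np.linspace(0.0, R, 6):
    d = pillar_diameter(r)
    print(f"r = {r:5.1f} m  ->  filling fraction {np.pi * d**2 / (4 * a**2):.2f}, diameter ~ {d:.2f} m")
```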
for an actual implementation of the lensthe cell could be chosen to be even smaller than our choice here with corresponding decrease in the pillar diameters .a maximum diameter of approximately 2 m would permit the pillars to be realised with existing geotechnical machineries .three dimensional numerical simulations of seismic elastic surface waves are implemented using specfem3d : a parallelized , time domain , and spectral element solver for 3d elastodynamic problems widely used in the seismology community .the reference computational domain , depicted in fig .1a , is a 40 m depth halfspace 350 x 500 m wide of homogeneous sedimentary material with background velocity set to 400 m / s .the computational elastic region , apart from the surface itself , is surrounded by perfectly matched layers ( pmls ) that mimic infinity and prevent unwanted reflections from the computational boundaries and these are standard in elastic wave simulation .a small building with a flexural fundamental mode of approximately 4 hz is located at the center of the model .we place a line of broadband ricker source time functions centered at 5 hz to generate an almost perfectly plane wavefront ( figs .1a and b ) .the driving force * f * has equal components in all 3 orthogonal directions and hence the resulting excitation is not only made of rayleigh but also of sh waves .body waves leave the computational domain and pass into the pml at the bottom and side boundaries of the computational domain ; their interaction with the structure and the lens is negligible .this configuration could represent the upper layer of a deep sedimentary basin with some strategic structure located at the surface ( power plants , data centers , hospitals ) desired to be shielded from a seismic energy source . in fig .1b representing the building surrounded by the square array of lenses , the pillars are inserted as softer inclusions in the background velocity model .this approach is commonly used ( e.g. * ? ? ?* ; * ? ? ?* ) to simplify the discretization of complex models .we use very fine meshing ( down to 2 m ) and this , combined with order accuracy in space of the spectral element method , allows us to accurately model pillars down to a smallest diameter of 0.3 m. this approach was validated against a model where the pillars were meshed explicitly ; the only notable difference was a factor of 5 increase in the runtime for the explicit case vis - a - vis the regular mesh .specfem3d is a parallel code and simulations are run on 64 cores for a total of approx .30 corehours for a simulated time of 1.5 seconds with the 3d wavefield saved for post - processing ; specfem3d is the standard geophysics code used in academic and industry applications with a long history of development and application .this simulation is shown for wave - field snapshots for different propagation distances in fig .2 both with , and without the lenses . the sources generating the plane wavefront in fig .1 are located at the surface and so most of the seismic energy propagates as rayleigh and sh waves .the vertical component of the displacement shown in fig .2 , is dominated by the elliptically polarized motion of the rayleigh waves .although not visible , sh waves behave very similarly to rayleigh waves for the model here discussed , body waves have far lower amplitude and are not relevant in our analysis . 
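the source time function is a standard ricker wavelet, s(t) = (1 - 2 pi^2 f0^2 (t - t0)^2) exp(-pi^2 f0^2 (t - t0)^2), with centre frequency f0 = 5 hz as stated above; the delay t0 and sampling below are assumptions, and the snippet simply checks which band carries most of the injected energy.

```python
import numpy as np

f0, t0 = 5.0, 0.25               # centre frequency (hz) from the text; the delay t0 is an assumption
dt = 1.0e-3
t = np.arange(0.0, 1.5, dt)      # 1.5 s of simulated time, as in the text

arg = (np.pi * f0 * (t - t0)) ** 2
ricker = (1.0 - 2.0 * arg) * np.exp(-arg)

spec = np.abs(np.fft.rfft(ricker))
freqs = np.fft.rfftfreq(t.size, dt)
band = freqs[spec > 0.1 * spec.max()]
print(f"injected energy mainly between {band.min():.1f} and {band.max():.1f} hz")
```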
fig .2a shows , as one would expect , the strong scattering by the building and its highly oscillatory motion .when the luneburg lenses are inserted in the model ( fig .2b ) the simulation shows that the rayleigh wave front splits and then progressively converges to the focal points of lenses l1 and l2 .given the square layout , the focal points lie exactly at the touching points of the l1-l3 and l2-l4 lenses .this lens does not support any subwavelength phenomena ( the evanescent part is lost ) hence , the size of the focal spot is diffraction limited at and some energy is backscattered during the focusing process creating some low amplitude reflections .the second lenses ( l3 and l4 ) behave in a reciprocal manner converting the two point ( secondary ) source - like wavefields back into a plane wavefront . during the entire process , the inner region wherethe building is placed has experienced a very low seismic excitation as compared to the reference unprotected case .3 presents the motion of the roof of the reference building on a db scale and it shows the vibration is drastically reduced . the snapshots in the bottom row of fig .2 showing the wavefront as it emerges from the lenses shows that despite the strong alteration of the ray - path , the reconstruction of the wavefront after the lenses is surprisingly good .hence this device combines the some cloaking behaviour with the seismic protection . considering the broad frequency bandwidth of the input signalthis is an interesting result as most cloaks so far proposed have problems with broadband excitation .the velocity structure of the lenses is such that the propagation of the wavefront in fig .2b is slightly slower than the reference configuration of figthus , we observe cloaking functionality to be valid for the wave envelope but not for the phase .this is not particularly relevant in the present seismic context where the only application of cloaking is to avoid very directive scattering , it would be unfortunate and undesirable to scatter or refocus the signal to a neighbouring building , while simultaneously realising seismic protection .a quantitative analysis of the wavefield is presented in figs .3a and 3b which show the energy maps for the reference and protected cases .the energy is calculated at the surface ( ) taking the norm of the three components of the displacement field . in the homogeneous case ( fig .3a ) of the unprotected building the energy is almost uniform across the whole computational domain making the building resonate . in the protected case , the energy is focused towards the axes of symmetry of the lenses , leaving a central region relatively undisturbed ; in the two stripes shown in fig . 3bthe energy ( and equally the amplitude ) is much higher than elsewhere .3c shows the frequency response function of the building with , and without , the lenses . the rooftop horizontal displacement ( roof drift ) is a diagnostic of the amplification phenomena due to the resonance frequency of the structure . 
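the post-processing behind the energy maps of fig. 3 amounts to summing the squared norm of the three displacement components at each surface point over time and comparing the protected run to the reference run in db. the sketch below assumes the two wavefields have been gathered into arrays of shape (nt, nx, ny, 3); the array names, shapes and the synthetic placeholder data are assumptions about how the specfem3d output is organised.

```python
import numpy as np

def surface_energy(u):
    """u: surface displacement field with shape (nt, nx, ny, 3); returns energy per surface point."""
    return np.sum(u ** 2, axis=(0, -1))       # sum |u|^2 over time samples and components

# synthetic placeholders standing in for the reference and protected specfem3d wavefields
nt, nx, ny = 200, 50, 70
rng = np.random.default_rng(1)
u_ref = rng.normal(size=(nt, nx, ny, 3))
u_lens = 0.5 * rng.normal(size=(nt, nx, ny, 3))

E_ref, E_lens = surface_energy(u_ref), surface_energy(u_lens)
reduction_db = 10.0 * np.log10(E_lens / E_ref)   # energy ratio in db; negative values mean protection
print(f"mean energy reduction over the map: {reduction_db.mean():.1f} db")
```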
over the whole spectrum an average amplitude reduction of 6 dbis achieved which is reduction of almost an order of magnitude in the vibration .complete cancellation of the wavefield is not achieved primarily because the evanescent field slightly couples with the building and as we focus the wavefield to a point source we introduce some back scattering that also interacts with the building ( fig .nonetheless the concept is demonstrated and shows that one can successfully translate ideas from optics into the field of elastic seismic waves ; figs . 2 and 3should inspire the use of these concepts to both reroute surface waves and to reduce their impact on surface structures .we have combined concepts from transformation optics and plasmonics , composite media , and elastic wave theory to create an arrangement of seismic luneburg lenses that can reroute and reconstruct plane seismic surface waves around a region that must remain protected .the lens is made with a composite soil obtained with columns of _ softer _ soil material with varying diameter distributed on a regular lattice .the use of softer soil inclusions emphasises that the methodology we propose is not reflection of waves , or absorption , or damping for which rigid or viscoelastic columns might be more intuitive ; the softer inclusions are designed to progressively alter the material itself so that waves are `` steered '' and the reconstruction of the wavefronts after exiting the arrangement illustrates the mechanism .the luneberg lens arrangement proposed here could be tested in a small scale experiment using elastic materials , such as metals , as the concept itself is very versatile or on larger scale test areas where one could then evaluate effects such as nonlinearity .although presented in the context of seismic engineering there are everyday ground vibration topics that could benefit from this design .the damping of anthropic vibration sources ( e.g. train - lines , subway , heavy industry ) is very important for high precision manufacturing process , to reduce structural damages due to fatigue or simply to decrease domestic or commercial building vibrations .our aim here has been to present the concept of steering elastic surface waves completely in context and with a design that could be built , this should motivate experiments , further design and the implementation of ideas now widely appreciated in electromagnetism and acoustics to this field .one important practical point regarding our design is that we have presented normal incidence to the four luneberg lens and this is practical , of course , if the position of the vibration ( a railway line for instance ) is known .the protection progressively deteriorates as the incidence angle of the plane wave increases ; at , the focal point is in the centre of the region ( fig .4 ) and the energy is steered to this point .the concept of using soil mediation to steer surface elastic waves is clear and the luneburg arrangement is a clear exemplar of the ideas .the proposed design is easily adapted to higher frequency bands and smaller regions .if one is only interested in seismic protection , and less in the wavefront reconstruction , the lenses can be structured differently with only one or two lenses .the other well - known grin lenses offer different extents and types of wave control ( e.g. 
maxwell , eaton ) can provide isotropic wave shielding . other types of grin lenses and layouts ( for instance using 4 half-maxwell lenses as shown in optics ) can be utilised ; however , the luneburg lens requires the lowest velocity contrast between the lens and the exterior region , and we choose it because it could be implemented in practice . the main practical difficulty with the alternatives is that these lenses are either singular ( the velocity required at the centre of an eaton lens vanishes ) or prescribe stronger velocity contrasts ( maxwell ) , requiring difficult ( or not yet available ) soil engineering solutions . we are particularly grateful to the specfem3d development team led by dimitri komatitsch , who helped us with the implementation of the pml conditions essential in these simulations . the authors also thank the epsrc , erc and cnrs for their financial support . ac was supported by a marie curie fellowship .
metamaterials are artificially structured media that can focus ( lensing ) or reroute ( cloaking ) waves , and typically this is developed for electromagnetic waves at millimetric down to nanometric scales or for acoustics or thin elastic plates at centimeter scales . extending the concepts of we show that the underlying ideas are generic across wave systems and scales by generalizing these concepts to seismic waves at frequencies , and lengthscales of the order of hundreds of meters , relevant to civil engineering . by applying ideas from transformation optics we can manipulate and steer rayleigh surface wave solutions of the vector navier equations of elastodynamics ; this is unexpected as this vector system is , unlike maxwell s electromagnetic equations , not form invariant under transformations . as a paradigm of the conformal geophysics that we are creating , we design a square arrangement of luneburg lenses to reroute and then refocus rayleigh waves around a building with the dual aim of protection and minimizing the effect on the wavefront ( cloaking ) after exiting the lenses . to show that this is practically realisable we deliberately choose to use material parameters readily available and this metalens consists of a composite soil structured with buried pillars made of softer material . the regular lattice of inclusions is homogenized to give an effective material with a radially varying velocity profile that can be directly interpreted as a lens refractive index . we develop the theory and then use full 3d time domain numerical simulations to conclusively demonstrate the validity of the transformation seismology ideas : we demonstrate , at frequencies of seismological relevance hz , and for low speed sedimentary soil ( m / s ) , that the vibration of a structure is reduced by up to 6 db at its resonance frequency . this invites experimental study and opens the way to translating much of the current metamaterial literature into that of seismic surface waves . mathematicians and physicists have long studied the physics of waves at structured interfaces : back in 1898 , lamb wrote a seminal paper on reflection and transmission through metallic gratings that then inspired numerous studies on the control of surface electromagnetic waves . the concept that periodic surface variations could create and guide surface waves has emerged in a variety of guises : rayleigh - bloch waves for diffraction gratings , yagi - uda antenna theory and even edge waves localised to coastlines for analogous water wave systems . most notably in the last decade , the discovery of spoof plasmon polaritons ( spps ) has motivated research not only in plasmonics but also in the neighbouring , younger , field of platonics , devoted to the control of flexural ( lamb ) waves in structured thin elastic plates . the extension of ideas such as cloaking to plasmonics is highly motivational as it suggests that one can take the concepts of wave physics from bulk media , as reviewed in , and apply them to surfaces . in platonics , implementations of ideas from transformation optics show how , all be it for the limiting cases of thin elastic plates , similar concepts can take place in another arena of physics , in this case mechanics and elasticity . 
unfortunately in much of elastic wave theory , and for many applications , one actually has the opposite physical situation to that of a thin plate , that is , one has an elastic material with infinite depth ; on the surface of such a half - space elastic surface waves , rayleigh waves , exist that exponentially decay with depth and have much in common conceptually with surface plasmons in electromagnetism . it is therefore attractive to investigate whether concepts proven in plasmonics can be translated across despite the underlying governing equations having many fundamental differences . some experimental work has taken place in broadly related scenarios such as the attenuation of rayleigh waves at high frequencies in a marble quarry with cylindrical holes and in piezo - electric substrates with pillars , but with differing aims and not at frequencies relevant for seismic waves . some work on structured elastic media , going beyond elastic plates , to try and create seismic metamaterials is underway with some success either with subwavelength resonators or with periodic structuring within the soil and trying to still utilise flexural wave modelling . our aim here is complementary in that we want to implement ideas from transformation optics into this elastic surface wave area and investigate what can be achieved . the desire , and need , to control the flow of waves is common across wave physics in electromagnetic , acoustic and elastic wave systems . acoustics is , mathematically , a simplified version of full elasticity with only compressional waves present : full elasticity has both shear and compression , with different wave speeds , and full coupling between - leading to a formidable vector system . the quest for a perfect flat lens with unlimited resolution and an invisibility cloak that could conceal an object from incident light via transformation optics are exemplars for the level of control that can , at least , in theory be achieved for light or sound . an invisibility cloak for mechanical waves is also envisioned for bulk waves , and experiments using latest developments in nanotechnology support these theoretical concepts . however , some form of negative refractive index appears to be necessary to achieve an almost perfect cloak . unfortunately , a negative index material emerges from a very complex microstructure that is not feasible for the physical length - scales , for seismic waves at frequency lower than 10 hz this means wavelengths of the order of a hundred meters , in geophysical applications . notably one can design locally resonant metamaterials that feature deeply subwavelength bandgaps and so could be used in seismic and acoustic contexts , but these effects are observed over a limited frequency band ; this is being improved to allow for simultaneous protection and cloaking and has potential . however , here we look at protection and cloaking , applied to low frequency seismic waves , using the concept of lensing . elastic waves , in the same way as light , are subject to snell s law of refraction , and an appropriate choice of material properties leads to spectacular effects like those created by gradient index ( grin ) lenses . compared to classic lens where ray - paths are bent through discontinuous interfaces causing losses and aberrations , grin lenses are obtained with a smooth refractive index transition . 
rayleigh and maxwell themselves studied grin lenses , notably the maxwell s fisheye whose spatially varying refractive index was later shown to be associated with a stereographic projection of a sphere on a plane . as noted in grin lenses have been mainly studied and implemented for optical applications , or wave systems governed by the helmholtz equation . the ideas behind transformation optics are not limited to metamaterial - like applications and have contributed to recent advances in plasmonic light confinement by touching metallic nanostructures . a classical example of a grin lens is the circular luneburg lens , once a plane wave enters the lens , its radially varying refraction index steers the ray - path towards a focal point located on the opposite side of the lens . the eponymous fisheye lens of maxwell is another well documented and fascinating grin lens and it has been proposed as a non - euclidean cloak . to date , applications have been mainly limited to scalar wave systems governed by helmholtz s type operators for transversely polarized electromagnetic waves , for pressure waves or platonics where composite and thickness modulated plates have recently been proposed . in full elasticity , where the propagation is described by the vector navier equation that is not simply a helmholtz equation or scalar , a proof of concept is still missing . experimentally , for seismic waves , the realization of any proposed lensing arrangement becomes a real challenge because wavelengths range from m to m. for real applications wave control must be achieved over a broad frequency band , to cover the various wavelengths of interest ; unlike other metamaterial cloak designs such as those based around subwavelength resonators , the lens proposed here is effective across a very broad spectrum of frequencies . in civil engineering the structuring or reinforcement of soils is commonplace with geotechnical solutions aimed at improving soil seismic performances ( e.g. soil improvement by dynamic compaction , deep mixing and jet grouting ) , typically implemented prior to construction of structures , aimed to rigidify or decouple the building response and not at rerouting the seismic input . in the conceptual lens we design , the structuring of the soil is feasible and we use material parameters typical of poorly compacted sediments . using the ideas of transformation optics , and detailed 3d numerical simulations , we show that a square arrangement of four luneburg lenses can completely reroute waves around an area for seismic waves coming with perpendicular incident directions ( e.g. and in fig . 1b ) . not only are the waves rerouted leaving the inner region protected , but the wave - front is reconstructed coherently after leaving the arrangement of lenses .
if borrowers have only assets that when liquidated generate cash in a local currency different from the currency in which their debt is due , their default risk will be higher than in the one currency case , as a consequence of the additional exchange rate risk .the increase in default risk is reflected both in higher probabilities of default ( pds ) as well as in higher asset correlations between the borrowers . in this note , by modifying merton s model of the default of a firm , we derive some simple relations between the pds without and with exchange rate risk , between the borrowers asset correlations without and with exchange rate risk , and pds and asset correlations when taking account of exchange rate risk . in general , the formulae we derive include as parameters the borrowers asset volatilities , the exchange rate volatility , and the mean logarithmic ratio of the exchange rates at times 1 and 0 .however , assuming independence of the exchange rate and the borrowers asset values as well as zero mean logarithmic ratio of exchange rates at times 1 and 0 yields a relation between the borrowers asset correlation without and with exchange rate risk and the borrowers pds without and with exchange rate risk that does not require knowledge of additional parameters ( see equation ) .in the special case of borrowers with identical individual risk characteristics (= pds ) , relation can be stated as follows : where and denote the original pd and asset correlation without exchange rate risk and and denote the pd and asset correlation when there is additional exchange rate risk .both and can be understood as consistency conditions that should be satisfied when the risk parameters pd and asset correlation are to be adjusted for incorporating exchange rate risk .we describe in section [ sec : just ] the background of the model we use . in section [se : metho ] , it is shown how the results are derived from the model .the note concludes with a brief discussion of what has been reached .as in merton s model for the default of a firm , we assume that , the borrower s asset value as a function of time , can be described by a geometric brownian motion , i.e. where is the asset value at time ( today ) , is the drift of the asset value process , is its volatility , and denotes a standard brownian motion that explains the randomness of the future asset values . similar to , we assume that , the exchange rate of the two currencies at time , can be described as another geometric brownian motion , i.e. where is the exchange rate at time , is the drift of the exchange rate process , is its volatility , and denotes another standard brownian motion that explains the randomness of the future exchange rates .the brownian motions are correlated with correlation parameter , i.e. \ = \ r , \quad 0 \le s < t.\ ] ] as in merton s model of the default of a firm , the borrower defaults after one year ( i.e. ) if her or his asset value by then has fallen below her or his level of due debt . however , debt is due in a currency different from the currency in which the asset value is denominated .hence the asset value must be multiplied with the exchange rate at time 1 : from an economic point of view , it is convenient to divide both sides of by .this leads to with the advantage of compared to is the fact that on the one hand the debt level is expressed as a value in the local currency of the borrower s assets with an exchange rate as observed today . 
on the other hand ,compared to the one currency case the volatility of the left - hand side of is higher because it includes the factor that reflects the change of the exchange rate between today and time 1 .this effect might be mitigated to some extent by the difference of the interest rates in the two countries .for the purpose of this note , however , it is assumed that mitigation by interest rates differences can be neglected .this assumption seems justified in particular when the debt is composed of fixed rate loans or is short - term . taking the logarithm of both sides of and standardisation of the random variable lead to now , , and to arrive at in , is the logarithmic ratio of the exchange rates at times 1 and 0 and is jointly normally distributed with . as a consequence from ,the correlation of and is given by \ = \r.\ ] ] note that , due to the convexity of the exponential function , = 1 ] but to = - \tau^2/2 ] on the other hand , then = \tau^2/2 ] ) , then the following equation links up the adjusted pds and asset correlations : to see that corollary [ co:1 ] follows from proposition [ pr:1 ] , solve for and insert the result into .equation can be generalised easily for .generalisation for the case , however , would again involve asset volatilities and exchange rate volatility as additional parameters .considerable simplification can be reached in when it is assumed that the individual risk characteristics of the borrowers are identical ( homogeneous portfolio assumption ) , i.e. in this case can be equivalently written as .figure [ fig:1 ] presents for some fixed combinations of original pds and original asset correlations the relationship of adjusted pd and adjusted asset correlation .note that implies that as long as if and .hence the curves in figure [ fig:1 ] start in the original pds .moreover , all the curves converge towards correlation 1 when the original pd approaches 0.5 .in general , the curves are strictly concave such that the asset correlations grow over - proportionally as long as the adjusted pds are small , and under - proportionally when the adjusted pds are high ( close to 0.5 ) .[ fig:1 ] + 270taking recourse to well - known models by , , and we have derived a simple model for incorporating exchange rate risk into the pds and asset correlations of borrowers whose assets and debt are denominated in different currencies . on principle , the results can be used to derive values of pds and asset correlations that are adjusted to take account of exchange rate risk .however , this requires knowledge of parameters like the borrowers asset value volatility that are hard to estimate .another type of result is potentially more useful because thanks to additional assumptions like the independence of exchange rate and asset value processes it represents a consistency condition for the joint change of pds and asset correlations when exchange rate risk is taken into account .this latter result does not involve asset value volatilities or other inaccessible parameters .
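the effect described in this note can be checked by simulation: the borrower defaults at time 1 if the asset value converted at the future exchange rate falls below the debt level, and both processes are geometric brownian motions with correlated drivers. the snippet below estimates the pd with and without exchange-rate risk under the convention e[s_1/s_0] = 1 discussed above; all numerical values (volatilities, correlation, debt level, zero drift) are illustrative assumptions rather than values from the note.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# illustrative parameters (assumptions): asset volatility, fx volatility, correlation, debt level
sigma, tau, r = 0.25, 0.10, 0.3
V0, D = 1.0, 0.6

z1 = rng.standard_normal(n)
z2 = r * z1 + np.sqrt(1.0 - r ** 2) * rng.standard_normal(n)   # correlated standard normal drivers

log_V1 = np.log(V0) - 0.5 * sigma ** 2 + sigma * z1            # one-year gbm with zero drift
log_S_ratio = -0.5 * tau ** 2 + tau * z2                       # log(S_1/S_0) with E[S_1/S_0] = 1

pd_no_fx = np.mean(log_V1 < np.log(D))                         # default: V_1 < D
pd_with_fx = np.mean(log_V1 + log_S_ratio < np.log(D))         # default: V_1 * S_1/S_0 < D
print(f"pd without fx risk: {pd_no_fx:.4f}    pd with fx risk: {pd_with_fx:.4f}")
```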
intuitively , the default risk of a single borrower is higher when her or his assets and debt are denominated in different currencies . additionally , the default dependence of borrowers with assets and debt in different currencies should be stronger than in the one - currency case . by combining well - known models by , , and we develop simple representations of pds and asset correlations that take into account exchange rate risk . from these results , consistency conditions can be derived that link the changes in pd and asset correlation and do not require knowledge of hard - to - estimate parameters like asset value volatility .
the _ heat kernel approach _ ( hka for short ) , which will be presented and discussed in this paper , is a systematic method to construct state price densities which are analytically tractable . to be precise on analytic tractability , we mean with this notion that bond prices can be calculated explicitly , and that caps , swaptions or other derivatives on bonds can be calculated up to one integration with respect to the law of the underlying factor markov process. therefore such models can be easily calibrated to market data and are useful for pricing and hedging purposes , but also for purposes of risk management .the original motivation of introducing the hka was in modelling of interest rates with jumps .in the hjm framework , the drift condition becomes quite complicated ( see h. shirakawa s pioneering work , see also and references therein ) if taking jumps into account , while in the spot rate approach , one will find it hard to obtain explicit expressions of the bond prices ( the affine class is almost the only exception ) . in , the state price density approach .] is applied and by means of transition probability densities of some lvy processes explicit expressions of the zero coupon bond prices are obtained .the hka is actually a full extension of the method , and thanks to the generalization its applications are now not limited to jump term structure models as we will show in the sequel . before the presentation of the theorem , we will give a brief survey of the state price density approaches such as the potential approach by l.c.g .rogers or hughston s approach , etc , in section [ spda ] and section [ lreview ] .our models are within the class of _ markov functional models _ proposed by p. hunt , j. kennedy and a. pelsser , and therefore compatible with their practical implementations .one of the contributions of the hka to the literature could be to give a systematic way to produce markov functional models . as a whole , this paper is meant to be an introduction to the hka , together with some examples .topics from practical viewpoint like fitting to the real market , tractable calibration , or econometric empirical studies , are not treated in this paper .nonetheless we note that hka is an alternative approach to practical as well as theoretical problems in interest rate modeling .we will start from a brief survey of state price densities . by a _ market _ , we mean a family of price - dividend pairs , , which are adapted processes defined on a filtered probability space .a strictly positive process is a _ state price density _ with respect to the market if for any , it holds that ,\ ] ] or for any , .\ ] ] in other words , gives a ( random ) discount factor of a cash flow at time multiplied by a radon nikodym derivative .if we denote by the market value at time of zero - coupon bond with maturity , then the formula ( [ cff2 ] ) gives .\ ] ] from a perspective of modeling term structure of interest rates , the formula ( [ spd ] ) says that , given a filtration , each strictly positive process generates an _ arbitrage - free _ interest rate model .on the basis of this observation , we can construct arbitrage - free interest rate models . notice that we do not assume to be a submartingale ,i.e. in economic terms _ we do not assume positive short rates_. 
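for reference, the defining relations of the state price density approach, whose displayed formulas did not survive in the paragraph above, can be written in the usual form consistent with the surrounding definitions: the time-t value of a single cash flow x_T paid at time T is obtained by discounting with the state price density, and applying this to the unit cash flow gives the zero-coupon bond price.

```latex
% value at time t of a cash flow X_T paid at T, and the induced zero-coupon bond price
V_t \;=\; \frac{\mathrm{E}\!\left[\pi_T X_T \mid \mathcal{F}_t\right]}{\pi_t},
\qquad
P(t,T) \;=\; \frac{\mathrm{E}\!\left[\pi_T \mid \mathcal{F}_t\right]}{\pi_t}.
```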
the rational log - normal model by flesaker and hughston was a first successful interest rate model derived from the state price density approach .they put where and are deterministic decreasing process , is a one dimensional standard brownian motion and is a constant .the model is an analytically tractable , leads to positive - short rates , and is fitting any initial yield curve .furthermore closed - form solutions for both caps and swaptions are available .several extensions of this approach are known . in ,rogers introduced a more systematic method to construct positive - rate models which he called the _potential approach_. actually he introduced two approaches ; in the first one , where is the resolvent operator of a markov process on a state space , is a positive function on , and is a positive constant . in the second one, is specified as a linear combination of the eigenfunctions of the generator of .note that when is a brownian motion , for any is an eigenfunction of its generator , and the model gives another perspective to the rational lognormal models above . in l.hughston and a. rafailidis proposed yet another framework for interest rate modelling based on the state price density approach , which they call a _chaotic approach to interest rate modelling_. the wiener chaos expansion technique is then used to formulate a systematic analysis of the structure and classification of interest rate models .actually , m.r .grasselli and t.r .hurd revisit the cox - ingersoll - ross model of interest rates in the view of the chaotic representation and they obtain a simple expression for the fundamental random variable as )^2 | \mathcal{f}_t ] ] for any and , we have if the fourier transform exists and is bounded in , then by the symmetry of it holds that , and has the following expression : on the other hand , let be the lebesgue measure of .if the density of exists in , we have that is , is the heat kernel .we are now able to prove the following theorem on supermartingales constructed from heat kernels : [ mainth ] let be a positive constant greater than and let be any positive constant. set .then is a supermartingale with respect to the natural filtration of . by the expression ( [ realu ] ) , we have since and is positive , thus we obtain and then on the other hand , by an elementary inequality we have and since is positive , we obtain combining ( [ 2ndineq ] ) and ( [ 3rdineq ] ) , we have the right - hand - side of ( [ 4thineq ] ) is non - negative since and is decreasing in . finally , since we have = u ( \lambda+2t - t , x^x_{t } ) + c u ( t+\lambda,0),\ ] ] we now see that is a supermartingale . by the above theorem , we can now construct a positive interest rate model by setting to be a state price density , or equivalently here we give some explicit examples . in all examples , is a constant greater than and some positive constant .a simple example is obtained by letting where the bond prices are expressed as a similar model can be obtained by setting , for , a visualization of this model is shown in figure 2 .let be a cauchy process in starting from .the explicit form of the transition density of is known to be where and ( see e.g. 
) .note that cauchy processes are strictly stable ( or self - similar ) process with parameter ; namely , this property is seen from their lvy symbol : actually we have = e^ { - t ( \theta |\xi| - i \langle \xi , \gamma \rangle ) } .\ ] ] if we set , the density turns symmetric and therefore , setting , we obtain an explicit positive term structure model .we give the explicit form of the bond prices in the case of : the model is a modification of the cauchy tsms given in .let be lvy process and be a subordinator independent of .recall that the process is usually called the subordination of by the subordinator . among the subordinations of the wiener process , two classes are often used in the financial context ; one is the _ variance gamma _ processes and the other is _ normal inverse gaussian _ processes , since they are analytically tractable ( see and references therein ) .the subordinator of a variance gamma process ( resp .normal inverse gaussian process ) is a gamma process ( resp .inverse gaussian process ) .if we let the drift of the wiener process be zero , then the densities of the subordinated processes satisfies the condition for theorem [ mainth ] .the gamma subordinator is a lvy process with where and are positive constants .then the heat kernel of the subordinated wiener process is from this expression we have the heat kernel ( [ vgdensity ] ) can be expressed in terms of the modified bessel functions , , which has an integral representation actually we have the heat kernel of the inverse gaussian subordinator is where and are positive constants .the heat kernel of the subordinated wiener process is applying ( [ mbessel ] ) , we obtain in particular , we have [ a1 ] for , , and , it holds where , _ lvy processes and infinitely divisible distributions _ , vol .68 of _ cambridge studies in advanced mathematics _ , cambridge university press , 1999 .translated from the 1990 japanese original ; revised by the author .
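to illustrate the analytic tractability claimed for the hka, the sketch below computes zero-coupon bond prices by a single quadrature of the state price density against the transition density of a symmetric cauchy factor process. the functional u(t, x) used here is only a positive, decreasing-in-t placeholder and is not claimed to satisfy the supermartingale construction of the theorem (whose constants and exact form are not reproduced in this text); the scale parameter is likewise an assumption.

```python
import numpy as np
from scipy.integrate import quad

theta = 1.0                      # scale of the symmetric cauchy factor process (assumption)

def heat_kernel(t, x):
    """transition density at time t of a symmetric cauchy process started at 0."""
    return (theta * t / np.pi) / ((theta * t) ** 2 + x ** 2)

def u(t, x):
    # positive placeholder for the state price density functional; decreasing in t and in |x|,
    # but not claimed to satisfy the supermartingale conditions of the theorem
    return np.exp(-0.05 * t) / (1.0 + x ** 2)

def bond_price(T):
    """P(0, T) = E[pi_T] / pi_0 with pi_t = u(t, X_t) and X_0 = 0."""
    expectation, _ = quad(lambda x: u(T, x) * heat_kernel(T, x), -np.inf, np.inf)
    return expectation / u(0.0, 0.0)

for T in (1.0, 2.0, 5.0, 10.0):
    print(f"P(0, {T:>4}) = {bond_price(T):.4f}")
```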
we construct default-free interest rate models in the spirit of the well-known markov-functional models: our focus is on the analytic tractability of the models and the generality of the approach. we work in the setting of state price densities and construct models by means of the so-called propagation property. the propagation property is implicit in all of the popular state price density approaches; in particular, heat kernels share the propagation property (from which the name of the approach is derived). as a related matter, an interesting property of heat kernels is presented as well. *key words*: interest rate models, markov-functional, state price density, heat kernel.
the problem of determining the angle at which a point mass launched from ground level with a given speed will reach a maximum distance is a standard exercise in mechanics .there are many possible ways of solving this problem , leading to the well - known answer of , producing a maximum range of , with being the free - fall acceleration .conceptually and calculationally more difficult problems have been suggested to improve student proficiency in projectile motion , with the most famous example being the tarzan swing problem .the problem of determining the maximum distance of a point mass thrown from constant - speed circular motion is presented and analyzed in detail in this text .the calculational results confirm several conceptually derived conclusions regarding the initial throw position and provide some details on the angles and the way of throwing ( underhand or overhand ) which produce the maximum throw distance . the situation analyzed in this textcan be defined as follows : _ `` suppose you want to throw a stone ( approximated by a point mass ) as far horizontally as possible .the stone rotates in a vertical circle with constant speed . at which point during the motionshould the stone be released ?should it rotate clockwise ( an overhand throw ) or counter - clockwise ( an underhand throw ) ?the center of rotation is at height , where is the radius of rotation .the horizontal distance is measured from the point on the ground directly below the center of rotation . ''_ an illustration of the problem is given in fig.[figure1 ] .this problem poses several conceptual difficulties . during motion , the initial height , the horizontal distance with respect to the reference point on the ground and the launch angle all change . since all of these influence the final horizontal distance , it is not easy to deduce exactly what kind of throw should be executed to attain the maximum distance for a given speed .let s assume that the throw is executed to the right ( this does not restrict the solution in any way ) . for an overhand throw ,the stone needs to be released during movement through the upper part of the circle , since then it is traveling to the right . during the first part of the motion , the angle of the stone velocity with the horizon is positive , sothe stone is thrown upwards , but the initial horizontal distance from the reference point on the ground is negative . during the second part of the motion , the opposite holds true .it is clear that if the initial speed is nearly zero , the stone should be released when it is as forward as possible , since then it is practically released from rest and it drops vertically and lands at a distance away from the reference point . on the other hand , if the initial speed of the stone is very large , in the sense that the initial displacement from the reference point on the ground is very small compared to the range of the throw , one would expect that the classical result of an angle equal to produces the largest horizontal distance . for an underhand throw ,the the stone needs to be released during movement through the lower part of the circle , since then it is traveling to the right . 
in this case , it is more obvious that the release should happen during the second part of the motion , since then the throw is executed upwards and the initial horizontal displacement of the stone is positive .again , for a low speed , the stone should be released when it is as forward as possible and for a high speed , it should be released when the throw angle with respect to the horizon is equal to .interestingly , it is unclear whether the throw should be made overhand or underhand to obtain the maximum throw distance for a general value of speed . to answer this question, some knowledge of elementary kinematics and numerical calculation is necessary .we can define the coordinate system as in fig.[figure2 ] .clearly , there are two cases to consider : the overhand and the underhand throw .marks the angle the stone velocity makes with the horizontal line.,scaledwidth=70.0% ] let mark the angle with the horizontal line when the stone is released ( set equal to ) .the initial coordinates of the stone are : the moment when the stone hits the ground is found by setting in the general equation of motion , which gives one physical solution for . inserting that solution into ,the throw horizontal distance becomes : the absolute sign is required to take into account possible negative values of angle .the notation in which the upper sign represents the overhand throw and the lower represents the underhand throw is introduced .the trajectory equations here assume no air drag . the maximum distance of the throw can be found numerically or graphically by plotting as a function of the inital speed andthrow angle .another approach is to set for a certain intial speed , which often has the benefit of emphasizing the dimensionless variables relevant to the problem . taking the derivative one obtains the following condition , after some algebra : where the shorthands and were introduced and denotes the throw angle at which the range is maximal . at this point, we will use the simplification and note that in that case is twice the ratio of the kinetic energy of the stone to its gravitational potential energy at the lowest point of rotation in the chosen coordinate system . even though numerical solving was skipped in ( [ eq2 ] ) , here it needs to be employed .the maximum angle results are obtained when varying the value of from 0 to 50 , separately for the overhand and the underhand cases .the value of corresponds to the case where the stone is released from rest , while corresponds to the case in which the kinetic energy of the stone is much larger than its initial potential energy . at the end of the calculation, the throw distance can be found for a specific angle value .the results of numerically solving ( [ eq_condition ] ) are given on fig.[figure3 ] .as suspected earlier , the angles obtained for an underhand throw approach for small values of and for large values of .interestingly , the decline from to is not uniform . with increasing from zero , the angle of maximum distance decreases until reaches a minimum value at , corresponding to a minimum angle of , and then starts increasing again until it asymptotically reaches . for an overhand throw , the maximum distance for very small values of obtained when , as concluded earlier .with increasing , the throw angle decreases to zero .a throw at , which happens at the top of the motion , is achieved when , i.e. when . 
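Because the stationarity condition above has to be solved numerically in any case, the optimum can equally well be located by a direct scan over the release angle. The short Python sketch below does this for both throw types; it assumes that the center of rotation sits at height R above the ground, measures the release angle phi as the angle of the velocity with the horizontal, and neglects air drag. The sign conventions and the value of the center height are assumptions made for illustration rather than values taken from the original text.

```python
# Sketch: numerically find the release angle that maximizes the horizontal
# range of a stone thrown from constant-speed circular motion.
# Assumptions: center of rotation at height R, release angle phi measured as
# the angle of the velocity with the horizontal, no air drag.
import numpy as np

G = 9.81

def throw_range(phi, v, R=1.0, overhand=True):
    """Horizontal landing distance for release angle phi (radians)."""
    if overhand:            # release on the upper half of the circle
        x0 = -R * np.sin(phi)
        y0 = R * (1.0 + np.cos(phi))
    else:                   # underhand: release on the lower half
        x0 = R * np.sin(phi)
        y0 = R * (1.0 - np.cos(phi))
    # time of flight from y(t) = y0 + v*sin(phi)*t - g*t^2/2 = 0
    t_land = (v * np.sin(phi) +
              np.sqrt((v * np.sin(phi)) ** 2 + 2.0 * G * y0)) / G
    return x0 + v * np.cos(phi) * t_land

def best_angle(v, R=1.0, overhand=True, n=20001):
    phis = np.linspace(-np.pi / 2, np.pi / 2, n)
    d = throw_range(phis, v, R, overhand)
    i = np.argmax(d)
    return phis[i], d[i]

if __name__ == "__main__":
    for k in (0.1, 1.0, 10.0):            # k = v^2 / (g R), with R = 1
        v = np.sqrt(k * G)
        for oh in (True, False):
            phi, d = best_angle(v, overhand=oh)
            kind = "overhand " if oh else "underhand"
            print(f"k={k:5.1f} {kind} phi*={np.degrees(phi):6.1f} deg  D={d:.3f}")
```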
just as in the case of an underhand throw ,when further increasing , the throw velocity angle approaches .at which the throw distance is maximum for a given initial speed in cases of an underhand and an overhand throw .the parameter , where is the initial speed .the range of in the graphs is from 0.03 to 20.,title="fig:",scaledwidth=45.0% ] at which the throw distance is maximum for a given initial speed in cases of an underhand and an overhand throw .the parameter , where is the initial speed .the range of in the graphs is from 0.03 to 20.,title="fig:",scaledwidth=45.0% ] at which the throw distance is maximum for a given initial speed .right : the scaled maximum horizontal distance covered by the stone when launched with a given initial speed .the dashed red line represents an underhand throw , the solid blue line an overhand throw . on the right graph, ranges from 0.03 to 2.,title="fig:",scaledwidth=45.0% ] at which the throw distance is maximum for a given initial speed .right : the scaled maximum horizontal distance covered by the stone when launched with a given initial speed .the dashed red line represents an underhand throw , the solid blue line an overhand throw . on the right graph, ranges from 0.03 to 2.,title="fig:",scaledwidth=45.0% ] using the values for the obtained angles , one can calculate the maximum throw distances , as shown on fig.[figure4 ] .it is interesting to note that for speeds for which and , the maximum distances are the same in the cases of an overhand and an underhand throw . in the interval , the maximum attainable distance is greater in the case of an overhand throw .this result could not be simply deduced without calculation , since after an overhand throw for which the stone has an initial velocity pointing downwards , decreasing its fall time and thus its final horizontal distance .the results of the calculation show several interesting features . for a throw speed equal to zero, the stone should be held as far forward as possible before release , performing a simple vertical drop .the distance it reaches in this case is equal to . when increasing the speed in the interval , it is more beneficial to perform an overhand throw , even though the stone release is performed when its velocity is pointing downwards . for ,the maximum distances for an overhand and an underhand throw become exactly the same , but the throws are performed at different angles .for example , in the limiting case of , an overhand throw is performed at the top of the motion ( when ) , while an underhand throw is performed when . with increasing speed ,the maximum distance increases almost linearly with and the angles for both types of throws tend towards . for an underhand throw , the angle at which the maximum distance is attained dips below its asymptotic value to the value of before it starts increasing towards .this result might present a conceptual difficulty after all , for a large enough speed , is it not always more beneficial that the throw is performed at , since in that case the stone is both more forward and higher ? 
even though this statement is true , one should remember that the stone is not thrown from the ground level , and so the angle at which it attains maximal distance can not be exactly .however , it is worth noting that in the case of an underhand throw , the convergence to seems very rapid ( when compared to the overhand throw ) .all the results are summarized on fig.[figure5 ] .the circumference of the circle shows the points through which the stone can move .the angles given are the angles which the stone has with respect to the horizontal line when released from a certain point .the top part of the circle represents overhand throws and the bottom part underhand throws . using the color scale , fora given value of one can find two corresponding angles on the circle which produce the maximum throw distance .the color scale is logarithmic since large arcs of the circle correspond to small changes in .it can be seen that for the stone is released when it is furthest to the right . with increasing , the angle grows for an overhead throw , asymptotically reaching . for underhand throws ,the angle first decreases below and then increases back towards it , as discussed .the decrease and the increase are drawn separately on the graphic to avoid overlap . for overhand throws ( top part ) and underhand throws ( bottom part of the circle).,scaledwidth=70.0% ]the problem of throwing a point particle from a vertical circular constant - speed motion was considered .the optimal throw angle which maximizes the throw distance was found as a function of and of the throw type .the results confirmed some conceptually derived predictions , but also provided some new insights . to obtain the maximum throw distancewhen it is more beneficial to use the overhand throw .interestingly , for the maximum throw distance becomes the same for an overhand and an underhand throw , although the throws are executed at different angles with respect to horizon .for very large values of , the optimal angle in both cases equals and the throw distance becomes .the result is approximate since the throw is not performed from ground level .if one wants to throw a stone as far as possible in real life , many effects such as air drag and the spin of the stone , among others , need to be considered .furthermore , vertical circular motion is not a good model for human overhand throwing which is a complex sequence of torques and levers that can generate more initial speed than underhand throwing .nevertheless , it would be interesting to construct an apparatus to experimentally demonstrate the results outlined here , as well as evaluate the effects due to air drag and other factors , which is considered as a future step of this article .9 john s. thomsen , `` maxima and minima without calculus '' , am.j.phys . * 52*(10 ) , 881 - 883 ( 1984 ) r. palffy - muhoray and d. balzarini , `` maximizing the range of the shot put without calculus '' , am . j. phys .* 52 * , 181 - 181 ( 1982 ) william s. porter , `` the range of a projectile '' , phys .teach * 15 * , 358 - 358 ( 1977 ) harvard u. problem of the week 71 , https://goo.gl/0h6jzx ronald a. brown , `` maximizing the range of a projectile '' , phys .* 30 * , 344 ( 1992 ) d. bittel , `` maximizing the range of a projectile thrown by a simple pendulum '' , phys .teach * 43 * , 98 ( 2005 ) m. rave , m. sayers , `` tarzan s dilemma : a challenging problem for introductory physics students '' , phys . teach * 51* , 456 ( 2013 )
the problem of determining the angle at which a point mass launched from ground level with a given speed reaches its maximum horizontal distance is a standard exercise in mechanics. similar, yet conceptually and calculationally more difficult, problems have been suggested to improve student proficiency in projectile motion. the problem of determining the maximum distance of a rock thrown from constant-speed circular motion is presented and analyzed in detail in this text. the calculational results confirm several conceptually derived conclusions regarding the initial throw position and provide details on the angles and the way of throwing (underhand or overhand) which produce the maximum throw distance. + + to be published in *the physics teacher*.
recently , a great deal of attention has been payed to wireless sensor networks whose nodes sample a physical phenomenon ( hereinafter referred to as field ) , i.e. , air temperature , light intensity , pollution levels or rain falls , and send their measurements to a central processing unit ( or _ sink _ node ) .the sink is in charge of reconstructing the sensed field : if the field can be approximated as bandlimited in the time and space domain , then an estimate of the discrete spectrum can be obtained .however , the sensors measurements typically represent an irregular sampling of the field of interest , thus the sink operates based on a set of field samples that are not regularly spaced in the time and space domain .the reasons for such an irregular sampling are multifold .( i ) the sensors may be irregularly deployed in the geographical region of interest , either due to the adopted deployment procedure ( e.g. , sensors thrown out of an airplane ) , or due to the presence of terrain asperities and obstacles .( ii ) the transmission of the measurements from the sensors to the central controller may fail due to bad channel propagation conditions ( e.g. , fading ) , or because collisions occur among the transmissions by sensors simultaneously attempting to access the channel . in this case , although the sample has been collected by the sensor , it will not be delivered to the central controller .( iii ) the sensors may enter a low - power operational state ( sleep mode ) , in order to save energy . while in sleep mode , the nodes neither perform sensing operations nor transmit / receive any measurement .( iv ) the sensors may be loosely synchronized , hence sense the field at different time instants .clearly , sampling irregularities may result in a degradation of the reconstructed signal .the work in investigates this issue in the context of sensor networks .other interesting studies can be found in and , just to name a few , which address the perturbations of regular sampling in shift - invariant spaces and the reconstruction of irregularly sampled images in presence of measure noise . in this work ,our objective is to evaluate the performance of the field reconstruction when the coordinates in the -dimensional domain of the field samples , which reach the sink node , are randomly , independently distributed and the sensors measurements are noisy . we take as performance metric the mean square error ( mse ) on the reconstructed field . as a reconstruction technique , we use linear filtering and we adopt the filter that minimizes the mse ( i.e. , the lmmse filter ) . the matrix representing the sampling system , in the following denoted by ,results to be a -fold vandermonde matrix matrix is vandermonde if its entry , can be written as , . ] . 
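To make the sampling model concrete, the short Python sketch below builds a d-fold Vandermonde-type matrix from randomly placed sampling points and inspects the spectrum of V V^H, which is the object whose asymptotic behavior is studied in the remainder of the paper. The entry convention exp(-2*pi*i <l, x_k>)/sqrt(n_harm) and the symmetric harmonic range {-M,...,M}^d are assumptions made for illustration, since the exact definition and normalization were lost from the excerpt.

```python
# Sketch: build a d-fold Vandermonde sampling matrix and inspect the spectrum
# of V V^H.  The entry convention exp(-2*pi*1j * <l, x_k>) / sqrt(n_harm) is an
# assumption; the excerpt only states that V is d-fold Vandermonde with phases
# on the complex unit circle.
import numpy as np

def vandermonde_dfold(points, M):
    """points: (K, d) sampling coordinates in [0, 1)^d.
    Returns V of shape (n_harm, K) with n_harm = (2M+1)**d harmonics."""
    K, d = points.shape
    axis = np.arange(-M, M + 1)
    # all d-dimensional harmonic indices l in {-M,...,M}^d
    harmonics = np.stack(np.meshgrid(*([axis] * d), indexing="ij"),
                         axis=-1).reshape(-1, d)
    phase = harmonics @ points.T                      # (n_harm, K)
    return np.exp(-2j * np.pi * phase) / np.sqrt(len(harmonics))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, M, K = 2, 7, 900                               # (2M+1)^2 = 225 harmonics
    x = rng.uniform(size=(K, d))                      # i.i.d. uniform sampling
    V = vandermonde_dfold(x, M)
    eig = np.linalg.eigvalsh(V @ V.conj().T)          # spectrum of V V^H
    for p in (1, 2, 3):
        print(f"empirical spectral moment m_{p} =", np.mean(eig ** p))
```

With uniformly distributed points the empirical moments can be checked against the closed-form moments for the uniform case recalled below; running the same code with a nonuniform sampling density exhibits the deformation of the spectrum that the following sections quantify.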
by drawing on the results in , we derive both the moments and an expression of the limiting spectral distribution ( lsd ) of , as the size of goes to infinity and its aspect ratio has a finite limit bounded away from zero .then , by using such an asymptotic model , we approximate the mse on the reconstructed field through the -transform of , and derive an expression for it .we apply our results to the study of network scenarios of practical interest , such as sensor sensor deployments with coverage holes , communication in presence of a fading channel , massively dense networks , and networks using contention - based channel access techniques .the rest of the paper is organized as follows .section [ sec : related ] reviews previous work , while section [ sec : system ] describes the system model under study . in section [ sec : preliminaries ] , we first provide some useful definitions and introduce our performance metric , then we recall previous results on which we build our analysis . in section [ sec : results - vandermonde ] , we derive asymptotic results concerning the moments and the lsd of .such results are applied to different practical scenarios in section [ sec : applications ] .finally , section [ sec : conclusions ] concludes the paper .in the context of sensor networks , several works have studied the field reconstruction at the sink node in presence of spatial and temporal correlation among sensor measurements . in particular , in observed field is a discrete vector of target positions and sensor observations are dependent . by modeling the sensor network as a channel encoder and exploiting some concepts from coding theory , the network capacity , defined as the maximum value of the ratio of the target positions to the number of sensors , is studied as a function of the noise , the sensing function and the sensor connectivity level .the paper by dong and tong considers a dense sensor network where a mac protocol is responsible to collect samples from network nodes .the work analyzes the impact of deterministic and random data collection strategies on the quality of field reconstruction . as a performance measure ,the maximum of the reconstruction square error over the sensed field is employed , as opposed to our work where the mean square error is considered .also , in the field is a gaussian random process and the sink always receives a sufficiently large number of samples so as to reconstruct the field with the required accuracy .the problem of reconstructing a bandlimited field from a set of irregular samples at unknown locations , instead , has been addressed in .there , the field is oversampled by irregularly spaced sensors ; sensor positions are unknown but always equal to an integer multiple of the sampling interval .different solution methods are proposed , and the conditions for which there exist multiple solutions or a unique solution are discussed . differently from , we assume that the sink can either acquire or estimate the sensor locations and that the coordinates of the sampling points are randomly located over a finite -dimensional domain . as for previous results on vandermonde matrices , in ryan and debbah considered a vandermonde matrix with and complex exponential entries , whose phases are i.i.d . 
with continuous distribution .under such hypothesis , they obtained the important results that , given the phases distribution , the moments of can be derived once the moments for the case with uniformly distributed phases are known .also , a method for computing the moments of sums and products of vandermonde matrices , for the non - folded case ( i.e. , ) , has recently appeared in ; further insights on the extremal eigenvalues behavior , still for the case of non - folded vandermonde matrices , can be found in .moreover , in it has been shown that the lsd of converges to the marenko - pastur distribution when is -fold vandermonde with uniformly distributed phases and . note that , with respect to previous studies on vandermonde matrices with entries that are randomly distributed on the complex unit circle , in this work we obtain results on the lsd of where the entries of have phases drawn from a _ generic continuous distribution_. by relying on the results in , we show that such an lsd can be related to that of when the phases of are _ uniformly _ distributed on the complex unit circle .we also provide some numerical results that show the validity of our analysis . to our knowledge, these results have not been previously derived .we then apply them to the study of several practical scenarios in the context of sensor networks , although our findings can be useful for the study of other aspects of communications as well .we consider a network composed of wireless sensors , which measure the value of a spatially - finite physical field defined over dimensions , ( ) .we denote by the hypercube over which the sampling points fall , and we assume that the sampling points are i.i.d . randomly distributed variables , whose value is known to the sink node . note that this is a fair assumption , as one can think of sensor nodes randomly deployed over the geographical region that has to be monitored , or , even in the case where the network topology is intended to have a regular structure , the actual node deployment may turn out to be random due to obstacles or terrain asperities . in addition , now and then the sensors may enter a low - operational mode ( hence become inactive ) in order to save energy , and they may be loosely synchronized .all the above conditions yield a set of randomly distributed samples of the field under observation , in both the time and the space domain . by truncating its fourier series expansion , a physical field defined over dimensions and with finite energycan be approximated in the region as where is the approximate one - sided bandwidth ( per dimension ) of the field , \tran ] . denotes the -th entry of the vector of size , which represents the approximated field spectrum , while the real vectors , represent the coordinates of the -dimensional sampling points . in this work , we assume that , , are i.i.d .random vectors having a generic continuous distribution , . in the specific casewhere are i.i.d with i.i.d .entries , uniformly distributed in , we denote the distribution of by .the coordinates of the -dimensional sampling points , however , are known to the sink node , because _ ( i ) _ either sensors are located at pre - defined positions or their position can be estimated through a localization technique , and _( ii ) _ the sampling time is either periodic or included in the information sent to the sink . now , let ^{\rm t} ] , respectively . 
following , we can write the vector as a function of the field spectrum : where is the -fold vandermonde matrix with entries randomly distributed on the complex circle of radius , and is the ratio of the rows to the columns of , i.e. , in general, the entries of can be correlated with covariance matrix ] . in the casewhere the sensor measurements , \tran ] .note that the additive white noise affecting the sensor measurements may be due to quantization , round - off errors or quality of the sensing device .in this section , we report some definitions and previous results that are useful for our study .let us consider an non - negative definite random matrix , whose eigenvalues are denoted by .the average empirical cumulative distribution of the eigenvalues of is defined as ] , where is the generic asymptotic eigenvalue of , whose distribution is , and the average is computed with respect to . next ,consider the matrix as defined in ( [ def : v_multifold ] ) and that the lmmse filter is used for field reconstruction . then, the estimate of the unknown vector in ( [ eq : p ] ) , given and , is obtained by computing { \mathbb{e}}[{{\bf p}}{{\bf p}}\herm]^{-1 } { { \bf p}} ] , is a rational number that can be analytically computed from following the procedure described in . the subscript in and indicates that a uniform distribution of the entries of is considered in the matrix . in it was also shown that when , with having a finite limit , the eigenvalue distribution converges to the marcenko - pastur law .a similar result also applies when the vectors ( ) are independent but not i.i.d ., with equally spaced averages .more recently , ryan and debbah in considered and the case where the random variables , , are i.i.d . with continuous distribution , . under such hypothesis, it was shown that the asymptotic moments of can be written as where the terms depend on the phase distribution and are given by for .the subscript in indicates that in the matrix the random variables have a generic continuous distribution .note that for the uniform distribution we have , for all .the important result in ( [ eq : moments_1d ] ) states that , given , if the moments of are known for uniformly distributed phases , they can be readily obtained for any continuous phase distribution .in this work , we extend the above results by considering a sampling system defined over dimensions with nonuniform sample distribution , where samples may be irregularly spaced in the time and spatial domains , as it occurs in wireless sensor networks .being our goal the estimation of the quality of the reconstructed field , we aim at deriving the asymptotic mse ( i.e. , ) .we start by considering a generic continuous distribution , , of the samples measured by the sensors over the -dimensional domain .we state the theorem below , which gives the asymptotic expression of the generic moment of , for .[ th:1 ] let a -fold vandermonde matrix with entries given by ( [ def : v_multifold ] ) where the vectors , , are i.i.d . and have continuous distribution .then , for , with having a finite limit , the -th moment of is given by where and the terms are defined as in . the proof is given in appendix [ app : th1 ] .using theorem [ th:1 ] and the definition of , it it possible to show the theorem below , which provides the lsd of .[ th:2 ] let * be a -fold vandermonde matrix with entries given by ( [ def : v_multifold ] ) where the vectors , , are i.i.d . and have continuous distribution , * be the set where is strictly positive , i.e. 
, * the cumulative density function defined denotes the measure of the set for and let be its corresponding probability density function .then , the lsd of , for with having a finite limit , is given by the proof can be found in appendix [ app : th2 ] . from theorem [ th:2 ] , the corollary below follows .[ cor - scal ] consider such that .then , let us denote by a scaled version of this function , so that where .it can be shown that where .the proof can be found in appendix [ app : cor - scal ] . as an example of the result given in corollary [ cor - scal ] , consider that a unidimensional ( ) sensor network monitors the segment ] ( with ) .we therefore have for and 0 elsewhere .moreover , .the expression of is given by ( [ eq : f - scal ] ) , by replacing and the subscript with the subscript .this result is well supported by simulations as shown in figures [ fig : beta08_x08 ] and [ fig : beta02_x05 ] . in the plots ,we compare the asymptotic empirical spectral distribution ( aesd ) and instead of the lsds and since an analytic expression of is still unknown .however , in it is shown that , already for small values of , the aesd appears to rapidly converge to a limiting distribution .figure [ fig : beta08_x08 ] refers to the case and .the solid and dashed lines represent , respectively , the functions and , for . note that the probability mass of at is not shown for simplicity .similarly , figure [ fig : beta02_x05 ] shows the case and . as evident from these plots , the match between the two functions is excellent for any parameter setting , thus supporting our findings .since we are interested in evaluating the mse , taking into account the result in ( [ eq : mseinf ] ) , we now apply the definition of the -transform to ( [ eq : th2 ] ) .the corollary below immediately follows .[ cor2 ] the -transform of is given by hence , the asymptotic mse on the reconstructed field , defined in ( [ eq : mseinf ] ) , is given by the proof can be found in appendix [ app : cor2 ] . in ( [ eq : eta ] ) , in order to avoid a heavy notation we referred to as when the phases of the entries of follow a generic random continuous distribution , while refers to the case where the phases are uniformly distributed .since and , the integral in the right hand side of ( [ eq : eta ] ) is positive , then .it follows that the mse is lower - bounded by the measure of the total area where the probability of finding a sensor is zero .this clearly suggests that , in order to obtain a good quality of the field reconstructed at the sink node , this area must be a small fraction of the region under observation .next , we observe that , in the case of massively dense networks where the number of sampling sensors is much larger than the number of harmonics considered in the approximated field , i.e. 
, , an interesting result holds : [ cor1 ] let be the set where is strictly positive ; then the proof can be found in appendix [ app : cor1 ] .thus , as evident from corollary [ cor1 ] , for the limit of , the lsd of is the density of the density of the phase distribution .furthermore , for massively dense networks , we have : [ cor3 ] let be the set where is strictly positive ; then the proof can be found in appendix [ app : cor3 ] .the result in ( [ eq : cor3.2 ] ) shows that even for massively dense networks is the minimum achievable , when an area can not covered by sensors .here , we provide examples of how our results can be used in wireless sensor networks to investigate the impact of a random distribution of the coordinates of the sampling points on the quality of the reconstructed field . in particular , we first consider a wireless channel affected by fading , and then the effects of contention - based channel access .we consider a wireless sensor network whose nodes are uniformly distributed over a geographical region . without loss of generality , we assume a square region of unitary side ( , ^ 2 ] .it follows that the probability that a packet is successfully delivered to the corresponding -layer cluster - head within area ( ) can be obtained as .then , the probability that a measurement generated by a sensor located in ( ) is successfully delivered to the sink is given by : next , denoting by the measure of , we define as the normalized probability that a message is successfully delivered to the sink . then , the spatial density of the sensors successfully sending their message is as follows : the density of is therefore given by and the asymptotic mse is given by ) and the case where all measurements successfully reach the sink ( ) .the mse is shown as a function of and for different values of signal - to - noise ratio ( , , , , , ) . ] ) and the case where all measurement transmissions are successful ( ) .the mse is shown as varies and for different values of signal - to - noise ratio ( , , , , , ) . ] figures [ fig : lambda2 ] and [ fig : lambda1 ] show the impact of collisions due to the contention - based channel access , on the quality of the reconstructed field .in particular , they compare the mse of the reconstructed field when collisions are taken into account ( ) with the one obtained in the idealistic case where all messages ( measurements ) sent by the sensors successfully reach the sink ( ) .the results refer to a square region of unitary side , where there are four areas of equal size ( , ) but corresponding to different resolution levels in the measurements collection ( i.e. , they are characterized by different traffic loads ) ; the number of hierarchical levels is set to .we set , , , in figure [ fig : lambda2 ] , and a higher traffic load in figure [ fig : lambda1 ] , i.e. , , , , . looking at the plots , we observe that both and have a significant impact of the obtained mse , with the mse increasing as grows and smaller values of are considered . 
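Since in all of these scenarios the effect of fading, collisions, or deployment constraints enters only through the effective spatial density of the delivered samples, the reconstruction error can also be estimated directly by Monte Carlo. The Python sketch below evaluates the normalized LMMSE error as (1/n_harm) tr[(I + gamma V V^H)^{-1}], i.e., an empirical eta-transform, for uniform sampling and for a density with a rectangular coverage hole; a compact two-dimensional version of the earlier matrix construction is redefined here so the example is self-contained. The matrix normalization, the value of gamma, and the identification of the normalized MSE with this trace are assumptions consistent with the sketch above rather than the paper's exact expressions.

```python
# Sketch: Monte Carlo estimate of the normalized LMMSE reconstruction error
# eta = (1/n_harm) * tr[(I + gamma * V V^H)^{-1}] for uniform sampling and for
# sampling with a rectangular coverage hole.  The entry convention of V and the
# MSE normalization are assumptions made for illustration.
import numpy as np

def vandermonde_2fold(points, M):
    axis = np.arange(-M, M + 1)
    harmonics = np.stack(np.meshgrid(axis, axis, indexing="ij"),
                         axis=-1).reshape(-1, 2)
    return np.exp(-2j * np.pi * (harmonics @ points.T)) / np.sqrt(len(harmonics))

def sample_with_hole(rng, K, hole=((0.0, 0.5), (0.0, 0.5))):
    """Uniform points on the unit square, rejecting those inside the hole."""
    pts = []
    while len(pts) < K:
        x, y = rng.uniform(size=2)
        (x0, x1), (y0, y1) = hole
        if not (x0 <= x < x1 and y0 <= y < y1):
            pts.append((x, y))
    return np.array(pts)

def normalized_mse(points, M, gamma):
    V = vandermonde_2fold(points, M)
    n_harm = V.shape[0]
    A = np.eye(n_harm) + gamma * (V @ V.conj().T)
    return np.trace(np.linalg.inv(A)).real / n_harm

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M, K, gamma, runs = 5, 600, 10.0, 20
    mse_u = np.mean([normalized_mse(rng.uniform(size=(K, 2)), M, gamma)
                     for _ in range(runs)])
    mse_h = np.mean([normalized_mse(sample_with_hole(rng, K), M, gamma)
                     for _ in range(runs)])
    print("uniform sampling    :", round(mse_u, 3))
    print("with 0.25-area hole :", round(mse_h, 3), "(asymptotic lower bound 0.25)")
```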
most interestingly , by comparing the two figures, we can see that as the traffic load , hence the collision probability , increases , the performance derived taking into account the contention - based channel access significantly differs from the idealistic one .furthermore , the latter effect is particularly evident as increases , since the higher the signal - to - noise ratio , the more valuable the samples sent by the sensors toward the sink .we studied the performance of a wireless network whose nodes sense a multi - dimensional field and transfer their measurements to a sink node .as often happens in practical cases , we assumed the sensors to be randomly deployed over ( the whole or only a portion of ) the region of interest , and that their measurements may be lost due to fading or transmission collisions over the wireless channel .we modeled the sampling system through a multi - folded vandermonde matrix and , by using asymptotic analysis , we approximated the mse of the field , which the sink node reconstructs from the received sensor measurements with the -transform of .our results clearly indicate that the percentage of region where sensors can not be deployed must be extremely small if an accurate field estimation has to be obtained .also , the effect of losses due to fading or transmission collisions can be greatly mitigated provided that a suitable value for the ratio between the number of harmonics approximating the field bandwidth and the number of sensors is selected .the -th moment of the asymptotic eigenvalue distribution of can be expressed as \ ] ] where is the normalized matrix trace operator .the matrix power can be expanded as a multiple sum over the entries of : \ ] ] where , are integer indices and , \tran ] induces a partition of the set , in subsets , , under the equality relation . in the following ,we denote by the set of partitions of in subsets , .since there are possible vectors inducing a given partition , we can write the -th moment as \non & = & \lim_{n , m \rightarrow \infty}\sum_{k=1}^p\beta_{n , m}^{p - k}\sum_{{\boldsymbol{\omega}}\in \omega_{p , k } } \frac{{\mathbb{e}}\left[\phi_{{\boldsymbol{\omega } } } ( { { \bf x}}_1,\ldots,{{\bf x}}_k ) \right]}{n^{d(p - k+1 ) } } \label{eq : moment2}\end{aligned}\ ] ] where , , and is the index of the subset of containing . recall that and that . moreover , since the vectors are i.i.d . , we removed the dependence on the subscript .following the same steps as in ( * ? ? 
?* appendix h ) , we compute the limit }{n^{d(p - k+1 ) } } & = & \lim_{n \rightarrow \infty}\int_{{{\cal h}}^k}f_x({{\bf x}}_1)\cdots f_x({{\bf x}}_k)\frac{\phi_{{\boldsymbol{\omega}}}({{\bf x}}_1,\ldots,{{\bf x}}_k)}{n^{d(p+1-k ) } } { { \rm\,d}}{{\bf x}}_1 \cdots { { \rm\,d}}{{\bf x}}_k \non & = & \lim_{n \rightarrow \infty}\int_{{{\cal h}}^k}f_x({{\bf x}}_1)\cdots f_x({{\bf x}}_k)\prod_{j=1}^d \frac{f_{{\boldsymbol{\omega}}}(x_{1j},\ldots , x_{kj})}{n^{p+1-k } } { { \rm\,d}}x_{1j}\cdots { { \rm\,d}}x_{kj } \nonumber\end{aligned}\ ] ] we then define ] , for and we integrate first with respect to the variables obtaining }{n^{d(p - k+1 ) } } & = & \lim_{n \rightarrow \infty}\int_{[-\frac{1}{2},\frac{1}{2}]^{(d-1)k } } g_{{\boldsymbol{\omega}}}({{\bf y}}_1,\ldots,{{\bf y}}_k)\prod_{j=2}^d \frac{f_{{\boldsymbol{\omega}}}(x_{1j},\ldots , x_{kj})}{n^{p - k+1 } } { { \rm\,d}}x_{1j}\cdots { { \rm\,d}}x_{kj } \nonumber\end{aligned}\ ] ] where ^{k}}\frac{f_{{\boldsymbol{\omega}}}(x_{11},\ldots , x_{k1})}{n^{p - k+1}}f_x([x_{11 } , { { \bf y}}_1])\cdots f_x([x_{k1 } , { { \bf y}}_k]){{\rm\,d}}x_{11}\cdots { { \rm\,d}}x_{k1 } \label{eq : g_omega}\ ] ] in ( * ? ? ?* appendix h ) it was shown that , because of the properties of , where for any .this means that the integral in ( [ eq : g_omega ] ) can be limited to the on the diagonal where .therefore ^{k}}\frac{f_{{\boldsymbol{\omega}}}(x_{11},\ldots , x_{k1})}{n^{p - k+1 } } f_x([x_{k1 } , { { \bf y}}_1])\cdots f_x([x_{k1 } , { { \bf y}}_k]){{\rm\,d}}x_{11}\cdots { { \rm\,d}}x_{k1 } \non & = & \int_{[-\frac{1}{2},\frac{1}{2 } ] } \prod_{h=1}^k f_x([x_{k1 } , { { \bf y}}_h ] ) \lim_{n \rightarrow \infty}\left(\int_{[-\frac{1}{2},\frac{1}{2}]^{k-1}}\frac{f_{{\boldsymbol{\omega}}}(x_{11},\ldots , x_{k1})}{n^{p - k+1}}\prod_{h=1}^{k-1}{{\rm\,d}}x_{h1}\right ) { { \rm\,d}}x_{k1 } \non & = & v({\boldsymbol{\omega } } ) \int_{[-\frac{1}{2},\frac{1}{2 } ] } \prod_{h=1}^k f_x([x_{k1 } , { { \bf y}}_h ] ) { { \rm\,d}}x_{k1 } \end{aligned}\ ] ] note that the limit ^{k-1}}\frac{f_{{\boldsymbol{\omega}}}(x_{11},\ldots , x_{k1})}{n^{p - k+1}}\prod_{h=1}^{k-1}{{\rm\,d}}x_{h1}\ ] ] does not depend on and the coefficient ] are independent with continuous distribution such that , we have with } f_{x_j}(x)^k{{\rm\,d}}x$ ] .from theorem [ th:1 ] and the definition of , we have that next , we define the set where is strictly positive as note that for the contribution to the integral in ( [ eq : moment_psi ] ) is zero .thus , where for any , is the -th moment of when the phases are uniformly distributed in and the ratio is given by note also that ( [ eq : moments_psi2 ] ) holds for since , by definition , the zero - th moment of any distribution is equal to 1 .expression ( [ eq : moments_psi2 ] ) allows us to write the moments of for any distribution , given the moments for uniformly distributed phases .likewise , it is possible to describe the lsd of , for any continuous , in terms of the lsd obtained for uniformly distributed phases . indeed , let us denote the laplace transform of by if it exists .then , whenever the sum converges since for any distribution , using ( [ eq : moments_psi2 ] ) we obtain : where is the measure of and is the laplace transform of . by using the properties of the laplacetransform and by taking its inverse , we finally get we can rewrite the second term of ( [ eq : distribution_psi ] ) by defining the cumulative density function for . 
by using the corresponding probability density function , , and lebesgue integration , we can rewrite ( [ eq : distribution_psi ] ) as in ( [ eq : th2 ] ) .from the result in ( [ eq : th2 ] ) and from the assumption ( i.e. , ) , we have then , from the definition of given in ( [ eq : fxi_scaled ] ) it follows that and , by consequence , .therefore , from ( [ eq : f - scal ] ) we have : the expression of the moments given in theorem [ th:1 ] and the results in , it is easy to show that for uniformly distributed phases , we have : for any .it immediately follows that where is the dirac s delta function . by applying this result to ( [ eq : th2 ] ), we get using the definition of the -transform and the result in ( [ eq : th2 ] ) , we obtain : \non & = & \int_0^{\infty } \frac{1}{\gamma z+1}f_{\lambda , x}(d,\beta , z ) { { \rm\,d}}z \non & = & \int_0^{\infty } \frac{1-|{{\cal a}}|}{\gamma z+1}\delta(z ) + |{{\cal a}}| \int_0^{\infty}\frac{g_x(y)}{y } \int_0^{\infty } \frac{1}{\gamma z+1}f_{\lambda , u}\left(d,\frac{\beta}{y},\frac{z}{y}\right){{\rm\,d}}z { { \rm\,d}}y \non & = & 1-|{{\cal a}}|+ |{{\cal a}}| \int_0^{\infty } g_x(y ) \int_0^{\infty } \frac{1}{\gamma y z+1}f_{\lambda , u}\left(d,\frac{\beta}{y } , z\right){{\rm\,d}}z { { \rm\,d}}y \non & = & 1-|{{\cal a}}|+ |{{\cal a}}| \int_0^{\infty } g_x(y ) \eta_u\left(d,\frac{\beta}{y } , \gamma y\right ) { { \rm\,d}}y .\end{aligned}\ ] ] then , by considering that , the expression of the asymptotic mse immediately follows .in appendix [ cor1 ] we have shown that . from the definition of the -transform, it follows that and .thus , from ( [ eq : eta ] ) we have : where we defined . as a consequence , m. perillo , z. ignjatovic , and w. heinzelman , `` an energy conservation method for wireless sensor networks employing a blue noise spatial sampling technique , '' _ international symposium on information processing in sensor networks ( ipsn 2004 ) _ , apr.2004. d.s .early and d.g .long , `` image reconstruction and enhanced resolution imaging from irregular samples , '' _ ieee transactions on geoscience and remote sensing , _vol.39 , no.2 , pp.291302 , feb.2001 .a. nordio , c .- f .chiasserini , and e. viterbo , `` performance of linear field reconstruction techniques with noise and uncertain sensor locations , '' _ ieee transactions on signal processing , _ vol.56 , no.8 , pp.35353547 , aug.2008 .a. nordio , c .- f .chiasserini , and e. viterbo , `` reconstruction of multidimensional signals from irregular noisy samples , '' _ ieee transactions on signal processing , _vol.56 , no.9 , pp.42744285 , sept.2008 .802.15.4 : ieee standard for information technology - telecommunications and information exchange between systems - local and metropolitan area networks - specific requirements part 15.4 : wireless medium access control( mac ) and physical layer ( phy ) specifications for low - rate wireless personal area networks ( wpans ) , 2009 .r. cristescu and m. vetterli , `` on the optimal density for real - time data gathering of spatio - temporal processes in sensor networks , '' _ international symposium on information processing in sensor networks ( ipsn 05 ) , _ los angeles , ca , apr.2005 .y. sung , l. tong , and h. v. poor , `` sensor activation and scheduling for field detection in large sensor arrays '' , _ international symposium on information processing in sensor networks ( ipsn 05 ) , _ los angeles , ca , apr.2005 .y. rachlin , r. negi , and p. 
khosla , `` sensing capacity for discrete sensor network applications , '' _ international symposium on information processing in sensor networks ( ipsn05 ) , _ los angeles , ca , apr.2005 .m. dong , l. tong , and b. m. sadler , `` impact of data retrieval pattern on homogeneous signal field reconstruction in dense sensor networks , '' _ ieee transactions on signal processing _ ,vol.54 , no.11 , pp.43524364 , nov.2006 .p. marziliano and m. vetterli , `` reconstruction of irregularly sampled discrete - time bandlimited signals with unknown sampling locations , '' _ ieee transactions on signal processing , _vol.48 , no.12 , dec.2000 , pp.34623471 .d. moore , j. leonard , d. rus , and s. teller , `` robust distributed network localization with noisy range measurements , '' _ acm conference on embedded networked sensor systems ( sensys ) _ , baltimore , md , pp.5061 , nov.2004 .c. y. jung , h. y. hwang , d. k. sung , and g. u. hwang , `` enhanced markov chain model and throughput analysis of the slotted csma / ca for ieee 802.15.4 under unsaturated traffic conditions , '' _ ieee transactions on vehicular technology _ , vol.58 , no.1 , pp.473478 , jan.2009 .
environmental monitoring is often performed through a wireless sensor network, whose nodes are randomly deployed over the geographical region of interest. sensors sample a physical phenomenon (the so-called field) and send their measurements to a _sink_ node, which is in charge of reconstructing the field from such irregular samples. in this work, we focus on scenarios of practical interest where sensor deployment is infeasible in certain areas of the geographical region, e.g., due to terrain asperities, and where the delivery of sensor measurements to the sink may fail due to fading or to transmission collisions among sensors simultaneously accessing the wireless medium. under these conditions, we carry out an asymptotic analysis and evaluate the quality of the estimation of a -dimensional field ( ) when the sink uses linear filtering as a reconstruction technique. specifically, given the matrix representing the sampling system, , we derive both the moments and an expression of the limiting spectral distribution of , as the size of goes to infinity and its aspect ratio has a finite limit bounded away from zero. by using such asymptotic results, we approximate the mean square error on the estimated field through the -transform of , and derive the sensor network performance under the conditions described above.
the core object of our investigation is an _e. coli _ network representation of its combined gene regulation and metabolism , which can be thought of as functionally divided into three _ domains _ : the representation captures both gene regulatory and metabolic processes , with these processes being connected by an intermediate layer that models both , the enzymatic influence of genes on metabolic processes , as well as signaling - effects of the metabolism on the activation or inhibition of the expression of certain genes .the underlying interaction graph with its set of nodes ( vertices ) and links ( edges ) consists of three interconnected subgraphs , the gene regulatory domain , the interface domain and the metabolic domain . from the functional perspective , is the union of gene regulatory ( ) and metabolic processes ( ) , and their interactions and preparatory steps form the interface ( ). figure [ fig : sketch ] shows a sketch of the network model . a sketch of the domain organization of the integrative _ e. coli _ network .the regulatory domain , , at the top is connected to the metabolic domain , , shown at the bottom via a protein - interface layer , .the figure shows the number of vertices within and the number of edges within and between the domains . for illustrative purposes snapshots of the largest weakly - connected components of , and included in the figure . ]the integrated network representation has been assembled based on the database ( version 18.5 ; ) which offers both data about metabolic processes and ( gene ) regulatory events incorporated from the corresponding regulondb release 8.7 .the extensive metadata allows for the assignment of the vertices to one of the three functional domains .details of this process and a detailed characterization of the resulting model will be described elsewhere .the corresponding graph representation consists of vertices and directed edges .since we are interested in the propagation of a signal between the domains , in the following we will refer to the domains of the source and target vertices of the edge as _ source domain _ , sd , and _ target domain _ , td , respectively .the metadata can be used to assign properties to the nodes and edges of the graph beyond the domain structure , some of which are used in the following analysis , namely in the construction of the propagation rules of the system and of the randomization schemes .we distinguish between biological categories of edges ( capturing the diverse biological roles of the edges ) and the logical categories ( determining the rules of the percolation process ) . according to their biological role in the system , both vertices and edgesare assigned to a _biological _ category ; we abbreviate the _ biological category of a vertex _ as bcv and the _ biological category of an edge _ as bce ( for details see supplementary materials ) .each of the eight bces can then be mapped uniquely to one of only three _ logical categories of an edge _ , lce , which are of central importance for the spreading dynamics in our system : c , conjunct : : the target vertex of an edge with this logical ` and ` property depends on the source node , i.e. , it will fail once the source node fails . for example, for a reaction to take place , all of its educts have to be available .d , disjunct : : edges with this logical ` or ` property are considered redundant in the sense that a vertex only fails if the source vertices of all of its incoming d - edges fail . 
for instance , a compound will only become unavailable once all of its producing reactions have been canceled .r , regulation : : edges of this category cover different kinds of regulatory events ( described in detail in the supplementary materials ) .as shown below , in terms of the propagation dynamics we treat these edges similar to the conjunct ones .next we describe the dynamical rules for the propagation of an initial perturbation in the network in terms of the logical categories of an edge ( lce ) , which distinguish between the different roles a given edge has in the update of a target vertex .every vertex is assigned a boolean state variable ; since we intend to mimic the propagation of a perturbation ( rather than simulate a trajectory of actual biological states ) we identify the state with _ not yet affected by the perturbation _ while the state corresponds to _ affected by the perturbation_. we stress that the trajectory does not correspond to the time evolution of the abundance of gene products and metabolic compounds , but the rules have been chosen such that the final set of affected nodes provides an estimate of all nodes potentially being affected by the initial perturbation .a node not in this set is topologically very unlikely of being affected by the perturbation at hand ( given the biological processes contained in our model ) .a stepwise update can now be defined for vertex with in - neighbours in order to study the spreading of perturbations through the system by initially switching off a fraction of vertices : {.5\columnwidth}{ } \\ 0 & \text{otherwise } \end{cases}\end{aligned}\ ] ] where is if is connected to via a c - link and otherwise ; and are defined analogously .thus , a vertex will be considered unaffected by the perturbation if none of its in - neighbours connected via either a or an edge have failed ( regardless of the sign of the regulatory interaction ) , and at least one of its in - neighbours connected via a edge is still intact . with an additional ruleit is ensured that an initially switched off vertex stays off .the choice of the update rules ensures that the unperturbed system state is conserved under the dynamics : . as a side remark ,the spreading of a perturbation according to the rules defined above could also be considered as an epidemic process with one set of connections with a very large , and a second set of connections with a very low probability of infection .in systems which can be described without explicit dependencies between its constituents but with a notion of functionality that coincides with connectivity , percolation theory is a method of first choice to investigate the system s response to average perturbations of a given size that can be modelled as failing vertices or edges the fractional size of the giant connected component as a function of the occupation probability of a constituent typically vanishes at some critical value , the percolation threshold . in the following, we will mostly use the complementary quantity so that can be interpreted as the critical size of the initial attack or perturbation of the system .the strong fluctuations of the system s responses in the vicinity of this point can serve as a proxy for the percolation threshold , which is especially useful in finite systems in which the transition appears smoothed out . in our analysis , the susceptibility , where is the size of the largest cluster , is used . 
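A minimal implementation of the update rule and of the susceptibility is sketched below in Python. The toy graph, the edge categories, and the use of the surviving-set size instead of the largest weakly connected component are simplifications for illustration; the susceptibility convention chi = (<S^2> - <S>^2)/<S> is one common choice, adopted here because the exact formula was lost from the excerpt.

```python
# Sketch: propagation of a perturbation with conjunct (C), disjunct (D) and
# regulatory (R) in-edges, following the update rule described above, plus a
# susceptibility chi = (<S^2> - <S>^2)/<S> over repeated random perturbations.
# The graph, categories and domains are an illustrative toy, not the E. coli data.
import random
from collections import defaultdict

def cascade(in_edges, nodes, seed_off):
    """in_edges[v] = list of (u, cat) with cat in {'C', 'D', 'R'}.
    Returns the set of nodes still unaffected (state 1) in the frozen state."""
    state = {v: 0 if v in seed_off else 1 for v in nodes}
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if state[v] == 0:
                continue                      # once affected, stays affected
            cd_ok = all(state[u] for u, c in in_edges[v] if c in ("C", "R"))
            d_in = [state[u] for u, c in in_edges[v] if c == "D"]
            d_ok = (not d_in) or any(d_in)    # vacuously true without D-edges
            if not (cd_ok and d_ok):
                state[v] = 0
                changed = True
    return {v for v in nodes if state[v]}

def susceptibility(sizes):
    mean = sum(sizes) / len(sizes)
    var = sum(s * s for s in sizes) / len(sizes) - mean ** 2
    return var / mean

if __name__ == "__main__":
    # toy: gene g1 enables (C) reactions r1, r2, which redundantly (D) produce
    # metabolite m1; m1 regulates (R) gene g2.
    nodes = ["g1", "r1", "r2", "m1", "g2"]
    in_edges = defaultdict(list)
    in_edges["r1"].append(("g1", "C"))
    in_edges["r2"].append(("g1", "C"))
    in_edges["m1"] += [("r1", "D"), ("r2", "D")]
    in_edges["g2"].append(("m1", "R"))
    rng = random.Random(0)
    sizes = []
    for _ in range(200):                      # random single-node perturbations
        seed = {rng.choice(nodes)}
        sizes.append(len(cascade(in_edges, nodes, seed)))
    print("mean surviving size:", sum(sizes) / len(sizes))
    print("susceptibility chi :", round(susceptibility(sizes), 3))
```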
upon the introduction of explicit dependencies between the system s constituents ,the percolation properties can change dramatically .the order parameter no longer vanishes continuously but typically jumps at in a discontinuous transition as cascades of failures fragment the system .a broader degree distribution now enhances a graph s vulnerability to random failures , in opposition to the behavior in isolated graphs .details of the corresponding theoretical framework have been worked out by parshani _ _ et al . _ _ , son _ _ et al . _ , baxter __ et al . __ and more recently the notion of networks of networks has been included .there have also been attempts to integrate this class of models into the framework of multilayer networks .in addition to random node failure other procedures for initial node removal have been explored , e.g. , node removal with respect to their degree ( targeted attacks ) .currently , two notions of _ localized _ attacks have been described .attacks of the first sort are defined on spatially embedded networks and are local with respect to a distance in this embedding , i.e. in a geographical sense .the second approach considers locality in terms of connectivity : around a randomly chosen seed , neighbours are removed layer by layer . in contrast , as described below in our approach , attacks are localized with respect to the three network domains , while within the domains nodes are chosen randomly .at this point we would like to shortly comment on the applicability of the mathematical concepts of interdependent networks to real - world data .aiming at analytical tractability , typical model systems need to choose a rather high level of abstraction .while certainly many systems can be accurately addressed in that way , we argue that especially in the case of biological systems the theoretical concepts can require substantial adjustment to cover essential properties of the system at hand .when asking for the systemic consequences of interdependency , the distinction between several classes of nodes and links may be required .effectively , some classes of links may then represent simple connectivity , while others can rather be seen as dependence links . in biology ,such dependencies are typically mediated by specific molecules ( e.g. , a small metabolite affecting a transcription factor , or a gene encoding an enzyme catalyzing a biochemical reaction ) . such implementations of dependence links are no longer just associations and it is hard to formally distinguish them from the functional links . in contrast to the explicitly alternating percolation and dependency steps in typical computational models in which the decoupling of nodes from the largest component yields dependent nodes to fail , in our directed model both , connectivity and dependency links are evaluated in every time step and ( apart from nodes failing due to dependency ) only fully decoupled vertices cause further dependency failures .due to the functional three - domain partition of our _ e. coli _ gene regulatory and metabolic network reconstruction , we have the possibility to classify perturbations not only according to their size , but also with respect to their localization in one of the domains comprising the full interdependent system , thereby enabling us to address the balance of sensitivity and robustness of the interdependent network of gene regulation and metabolism . 
herewe introduce the concept of network response to localized perturbations analysis .this analysis will reveal that perturbations in gene regulation affect the system in a dramatically different way than perturbations in metabolism .thus we study the response to _ localized _ perturbations .we denote such perturbations by , where is the domain , in which the perturbation is localized ( , with representing the total network , i.e. , the case of non - localized perturbations ) .the perturbation size is measured in fractions of the total network size .a perturbation thus is a perturbation in the metabolic domain with ( on average ) nodes initially affected .note that sizes of such localized perturbations are limited by the domain sizes , e.g. , for a perturbation in the gene regulatory domain .after the initial removal of a fraction of the vertices from the domain the stepwise dynamics described above will lead to the deactivation of further nodes and run into a frozen state in which only a fraction of the vertices are unaffected by the perturbation ( i.e. , are still in state ) .in addition to being directly affected by failing neighbors , in the process of network fragmentation nodes may also become parts of small components disconnected from the network s core , and could in this sense be considered non - functional ; we therefore also monitor the relative sizes of the largest ( weakly ) connected component in the frozen state , , for different initial perturbation sizes . in the limit of infinite system size we could expect a direct investigation of as a function of to yield the critical perturbation size at which vanishes . in our finite system, however , we have to estimate ; following radicchi and castellano and radicchi we measure the fluctuation of which serves as our order parameter and look for the peak position of the susceptibility as a function of parameter in order to estimate the transition point from the finite system data .supplementary figure [ fig : overview ] schematically illustrates the analysis .in order to interpret the actual responses of a given network to perturbations , one usually contrasts them to those of suitably randomized versions of the network at hand .thereby , the often dominant effect of the node degree distribution of a network can be accounted for and the effects of higher - order topological features that shape the response of the network to perturbations can be studied systematically . the same is true for the localized perturbation response analysis introduced here .in fact , due to the substantially larger number of links from gene regulation to metabolism ( both , directly and via the interface component of the interdependent network ) than from metabolism to gene regulation we can already expect the response to such localized perturbations to vary . herewe employ a sequence of ever more stringent randomization schemes to generate sets of randomized networks serving as null models for the localized perturbation response analysis . in all of the four schemes the edge - switching procedure introduced by maslov and sneppen employed which conserves the in- and out - degrees of all vertices .our most flexible randomization scheme ( domain ) only considers the domains of the source and target vertices of an edge ( sd and td ) : only pairs of edges are flipped which share both , the source and the target domain ( e.g. 
, both link a vertex in the metabolic domain to a vertex in the interface ) .the remaining three randomization schemes all add an additional constraint .the domain_lce randomization further requires the edges to be of the same logical categories of an edge ( i.e. , c , d , or r ) , while the domain_bcv scheme only switches edges whose target vertices also share the same biological category of a vertex , bcv .the strictest randomization , domain_bce , finally , only considers edges with , additionally , the same biological category of an edge , bce. a tabular overview of the four schemes is given in supplementary table [ tab : randschemes ] .the main feature of our reconstructed network , the three - domain structure based on the biological role of its constituents , allows us to study the influence of localizing the initial perturbation .thus , although we will not focus on ( topological ) details of the graph here ( which will be presented elsewhere ) , already from the vertex and edge counts in figure [ fig : sketch ] we see that the domains are of different structure . while the regulatory and the metabolic subgraphs , and have average ( internal ) degrees of about and , the interface subgraph , , is very sparse with and we can expect it to be fragmented .hence , in the following we decide to only perturb in and . in a first step we sample some full cascade trajectories in order to check our expectation of different responses of the system to small perturbations applied in either or ; two rather large values of chosen and the raw number of unaffected nodes is logged during the cascade .indeed , already this first approach implies a different robustness of the gene regulatory and the metabolic domains in terms of the transmission of perturbation cascades to the other domains .cascades seeded in the metabolic domain of the network tend to be rather restricted to this domain , while the system seems much more susceptible to small perturbations applied in the gene regulatory domain .this effect can be seen both in the overall sizes of the aggregated cascades as well as in the domain which shows the largest change with respect to the previous time step , which we indicate by black markers in figure [ fig : states ] .more sample trajectories are shown in the supplementary materials and , although they illustrate occasionally large fluctuations between the behaviors of single trajectories , they are consistent with this first observation . they also show that considerably larger metabolic perturbations are needed for large cascades and back - and - forth propagation between domains to emerge .after this first glance at the system we aim for a more systematic approach and apply our analysis as described above : we compute cascade steady - states but now we choose the largest ( weakly ) connected component as the order parameter and compute the susceptibility according to equation , the peak - position of which , when considered as a function of , we use as a proxy for the perturbation size at which the interdependent system breaks down .the results for different initially perturbed domains illustrate that , indeed , a considerably lower ( i.e. , larger critical perturbation size ) is estimated in the case of metabolic perturbations compared to regulatory or non - localized ones ( figure [ fig : suscept ] , panel a ) . for each pointwe average runs for the corresponding set of parameters . 
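the peak-based estimate of the transition point can be made concrete with a short numerical sketch. the susceptibility is assumed here to be the standard finite-size ratio chi = (<s^2> - <s>^2)/<s> computed over independent runs, in the spirit of radicchi and castellano; the normalization and the function names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def susceptibility(order_params):
    """Finite-size susceptibility chi = (<S^2> - <S>^2) / <S>, computed from
    samples of the order parameter (e.g. the relative size of the largest
    weakly connected component) collected over many independent runs."""
    s = np.asarray(order_params, dtype=float)
    return (np.mean(s ** 2) - np.mean(s) ** 2) / np.mean(s)

def estimate_pc(p_values, runs_per_p):
    """Estimate the critical perturbation size as the location of the
    susceptibility peak.  runs_per_p[i] holds the order-parameter samples
    (e.g. 500 runs) obtained for perturbation size p_values[i]."""
    chi = np.array([susceptibility(runs) for runs in runs_per_p])
    return p_values[int(np.argmax(chi))], chi
```

scanning a grid of perturbation sizes and reading off the argmax of chi mimics the peak-position procedure used for the susceptibility curves discussed above.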
in order to assess whether the above - described behavior is due to specific properties of the network we use the sets of randomized graphsfor each of the four randomization schemes we prepared graph instances and repeated the analysis for each of them as done before for the single original graph .the corresponding results for the susceptibility ( figure [ fig : suscept ] , panels b e ) yield two major observations : firstly , metabolic perturbations still lead to , albeit only slightly , higher estimates ( with exception of domain randomization ) .thus , the system s tendency to be more robust towards metabolic perturbations is largely preserved .secondly , we see that overall the original network seems to be much more robust than the randomized networks ; very small perturbations are sufficient to break the latter ones .the robustness of the original graph , thus , can not be solely due to the edge and vertex properties kept in the randomization schemes .the susceptibility of the integrative _ e. coli _ model and its randomized versions for perturbations in different domains .the results for the original graph are shown in the _ top _ frame , while the subsequent frames show results for the four different randomization schemes with the least strict on top and the strictest one at the bottom .the original system is more robust than its randomized versions ; perturbations in the metabolism consistently need to be larger than in the regulatory part in order to reach the maximum susceptibility . ] finally , let us focus on the practical aspect of these findings . beyond the careful statistical analysis described above , a quantity of practical relevance is the average size of the unaffected part of the system under a perturbation . for this purpose, we examine the fractions of unaffected vertices , , after cascades emanating from perturbations of different sizes and seeded in different domains , regardless of the resulting component structure and for both , the original graph and the shuffled ones ( figure [ fig : frac ] ) .mean fractions ( averaged over runs ) of unaffected vertices after initial perturbations of different sizes seeded in the gene regulatory ( filled green dots ) and in the metabolic domain ( filled red squares ) .the curves at the bottom show the averaged results over shuffles for each randomization scheme . ]the number of unaffected vertices for the real network is much larger than for all four randomization schemes , suggesting a strong overall robustness of the biological system .distinguishing , however , between the metabolic and the gene regulatory components reveals that the metabolic part is substantially more robust than the regulatory part ( for not too large initial perturbations , ) .we investigated the spreading of perturbations through the three domains of a graph representation of the integrated system of _e. 
coli _ s gene regulation and metabolism .our results quantify the resulting cascading failures as a function of size and localization of the initial perturbation .our findings show that the interdependent network of gene regulation and metabolism unites sensitivity and robustness by showing different magnitudes of damage dependent on the site of perturbation .while the interdependent network of these two domains is in general much more robust than its randomized variants ( retaining domain structure , degree sequence , and major biological aspects of the original system ) , a pronounced difference between the gene regulatory and metabolic domain is found : small perturbations originating in the gene regulatory domain typically trigger far - reaching system - wide cascades , while small perturbations in the metabolic domain tend to remain more local and trigger much smaller cascades of perturbations . in order to arrive at a more mechanistic understanding of this statistical observation ,we estimated the percolation threshold of the system , , and found that it is much lower ( i.e , larger perturbations , , are required ) for perturbations seeded in the metabolic domain than for those applied to the gene regulatory domain .this is in accordance with the intuition that the metabolic system is more directly coupled to the environment ( via the uptake and secretion of metabolic compounds ) than the gene regulatory domain .the distinct perturbation thresholds therefore allow for implementing a functionally relevant balance between robustness and sensitivity : the biological system can achieve a robustness towards environmental changes , while via the more sensitive gene regulatory domain it still reacts flexibly to other systemic perturbations . discovering this design principle of the biological system required establishing a novel method of analyzing the robustness of interdependent networks , the network response to localized perturbations : an interdependent network can have markedly different percolation thresholds , when probed with perturbations localized in one network component compared to another .lastly , we would like to emphasize that the application of the theoretical concepts of interdependent networks to real - life systems involves several non - trivial decisions : in the vast majority of ( theoretical ) investigations , two definitions of interdependent networks coincide : the one derived from a distinction between dependency links and connectivity - representing links and the one based on two functionally distinguishable , but interconnected subnetworks . herewe have three classes of nodes : those involved in gene regulation , metabolic nodes , and nodes associated with the ( protein ) interface between these two main domains .these nodes are interconnected with ( functionally ) different classes of links .these link classes are necessary to define meaningful update rules for perturbations . as a consequence , the notion of dependency links vs. connectivity links is no longer applicable .we expect that such adjustments of the conceptual framework will often be required when applying the notion of interdependent networks to real - life systems .as mentioned above , one major task when dealing with biological data is to abstract from the minor but keep the essential details ; we have outlined that in this study we chose to keep a rather high level of detail . 
with only incomplete information available ,a challenge is to find the right balance between radical simplifications of systemic descriptions and an appropriate level of detail still allowing for a meaningful evaluation of biological information .here we incorporate high level of detail in the structural description , distinguishing between a comparatively large number of node and link types .this rich structural description , together with a set of update rules motivated by general biological knowledge , allows us to assess the dynamical / functional level with the comparatively simple methods derived from percolation theory .an important question is , whether the analysis of the fragmentation of such a network under random removal of nodes can provide a reliable assessment of functional properties , since the response of such a molecular network clearly follows far more intricate dynamical rules than the percolation of perturbations can suggest .a future step could include the construction of a boolean network model for the full transcriptional regulatory network and the connection of this model to flux predictions obtained via flux balance analysis , a first attempt of which is given in samal and jain ( where the model of covert __ et al . __ with still fewer interdependence links has been used ) .our perturbation spreading approach might help bridging the gap between theoretical concepts from statistical physics and biological data integration : integrating diverse biological information into networks , estimating response patterns to systemic perturbations and understanding the multiple systemic manifestations of perturbed , pathological states is perceived as the main challenge in systems medicine ( see , e.g. , bauer __ et al . __ ) .concepts from statistical physics of complex networks may be of enormous importance for this line of research . while the simulation of the full dynamics is still problematic as our knowledge of the networks is still incomplete , our present strategy extracts first dynamical properties of the interdependent networks . at a later time point, we can expect qualitatively advances from full dynamical simulations , however , dependent on the quality of the data sets . on the theoretical side, future studies might shift the focus onto recasting the system into an appropriate spreading model , e.g. , in the form of an unordered binary avalanche , or as an instance of the linear threshold model with a set of links with a very high and a second set with a very low transmission probability ( c / r and d - links , respectively ) .radicchi presents an approach for the investigation of the percolation properties of finite size interdependent networks with a specific adjacency matrix with the goal of loosening some of the assumptions underlying the usual models ( e.g. , infinite system limit , graphs as instances of network model ) . while this formalism allows for the investigation of many real - world systems there are still restrictions as to the possible level of detail . 
in our special case, for instance, a considerable amount of information would be lost if the system were restricted to vertices with connections in both the c/r- and d-layers. the existence of different percolation thresholds for localized perturbations in interdependent networks may reveal itself as a universal principle for balancing sensitivity and robustness in complex systems. the application of these concepts to a wide range of real-life systems is required to make progress in this direction. s.b. and m.h. acknowledge the support of deutsche forschungsgemeinschaft (dfg), grants bo 1242/6 and hu 937/9. m.h. and s.b. designed and supervised the study; d.k. and a.g. performed the reconstruction, simulations and analyses; and d.k., s.b. and m.h. wrote the manuscript. the authors declare no competing financial interests. supplementary material to: the interdependent network of gene regulation and metabolism is robust where it needs to be. here, we provide supplementary information to the main manuscript: 1. a description of the analysis including a schematic overview, 2. an explanation of the notation of vertex and edge categories, 3. tables summarizing the four custom-built randomization schemes, the quantities and parameters shown/used in the particular figures, as well as the introduced vertex and edge categories, and 4. a collection of sample trajectories, to be compared to fig. [fig:states] in the main manuscript. the network response to localized perturbations analysis presented here (see figure [fig:overview]) is a multi-step method entailing several runs of the perturbation algorithm and the statistical evaluation of these runs. more precisely, for each analysis 500 runs of the perturbation algorithm have been performed and the set of the remaining vertices has been evaluated, e.g., in terms of the susceptibility (part a in figure [fig:overview]).
for a single run of the perturbation algorithm, a domain has to be chosen in which a perturbation of size (measured as a fraction of vertices) will be applied; based on this choice the set of initially perturbed vertices is randomly selected. the state of the system can also be described as a vector of boolean state variables, where 0 denotes a perturbed vertex and 1 an unaffected one. running the perturbation dynamics described in the main manuscript will (probably) cause the failure of further vertices, resulting in a time series of affected vertices (or, equivalently, the corresponding trajectory). the size of the affected network after propagation steps is obtained from the set of affected vertices; in the asymptotic regime, the size of the unaffected network and the size of the largest (weakly) connected component of the unaffected network are computed (figure [fig:overview], part b). randomized networks (we used sets of instances for each of the randomization schemes) can be passed to the algorithm instead (figure [fig:overview], part c). [figure [fig:overview]: schematic overview of the analysis: (a) perturbation of the original network and statistical evaluation over multiple runs in terms of the susceptibility and the variance of the order parameter, (b) evaluation of the unaffected network and its largest connected component for single runs, (c) the same pipeline applied to the randomized networks.]
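the randomized networks entering part c are generated with the constrained edge-switching schemes described in the main text. the sketch below shows one plausible realization of such a scheme as a degree-preserving double edge swap; the attribute names ('domain', 'lce', 'bce', 'bcv') and the rejection logic are illustrative assumptions, not the authors' implementation.

```python
import random
import networkx as nx

def constrained_edge_swap(G, n_swaps, edge_keys=(), target_keys=(),
                          max_tries=10**6, seed=0):
    """Degree-preserving double edge swaps on a directed networkx graph G.

    A candidate pair of edges (u1, v1), (u2, v2) is rewired to (u1, v2),
    (u2, v1) only if the sources share their 'domain' node attribute, the
    targets share theirs, the two edges agree on every attribute listed in
    edge_keys (e.g. the logical or biological edge category), and the targets
    agree on every node attribute in target_keys (e.g. the biological vertex
    category).  In- and out-degrees of all vertices are conserved.
    """
    rng = random.Random(seed)
    edges = list(G.edges())
    done = tries = 0
    while done < n_swaps and tries < max_tries:
        tries += 1
        (u1, v1), (u2, v2) = rng.sample(edges, 2)
        if u1 == v2 or u2 == v1 or G.has_edge(u1, v2) or G.has_edge(u2, v1):
            continue  # avoid self-loops and multi-edges
        if (G.nodes[u1].get("domain") != G.nodes[u2].get("domain")
                or G.nodes[v1].get("domain") != G.nodes[v2].get("domain")):
            continue  # base scheme: conserve source and target domain
        d1, d2 = dict(G.edges[u1, v1]), dict(G.edges[u2, v2])
        if any(d1.get(k) != d2.get(k) for k in edge_keys):
            continue  # stricter schemes: conserve edge categories
        if any(G.nodes[v1].get(k) != G.nodes[v2].get(k) for k in target_keys):
            continue  # conserve target vertex category if requested
        G.remove_edge(u1, v1); G.remove_edge(u2, v2)
        G.add_edge(u1, v2, **d1); G.add_edge(u2, v1, **d2)
        edges[edges.index((u1, v1))] = (u1, v2)
        edges[edges.index((u2, v2))] = (u2, v1)
        done += 1
    return G

# usage sketch: constrained_edge_swap(G, 10 * G.number_of_edges(), edge_keys=("lce",))
```

passing an empty tuple for edge_keys and target_keys corresponds to the most flexible scheme, while adding keys tightens the constraint in the spirit of the stricter schemes.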
the following notation concerning vertices and edges and their properties has been used. in our integrated network model a vertex is characterized by its biological function, yielding seven unique _biological categories of a vertex_ (bcvs): reaction, compound, gene, protein monomer, protein-protein complex, protein-compound complex, and protein-rna complex (see table [tab:bcvs]). introducing an additional vertex classification facilitates the assignment to one of the three functional domains as well as the characterization of edges. the _domain-related categories of a vertex_ (dcvs) are eight-fold: gene, protein, complex, enzyme, reaction, compound, educt, and product. while two of these categories are one-to-one translations of the corresponding bcvs, a further domain-related category only comprises vertices of a single bcv, but the inverse does not hold; the remaining five categories are ambiguous assignments of bcvs to dcvs and vice versa. the complete mapping of bcvs onto dcvs is given in table [tab:dcvs]. an edge is characterized by its source and target vertices and their corresponding domains (source domain, sd, and target domain, td), as well as the edge's _logical category_ and its _biological category_; an edge is thus given by its ordered vertex pair, which determines sd and td. the _logical category of an edge_ (lce) determines, qualitatively speaking, whether a perturbation will propagate along this edge via a logical 'and' or a logical 'or'; the three categories are conjunct (c), disjunct (d) and regulation (r). as an illustration of the potential linkages, two case examples are presented in figure [fig:biologicalcaseexamples]. [figure [fig:biologicalcaseexamples]: two biological case examples with edges labeled by their logical categories c, d and r: a reaction converting succinate, coenzyme a and atp into succinyl-coa, adp and phosphate, catalyzed by a protein complex that is itself regulated by adp, and two alternative enzymes, encoded by menf and entc, that both catalyze the formation of the same product from the same substrate.] the biological categories of an edge (bces) are derived from combinations of the domain-related categories of a vertex (dcvs), plus transport and regulation; the mapping of biological categories of edges onto logical categories of edges is given in table [tab:bces].
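to make the logical edge categories operational, the following python sketch implements one plausible reading of the propagation rules: an active vertex fails as soon as any of its conjunct (c) inputs has failed, or when it has disjunct (d) inputs and all of them have failed; regulation (r) edges are ignored here. this reading, the attribute names and the stopping criterion are assumptions for illustration and not necessarily the exact update rules of the model.

```python
import networkx as nx

def propagate_perturbation(G, seed_nodes, max_steps=10**4):
    """Spread a perturbation over a directed graph whose edges carry a
    logical category 'lce' in {'c', 'd', 'r'}.

    Assumed rule: an active vertex fails if any of its 'c' (conjunct) inputs
    is inactive, or if it has 'd' (disjunct) inputs and all of them are
    inactive.  'r' edges are not evaluated in this simplified sketch.
    Returns the set of vertices still active in the frozen state.
    """
    active = {v: True for v in G.nodes}
    for v in seed_nodes:
        active[v] = False
    for _ in range(max_steps):
        changed = False
        for v in G.nodes:
            if not active[v]:
                continue
            c_in = [u for u, _, d in G.in_edges(v, data=True) if d.get("lce") == "c"]
            d_in = [u for u, _, d in G.in_edges(v, data=True) if d.get("lce") == "d"]
            fails = any(not active[u] for u in c_in) or \
                    (len(d_in) > 0 and all(not active[u] for u in d_in))
            if fails:
                active[v] = False
                changed = True
        if not changed:  # frozen state reached
            break
    return {v for v, a in active.items() if a}
```

the fraction of unaffected vertices and the relative size of the largest weakly connected component of the unaffected subgraph can then be obtained from the returned set, e.g. with nx.weakly_connected_components on the induced subgraph.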
supplementary table [tab:randschemes] gives an overview of the four custom-built randomization schemes in order of their strictness: for each scheme, the degree of freedom is given by the possible pairs of edges available for the randomization, and the conserved quantities of the graph are the source domain (sd), the target domain (td) and, depending on the scheme, the logical category of an edge (lce), the biological category of a vertex (bcv), and the biological category of an edge (bce). supplementary table [tab:bces] maps the biological categories of an edge (bces) onto the logical categories of an edge (lces) and lists the admissible vertex linkages for each of them: gene-protein, protein-complex, compound-complex, enzyme-reaction, educt-reaction, reaction-product, transport, and regulation. here, further sample trajectories are given, similar to the ones in fig. [fig:states] in the main manuscript.
the major biochemical networks of the living cell , the network of interacting genes and the network of biochemical reactions , are highly interdependent , however , they have been studied mostly as separate systems so far . in the last years an appropriate theoretical framework for studying interdependent networks has been developed in the context of statistical physics . here we study the interdependent network of gene regulation and metabolism of the model organism _ escherichia coli _ using the theoretical framework of interdependent networks . in particular we aim at understanding how the biological system can consolidate the conflicting tasks of reacting rapidly to ( internal and external ) perturbations , while being robust to minor environmental fluctuations , at the same time . for this purpose we study the network response to localized perturbations and find that the interdependent network is sensitive to gene regulatory and protein - level perturbations , yet robust against metabolic changes . this first quantitative application of the theory of interdependent networks to systems biology shows how studying network responses to localized perturbations can serve as a useful strategy for analyzing a wide range of other interdependent networks . a main conceptual approach of current research in the life sciences is to advance from a detailed analysis of individual molecular components and processes towards a description of biological _ systems _ and to understand the emergence of biological function from the interdependencies on the molecular level . supported by the diverse high - throughput omics technologies , the relatively recent discipline of systems biology has been the major driving force behind this new perspective which becomes manifest , for example , in the effort to compile extensive databases of biological information to be used in genome - scale models . despite its holistic game plan , however , systems biology frequently operates on the level of subsystems : even when considering cell - wide transcriptional regulatory networks , as , e.g. , in a network motif analysis , this is only one of the cell s networks . likewise , the popular approach to studying metabolic networks in systems biology , constraint - based modeling , accounts for steady - state predictions of metabolic fluxes of genome - scale metabolic networks , which again , is only one of the other networks of the cell . in the analysis of such large networks , systems biology draws its tools considerably from the science of complex networks which provides a mathematical framework especially suitable for addressing interdisciplinary questions . combining the mathematical subdiscipline of graph theory with methods from statistical physics , this new field greatly contributed to the understanding of , e.g. , the percolation properties of networks , potential processes of network formation or the spreading of disease on networks . in the early 2000s , gene regulation and metabolism have been among the first applications of the formalisms of network biology . among the diverse studies of network structure for these systems , the most prominent ones on the gene regulatory side are the statistical observation and functional interpretation of small over - represented subgraphs ( network motifs ) and the hierarchical organization of gene regulatory networks . 
on the metabolic side , the broad degree distribution of metabolic networks stands out , with the caveat , however , that currency metabolites ( like atp or h ) can severely affect network properties , as well as the hierarchical modular organization of metabolic networks . over the last decade , the field of complex networks moved its focus from the investigation of single - network representations of systems to the interplay of networks that interact with and/or depend on each other . strikingly , it turned out that explicit interdependence between network constituents can fundamentally alter the percolation properties of the resulting _ interdependent networks _ , which can show a discontinuous percolation transition in contrast to the continuous behavior in single - network percolation . it has also been found that , contrary to the isolated - network case , networks with broader degree distribution become remarkably fragile as interdependent networks . however , this set of recent developments in network science still lacks application to systems biology . arguably , the most prominent representative of interdependent networks in a biological cell is the combined system of gene regulation and metabolism which are interconnected by various forms of protein interactions , e.g. , enzyme catalysis of biochemical reactions couples the regulatory to metabolic network , while the activation or deactivation of transcription factors by certain metabolic compounds provides a coupling in the opposite direction . although it is well - known that gene regulatory and metabolic processes are highly dependent on one another only few studies addressed the interplay of gene regulation and metabolism on a larger scale and from a systemic perspective . the first two studies have aimed at finding consistent metabolic - regulatory steady states by translating the influence of metabolic processes on gene activity into metabolic flux predicates and incorporating high - throughput gene expression data . this can be considered as an extension of the constraints - based modeling framework beyond the metabolic network subsystem . in the paper of samal and jain , on the other hand , the transcriptional regulatory network of _ escherichia coli _ ( _ e . coli _ ) metabolism has been studied as a boolean network model into which flux predicates can be included as additional interactions . these models were first important steps towards integrating the subsystems of gene regulation and metabolism from a systems perspective . the formalism of interdependent networks now allows us to go beyond these pioneering works on integrative models , by analyzing the robustness of the combined system in terms of the maximal effect a small perturbation can have on such interdependent systems . in particular , the findings can be interpreted in the context of cascading failures and percolation theory . we here undertake a first application of the new methodological perspective to the combined networks of gene regulation and metabolism in _ e. coli_. using various biological databases , particularly as the main core , we have compiled a graph representation of gene regulatory and metabolic processes of _ e. coli _ including a high level of detail in the structural description , distinguishing between a comparatively large number of node and link types according to their biological functionality . 
a structural analysis of this compilation reveals that , in addition to a small set of direct links , the gene - regulatory and the metabolic domains are predominantly coupled via a third network domain consisting of proteins and their interactions . figure [ fig : sketch ] shows this three - domain functional division . details about the data compilation , the network reconstruction and the domain - level analysis are given in grimbs _ _ et al . _ _ . this rich structural description , together with purpose - built , biologically plausible propagation rules allows us to assess the functional level with methods derived from percolation theory . more precisely , we will investigate cascading failures in the three - domain system , emanating from small perturbations , localized in one of the domains . by network response to localized perturbations analysis we will observe below that ( i ) randomized versions of the graph are much less robust than the original graph and ( ii ) that the integrated system is much more susceptible to small perturbations in the gene regulatory domain than in the metabolic one .
in signal processing , a classical uncertainty principle limits the time - bandwidth product of a signal , where is the measure of the support of the signal , and is the measure of the support of its fourier transform ( cf ., e.g. , ) .a very elementary formulation of that principle is whenever and . in the inverse source problem ,the far field radiated by a source is its _ restricted _ ( to the unit sphere ) _ fourier transform _ , and the operator that maps the restricted fourier transform of to the restricted fourier transform of its translate is called the _ far field translation operator_. we will prove an uncertainty principle analogous to , where the role of the fourier transform is replaced by the far field translation operator . combining this principle with a _regularized picard criterion _ , which characterizes the non - evanescent ( i.e. , detectable ) far fields radiated by a ( limited power ) source supported in a ball provides simple proofs and extensions of several results about locating the support of a source and about splitting a far field radiated by well - separated sources into the far fields radiated by each source component .we also combine the regularized picard criterion with a more conventional uncertainty principle for the map from a far field in to its fourier coefficients .this leads to a data completion algorithm which tells us that we can deduce missing data ( i.e. on part of ) if we know _ a priori _ that the source has small support .all of these results can be combined so that we can simultaneously complete the data and split the far fields into the components radiated by well - separated sources .we discuss both ( least squares ) and ( basis pursuit ) algorithms to accomplish this .perhaps the most significant point is that all of these algorithms come with bounds on their condition numbers ( both the splitting and data completion problems are linear ) which we show are sharp in their dependence on geometry and wavenumber .these results highlight an important difference between the inverse source problem and the inverse scattering problem .the conditioning of the linearized inverse scattering problem does not depend on wavenumber , which means that the conditioning does not deteriorate as we increase the wavenumber in order to increase resolution .the conditioning for splitting and data completion for the inverse source problem does , however , deteriorate with increased wavenumber , which means the dynamic range of the sensors must increase with wavenumber to obtain higher resolution .we note that applications of classical uncertainty principles for the one - dimensional fourier transform to data completion for band - limited signals have been developed in . in this classicalsetting a problem that is somewhat similar to far field splitting is the representation of highly sparse signals in overcomplete dictionaries .corresponding stability results for basis pursuit reconstruction algorithms have been established in . the numerical algorithms for far field splitting that we are going to discuss have been developed and analyzed in .the novel mathematical contribution of the present work is the stability analysis for these algorithms based on new uncertainty principles , and their application to data completion . for alternate approaches to far field splitting that however , so far , lack a rigorous stability analysis we refer to ( see also for a method to separate time - dependent wave fields due to multiple sources ) .this paper is organized as follows . 
in the next section we provide the theoretical background for the direct and inverse source problem for the two - dimensional helmholtz equation with compactly supported sources . in section [ sec : regpicard ]we discuss the singular value decomposition of the restricted far field operator mapping sources supported in a ball to their radiated far fields , and we formulate the regularized picard criterion to characterize non - evanescent far fields . in section [ sec : uncertainty ] we discuss uncertainty principles for the far field translation operator and for the fourier expansion of far fields , and in section [ sec : l2corollaries ] we utilize those to analyze the stability of least squares algorithms for far field splitting and data completion .section [ sec : l1corollaries ] focuses on corresponding results for algorithms .consequences of these stability estimates related to conditioning and resolution of reconstruction algorithms for inverse source and inverse scattering problems are considered in section [ sec : conditioning ] , and in section [ sec : analyticexample][sec : numericalexamples ] we provide some analytic and numerical examples .suppose that represents a compactly supported acoustic or electromagnetic source in the plane .then the time - harmonic wave radiated by at _ wave number _ solves the _ source problem _ for the helmholtz equation and satisfies the _ sommerfeld radiation condition _ we include the extra factor of on the right hand side so that both and scale ( under dilations ) as functions ; i.e. , if and , then with this scaling , distances are measured in wavelengths wavelengths . ] , and this allows us to set in our calculations , and then easily restore the dependence on wavelength when we are done . the _ fundamental solution _ of the helmholtz equation ( with ) in two dimensions is so the solution to can be written as a volume potential the asymptotics of the hankel function tell us that where with , and the function is called the _ far field _ radiated by the source , and equation shows that the _ far field operator _ , which maps to is a _ restricted fourier transform _ , i.e. the goal of the inverse source problem is to deduce properties of an unknown source from observations of the far field .clearly , any compactly supported source with fourier transform that vanishes on the unit circle is in the nullspace of the far field operator .we call a _ non - radiating source _ because a corollary of rellich s lemma and unique continuation is that , if the far field vanishes , then the wave vanishes on the unbounded connected component of the complement of the support of .the nullspace of is exactly neither the source nor its support is uniquely determined by the far field , and , as non - radiating sources can have arbitrarily large supports , no upper bound on the support is possible .there are , however , well defined notions of lower bounds .we say that a compact set _ carries _ , if every open neighborhood of supports a source that radiates .convex scattering support _ of , as defined in ( see also ) , is the intersection of all compact convex sets that carry .the set itself carries , so that is the smallest convex set which carries the far field , and the convex hull of the support of the `` true '' source must contain . 
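for later reference, the objects introduced in this section can be written out explicitly. the display below is a sketch under standard conventions for the two-dimensional helmholtz equation with the wavenumber scaled to one; the multiplicative constant in the far field is left unspecified because the normalization used by the authors is not reproduced here.

```latex
% sketch of the standard two-dimensional formulas (wavenumber scaled to 1,
% normalization constants assumed)
\begin{aligned}
-\Delta u - u &= f \quad \text{in } \mathbb{R}^2, \qquad
\lim_{r\to\infty} \sqrt{r}\,\bigl(\partial_r u - i u\bigr) = 0, \\
\Phi(x) &= \tfrac{i}{4}\, H^{(1)}_0(|x|), \qquad
u(x) = \int_{\mathbb{R}^2} \Phi(x-y)\, f(y)\,\mathrm{d}y, \\
u(x) &= \frac{e^{i|x|}}{\sqrt{|x|}} \Bigl( u^\infty(\hat{x}) + O(|x|^{-1}) \Bigr),
\qquad \hat{x} = x/|x|, \\
u^\infty(\hat{x}) &= c \int_{\mathbb{R}^2} e^{-i \hat{x}\cdot y} f(y)\,\mathrm{d}y
 \;=\; c\, \widehat{f}(\hat{x}),
\end{aligned}
```

the last line is the sense in which the far field operator is a restricted fourier transform: it evaluates the fourier transform of the source on the unit circle.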
because two disjoint compact sets with connected complements can not carry the same far field pattern ( cf .* lemma 6 ) ) , it follows that intersects any connected component of , as long as the corresponding source component is not non - radiating . in , an analogous notion , the _ uwscs support _, was defined , showing that any far field with a compactly supported source is carried by a smallest union of well - separated convex sets ( well - separated means that the distance between any two connected convex components is strictly greater than the diameter of any component ) .a corollary is that it makes theoretical sense to look for the support of a source with components that are small compared to the distance between them . here , as in previous investigations , we study the well - posedness issues surrounding numerical algorithms to compute that support .if we consider the restriction of the source to far field map from to sources supported in the ball of radius centered at the origin , i.e. , we can write out a full singular value decomposition .we decompose as where , , span the closed subspace of _ free sources _ , which satisfy and belongs to the orthogonal complement of that subspace ; i.e. , is a non - radiating source . with its continuationto by zero whenever appropriate . ]the restricted far field operator maps where denoting the fourier coefficients of a far field by so that and by parseval s identity , an immediate consequence of is that which has -norm is the source with smallest 2-norm that is supported in and radiates the far field .we refer to as the _ minimal power source _ because , in electromagnetic applications , is proportional to current density , so that , in a system with a constant internal resistance , is proportional to the input power required to radiate a far field .similarly , measures the radiated power of the far field .the squared singular values of the restricted fourier transform have a number of interesting properties with immediate consequences for the inverse source problem ; full proofs of the results discussed in the following can be found in appendix [ app : sn2 ] .the squared singular values satisfy and decays rapidly as a function of as soon as , moreover , the odd and even squared singular values , , are decreasing ( increasing ) as functions of ( ) , and asymptotically where denotes the smallest integer that is greater than or equal to .this can also be seen in figure [ fig : sn2 ] , where we include plots of ( solid line ) together with plots of the asymptote ( dashed line ) for ( left ) and ( right ) .( solid line ) and asymptote ( dashed line ) for ( left ) and ( right).,title="fig:",height=207 ] ( solid line ) and asymptote ( dashed line ) for ( left ) and ( right).,title="fig:",height=207 ] the asymptotic regime in is already reached for moderate values of . 
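the qualitative behaviour of the squared singular values can be checked numerically. the sketch below assumes that, with the wavenumber scaled to one and up to an overall constant, the squared singular value attached to the fourier mode e^{i n theta} is proportional to the integral of J_n(r)^2 r over (0, R); this proportionality is consistent with the discussion above but is an assumption, since the authors' normalization is not reproduced here.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def squared_singular_values(R, n_max):
    """Squared singular values of the restricted Fourier transform on the
    ball of radius R (wavenumber scaled to 1), up to a constant factor:
    s_n^2 ~ int_0^R J_n(r)^2 r dr for the Fourier mode exp(i*n*theta)."""
    return np.array([quad(lambda r, n=n: jv(n, r) ** 2 * r, 0.0, R)[0]
                     for n in range(n_max + 1)])

if __name__ == "__main__":
    R = 20.0
    s2 = squared_singular_values(R, 40)
    # relative to their maximum, the values stay O(1) for n < R and drop
    # rapidly for n > R, which is the behaviour behind the regularized
    # Picard criterion
    for n, value in enumerate(s2 / s2.max()):
        print(n, value)
```

running the script for different radii reproduces the narrowing of the transition region around n of the order of R that is visible in figure [fig:sn2].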
the forgoing yields a very explicit understanding of the restricted fourier transform .for the singular values are uniformly large , while for the are close to zero , and it is seen from as well as from figure [ fig : sn2 ] that as gets large the width of the -interval in which falls from uniformly large to zero decreases .similar properties are known for the singular values of more classical restricted fourier transforms ( see ) .a physical source has _ limited power _ , which we denote by , and a receiver has a _ power threshold _ , which we denote by .if the radiated far field has power less than , the receiver can not detect it .because and the odd and even squared singular values , , are decreasing as functions of , we may define : , if is a far field radiated by a limited power source supported in with , then , for accordingly , is below the power threshold .so the subspace of detectable far fields , that can be radiated by a power limited source supported in is : we refer to as the subspace of _ non - evanescent far fields _ , and to the orthogonal projection of a far field onto this subspace as the _ non - evanescent _ part of the far field .we use the term _ non - evanescent _ because it is the phenomenon of evanescence that explains why the the singular values decrease rapidly for , resulting in the fact that , for a wide range of and , , if is sufficiently large . as function of for different values of .dotted lines correspond to and .,title="fig:",height=207 ] as function of for different values of .dotted lines correspond to and .,title="fig:",height=207 ] this is also illustrated in figure [ fig : n - rpp ] , where we include plots of from for , , and and for varying .the dotted lines in these plots correspond to and , respectively .in the inverse source problem , we seek to recover information about the size and location of the support of a source from observations of its far field . because the far field is a restricted fourier transform , the formula for the fourier transform of the translation of a function : plays an important role .we use to denote the map from to itself given by the mapping acts on the fourier coefficients of as a convolution operator , i.e. , the fourier coefficients of satisfy where and are the polar coordinates of . employing a slight abuse of notation , we also use to denote the corresponding operator from to itself that maps note that is a unitary operator on 2 , i.e. . the following theorem , which we call an _ uncertainty principle for the translation operator _, will be the main ingredient in our analysis of far field splitting .[ th : uncertainty1 ] let such that the corresponding fourier coefficients and satisfy and with , and let .then , we will frequently be discussing properties of a far field and those of its fourier coefficients . the following notation will be a useful shorthand : the notation emphasizes that we treat the representation of the function by its values , or by the sequence of its fourier coefficients as simply a way of inducing different norms .that is , both and describe different norms of the same function on .note that , because of the plancherel equality , , so we may just write , and we write for the corresponding inner product . we will extend the notation a little more and refer to the support of in as its -support and denote by the measure of .we will call the indices of the nonzero fourier coefficients in its fourier series expansion the -support of , and use to denote the number of non - zero coefficients . 
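for orientation, the translation formulas underlying the uncertainty principle can be written out as follows. this is a sketch with the wavenumber scaled to one; the signs in the jacobi-anger expansion depend on the chosen orientation of the angle, so the display should be read as a convention-dependent assumption rather than the authors' exact normalization.

```latex
% translation of a source by c and its effect on the far field (sketch),
% with (|c|, theta_c) the polar coordinates of c and theta the angle of x_hat
\begin{aligned}
\widehat{f(\,\cdot\, - c)}(\xi) &= e^{-i c \cdot \xi}\, \widehat{f}(\xi),
\qquad
(T_c \alpha)(\hat{x}) = e^{-i c \cdot \hat{x}}\, \alpha(\hat{x}), \\
e^{-i c \cdot \hat{x}} &= \sum_{m \in \mathbb{Z}} (-i)^m J_m(|c|)\,
 e^{i m (\theta - \theta_c)}, \\
\bigl(\widehat{T_c \alpha}\bigr)_n &= \sum_{m \in \mathbb{Z}}
 (-i)^{\,n-m} J_{n-m}(|c|)\, e^{-i (n-m)\theta_c}\, \widehat{\alpha}_m .
\end{aligned}
```

in particular the multiplier has modulus one, which is why T_c is unitary, and the action on the fourier coefficients is a convolution with a sequence built from bessel functions of the translation distance.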
with this notation , theorem [ th : uncertainty1 ]becomes [ th : uncertainty2 ] let , and let . then, we refer to theorem [ th : uncertainty2 ] as an uncertainty principle , because , if we could take in , it would yield as stated , is is true but not useful , because and can not simultaneously be finite .could have been radiated by a source supported in an arbitrarily small ball centered at the origin , or centered at , but rellich s lemma and unique continuation show that no nonzero far field can have two sources with disjoint supports .] we present the corollary only to illustrate the close analogy to the theorem 1 in , which treats the discrete fourier transform ( dft ) on sequences of length : if represents the sequence for and its dft , then this is a lower bound on the _ time - bandwidth product_. in donoho and stark present two important corollaries of uncertainty principles for the fourier transform .one is the uniqueness of sparse representations of a signal as a superposition of vectors taken from both the standard basis and the basis of fourier modes , and the second is the recovery of this representation by 1 minimization .the main observation we make here is that , if we phrase our uncertainty principle as in theorem [ th : uncertainty2 ] , then the far field translation operator , as well as the map from to its fourier coefficients , satisfy an uncertainty principle . combining the uncertainty principle with the regularized picard criterion from section [ sec : regpicard ] yields analogs of both results in the context of the inverse source problem .these include previous results about the splitting of far fields from and , which can be simplified and extended by viewing them as consequences of the uncertainty principle and the regularized picard criterion .the proof of theorem [ th : uncertainty2 ] is a simple corollary of the lemma below : [ lmm : tcestimate ] let and let be the operator introduced in and .then , the operator norm of , , satisfies whereas fulfills recalling , we see that is multiplication by a function of modulus one , so is immediate . on the other hand , combining with the last inequality from page 199 of ; more precisely , shows that using hlder s inequality and we obtain that we can improve the dependence on in under hypotheses on and that are more restrictive , but well suited to the inverse source problem .[ th : uncertainty3 ] suppose that , with , and let such that . 
then because the 0-support of is contained in ] , and estimate the second , yielding on the other hand , for , according to the principle of stationary phase ( there are stationary points at ) which shows that for which decays no faster than that predicted by theorem [ th : l2splitls ] .next we consider the numerical implementation of the 2 approach from section [ sec : l2corollaries ] and the 1 approach from section [ sec : l1corollaries ] for far field splitting and data completion simultaneously ( cf .theorem [ th : l2completeandsplitmultiplels ] and theorem [ th : l1completeandsplitmultiple ] ) .since both schemes are extensions of corresponding algorithms for far field splitting as described in ( least squares ) and ( basis pursuit ) , we just briefly comment on modifications that have to be made to include data completion and refer to for further details .given a far field that is a superposition of far fields radiated from balls , for some and , we assume in the following that we are unable to observe all of and that a subset is unobserved .the aim is to recover from and a priori information on the location of the supports of the individual source components , .we first consider the 2 approach from section [ sec : l2corollaries ] and write for the observed far field data and .accordingly , i.e. , we are in the setting of theorem [ th : l2completeandsplitmultiplels ] . using the shorthand and , ,the least squares problem is equivalent to seeking approximations and , , satisfying the galerkin condition the size of the individual subspaces depends on the a priori information on .following our discussion at the end of section [ sec : regpicard ] we choose in our numerical example below . denoting by and the orthogonal projections onto and , respectively ,is equivalent to the linear system p_ip_\omega{{\widetilde\beta}}+ p_ip_1t_{c_1}^*{{\widetilde\alpha}}_1 + \cdots + t_{c_i}^*{{\widetilde\alpha}}_i & \,=\ , p_i\gamma \ , .\end{split}\ ] ] explicit matrix representations of the individual matrix blocks in follow directly from ( see ( * ? ? ?* lemma 3.3 ) for details ) for and by applying a discrete fourier transform to the characteristic function on for .accordingly , the block matrix corresponding to the entire linear system can be assembled , and the linear system can be solved directly .the estimates from theorem [ th : l2completeandsplitmultiplels ] give bounds on the absolute condition number of the system matrix .the main advantage of the 1 approach from section [ sec : l1corollaries ] is that no a priori information on the radii of the balls , , containing the individual source components is required .however , we still assume that a priori knowledge of the centers of such balls is available . using the orthogonal projection onto , the basis pursuit formulation from theorem [ th : l1completeandsplitmultiple ]can be rewritten as accordingly , is an approximation of the missing data segment .it is well known that the minimization problem from is equivalent to minimizing the tikhonov functional \in \ell^2\times\cdots\times\ell^2 \,,\ ] ] for a suitably chosen regularization parameter ( see , e.g. , ( * ? ? ?* proposition 2.2 ) ) .the unique minimizer of this functional can be approximated using ( fast ) iterative soft thresholding ( cf . ) . 
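the soft-thresholding iteration mentioned above can be sketched generically. the code below is a plain ista loop for the tikhonov-type functional 0.5*||Ax - b||^2 + lambda*||x||_1; the matrix A, the data b and the step-size choice are placeholders, and the authors' implementation (which additionally involves the projection onto the observed data segment and a fast, accelerated variant) is not reproduced here.

```python
import numpy as np

def soft_threshold(x, tau):
    """Complex soft-thresholding, the proximal map of tau * ||.||_1."""
    mag = np.abs(x)
    scale = np.maximum(1.0 - tau / np.maximum(mag, 1e-300), 0.0)
    return scale * x

def ista(A, b, lam, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||A x - b||_2^2 + lam*||x||_1.

    A: complex matrix (m x n), b: data vector of length m.  The step size is
    1/||A||_2^2, which guarantees convergence of the iteration."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - b)   # gradient of the data-fit term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

the accelerated (fista-type) variant only adds an extrapolation step between iterations; the thresholding itself is unchanged.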
apart from the projection , which can be implemented straightforwardly , our numerical implementation analogously to the implementation for the splitting problem described in , and also the convergence analysis from carries over .imply that the solution of and is very close to the true split . ]we consider a scattering problem with three obstacles as shown in figure [ fig : numexa1 ] ( left ) , which are illuminated by a plane wave , , with incident direction and wave number ( i.e. , the wave length is ) ..,title="fig:",height=188 ] .,title="fig:",height=188 ] assuming that the ellipse is sound soft whereas the kite and the nut are sound hard , the scattered field satisfies the homogeneous helmholtz equation outside the obstacles , the sommerfeld radiation condition at infinity , and dirichlet ( ellipse ) or neumann boundary conditions ( kite and nut ) on the boundaries of the obstacles .we simulate the corresponding far field of on an equidistant grid with points on the unit sphere using a nystrm method ( cf . ) . figure [ fig : numexa1 ] ( middle ) shows the real part ( solid line ) and the imaginary part ( dashed line ) of . since the far field can be written as a superposition of three far fields radiated by three individual smooth sources supported in arbitrarily small neighborhoods of the scattering obstacles ( cf . ,e.g. , ( * ? ? ?* lemma 3.6 ) ) , this example fits into the framework of the previous sections .we assume that the far field can not be measured on the segment i.e. , .we first apply the least squares procedure and use the dashed circles shown in figure [ fig : numexa1 ] ( left ) as a priori information on the approximate source locations , . more precisely , , , and , and .accordingly we choose , and , and solve the linear system .( left ) , reconstruction of the missing part ( middle ) , and difference between exact far field and reconstructed far field ( right).,title="fig:",height=188 ] ( left ) , reconstruction of the missing part ( middle ) , and difference between exact far field and reconstructed far field ( right).,title="fig:",height=188 ] ( left ) , reconstruction of the missing part ( middle ) , and difference between exact far field and reconstructed far field ( right).,title="fig:",height=188 ] figure [ fig : numexa2 ] shows a plot of the observed data ( left ) , of the reconstruction of the missing data segment obtained by the least squares algorithm and of the difference between the exact far field and the reconstructed far field . againthe solid line corresponds to the real part while the dashed line corresponds to the imaginary part .the condition number of the matrix is .we note that the missing data component in this example is actually too large for the assumptions of theorem [ th : l2completeandsplitmultiplels ] to be satisfied .nevertheless the least squares approach still gives good results .applying the ( fast ) iterative soft shrinkage algorithm to this example ( with regularization parameter in ) does not give a useful reconstruction .as indicated by the estimates in theorem [ th : l1completeandsplitmultiple ] the 1 approach seems to be a bit less stable .hence we halve the missing data segment , consider in the following i.e. 
, , and apply the 1 reconstruction scheme to this data .( left ) , reconstruction of the missing part ( middle ) , and difference between exact far field and reconstructed far field ( right).,title="fig:",height=188 ] ( left ) , reconstruction of the missing part ( middle ) , and difference between exact far field and reconstructed far field ( right).,title="fig:",height=188 ] ( left ) , reconstruction of the missing part ( middle ) , and difference between exact far field and reconstructed far field ( right).,title="fig:",height=188 ] figure [ fig : numexa3 ] shows a plot of the observed data ( left ) , of the reconstruction of the missing data segment obtained by the fast iterative soft shrinkage algorithm ( with ) after iterations ( the initial guess is zero ) and of the difference between the exact far field and the reconstructed far field .the behavior of both algorithms in the presence of noise in the data depends crucially on the geometrical setup of the problem ( i.e. on its conditioning ) .the smaller the missing data segment is and the smaller the dimensions of the individual source components are relative to their distances , the more noise these algorithms can handlewe have considered the source problem for the two - dimensional helmholtz equation when the source is a superposition of finitely many well - separated compactly supported source components .we have presented stability estimates for numerical algorithms to split the far field radiated by this source into the far fields corresponding to the individual source components and to restore missing data segments .analytic and numerical examples confirm the sharpness of these estimates and illustrate the potential and limitations of the numerical schemes .the most significant observations are : * the conditioning of far field splitting and data completion depends on the dimensions of the source components , their relative distances with respect to wavelength and the size of the missing data segment .the results clearly suggest combining data completion with splitting whenever possible in order to improve the conditioning of the data completion problem . *the conditioning of far field splitting and data completion depends on wave length and deteriorates with increasing wave number .therefore , in order to increase resolution one not only has to increase the wave number but also the dynamic range of the sensors used to measure the far field data .in the following we collect some interesting properties of the squared singular values , as introduced in , of the restricted fourier transform from .we first note that implies that the squared singular values from satisfy and simple manipulations using recurrence relations for bessel functions [ eq : recurrencebessel ] ( cf . ,e.g. , ) ) show that , for [ lmm : sumsn2 ] since ( cf . ) , the definition yields the next lemma shows that odd and even squared singular values , , are monotonically decreasing for and monotonically increasing for .[ lmm : monotonicitysn2 ] using the recurrence relations we find that thus , integrating sharp estimates for from , we obtain upper bounds for when . 
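for reference, the classical identities alluded to in [eq:recurrencebessel] and in lemma [lmm:sumsn2] can be stated as follows; these are standard facts about bessel functions of integer order, although whether they are exactly the relations used by the authors is not confirmed here.

```latex
% classical Bessel identities (integer order)
\begin{aligned}
J_{n-1}(x) + J_{n+1}(x) &= \frac{2n}{x}\, J_n(x), &
J_{n-1}(x) - J_{n+1}(x) &= 2\, J_n'(x), \\
\frac{\mathrm{d}}{\mathrm{d}x}\bigl( x^{n} J_n(x) \bigr) &= x^{n} J_{n-1}(x), &
\sum_{n=-\infty}^{\infty} J_n(x)^2 &= 1 .
\end{aligned}
```

the summation identity is what makes the total power of the squared singular values finite and explicitly computable.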
since , , it is sufficient to consider .[ lmm : decaysn2 ] suppose that .then from theorem 2 of we obtain for that substituting this into yields since is monotonically increasing for , we see on the other hand , the squared singular values are not small for .[ th : besselasy ] suppose that , define by , and therefore , and assume .then where the constant depends on the lower bound but is otherwise independent of and . by definition , with the phase function has stationary points at , and vanishes at and .we will apply stationary phase in a neighborhood of each stationary point .the neighborhood must be small enough to guarantee that is bounded from below there .integration by parts will be used to estimate integral in regions where is bounded below .the hypothesis that will guarantee that the union of these two regions covers the whole interval . to separate the two regions ,let , , be a cutoff function satisfying with the independent of .define , , then theorem 7.7.1 of tells us that for any integer with only depending on an upper bound for the higher order derivatives of . for the second inequality we have used andthe fact that all higher derivatives of are bounded by 1 .we will estimate the remainder of the integral using theorem 7.7.5 of , which tells us that , if is the unique stationary point of in the support of a smooth function , and on the support of , then with depending only on the lower bound for and an upper bound for higher derivatives of on the support of .we will set , which is supported in two intervals , one containing and the other containing , so , becomes as long as is chosen so that on the support of .the following lemma suggests a proper choice of .[ lmm : statphase ] let and belong to $ ] then implies that since we deduce consequently we choose and assume that , then lemma [ lmm : statphase ] gives .we use this value of in , i.e. accordingly , now , adding and establishes . the calculation for is analogous with replaced by which has the same phase and hence the same stationary points .the only difference is that the term in at will be rather than 1 .we now combine ( [ eq : estbessel1 ] ) and ( [ eq : estbessel2 ] ) with the equality to obtain , for since equation is only a valid definition of the bessel function when is an integer is not an integer . ], we denote in the following by is smallest integer that is greater than or equal to , so that we can state a convergence result .let and . theorem 2 of establishes that for , the following lemma shows that , under the assumptions of theorem [ th : uncertainty3 ] , the estimate implies .let and , then since we may assume w.l.o.g .that .let .then , i.e. and therefore our assumption implies for that accordingly , this shows that the assumptions of theorem 2 of are satisfied .next we consider and further estimate its right hand side : since , applying and yields whence we prove some elementary inequalities that we havent been able to find in the literature .
starting with far field data of time - harmonic acoustic or electromagnetic waves radiated by a collection of compactly supported sources in two - dimensional free space , we develop criteria and algorithms for the recovery of the far field components radiated by each of the individual sources , and for the simultaneous restoration of missing data segments . although both parts of this inverse problem are severely ill - conditioned in general , we give precise conditions relating the wavelength , the diameters of the supports of the individual source components , the distances between them , and the size of the missing data segments , which guarantee that stable recovery in the presence of noise is possible . the only additional requirement is that a priori information on the approximate location of the individual sources is available . we give analytic and numerical examples to confirm the sharpness of our results and to illustrate the performance of the corresponding reconstruction algorithms , and we discuss consequences for stability and resolution in inverse source and inverse scattering problems . mathematics subject classifications ( msc2010 ) : 35r30 , ( 65n21 ) + keywords : inverse source problem , helmholtz equation , uncertainty principles , far field splitting , data completion , stable recovery + short title : uncertainty principles for inverse source problems
a substantial part of the energy carried by the solar wind can be transferred into the terrestrial magnetosphere ; this transfer is associated with the passage of southward directed interplanetary magnetic fields , bs , past the earth for sufficiently long intervals of time . the energy transfer process has been discussed as a conversion of the directed mechanical energy of the solar wind into magnetic energy stored in the magnetotail of earth s magnetosphere , and its reconversion into thermal and mechanical energy in the plasma sheet , auroral particles , ring current , and joule heating of the ionosphere . increases in the solar wind pressure are responsible for these energy injections and induce global effects in the magnetosphere called geomagnetic storms . the characteristic signature of a geomagnetic storm is a depression of the horizontal component of the earth s magnetic field measured at low- and middle - latitude ground stations . the decrease in the horizontal field component is due to an enhancement of the trapped magnetospheric particle population and , consequently , an enhanced ring current . this perturbation of the h - component can last from several hours to several days ( as described by * ? ? ? ) . geomagnetic storms can consist of four phases : sudden commencement , initial phase , main phase and recovery phase . the sudden commencement , when it exists , corresponds to the moment when the initial impact of the increased solar wind pressure on the magnetopause occurs . the initial phase appears at the ground as a rapid increase of the h - component over less than 1 h , almost simultaneously worldwide . the main phase of the geomagnetic storm lasts a few hours and is characterized by a decrease in the h - component . the recovery phase corresponds to the gradual return of the h - component to its average level . a detailed description of the morphology of magnetic storms is given , for instance , in . the intensity of the geomagnetic disturbance on each day is described by indices . these indices are very useful for providing a global diagnostic of the degree of disturbance . different indices can be used depending on the character and the latitudinal influences of interest . considering only the main latitudinal contributions , the ring current dominates at low and middle latitudes and auroral ionospheric current systems dominate at higher latitudes . kp , ae and dst and their derivations are the most widely used geomagnetic indices . the kp index is obtained from the h - component and is divided into ten levels from 0 to 9 , corresponding to the mean value of the disturbance levels within 3-h intervals observed at 13 subauroral magnetic stations ( see * ? ? ? ) .
however , the k index is the most difficult to be physically interpreted due to its variations be caused by any geophysical current system including magnetopause currents , field - aligned currents , and the auroral electrojets .the minutely ae index ( sometimes minute interval ) is also obtained by the h - component measured from magnetic stations ( 5 to 11 in number ) located at auroral zones and widely distributed in longitude .the ae index provides a measure of the overall horizontal auroral oval current strength .the index most used in low and mid - latitudes is the dst index .it represents the variations of the h - component due to changes of the ring current and is calculated every hour .the dst index is described as a measure of the worldwide derivation of the h - component at mid - latitude ground stations from their quiet days values . at mid - latitude ,the h - component is a function of the magnetopause currents , the ring current and tail currents . calculated the dst index as a average of the records from mid - latitude magnetic stations following , where is a local time h average , is the h - component measured at disturbed days and , on quiet days .other contributions beyond the ring current could be extracted or eliminated with the idea presented by .those authors described the evolution of the ring current by a simple first order differential equation , where .the contribution of the magnetopause currents to is proportional to the square root of the solar wind dynamic pressure ( ) , represents the injection of particles to the ring current , represents the loss of particles with an e - folding time and the constant terms , and are determine by the quiet days values of the magnetopause and ring currents .the dst index is available on the kyoto world data center at http:// wdc.kugi.kyoto-u.ac.jp/dstdir/index.html .it is traditionally calculated from four magnetic observatories : hermanus , kakioka , honolulu , and san juan .these observatories are located at latitudes below which are sufficiently distant from the auroral electrojets .the derivation of the dst index corresponds to three main steps : the removal of the secular variation , the elimination of the sq variation and the calculation of the hourly equatorial dst index ( see http://wdc.kugi.kyoto-u.ac.jp/dstdir/dst2/ondstindex.html ) .the traditional method of calculating the baseline for the quiet day variations uses the five quietest day for each month for each magnetic observatory . in this work, we propose a way to deal with sq variations by suggesting a method using principal component with the wavelet correlation matrix .this method eliminates the disturbed days using a multiscale process . also , we developed an algorithm for extracting the solar quiet variations recorded in the magnetic stations time series , in order words , a way of estimation of the quiet - time baseline . to accomplish this task , we separate the solar diurnal variations using hourly data of the h - component using the technique ( described in * ? ? ?afterward we applied the principal component wavelet analysis to identify the global patterns in the solar diurnal variations .the rest of the paper is organized as follows : section [ the dst index calculation procedure ] is devoted to explain the main issues of the dst index calculation procedure . 
in section [ magnetic data ] the analyzed period and data are presented .section [ methodology ] describes the principal component analysis and it is devoted to introduce the suggested method of principal component analysis ( pca ) using gapped wavelet transform and wavelet correlation .it also establishes the identification of the disturbed days .the results are discussed in section [ results and discussion ] , and finally , section [ summary ] brings the conclusions of this work .recently , reconstructing the dst and removing the quiet - time baseline have been a motivation of several works . despite of these disagreements in the dst constrution ,today , the dst index remains an important tool in the space weather analysis . reconstructed the dst index ( dxt ) following the original formula presented at http://wdc.kugi.kyoto-u.ac.jp /dstdir/ dst2/ondstindex.html .however , they encountered a few issues as : the availability and the data quality , some shifts in the baseline level of the h - component , erroneous data points and some data gaps at all / some magnetic observatories .the dst could not be fully reproduced using the original formula because the inadequate information above effects the treatment of the related issues and therefore remains partly unscientific as described by .a new corrected and extended version ( dcx ) of the dxt index was proposed by .they corrected the dst index for the excessive seasonal varying quiet - time level which was unrelated to magnetic storms as previously discussed in .they also showed that the considerable amount of quiet - time variation is included in the dxt index but none in the dcx index .another issue related to the derivation of the dst index is that no treatment is made to normalize the different latitudinal location of the magnetic observatories .suggested the normalization of the magnetic disturbances at the four dst stations with different latitudes by the cosine of the geomagnetic latitude of the respective station .if no correction is made , they showed for the lowest geomagnetic station , honolulu , the largest deviations and for the highest station , hermanus , the lowest deviations of the four station .the standard deviations reflect the annually averaged effect of the ( mainly ring current related ) disturbances at each station , ( see * ? ? ?* for more details ) .evaluated the effect of using more than four magnetic stations and shorter time intervals to calculate the dst index .the obtained dst index profiles using 12 , 6 or 4 magnetic station did not show significant discrepancies and the best agreement with the standart dst was obtained using magnetic stations located at latitudes lower than in both hemispheres .although , the increase of symmetrically world - wide distributed magnetic stations did not effect significantly the dst index , the longitudinal asymmetries of the ring current contributes for the average disturbances of the four dst stations be systematically different . using an extended network of 17 stations , demonstrated that the local disturbances are ordered according to the station s geographic longitude , where the westernmost station ( honolulu ) presented the largest disturbances and contributions to dst index and the easternmost ( kakioka ) the smallest. 
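the injection - decay description of the ring current quoted in the previous section (an injection term driven by the solar wind and a loss term with an e-folding time, plus a magnetopause-current correction proportional to the square root of the solar wind dynamic pressure) can be integrated with a simple forward euler step. the coefficient values and the synthetic driver below are illustrative assumptions, not values used in the paper.

```python
import numpy as np

def integrate_ring_current(q, p_dyn, dt_h=1.0, tau_h=7.7, b=7.26, c=11.0):
    """Toy Burton-style Dst model:  d(Dst*)/dt = Q - Dst*/tau,
    Dst = Dst* + b*sqrt(P_dyn) - c.

    q      : injection term Q(t) in nT/h (hourly array)
    p_dyn  : solar wind dynamic pressure in nPa (hourly array)
    tau_h, b, c : decay time and pressure-correction coefficients
                  (illustrative values, not those used in the paper)
    """
    dst_star = np.zeros(len(q))
    for i in range(1, len(q)):
        # forward Euler step of the injection-decay equation
        dst_star[i] = dst_star[i - 1] + dt_h * (q[i - 1] - dst_star[i - 1] / tau_h)
    return dst_star + b * np.sqrt(p_dyn) - c

# synthetic 5-day interval: 12 hours of strong injection, then recovery
hours = np.arange(120)
q = np.where((hours >= 24) & (hours < 36), -20.0, 0.0)      # nT/h
p_dyn = np.full(hours.shape, 2.0)                            # nPa
dst = integrate_ring_current(q, p_dyn)
```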
studied the characteristics of the sq variations at a brazilian station and compared to the features from other magnetic stations to better understand the dynamics of the diurnal variations involved in the monitoring of the earth s magnetic field .they used gapped wavelet analysis and the wavelet cross - correlation technique to verify the latitudinal and longitudinal dependence of the diurnal variations .as previously mentioned by , also verified that magnetic stations located at lower latitudes and westernmost ( honolulu and san juan ) presented larger correlation to vassouras than the easternmost stations ( as kakioka ) .some important aspects for the construction of the dst index as described by are : the utilization of the original data , the inspection in time and frequency domains ( removal of diurnal variation ) and the consideration of the distinction between stationary and non - stationary time series ingredients which applies to the secular variation .also as mentioned by , some patterns of the global magnetic disturbance field are well understood and some are not which means that there is still a lot to learn about the magnetosphere , magnetic storm and earth - sun relationship .in this paper , we use ground magnetic measurements to estimate the quiet - time baseline .we select the four magnetic observatories used to calculate the dst index : hermanus ( her ) , kakioka ( kak ) , honolulu ( hon ) , and san juan ( sjg ) , plus other different magnetic observatories reasonably homogeneously distributed world wide .one of these nine chosen stations is vassouras ( vss ) located under the south atlantic magnetic anomaly ( minimum of the geomagnetic field intensity ) .the geomagnetic data use in this work relied on data collections provided by the intermagnet programme ( http://www.intermagnet.org ) .the distribution of the magnetic stations , with their iaga codes , is given in fig .[ fig : mapstations ] . the corresponding codes and locationsare given in table [ table : abbcode ] .these selection of magnetic stations correspond to the same selection used in a previous work o for the same reasons ( exclusion of the major influence of the auroral and equatorial electrojets ) . in this work ,we only use the data interval corresponding to the year 2007 .we also apply the same methodology to identify geomagnetically quiet days used by .we consider quiet days , only those days in which the kp index is not higher than 3 + . as at low latitudes the horizontal component ( h )is mostly affected by the intensity of the ring current , we decided to use only the hourly mean value series of this component .the magnetic stations present available data in cartesian components ( xyz system ) .the conversion to horizontal - polar components ( hdz system ) is very simple ( see * ? ? ?* for more details ) .the system s conversion was performed in all the chosen magnetic stations ..intermagnet network of geomagnetic stations used in this study .[ cols="^,^,^,^,^ " , ] + source : http://wdc.kugi.kyoto-u.ac.jp/igrf/gggm/index.html ( 2010 ) [ table : abbcode ]the method used in this study is based on the principal component analysis ( pca ) using gapped wavelet transform and wavelet correlation to characterize the global diurnal variation behavior . to identify periods of magnetic disturbance, we use the discrete wavelet transform .this technique is employed to analyze the removal of disturbed days from the magnetograms , and consequently , from the reconstructed sq signal . 
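the conversion from cartesian (xyz) to horizontal-polar (hdz) components mentioned in the data section is indeed simple: h is the modulus of the horizontal vector, d its angle east of north, and z is unchanged. a minimal sketch, assuming x points to geographic north and y to the east:

```python
import numpy as np

def xyz_to_hdz(x, y, z):
    """Convert geomagnetic field components from XYZ to HDZ.

    x, y, z : northward, eastward and vertical components (nT).
    Returns H (horizontal intensity, nT), D (declination, degrees east of north), Z (nT).
    """
    h = np.hypot(x, y)                       # horizontal intensity
    d = np.degrees(np.arctan2(y, x))         # declination
    return h, d, z

# example with a single hourly mean vector
h, d, z = xyz_to_hdz(np.array([18000.0]), np.array([-4000.0]), np.array([-14000.0]))
```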
also in this section , a combined methodology using the pca and gapped wavelet transform is briefly described .following , we present an identification method to distinguish the disturbed days using discrete wavelet coefficients . among the several available methods of analysis , pca is a particularly useful tool in studying large quantities of multi - variate data .pca is used to decompose a time - series into its orthogonal component modes , the first of which can be used to describe the dominant patterns of variance in the time series .the pca is able also to reduce the original data set of two or more observed variables by identifying the significant information from the data .principal components ( pcs ) are derived as the eigenvectors of the correlation matrix between the variables .their forms depend directly on the interrelationships existing within the data itself .the first pc is a linear combination of the original variables , which when used as a linear predictor of these variables , explains the largest fraction of the total variance .the second , third pc , etc ., explain the largest parts of the remaining variance .as explained by , the interpretation of the eigenvectors and the eigenvalues can be described as follow , the eigenvectors are the normalized orthogonal basis in phase space , and also , the set of vectors of the new coordinate system in the space , different from the coordinate system of the original variables ; the eigenvalues are the corresponding variance of the distribution of the projections in the new basis . in order to isolate the global contributions of each pcs mode , we applied pca using the wavelet correlation matrix computed by gapped wavelet transform .this wavelet correlation matrix was introduced in .we joined the properties of the pca , which are the compression of large databases and the simplification by the pcs modes , and properties of the wavelet correlation matrix , which is the correlation at a given scale , , in this case , the scale corresponded to the pseudo - period of hours .the wavelet analysis has the following propriety : the larger amplitudes of the wavelet coefficients are associated with locally abrupt signal changes or `` details '' of higher frequency . in the work of and the following work of , a method for the detection of the transition region and the exactly location of this discontinuities due to geomagnetic storms was implemented . in these cases ,the highest amplitudes of the wavelet coefficients indicate the singularities on the geomagnetic signal in association with the disturbed periods . on the other hand ,when the magnetosphere is under quiet conditions for the geomagnetic signal , the wavelet coefficients show very small amplitudes . in this work, we applied this methodology with daubechies orthogonal wavelet function of order 2 on the one minute time resolution with the pseudo - periods of the first three levels of 3 , 6 and 12 minutes .in this section , we will present the results of reconstructed baseline for the global quiet days variation using pca technique implemented with gapped wavelet transform and wavelet correlation .also , we will apply dwt to evaluate the day - by - day level of geomagnetic disturbance using kak magnetic station as reference . 
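a minimal sketch of the pca step just described: band-pass each station's h-component near the 24-hour scale with a continuous wavelet transform, form the correlation matrix of the resulting coefficient series, and take its leading eigenvector as the first mode. the morlet wavelet and the scale selection used here are assumptions for illustration; in particular this sketch uses an ordinary cwt and does not reproduce the gapped wavelet transform of the paper.

```python
import numpy as np
import pywt

def first_mode_at_scale(records, dt_hours=1.0, period_hours=24.0, wavelet="morl"):
    """records : (n_stations, n_samples) array of hourly H-component series (nT).

    Returns the leading-eigenvector station weights and the first-mode signal
    obtained from the correlation matrix of the ~24 h wavelet coefficient series."""
    # scale whose pseudo-period is period_hours for the chosen wavelet
    scale = pywt.central_frequency(wavelet) * period_hours / dt_hours
    coefs = []
    for rec in records:
        c, _ = pywt.cwt(rec - rec.mean(), [scale], wavelet, sampling_period=dt_hours)
        coefs.append(c[0])
    coefs = np.array(coefs)                      # (n_stations, n_samples)
    corr = np.corrcoef(coefs)                    # wavelet correlation matrix
    eigval, eigvec = np.linalg.eigh(corr)        # eigenvalues in ascending order
    w = eigvec[:, -1]                            # first PC (largest variance)
    return w, w @ coefs                          # weights and projected first mode

# toy usage: 9 stations, 30 days of hourly data sharing a diurnal signal
rng = np.random.default_rng(1)
t = np.arange(30 * 24)
records = 15.0 * np.sin(2 * np.pi * t / 24.0) + 5.0 * rng.standard_normal((9, t.size))
weights, sq_mode = first_mode_at_scale(records)
```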
, and at bottom right , the global wavelet spectrum.,title="fig:",width=529 ] + fig .[ fig : kakescalogram ] shows an example of the geomagnetic behavior presented at the june , 2007 magnetogram of kakioka using continuous gapped wavelet transform ( gwt ) .the gwt can be used in the analysis of non - stationary signal to obtain information on the frequency or scale variations and to detect its structures localization in time and/or in space ( see * ? ? ? * for more details ) .it is possible to analyze a signal in a time - scale plane , called so the wavelet scalogram . in analogy with the fourier analysis ,the square modulus of the wavelet coefficient , , is used to provide the energy distribution in the time - scale plane . in the gwt analysis, we can also explore the central frequencies of the time series through the global wavelet spectrum which is the variance average at each scale over the whole time series , to compare the spectral power at different scales .this figure shows the h - component ( top ) , the wavelet square modulus ( bottom left ) and the global wavelet spectrum ( total energy in each scale bottom right ) . in the scalogram ,areas of stronger wavelet power are shown in dark red on a plot of time ( horizontally ) and time scale ( vertically ) .the areas of low wavelet power are shown in dark blue . in fig .[ fig : kakescalogram ] , it is possible to notice peaks of wavelet power on the scalogram at the time scale corresponding to to minutes of period .this periods are associated to pc5 pulsations during disturbed periods . also , it is possible to notice a maximum of wavelet power at the time scale corresponding to harmonic periods of the 24 hours such as 6 , 8 , 12 hours .those periods are related to the diurnal variations .the gwt technique is able to analyze all the informations present on the magnetograms .it is an auxiliary tool to localize on time / space the pc1pc5 pulsations .however , the scalogram provides a very redundancy information which difficult the analysis of each decoupling phenomenum .for that reason , we preferred to use dwt to evaluate the day - by - day level of geomagnetic disturbance .geomagnetically quietest days ( blue ) and most disturbed days ( red ) , the h - component average variation for kak obtained at june , 2007 and the square root wavelet coefficients amplitudes , and with the pseudo - periods of 3 , 6 and 12 minutes.,title="fig:",width=529 ] + fig .[ fig : sqjun ] is composed of two graphs , the first one presents the reconstructed line of the sq variation where the first geomagnetically quietest days of each month are highlighted in blue and the most disturbed days in red and the second one presents the discrete wavelet analysis of the geomagnetic horizontal component obtained at kakioka station , japan .using our criteria of removing disturbed days , we consider as gaps of 3rd , 14th , 21th and 29th day . 
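the day-by-day disturbance evaluation described above uses the first three detail levels of a daubechies-2 discrete wavelet decomposition of the one-minute h-component (pseudo-periods of about 3, 6 and 12 minutes) and flags intervals where the squared coefficients are large. a minimal sketch, with the detection threshold an assumption:

```python
import numpy as np
import pywt

def squared_details(h_minutely, wavelet="db2", levels=3):
    """Squared detail coefficients d1..d3 (finest first) of a 1-min H series."""
    coeffs = pywt.wavedec(h_minutely, wavelet, level=levels)
    return [d ** 2 for d in coeffs[:0:-1]]       # drop the approximation, reverse

def flag_disturbed_days(h_minutely, thresh_factor=5.0):
    """Flag days whose mean squared d1 coefficient exceeds thresh_factor times
    the median over all days (the threshold choice is illustrative)."""
    d1_sq = squared_details(h_minutely)[0]
    n_days = len(h_minutely) // 1440
    # level-1 coefficients cover ~2 samples each, i.e. ~720 coefficients per day
    per_day = np.array_split(d1_sq[: n_days * 720], n_days)
    daily = np.array([seg.mean() for seg in per_day])
    return np.where(daily > thresh_factor * np.median(daily))[0] + 1   # 1-based days

# toy usage: 30 quiet days plus one noisy (disturbed) day
rng = np.random.default_rng(3)
h = 20.0 * np.sin(2 * np.pi * np.arange(30 * 1440) / 1440.0) + rng.standard_normal(30 * 1440)
h[14 * 1440: 15 * 1440] += 30.0 * rng.standard_normal(1440)
print(flag_disturbed_days(h))                    # expected to flag day 15
```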
in our case ,the gapped wavelet technique is very helpful because it reduces two effects : the presence of gaps and the boundary effects due to the finite length of the data , for more details see .the first graph shows the amplitude range between and nt and presents a complex pattern .it is possible to notice that the larger amplitudes of the reconstructed sq signal correspond to the periods between the days 810 , 1315 , 2123 and 2930 .these periods correspond to the disturbed days .table [ table : sq10calmos ] shows these quietest days and most disturbed days of each month set by the _ geoforschungszentrum ( gfz ) potsdam _ through the analysis of the kp index that are highlighted on the second graph .the year of is a representative year of minimum solar activity and it is used in this analysis due to have less disturbed periods ( see our considerations in section [ magnetic data ] ) . by analyzing these highlighted days , we expect to find out if there is a correlation between the days classified as quiet days and a small sq amplitude variation .[ table : sq10calmos ] the second graph shows the discrete wavelet analysis applied to geomagnetic minutely signal from kak using daubechies orthogonal wavelet family 2 . from top to bottom in this graph , the h - component of the geomagnetic field and the first three levels of the square wavelet coefficients denoted by d1 , d2 and d3 .this analysis uses the methodology developed by , and posteriorly applied by and . in order to facilitate the evaluation of the quiet periods obtained by the discrete wavelet analysis applied to geomagnetic signal from kak, we also developed a methodology ( effectiveness wavelet coefficients ( ewc ) ) to interpret the results shown in fig .[ fig : sumsqjun ] .the ewc corresponds to the weighted geometric mean of the square wavelet coefficients per hour .it is accomplished by weighting the square wavelet coefficients means in each level of decomposition as following where is equal to because our time series has one minute resolution .[ c][][0.8] [ c][][0.8] + through fig .[ fig : sumsqjun ] , it is possible to compare the global sq behavior ( top graph ) with the analysis of quiet and disturbed days obtained by one representative magnetic station of medium / low latitudes ( kak bottom graph ) in order to verify situations in which the global sq behavior presents less or more variability .this analysis allows us to validate the quietest days , and evaluate the most disturbed days in order to establish a reliable method of global sq analysis obtained from medium / low latitude magnetic stations influenced only by the ionosphere . 
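the effectiveness wavelet coefficients (ewc) introduced above are described as a weighted geometric mean, per hour, of the mean squared detail coefficients of the first decomposition levels. since the exact weights are not reproduced in the text, the sketch below weights each level in proportion to its number of coefficients per hour; this weighting is an assumption.

```python
import numpy as np
import pywt

def effectiveness_wavelet_coefficients(h_minutely, wavelet="db2", levels=3):
    """Hourly EWC: a weighted geometric mean of the hourly means of the squared
    detail coefficients d1..d3 of a 1-minute H-component record."""
    details = pywt.wavedec(h_minutely, wavelet, level=levels)[:0:-1]   # [d1, d2, d3]
    n_hours = len(h_minutely) // 60
    # assumed weights: proportional to the number of coefficients per hour (~60/2**j)
    weights = np.array([2.0 ** -j for j in range(1, levels + 1)])
    weights /= weights.sum()
    ewc = np.ones(n_hours)
    for w, d in zip(weights, details):
        d_sq = d ** 2
        chunk = len(d_sq) // n_hours
        hourly_mean = d_sq[: n_hours * chunk].reshape(n_hours, chunk).mean(axis=1)
        ewc *= hourly_mean ** w                  # geometric-mean accumulation
    return ewc
```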
in fig .[ fig : sumsqjun ] , the global sq behavior ( top graph ) shows larger amplitudes during the periods between the days 810 , 1216 , 2223 and 2930 .most of these periods correspond to the days where the ewcs have an increase of their values as shown kak analysis ( bottom graph ) .the increase of the ewcs values occurs during the periods between the days 14 , 810 , 1318 , 2124 and 2930 .the ewcs can help us also to interpret the results obtained in each day , and can help us to evaluate the quietest and most disturbed days measure by the selected magnetic station of medium / low latitudes , kak , as shown in table [ table : sq10calmosdwt ] .[ table : sq10calmosdwt ] the same methodology and analysis comparing the global sq behavior and the ewcs from kak is done for the month of march , september and december,2007 .[ fig : sumsqmar ] shows the comparative of the global sq behavior and the ewcs from kak for the month of march , 2007 .the amplitude range of global sq signal is between and nt , and , it also shows a complex pattern .it is possible to notice that the larger amplitudes of the reconstructed sq signal correspond to the periods between the days 67 , 1117 and 2328 .once more , these periods correspond to the disturbed days .the increase of the ewcs values occurs during the periods between the days 1 , 67 , 1117 , 2328 and 3031 .we can notice that the increase of the amplitude of the reconstructed sq signal correspond to the increase of the ewcs magnitude .[ c][][0.8] [ c][][0.8] + the amplitude range for september , 2007 , is between and nt and the larger amplitudes of the reconstructed sq signal correspond to the periods between the days 17 , 1923 and 2630 , see fig , [ fig : sumsqset ] .the increase of the ewcs values occurs during the periods between the days 13 , 2024 and 2730 . comparing these two analysis , the global and the ewcs, we verify that the kak magnetic behavior represents well the increase of global sq oscillations .[ c][][0.8] [ c][][0.8] + in fig .[ fig : sumsqdec ] , the amplitude range is between and nt and the larger amplitudes correspond to the periods between the days 1523 .also , the increase of the ewcs values occur during the periods between the days 1011 and 1721 .once more , when we compare these two analysis , the global and the ewcs , we verify that the kak magnetic behavior represents well the increase of global sq oscillations .[ c][][0.8] [ c][][0.8] + we observe in figs .[ fig : sumsqjun ] , [ fig : sumsqmar ] , [ fig : sumsqset ] and [ fig : sumsqdec ] , in most of the cases , the major amplitude fluctuations of the reconstructed sq signal correspond to the most disturbed days and minor fluctuations , to the quietest days .however , the diurnal global variability shows a complexity on the amplitude variation pattern even during geomagnetically quiet periods . through this studywe compare the amplitude variation of the reconstructed sq signal to effectiveness wavelet coefficients obtained at kak with the purpose of understanding the complexity of the diurnal global variability .in this work , we suggest an alternative approach for the calculation of the sq baseline using wavelet and pca techniques .this new approach address some issues , such as , the availability and the quality of data , abrupt changes in the level of the h - component , erroneous points in the database and the presence of gaps in almost all the magnetic observatories . 
to fulfill this purpose ,we reconstruct the sq baseline using the wavelet correlation matrix with scale of hours ( pseudo - period ) .the pca / wavelet method uses the global variation of first pca mode that also corresponds to phenomena with periods of hours .this study shows that the largest amplitude oscillation of the reconstructed signal ( sq baseline ) corresponded to the most disturbed days and the smaller oscillations to the quietest days .this result is consistent with the expected sq variations .v. klausner wishes to thanks capes for the financial support of her phd ( capes grants 465/2008 ) and her postdoctoral research ( fapesp 2011/20588 - 7 ) .this work was supported by cnpq ( grants 309017/2007 - 6 , 486165/2006 - 0 , 308680/2007 - 3 , 478707/2003 , 477819/2003 - 6 , 382465/01 - 6 ) , fapesp ( grants 2007/07723 - 7 ) and capes ( grants 86/2010 - 29 , 0880/08 - 6 , 86/2010 - 29 , 551006/2011 - 0 , 17002/2012 - 8 ) .also , the authors would like to thank the intermagnet programme for the datasets used in this work .gonzalez , w. d. , joselyn , j. a. , kamide , y. , kroehl , h. w. , rostoker , g. , tsurutani , b. t. , vasyliunas , v .m. , 1994 .what is a geomagnetic storm ?journal of geophysical research , 99 ( a4 ) , pp .57715792 .kamide , y. , baumjohann , w. , daglis , i. a. , gonzalez , w. d. , grande , m. , joselyn , j. a. , mcpherron , r. l. , phillips , j. l. , reeves , e. g. d. , rostoker , g. , sharma , a. s. , singer , h. j. , tsurutani , b. t. , and vasyliunas , v. m. , 1998 .current understanding of magnetic storms : storm - substorm relationships .journal of geophysical research , 103 ( a8 ) , pp .1770517713 . klausner , v. , papa , a. r. r. , mendes , o. , domingues , m. o. , frick , p. , 2011a. characteristics of solar diurnal variations : a case study based on records from the ground magnetic observatory at vassouras , brazil . arxiv:1108.4594 [ physics.space-ph ] .mendes , o. j. , domingues , m. o. , mendes da costa , a. , cla de gonzalez , a. l. , 2005a .wavelet analysis applied to magnetograms : singularity detections related to geomagnetic storms .journal of atmospheric and solar - terrestrial physics , 67 , pp .18271836 .mendes , o. j. , mendes da costa , a. , domingues , m. o. , 2005b .introduction to planetary electrodynamics : a view of electric fields , currents and related magnetic fields .advances in space research , 35 ( 5 ) , pp .812 - 818 . ,a. , domingues , m. o. , mendes , o. , brum , c. g. m. , 2011 .interplanetary medium condition effects in the south atlantic magnetic anomaly : a case study .journal of atmospheric and solar - terrestrial physics , 73 ( 1112 ) , pp .14781491 .murray , g. w. ; mueller , j. c. ; zwally , h. j. , 1984 matrix partitioning and eof / principal components analysis of antartic sea ice brightness temperatures .greenbelt , md . : national aeronautics and space administration , goddard space flight center , 1 v. , nasa technical memorandum , 83916 .nesme - ribes , e. , frick , p. , sokoloff , d. , zakharov , v. , ribes , j. c. , vigouroux , a. , laclare , f. , 1995 .wavelet analysis of the maunder minimum as recorded in solar diameter data .comptes rendus de lacadmie des sciences .srie ii , mcanique , physique , chimie , astronomie 321 ( 12 ) , 525532 .
this paper describes a methodology for establishing a representative signal of the global magnetic diurnal variation , based on a set of magnetic stations distributed in both longitude and latitude as well as on their magnetic behavior in time . for this purpose , we apply the principal component analysis ( pca ) technique implemented with the gapped wavelet transform and wavelet correlation . the continuous gapped wavelet and wavelet correlation techniques are used to describe the features of the magnetic variations at vassouras ( brazil ) and at other magnetic stations spread around the terrestrial globe . the aim of this paper is to reconstruct the geomagnetic h - component data series taking into account only the diurnal variations ( periods of about 24 hours ) on geomagnetically quiet days . with the developed method , we propose a way to reconstruct the baseline for the quiet day variations ( sq ) from the pca , using the wavelet correlation method to determine the global variation of the first pca mode . the results show that this goal was reached and encourage the use of this approach in other kinds of analysis . magnetogram data , h - component , quiet days , principal component wavelet analysis .
mathematical models are used throughout infectious disease epidemiology to understand the dynamical processes that shape patterns of disease . while early models did not include complex population structure , modeling approaches now frequently let the epidemic spread take place on a network , which enables greater realism than a model in which all individuals mix homogeneously .it does , however , pose many technical problems for model analysis , particularly the question of how heterogeneity in the number of links each individual participates in their degree influences the epidemic . similarly , even though some of the earliest mathematical models of infectious diseases were stochastic , accounting for the chance nature of transmission of infection , much of the applied modeling that followed was deterministic and based on non - linear differential equations .more recent applied work has , however , recognized the importance of using stochastic epidemic models and also of the development of associated methodology .the difficulty in mathematically analyzing models which include both stochastic elements and network structure can be a reason for not including these factors , but we prefer to include them , and subsequently to systematically reduce the complexity of the resulting model .this is the approach we adopt in this paper ; the reduction process being made possible because of the existence of a separation of time - scales : many variables `` decaying '' away rapidly , leaving a few slow variables which govern the medium- to long - term dynamics . compartmental models of epidemics typically assume that the majority of the population starts susceptible to infection , with a small number infectious , who then spread infection to others before recovering .a key distinction is between susceptible - infectious - susceptible ( sis ) dynamics in which recovery returns an individual to the susceptible compartment and susceptible - infectious - recovered ( sir ) dynamics in which recovery removes an individual from a further role in the next epidemic , with the former being used to model e.g. sexually transmitted infections other than hiv and the latter e.g.pandemic influenza . in the theoretical analysis of epidemic models , a crucial quantity corresponds to the basic reproductive ratio described as the expected number of secondary cases infected per primary case early in the epidemic . depending on the value of , either individuals will experience infection in a population of size model is then said to be supercritical or individuals will experience infection in a population of size model is then subcritical thus defining an epidemic threshold . 
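the threshold behavior just described can be made concrete with the classical final-size relation of a homogeneously mixing sir epidemic, z = 1 - exp(-r0 z), whose only root is z = 0 for r0 <= 1 and which acquires a positive root for r0 > 1 (the same relation reappears in the homogeneous deterministic limit later in the paper). a small sketch solving it by fixed-point iteration:

```python
import numpy as np

def final_size(r0, tol=1e-12, max_iter=1000):
    """Largest root z of z = 1 - exp(-r0*z): the attack rate of a homogeneously
    mixing SIR epidemic with basic reproductive ratio r0."""
    z = 1.0 if r0 > 1.0 else 0.0          # start above the nontrivial root
    for _ in range(max_iter):
        z_new = 1.0 - np.exp(-r0 * z)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

for r0 in (0.5, 0.9, 1.0, 1.5, 2.0, 4.0):
    print(f"R0 = {r0:3.1f}   final size = {final_size(r0):.4f}")
# below the threshold R0 = 1 only a vanishing fraction is infected;
# above it a finite fraction of the population experiences infection.
```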
in reality , the contact network on which the disease spreads will often change over the course of the epidemic ; while approaches exist in which the dynamics of both the disease and the network are considered , it is more common to consider two limiting cases .the first of these is a _static _ approach ( also called ` quenched ' ) in which the network is assumed to evolve much more slowly than the disease and which is typically approached analytically through the use of pair approximation and related techniques .the second is a _ dynamic _ limit ( also called ` annealed ' or ` discrete heterogeneous ' ) in which the network is assumed to evolve much more quickly than the epidemic , and can be described by an effective network characterized by its degree distribution , in which all individuals sharing the same degree are considered equivalent .this case can be analyzed through use of a set of degree - indexed differential equations ( often called the ` heterogeneous mean field ' approach ) provided the maximum degree is not too large . when the distribution of degrees in the population is highly variable a situation that appears to be supported empirically it was recognized that the epidemic may not exhibit straightforward critical behavior .this happens because as the population size becomes large , extremely small levels of infectiousness can lead to large epidemic sizes or more accurately speaking the critical level of infectiousness can depend very sensitively on the largest degree , , in the network . the behavior of highly heterogeneous network epidemics near criticality continues to generate interest in both the physics and mathematics literature . in this paperwe investigate the stochastic behavior of heterogeneous network epidemics over time .we study a network in the dynamic limit , characterized by a power - law degree distribution such that the probability of an individual having degree is given by , with , although the method of analysis is applicable to other distributions .this type of network has the property that the basic reproductive ratio , which is found to be proportional to the second moment of the degree distribution , diverges in the limit of infinite populations , leading to the absence of an epidemic threshold .this is evidently not the case when the population and thus the degree cutoff is finite , but the second moment of the distribution can still be extremely large for sufficiently large , and heterogeneity can play an important role in the dynamics of the system . for the case of large but finite population size , we derive a two - dimensional stochastic differential equation ( sde ) approximation to full sir dynamics , which reduces to an analytically solvable one - dimensional system early in the epidemic .we perform simulations using a power - law degree distribution with a maximum degree cutoff , which show that our approach provides a good approximation provided is not too close to the population size .there have , to our knowledge , been no directly comparable studies of this kind . 
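to see how strongly the degree cutoff controls the threshold in the annealed limit, one can compute the first two moments of the truncated distribution p_k proportional to k^(-alpha), 2 < alpha < 3, and the corresponding basic reproductive ratio. the sketch below uses the standard annealed-network expression r0 = (beta/gamma) <k^2>/<k>, consistent with the statement above that r0 is proportional to the second moment of the degree distribution; the paper's exact prefactor is not reproduced here.

```python
import numpy as np

def zipf_moments(alpha, k_max):
    """First and second moments of the truncated distribution p_k ~ k**(-alpha)."""
    k = np.arange(1, k_max + 1, dtype=float)
    p = k ** (-alpha)
    p /= p.sum()
    return (k * p).sum(), (k ** 2 * p).sum()

beta, gamma, alpha = 0.05, 1.0, 2.5
for k_max in (10, 100, 1_000, 10_000, 100_000):
    m1, m2 = zipf_moments(alpha, k_max)
    r0 = (beta / gamma) * m2 / m1           # assumed annealed-network expression
    print(f"K = {k_max:>6}   <k> = {m1:6.3f}   <k^2> = {m2:12.1f}   R0 = {r0:8.2f}")
# for 2 < alpha < 3 the second moment keeps growing with the cutoff K,
# so R0 increases without bound and the effective threshold disappears.
```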
perhaps those closest have been , first the study of finite - size scaling of the sis model near the critical point to produce a phenomenological one - dimensional sde , second , the derivation of a one - dimensional sde for the sis model from first principles , but using an adiabatic approximation that would only be expected to hold near the critical point , and finally the study of a four - dimensional sde for the sir model on a static network that is much more complex than ours and less amenable to analysis .the outline of the paper is as follows . in sec .[ sec : formulation ] we introduce the model , and formulate it first at the level of individuals ( the microscale ) and then at the level of populations , but retaining a stochastic element ( the mesoscale ) .we begin to explore the reduction of the mesoscopic - level sdes in sec . [sec : det ] and complete the process in sec .[ sec : fastvarelim ] . in sec .[ sec : cir ] we perform a further reduction which allows us to make contact with a model which has already been discussed in the literature . in sec .[ sec : size ] these reduced models are used to find the distribution of epidemic sizes , and we conclude with a discussion of our results in sec . [sec : discussion ] .there are two appendices where technical details relevant to secs .[ sec : det ] and [ sec : fastvarelim ] have been relegated .in this section we formulate the model first at the microscale , that is , in terms of individuals , and from this derive a mesoscopic version of the model where the variables are continuous , and represent the fractions of types of individuals who are infected or who are susceptible to infection .the general procedure to do this is reviewed in ref . , and has been previously applied to models of epidemics on networks , albeit in situations where the nodes of the network contained many individuals , rather than just one , as in the present context .as discussed in the introduction , individuals are located at the nodes of a network through which they are exposed to infection from individuals at neighboring nodes .since we work in the dynamic limit , individuals do not alter their number of contacts when acquiring the infection or recovering from it , and the network plays no role other than encoding the information about how many connections to other individuals a given individual possesses .an individual is labeled according to whether it is ( i ) susceptible , infected or recovered , and ( ii ) the degree of the node where it is located .thus the variables at the microscale are , and , .we will assume that the population is closed , i.e. , that , with total population , at all times , so that we can remove one of the variables : .two types of events can occur : the infection of a susceptible individual , whenever an individual of type comes in contact with one of type between individuals and having degrees and , respectively , are governed by a poisson process with rate ; and the recovery of an infectious individual with rate . 
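the two event types above translate directly into a gillespie-type simulation of the microscopic model in the annealed (dynamic) limit. the sketch below uses one common convention for the infection rate of a degree-k susceptible, beta * k * (sum of infective degrees) / (n <k>); the paper's exact normalization of the contact rate is not reproduced in the text, so this constant should be treated as an assumption.

```python
import numpy as np

def gillespie_sir_annealed(degrees, beta, gamma, i0=5, rng=None):
    """Event-driven simulation of SIR in the annealed (dynamic) network limit.

    degrees : array with the degree of each node.
    Returns event times and the total number of infectives after each event."""
    rng = np.random.default_rng() if rng is None else rng
    n, mean_k = len(degrees), degrees.mean()
    state = np.zeros(n, dtype=int)                       # 0 = S, 1 = I, 2 = R
    state[rng.choice(n, size=i0, replace=False)] = 1
    t, times, infected = 0.0, [0.0], [i0]
    while True:
        inf_deg_sum = degrees[state == 1].sum()
        # assumed convention: a degree-k susceptible is infected at rate
        # beta * k * (sum of infective degrees) / (n * <k>)
        inf_rates = np.where(state == 0, beta * degrees * inf_deg_sum / (n * mean_k), 0.0)
        rec_rates = np.where(state == 1, gamma, 0.0)
        rates = inf_rates + rec_rates
        total = rates.sum()
        if total == 0.0:
            break                                        # no infectives left
        t += rng.exponential(1.0 / total)
        node = rng.choice(n, p=rates / total)
        state[node] = 1 if state[node] == 0 else 2       # infection or recovery
        times.append(t)
        infected.append(int((state == 1).sum()))
    return np.array(times), np.array(infected)

# toy run on a truncated power-law degree sequence
rng = np.random.default_rng(2)
k = np.arange(1, 101)
p = k ** -2.5
p = p / p.sum()
degrees = rng.choice(k, size=2_000, p=p).astype(float)
times, infected = gillespie_sir_annealed(degrees, beta=0.4, gamma=1.0, rng=rng)
```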
in this paper we will be interested in the properties of an epidemic that takes place over a much shorter timeframe than demographic processes and hence will ignore the birth and death of individuals , leaving the only interactions between the individuals to be : 1 .infection of an individual at a node of degree .the transition rate for this process is given by where the summation is over infected individuals labeled according to the degree of the node on which they are located .the only arguments of which are shown explicitly are those involving individuals on nodes of degree , since only these change in this process ; those on the right represent the initial state , and those on the left represent the final state .2 . recovery of a infected individual at a node of degree .this proceeds at a rate given by , and so the transition rate for this process is given by the dynamics of the process can be described by writing down an equation for the probability that the system is in a state at time , which will be denoted by .this is the master equation , which equates to the net rate of increase of due to the processes given by eqs .( [ eqn : rate_1 ] ) and ( [ eqn : rate_2 ] ) , and is given by \nonumber \\ & \quad - \sum^k_{k=1 } \left[t_1(s_k-1,i_k+1\vert s_k , i_k)+t_2(s_k , i_k-1\vert s_k , i_k)\right]p(s_k , i_k , t ) .\label{eqn : master}\end{aligned}\ ] ] once again , the dependence of the probability distribution , , on the state variables has been suppressed in the sum over on the right - hand side of this equation .we will refer to the discrete model defined by eqs . as the full microscopic model , since we we shortly derive a mesoscopic model from it , and subsequently further reduce this model .once an initial state has been specified , the model is completely determined , and in principle can be found for any state at any time .this behavior can be explored numerically _ via _ monte carlo simulations ( e.g. using gillespie s method or other approaches ) however our main interest in this paper is in approximating the master equation to obtain models which are more amenable to analysis . a mesoscopic description of the process involves going over to new variables and , , which are assumed continuous for large , but retaining the stochastic nature of the system by keeping large , but finite .a macroscopic description , on the other hand , would involve taking the limit , which renders the process deterministic , and which eliminates all effects of the original discrete nature of the system .the construction of the mesoscopic model involves expanding the master equation , eq ., in powers of and truncating this expansion at order .this is discussed in many places in the literature , but here we follow ref . 
, which gives explicit forms for the functions and , , which define the mesoscopic model .carrying out this procedure , the master equation becomes a fokker - planck equation of the form +\frac{1}{2n^2}\sum^k_{k=1}\sum\limits_{\mu,\nu = 1}^2\frac{\partial ^2}{\partial x^{(k)}_\mu \partial x^{(k)}_\nu}\left[b^{(k)}_{\mu \nu}(\bm{x } ) p\left(\bm{x } , t\right)\right ] , \label{eqn : fpe}\end{aligned}\ ] ] where ; , .we have also introduced .the explicit form of the functions and are where are the original transition rates ( scaled by ) given in eqs .( [ eqn : rate_1 ] ) and ( [ eqn : rate_2 ] ) , but written in terms of the continuous state variables and .the final stage of the construction of the mesoscopic form of the model is to adopt a coarser time scale by introducing a new time variable . from eq .( [ eqn : fpe ] ) it can be see that without this change of variable , the macroscopic ( ) limit eliminates the right - hand side of the equation , giving a trivial macroscopic limit . with the change of variable , the first term on the right - hand side survives ( but without the factor ) , giving a liouville - like equation for , which implies a non - trivial deterministic dynamics .a clearer way to see this , and in fact a more intuitive formulation of the mesoscopic dynamics to that of the fokker - planck equation , is to use the sde which is equivalent to eq .( [ eqn : fpe ] ) .we utilize the general result that a fokker - planck equation of the form given by eq .( [ eqn : fpe ] ) is equivalent to the sde , where the are gaussian random variables with zero mean and a correlation matrix given by the matrix . in terms of the variables and , the sdes take the form where here the noise is to be interpreted in the sense of it .it is clear from eqs .( [ eqn : sk ] ) and ( [ eqn : ik ] ) that the limit eliminates the noise terms leaving deterministic equations , and we recover eq .( 2.6 ) from ref . .it is convenient to make the noise in the sdes explicitly multiplicative .since the rates are non - negative we may introduce new functions , , and then transform to new noise variables it is straightforward to check that , and therefore that all the state dependence has been transferred from the correlator to the sdes . equations correspond to the full mesoscopic model . in order to describe the evolution of the entire population , we need to consider the sdes for all ; that is , we have to work with equations . in this paper, we will be interested in networks where individuals can pick degrees from a truncated zipf distribution of the form and we let . since we are interested in the case when is large , the number of equations making up the full mesoscopic model are so large that it can make any direct treatment impractical .therefore , in the following sections we will be concerned with reducing the number of equations , and we start by looking at the deterministic limit of the system .this will also have a crucial role to play in the reduction of the stochastic model , described in sec .[ sec : fastvarelim ] .the deterministic limit which controls the dynamics of the macroscopic version of the model is found by taking the limit of eqs . , and so eliminating the noise terms . for notational conveniencewe reorder the entries of the state vector , so that , and write eqs . in the form . 
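the deterministic (infinite-n) limit just described is a closed set of 2k ordinary differential equations for the degree-resolved susceptible and infective fractions. the sketch below integrates such a system with scipy, again using the annealed-network convention of the previous sketch; it illustrates the structure of the macroscopic limit rather than the paper's exact equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hmf_sir(t, z, k, d_k, beta, gamma):
    """Heterogeneous mean-field SIR.  z = [x_1..x_K, y_1..y_K] holds the fractions
    of the whole population that are susceptible / infective on degree-k nodes."""
    K = len(k)
    x, y = z[:K], z[K:]
    theta = (k * y).sum() / (k * d_k).sum()     # probability that a contact is infective
    dx = -beta * k * x * theta
    dy = beta * k * x * theta - gamma * y
    return np.concatenate([dx, dy])

K, alpha, beta, gamma = 100, 2.5, 0.4, 1.0
k = np.arange(1, K + 1, dtype=float)
d_k = k ** -alpha
d_k /= d_k.sum()
eps = 1e-4                                      # small initial infective fraction
z0 = np.concatenate([(1 - eps) * d_k, eps * d_k])
sol = solve_ivp(hmf_sir, (0.0, 40.0), z0, args=(k, d_k, beta, gamma), rtol=1e-8)
deterministic_final_size = 1.0 - sol.y[:K, -1].sum()     # fraction ever infected
```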
to find the fixed points of the dynamics we set the time derivatives on the left - hand side of these equations to zero ;these fixed points will be denoted by an asterisk and obey .it is immediately clear that there is a set of stable fixed points with and undetermined , for all , corresponding to a disease - free state . for a given initial condition ,the fixed point is uniquely determined , however it may not be stable . to investigate the stability of the fixed point we perform a linear stability analysis .the stability matrix is defined so that near , the dynamics are , and is found to have the form where is a matrix with entries and is the unit matrix. the eigenvalues and eigenvectors of are straightforward to determine .any vector which has the last entries equal to zero is an eigenvector with eigenvalue zero , reflecting the degeneracies of the system .similarly a vector with the first entries set equal to zero and the last entries equal to , , is an eigenvector with eigenvalue if , that is , if .these then yield another degenerate set of eigenvalues .the final eigenvalue can be found by equating the trace of to the sum of its eigenvalues . in summary ,the eigenvalues are while a suitable set of corresponding eigenvectors are listed in appendix [ sec : eve ] . from eq .( [ eqn : eva_full ] ) we see that the condition for the fixed points to be ( marginally ) stable is as mentioned in the introduction , there are two possible outcomes for the deterministic dynamics depending on the parameter values : either a negligible or a significant number of individuals in the population will get infected during the course of the epidemic .the quantity that determines which of these cases we are in is often referred to as the basic reproductive ratio ; the final size of the epidemic , in turn , can be obtained from the coordinates of the fixed point , . in order to determine these in the general heterogeneous case , we look first at the homogeneous case , where .if we set an initial condition with only a few infective individuals in an almost completely susceptible population , we will have and , where and ; with this , the limit of eq .( [ eqn : ik ] ) takes the form , early in the epidemic . from thiswe obtain where corresponds to the basic reproductive ratio .this defines an epidemic threshold , such that for the number of infectives grows exponentially in the early stages of the epidemic , while for the epidemic does not take off and its final state corresponds to . for the casewhen , we can find the coordinate of the fixed point by dividing eq . by eq . with ; this yields if we set again an initial condition , with , we can integrate the equation above to find where we have used that the final state of the epidemic corresponds to .therefore , to a very good approximation , we can find coordinate of the fixed point as the smallest solution to note that is always a solution , and the only one for .the total epidemic size will be given by , which corresponds to the fraction of recovered individuals at the end of the outbreak .this is due to the fact that the initial number of recovered individuals is zero , and that every individual that gets infected during the course of the epidemic will eventually recover and remain permanently recovered .now , a significant simplification can be made in the deterministic limit of the heterogeneous case , which reduces the dynamical equations to equations ( see refs . 
, which consider the static case ) .this is achieved by use of the ansatz ; we also define such that . with these two variables , eqs . , in the case when , can be rewritten independently of as where we have introduced .if we also introduce the probability generating function of the degree distribution , then . in this reduced model , there is a line of stable fixed points , uniquely determined by the initial conditions , which satisfy .the stability matrix for these fixed points is the matrix which has a similar form to the stability matrix for the full system given by eq .( [ eqn : j_2k_defn ] ) .once again , an eigenvector exists which has the last entry equal to zero .this corresponds to an eigenvalue zero , reflecting the existence of a line of fixed points .the trace of then gives the second eigenvalue , and the two eigenvalues are thus with the eigenvectors given in appendix [ sec : eve ] .the stability condition is now now we can find analogous expressions to those in eqs . and for the heterogeneous case .we start by noting that , in terms of , the total number of susceptible individuals will be given by it is clear that if , then ; conversely , if then , and so .this means that a completely susceptible population will correspond to being equal to unity .a small initial number of infectives will , in turn , correspond to .in addition from eq .( [ eqn : lambda_det ] ) , , where , at early times .therefore from eq . ([ eqn : lambda_early ] ) it is clear that there is a critical value of equal to ; if is larger than the infection grows , and if it is below it does not . thus is the heterogeneous basic reproductive ratio , generalizing in the homogeneous case .central to our investigation is the fact that this quantity does not have a deterministic threshold for networks characterized by power - law degree distributions in the case of interest , i.e. , when . in order to find the heterogeneous final size ,we proceed again as in the homogeneous case ; dividing eq . by eq ., we find from this , the heterogeneous analog of eq .is given by where we again set an initial condition with a very small number of infectives , , in an almost completely susceptible population , , and the final state is given by .the component of the fixed point will then correspond to the smallest solution to \right\rbrace , \label{eqn : theta_fix}\end{aligned}\ ] ] with the heterogeneous basic reproductive ratio defined above .the epidemic size , by definition , will be given by .furthermore , the components of the fixed point correspond to .our interest in this paper is in the ( finite ) stochastic dynamics of the model , where a complete reduction of the kind that led to eqs .( [ eqn : theta_det ] ) and ( [ eqn : lambda_det ] ) is not possible . in order to see this , we proceed in the same manner as in the deterministic case :we start from eq ., this time without taking the limit , and use the ansatz ; summing over and dividing by , we arrive at where we have introduced ^{-1}\sum_k \sigma^{(k)}_1 \rho^{(k)}_1(\tau) ] . with this, we arrive at a reduced equation for given by as in the deterministic case , then , we can express the equations for the variables , described by eq ., entirely in terms of and _ via _ eq . .however , we will see that the same does not hold for the dynamics of the variables , as we have anticipated . in a similar fashion as with the variables , we take the equation for the variables , eq . 
,multiply it by and sum over , to obtain let us now denote the noise terms appearing in the equation for above as and .these two noise terms , like , have zero mean ; also , it can be readily verified that the correlations between all three of them are given by , where now , with ,\qquad \bar{b}_{33}=\gamma \sum_k k^2 i_k,\end{aligned}\ ] ] and , where .we can now , as in sec .[ sec : formulation ] , express and in terms of three independent normally - distributed random variables , and , with zero mean and , .the definition of in terms of has already been given by eq .( [ eqn : xi_to_zeta ] ) ; the other two are : where and . substituting all this back into eq .yields . \label{eqn : lambda_stoch}\end{aligned}\ ] ] the main feature we note from this is that the two - dimensional system of equations can not be closed .this is so because is still dependent on for all values of , and so it is not possible to achieve a complete reduction of the stochastic system .therefore , when we take the intrinsic noise present in the system into account , a partially reduced model of equations eq . for , with , and eq . for the best we can achieve . due to this , the exploration of the epidemic dynamics for extremely heterogeneous cases , in which can take very large values , is rendered impractical : even after reducing the number of equations , we will still be left with a very - high - dimensional system .however , progress can be made by observing that , under certain conditions , the deterministic limit of the model presents a separation of time - scales , as we shall see in the following section .we will end this section by carrying out this partial reduction in the deterministic case , as a preparation for the stochastic treatment .this partial reduction involves applying the above reduction to the variables only , by writing , but keeping the variables .there are now dynamical equations which take the form there is again a line of fixed points corresponding to disease - free states , , , but with undetermined .the stability matrix is now where is a -dimensional vector whose -th entry is , is a matrix with entries and is the unit matrix .the eigenvalues and eigenvectors can be found in the same way as before , the eigenvalues having the form with the eigenvectors given in appendix [ sec : eve ] .the stability condition is once again given by eq .( [ eqn : stability_cond_total ] ) .as we have discussed , the deterministic limit of eqs . can be greatly simplified by performing a change of variables and describing the evolution of the system in terms of and .the inclusion of demographic noise , however , hinders such a reduction of dimensionality .it is not possible to obtain a closed two - dimensional system of equations , thus enormously complicating the analysis of the stochastic epidemic , compared to its deterministic limit . here, we attempt to overcome this issue by exploiting the properties of the deterministic limit of the model , and by finding a way to close the equations for and in the finite- case , while still retaining the essential features of the full model . in order to do this , we start from eqs .([eqn : semi_2 ] ) for the partially reduced -dimensional deterministic system for and .examination of the eigenvalues appearing in eq . , shows that .if the ratio is small , then there is the potential to carry out a reduction of the system . 
in such case, the deterministic dynamics would quickly evolve along the directions corresponding to the set of most negative eigenvalues towards a lower - dimensional region of the state - space , in which the system as a whole evolves in a much slower time - scale , and which contains the fixed point .as illustrated in figure [ fig : l_r0 ] , showing the value of for different choices of , we would expect this time - scale separation to occur for combinations of parameters for which the final size of the epidemic is small , i.e. , for large . since there is one most negative eigenvalue , with a degeneracy equal to , after sufficient time has passed the evolution of the systemwill then be constrained to a two - dimensional surface , which we will refer to as the slow subspace . with the aim of reducing the dimensionality of the problem , then , we ignore the fast initial behavior of the system and focus the analysis on the slow surfacewe do so by imposing the condition that no dynamics exists along the fast directions , i.e. , where , and is the -th left - eigenvector of the jacobian evaluated at the fixed point , with given by eq . .this procedure goes under the name of adiabatic elimination , or fast - variable elimination , and it is a common practice in the simplification of deterministic non - linear dynamical systems e.g ., systems of oscillating chemical reactions . and , , as a function of for different values of ( ) .] for ease of application of the method , the left- and right - eigenvectors of the jacobian are normalized in such a way that these are given , respectively , by eqs . and . imposing the condition given by eq . , one finds that , where and is given by eq .( [ xandy ] ) . in terms of the and variablesthis reads \,\sum^k_{l=1 } l i_l , \ \k=2,\ldots , k , \label{eqn : slow_ik_intermediate}\end{aligned}\ ] ] and this gives the behavior of which limits the dynamics of the deterministic system to a slow subspace .a more convenient form can be found by multiplying eq .( [ eqn : slow_ik_intermediate ] ) by and summing from to .this allows us to relate to to find the following expressions for : } k d_k \left[\gamma { \theta^*}^k-\beta \left(\phi(\theta){\theta^*}^k-\phi(\theta^*)\theta^k\right)\right ] , \ \k=2,\ldots , k.\label{eqn : slow_ik}\end{aligned}\ ] ] it should be stressed that the equations ( [ eqn : slow_ik ] ) , which we may write in the form , , should be interpreted as the time - evolution that the need to follow , so that in the directions , .this follows from the deterministic equations of motion and from eq .( [ eqn : kill_fast ] ) .thus the only temporal evolution is in the and directions .the functional forms , with defined by eq .( [ eqn : slow_ik ] ) , formally only hold sufficiently near the fixed point that linearization is accurate . however in fig .[ fig : slow_km10 ] we show three - dimensional projections of the deterministic dynamics from eqs . 
for an epidemic on a network characterized by a power - law degree distribution with and maximum degree , with heterogeneous basic reproductive ratio , which gives .it appears that the solid red line , which represents the deterministic dynamics , lies in the slow subspace for most of the time , not just at late times when the system approaches the fixed point .that this is the case can be seen from a consideration of the jacobian of the system at an arbitrary time , and not just at the fixed point : it is straightforward to check that is a right - eigenvector of the matrix in eq .( [ eqn : j_k_plus_one_arb_time ] ) with eigenvalue , if . therefore , remarkably , the eigenvectors of the jacobian at the fixed point , defined in eq .( [ eqn : eve_semi ] ) , are eigenvectors of the jacobian at all times .the existence of this negative eigenvalue with degeneracy means that the system remains in the plane described by the vectors and , since any small deviations out of the plane will tend to be pushed back again . normally ,when carrying out a fast - variable elimination , one would find that the system possesses a line of fixed points which is quickly approached at early times , dominated by the deterministic dynamics , and along which the slow , stochastic dynamics takes place .in this case , however , the line of fixed points corresponds to an absorbing boundary of the system : the evolution of the epidemic stops altogether once it reaches the disease - free state .therefore , it is important that the projection onto the slow variables constitutes a good approximation to the dynamics also far from the line of fixed points .we parameterize the slow subspace using the coordinates and . using eq .( [ eqn : leve_semi ] ) and , we obtain a linear transformation of eqs .( [ eqn : slow_z1 ] ) and ( [ eqn : slow_z2 ] ) shows that an equivalent set of coordinates are and , previously called .so , in summary , in sec .[ sec : det ] we showed the system of equations leads to a closed set of equations in and . here , through a consideration of the slow modes of the deterministic equations, we have shown that not only can we describe the system in terms of and , but that it remains in the vicinity of a subspace which can be parameterized by these variables .( left ) and ( right ) for , , , .red , dashed line : deterministic limit , eqs . ; blue , yellow , green lines : stochastic differential equations and .the initial condition corresponds to , for , and . in both cases ,the system is pushed back from the initial condition ( lower right in both panels ) towards the slow subspace described by eq ., on which it seems to stay during the rest of the course of the epidemic .note the different scales in the axes of both panels , so the fluctuations away from the slow subspace are small in both cases ; the fluctuations on the slow subspace can , however , be large , as seen from the departure of the stochastic trajectories from the deterministic dynamics.,title="fig : " ] ( left ) and ( right ) for , , , .red , dashed line : deterministic limit , eqs . ; blue , yellow , green lines : stochastic differential equations and .the initial condition corresponds to , for , and . 
in both cases ,the system is pushed back from the initial condition ( lower right in both panels ) towards the slow subspace described by eq ., on which it seems to stay during the rest of the course of the epidemic .note the different scales in the axes of both panels , so the fluctuations away from the slow subspace are small in both cases ; the fluctuations on the slow subspace can , however , be large , as seen from the departure of the stochastic trajectories from the deterministic dynamics.,title="fig : " ] we would like to carry this reasoning over to the stochastic case , to find a reduced set of sdes , but which are closed , i.e. which can be expressed entirely in terms of the slow variables and ( or and ) .figure [ fig : slow_km10 ] also shows three - dimensional projections of stochastic trajectories from eqs . and , for a total population of .the behavior is found to be similar to that of the deterministic case : the dynamics seems to quickly reach , and then fluctuate around , the deterministic slow subspace , with the early behavior dominated by the deterministic dynamics . therefore we expect that we should be able to effectively reduce the stochastic system from dimensions to two dimensions , by neglecting the fast initial behavior of the epidemic .we will do this by projecting the stochastic differential equations , eqs . and , onto the slow degrees of freedom of the deterministic limit of the model ; that is , we will neglect the fluctuations away from the slow subspace , along the fast directions corresponding to the most negative eigenvalue .this can be formulated mathematically through the application of a projection operator that only picks the components of , , and along the slow eigenvectors of the deterministic limit , and : ^\top + \bm{v}^{\{k+1\}}[\bm{u}^{\{k+1\}}]^\top .\label{eqn : projector}\end{aligned}\ ] ] the details of the derivation are given in appendix [ sec : projection ] , where it is shown that the sdes given by eqs .( [ eqn : theta_stoch ] ) and ( [ eqn : lambda_stoch ] ) of sec .[ sec : det ] are recovered , except that now is determined in terms of and .this closed system of stochastic differential equations for and , which we call the _ reduced mesoscopic model _, constitutes the main result of this paper and so we reproduce it here : , \label{eqn : slow_lambda}\end{aligned}\ ] ] where ^{-1}\beta \lambda \theta ] and \right ) .\label{sigma_3_explicit}\end{aligned}\ ] ] we now compare the dynamics of the full microscopic model , defined by eqs . , to that of the reduced mesoscopic model above .figures [ fig : ts_full_red_3 ] and [ fig : ts_full_red_4 ] show the time series of the full microscopic model in terms of the variables and , and that of the reduced mesoscopic model , for and different values of having the same deterministic final size , obtained from eq . ; figure [ fig : phase_full_red_2 ] , in turn , compares the phase diagrams .we see that , in the case of small , the time series of both variables in the full microscopic model are rather smooth , and are dominated by a horizontal spread due to the fact that the epidemic takes off at different moments . as heterogeneity increases ,the dynamics of the system becomes noisier and noisier , as seen in the phase diagrams ; however , the time series of continues to be smooth .this observation will be useful in the following section . 
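Although the drift and noise amplitudes of the reduced mesoscopic model are given by the expressions reproduced above, it may help to see how such a two-variable system driven by three independent white-noise processes is integrated in practice. The following is a minimal Euler–Maruyama sketch; the drift and the 2×3 diffusion matrix used here are simple placeholders standing in for the model's expressions, not the forms derived in this paper.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, n_noise=3, seed=None):
    """Integrate dx = drift(x) dt + diffusion(x) dW for a 2-D state x driven by
    `n_noise` independent Wiener processes (three here, matching the three
    independent noise terms of the reduced model)."""
    rng = np.random.default_rng(seed)
    x = np.empty((n_steps + 1, len(x0)))
    x[0] = x0
    for i in range(n_steps):
        dw = rng.standard_normal(n_noise) * np.sqrt(dt)
        x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) @ dw
    return x

# Placeholder drift and 2x3 diffusion matrix -- toy SIR-like forms for
# illustration only, not the expressions of the reduced mesoscopic model.
beta, gamma, N = 0.6, 0.4, 10_000
drift = lambda x: np.array([-beta * x[0] * x[1], (beta * x[0] - gamma) * x[1]])
diffusion = lambda x: np.sqrt(max(x[1], 0.0) / N) * np.array(
    [[0.3, 0.0, 0.1],
     [0.0, 0.5, 0.2]])

traj = euler_maruyama(drift, diffusion, x0=np.array([1.0, 0.01]),
                      dt=0.01, n_steps=5_000, seed=1)
```

In an actual computation the placeholder functions would be replaced by the drift and noise amplitudes reproduced above, and trajectories reaching the disease-free state would be stopped there.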
from the time series of the reduced mesoscopic model we see that , except for small , the temporal behavior of and does not present the horizontal spread observed in the full microscopic model .however , once the epidemic has taken off , the dynamics of the full microscopic model is correctly captured when is not so large ; this can be seen from the phase diagrams .we note that the relative magnitude of the eigenvalues for this choice of parameters takes the values , and that the reduced mesoscopic model fares well despite being rather large .we find slightly worse agreement between the full and reduced models for smaller values of , corresponding to larger .one further detail to add is that in the case of the reduced model there are far fewer trajectories in which the epidemic does take off , compared to the full microscopic model ; that is , the reduced model results in an over - representation of very early extinctions .note that , for the value of employed in these figures , is the maximum possible degree cutoff . in this case , we would not expect the reduced model to yield accurate results .this is due to the fact that not even the full -dimensional mesoscopic model ( eqs . ) provides a good approximation to the microscopic dynamics in this case , since is of the order of and this would need to be explicitly taken into account when performing the mesoscopic expansion leading to the fokker - planck equation .we end this section with some further observations regarding the reduced stochastic system .first , the existence of a -fold degenerate eigenvalue equal to throughout the time evolution also means that stochastic trajectories , as well as deterministic ones , which venture out of the slow subspace will tend to be pushed back into the plane .thus we would expect the reduced set of sdes to be a good approximation throughout the motion , and not just at late times .this may be the reason why the approximation works well even if is not particularly small .second , in retrospect , the elimination of the fast variables in the stochastic system is not strictly necessary , since the sdes previously obtained in sec .[ sec : det ] are recovered , and could be found in terms of and using eq .( [ eqn : slow_ik ] ) obtained from the elimination of fast variables in the deterministic system .one can compare the value of in terms of the to the one in terms of in the deterministic limit and see that , indeed , the approximation is not very good for moderate to large values of .this does not seem to play a huge role when considered as one of many terms in the stochastic evolution of .finally , we can make a general comment about the nature of the fluctuations as the epidemic starts to grow from its initial state . in this casethe relevant eigenvalues will be those of the jacobian eq .( [ eqn : j_k_plus_one_arb_time ] ) , but with and , and will therefore be given by those in eq .( [ eqn : eva_semi ] ) , but with set equal to . therefore the largest eigenvalue is now . 
in this case , if , so that an epidemic starts to grow , the dynamics is pushed away from the line in its early stages , as can be seen from fig . [ fig : slow_km10 ] , where the trajectories leave the vicinity of the absorbing state after reaching the slow subspace . [ figure : colored lines correspond to realizations of the full microscopic model ( top ) and the reduced mesoscopic model ( bottom ) , started with a total number of 5 infectives with degrees sampled from a truncated zipf distribution with ( left ) , ( center ) , and ( right ) ; the black , dashed line corresponds to the deterministic solution . ] in the previous section , we have derived a reduced two - dimensional system with three white - noise processes , the reduced mesoscopic model eqs . , which provides a good approximation to the dynamics of the full microscopic model , provided the maximum degree individuals can have is not too large . this is true even for relatively large values of , corresponding to a small separation between the eigenvalues of the system . we have also noted that the time series of , obtained from the full microscopic model , presents very smooth behavior over the range of parameters explored . inspired by this observation , we attempt to further simplify the model by noting the particular form taken by eqs . ; taking into account the fact that , in any given realization of the process , the lowest value reached by will be roughly around , we compare the magnitudes of the noise intensities between and for fixed and different values of .
we find that , in general , and especially during the early stages of the epidemic when , is small compared to and ( see figure [ fig : noises ] ) ; the effect is more pronounced for more heterogeneous distributions with smaller deterministic final sizes , i.e. , larger . therefore , the system may be described simply by a deterministic ordinary differential equation ( ode ) for , and eqs . become with a gaussian noise with zero mean and , with the noise intensity , , given by ^{1/2}.\end{aligned}\ ] ] we call this system the _ semi - deterministic mesoscopic model _ , which is two - dimensional like the reduced mesoscopic model , but has only one white - noise process in place of the latter 's three . here we have used that , and , . now , we note that eq . corresponds to a cox - ingersoll - ross ( cir ) model where is a time - dependent parameter which is the solution of the deterministic differential equation ( [ eqn : theta_simple ] ) , , and +\gamma \frac{\phi(\theta^*)+\psi(\theta^*)}{\phi(\theta^*)}}.\end{aligned}\ ] ] for fixed and , the conditional probability distribution of , given , is known in closed form , and corresponds to a non - central chi - square distribution given by where and where is the modified bessel function of the first kind of order . therefore , if we assume that evolves discretely , staying constant over each reasonably small time - step , we can use the result above to simulate eqs . by sampling the values of directly from the distribution in eq . with fixed parameters at every time - step . the case corresponds to the special case of a non - central chi - square distribution with zero degrees of freedom ; thus , in order to simulate eq . we follow the method described in refs . and sample from a central chi - square distribution with poisson - distributed degrees of freedom , with the convention that the central chi - square distribution with zero degrees of freedom is identically zero . figure [ fig : phase_cir ] shows the dynamics obtained in this way . while we note an additional over - representation of early extinctions , the other results are in very good agreement with the reduced mesoscopic model , eqs . . [ figure : colored lines correspond to realizations of eqs . , started with a total number of 5 infectives with degrees sampled from a truncated zipf distribution with ( left ) , ( center ) , and ( right ) ; the black , dashed line corresponds to the deterministic solution . ] in the previous sections , we have derived a two - dimensional reduction of the -dimensional sir dynamics described by eqs .
, by exploiting a separation of time - scales in the deterministic dynamics of the system .we have found reasonably good agreement between the temporal behaviors of the full and reduced models .furthermore , we have also discussed an extra simplification , consisting in neglecting the noise acting on the susceptible variable , and describing the system in terms of one ode and one sde instead of two sdes .now we explore how accurately these reductions capture the distribution of epidemic sizes , given by the number of recovered individuals present in the system at the end of the epidemic , , which can be obtained as and in the full and reduced models , respectively , where and are the final values of and to be confused with the fixed points in the deterministic limit , and .formal mathematical work has considered central limit theorems for models that generalize ours ; we are interested here in explicit calculation and simulation approaches .we obtain the distribution of from the full model _ via _ a sellke construction , and note that it is characterized by two regions : one to the left of the interval , corresponding to early extinctions , and one to the right corresponding to the epidemic taking off .the first thing we note when comparing this to results from the two versions of the reduced model eqs. , and eqs .is that , as discussed previously , there is an over - representation of very early extinctions in the case of the latter , and therefore the relative sizes of the left- and right - most regions of the distribution are not well captured .we find that the agreement improves significantly if we condition the epidemic to take off , and only consider values of larger than some prescribed limit , say is , we only take into account realizations in which at least two individuals become infected , apart from the initial infective . with this idea in mind ,we can go a bit further by obtaining an approximation of the height of the left - most portion of the distribution of .if we denote by the probability that the initial infective , with degree , does not spread the disease to any other individuals during its _ lifetime_before recovering and , similarly , by the probability that it spreads the disease to one , and only one other individual with degree , we can write where we have again used the ansatz . the expressions are obtained by interpreting the initial behavior of the epidemic as a birth - death process , where a ` birth ' corresponds to the infection of an individual with degree with rate ; the ` death ' of the initial infective , in turn , corresponds to its recovery , which occurs with rate . with this , we obtain for the total probability , where we take the first infective to have degree with probability . since , initially , , we have , i.e. , the first moment of the degree distribution , which we denote by .therefore , .\label{eqn : p2}\end{aligned}\ ] ] figure [ fig : r_inf ] shows the distribution of final sizes from the reduced mesoscopic model and the cir - like model , renormalized so that the fraction of trajectories with is equal to , and compares these to the distribution from the full model .this shows that the birth - death chain approximation we have introduced provides a simple method to adjust for the inaccuracies in early extinctions we have previously noted , yielding an accurate approximation overall. 
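For completeness, here is a minimal sketch of the exact sampling step used for the CIR-type equation of the semi-deterministic model discussed above: a noncentral chi-square variate with zero degrees of freedom is drawn as a central chi-square with Poisson-distributed degrees of freedom, identically zero when the Poisson draw is zero. The mapping to the scale and noncentrality below assumes a CIR process with a constant linear decay rate; the parameter names and values are illustrative stand-ins, not the paper's time-dependent coefficients.

```python
import numpy as np

def sample_noncentral_chi2(dof, noncentrality, rng):
    """Sample a (possibly zero-degree-of-freedom) noncentral chi-square as a
    central chi-square with Poisson-mixed degrees of freedom, with the
    convention that a central chi-square with 0 d.o.f. is identically zero."""
    k = rng.poisson(noncentrality / 2.0)
    total_dof = dof + 2 * k
    if total_dof == 0:
        return 0.0
    return rng.chisquare(total_dof)

def cir_step(lam, kappa, sigma, dt, rng):
    """One exact update of a CIR process with zero long-run level,
    d(lam) = -kappa*lam*dt + sigma*sqrt(lam)*dW; kappa and sigma are
    illustrative constants, unlike the time-dependent coefficients in the text."""
    c = sigma**2 * (1.0 - np.exp(-kappa * dt)) / (4.0 * kappa)
    eta = lam * np.exp(-kappa * dt) / c              # noncentrality parameter
    return c * sample_noncentral_chi2(0, eta, rng)   # zero degrees of freedom

rng = np.random.default_rng(2)
lam, path = 0.05, []
for _ in range(200):
    lam = cir_step(lam, kappa=0.4, sigma=0.3, dt=0.1, rng=rng)
    path.append(lam)
```

Note that once the sampled value hits zero it stays there, mirroring absorption at the disease-free state.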
+ +stochastic epidemics on heterogeneous networks involve two potentially large parameters .the first of these is the size of the population , , which is proportional to the dimensionality of the microscopic description . for large populations , this motivates the derivation of a mesoscopic model based on a diffusion approximation whose errors are controlled by inverse powers of , and which we have formulated in this paper .the second of these is the maximum degree , which is proportional to the dimensionality of the full mesoscopic model .we have been able to use fast - mode elimination techniques to derive two - dimensional approximations to this scheme : the reduced mesoscopic model , which involves three white - noise processes ; and the semi - deterministic mesoscopic model , which involves only one white - noise process . our numerical results back up our theoretical understanding that these low - dimensional systems are accurate approximations to the microscopic model , except in the case where approaches , although this is to be expected since our derivation of the diffusion limit implicitly assumes that .nevertheless , the reduction can be applied to cases with extremely large values of , as long as they are not too close to , thus significantly simplifying the original model .we have observed that the elimination of fast variables in the system provides a good approximation to the full dynamics even for relatively large values of , corresponding to a small separation between time - scales .we have also shown that the main inaccuracy of the reduction , relating to very early extinction probabilities of the epidemic , can be dealt with using a simple birth - death process argument . in this way, we hope to have provided a useful tool in the study of both infectious diseases and spreading processes on heterogeneous networks .cp - r was funded by conicyt becas chile scholarship no .th was supported by the uk engineering and physical sciences research council ( epsrc ) .a set of right - eigenvectors corresponding to the eigenvalues from eq . are given by where the -th unit vector and a set of right - eigenvectors corresponding to the eigenvalues from eq .are given by the partially reduced system with eigenvalues given by eq .has a set of right - eigenvectors given by where now in eqs .( [ eqn : eve_full ] ) and ( [ eqn : eve_total ] ) we did not normalize the eigenvectors .however in this case we do need to eigenvectors to be normalized to simplify the application of the reduction technique described in sec .[ sec : fastvarelim ] .this requires finding the corresponding left - eigenvectors .when finding these , we have a complication given by the fact that there is one eigenvalue which is -fold degenerate , and the condition will not necessarily hold in general .however , if we construct a matrix in such a way that its -th column corresponds to , we can choose to be the -th row of the inverse matrix , if such an inverse exists .we find that indeed this is the case , and the left - eigenvectors in the partially reduced model are then given by we note that using this property , we can verify that the orthogonality condition is satisfied : this appendix we derive the two - dimensional reduced mesoscopic model , by projecting the dynamics of the -dimensional partially reduced mesoscopic model onto the slow degrees of freedom of its corresponding deterministic limit . 
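As a concrete illustration of the left-eigenvector construction just described, and of the rank-two projector built from such vectors, consider the following numerical sketch. The eigenvalues and right eigenvectors below are toy values chosen only to mimic a degenerate fast eigenvalue; they are not the model's actual spectrum, and only a single slow direction is projected out for brevity.

```python
import numpy as np

# Toy right-eigenvector matrix V (columns) with a repeated "fast" eigenvalue,
# standing in for the right eigenvectors of the partially reduced system.
eigvals = np.array([-0.1, -2.0, -2.0])        # one slow mode, two degenerate fast modes
V = np.array([[1.0, 1.0, 0.0],
              [0.5, 0.0, 1.0],
              [0.2, 1.0, 1.0]])
A = V @ np.diag(eigvals) @ np.linalg.inv(V)   # a matrix with exactly this spectrum

# Left eigenvectors chosen as the rows of V^{-1}; by construction they satisfy
# the orthogonality condition u^(i) . v^(j) = delta_ij even though the fast
# eigenvalue is degenerate.
U = np.linalg.inv(V)
assert np.allclose(U @ V, np.eye(3))
assert np.allclose(U @ A, np.diag(eigvals) @ U)   # rows of U are left eigenvectors

# Rank-one projector onto the slow direction (the paper keeps two slow
# directions; one suffices to illustrate the construction).
P_slow = np.outer(V[:, 0], U[0])
x = np.array([0.3, -0.1, 0.7])    # arbitrary state or noise vector
x_slow = P_slow @ x               # its component along the slow mode
```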
using the projector defined in eq .( [ eqn : projector ] ) on gives the slow variables defined by eqs .( [ eqn : slow_z1 ] ) and ( [ eqn : slow_z2 ] ) . later ,the inverse transformation , back to the more familiar variables and , will be required .it is },\label{eqn : theta_slow}\\ \lambda(\bm{z } ) & = \frac{\phi(\theta^*)}{k d_k{\theta^*}^{k}}\,z_2 , \label{eqn : lambda_slow}\end{aligned}\ ] ] however for the moment we will use the and variables , and will denote the right - hand side of eq .( [ eqn : theta_slow ] ) as .the deterministic slow dynamics , obtained by applying the projector to , will be given by ,\label{eqn : slow_dz1}\\ \dot{z}_2 & = z_2\left(\beta \phi ( \chi(\bm{z}))-\gamma \right),\label{eqn : slow_dz2}\end{aligned}\ ] ] where the dot represents the time derivative .applying now the projector to the noise terms in the finite- case we find , for the component along , \label{rhs_z2dot } \\ & = \frac{1}{\sqrt{n}}\left[\left(1-\frac{\phi(\chi(\bm z))}{\chi(\bm z)}\frac{\beta \theta^*}{-\gamma + \beta \phi(\theta^*)}\right)\bar{\sigma}_1(\bm z)\zeta_1(\tau ) + \frac{\beta \theta^*}{-\gamma + \beta \phi(\theta^*)}\left(\bar{\sigma}_2(\bm z)\zeta_2(\tau)+\bar{\sigma}_3(\bm z)\zeta_3(\tau)\right)\right ] , \label{eqn : eta_z1}\end{aligned}\ ] ] where we obtain , from eqs . , and , ^{1/2},\\ \bar{\sigma}_2(\bm{z})&= \left[\frac{\beta \phi(\theta^*)z_2}{kd_k{\theta^*}^k}\left(\phi(\chi(\bm z))+\psi(\chi(\bm z))-\frac{\phi(\chi(\bm z))^2}{\chi(\bm z)g^\prime ( \chi(\bm z))}\right)\right]^{1/2},\\ \bar{\sigma}_3(\bm z ) & = \left[\frac{z_2}{k d_{k}{\theta^*}^{k}}\left(\gamma \left[\phi(\theta^*)+\psi(\theta^*)\right]+\beta \left[\phi(\theta^*)\psi(\chi(\bm{z}))-\phi(\chi(\bm{z}))\psi(\theta^*)\right]\right)\right]^{1/2}. \label{eqn : sigma_3_bar_determined}\end{aligned}\ ] ] similarly , for the component of the fluctuations along : we now form and as linear combinations of and , using eqs .( [ eqn : theta_slow ] ) and ( [ eqn : lambda_slow ] ). these are equal to the appropriate linear combinations of eqs .( [ eqn : slow_dz1 ] ) , ( [ eqn : slow_dz2 ] ) , ( [ eqn : eta_z1 ] ) and ( [ eqn : eta_z2 ] ) .one finds that eqs .( [ eqn : theta_stoch ] ) and ( [ eqn : lambda_stoch ] ) are exactly recovered , except that now is determined , given by eq .( [ eqn : sigma_3_bar_determined ] ) .
networks of contacts capable of spreading infectious diseases are often observed to be highly heterogeneous , with the majority of individuals having fewer contacts than the mean , and a significant minority having relatively very many contacts . we derive a two - dimensional diffusion model for the full temporal behavior of the stochastic susceptible - infectious - recovered ( sir ) model on such a network , by making use of a time - scale separation in the deterministic limit of the dynamics . this low - dimensional process is an accurate approximation to the full model in the limit of large populations , even for cases when the time - scale separation is not too pronounced , provided the maximum degree is not of the order of the population size .
three models which study the stochastic behaviour of the prices of commodities that take into account several aspects of possible influences on the prices were proposed by e schwartz in the late nineties . in the simplest model ( the so - called one - factor model )schwartz assumed that the logarithm of the spot price followed a mean - reversion process of ornstein uhlenbeck type .the one - factor model is expressed by the following evolution equation measures the degree of mean reversion to the long - run mean log price , is the market price of risk , is the standard deviation of the return on the stock , is the stock price , is the drift rate of and is the time . is the current value of the futures contract which depends upon the parameters , _i.e. _ , .generally , , and are assumed to be constants .in such a case the closed - form solution of equation ( [ 1fm.01 ] ) which satisfies the initial condition given in .it is .it has been shown that the closed - form solution ( [ 1fm.02 ] ) follows from the application of lie point symmetries .in particular it has been shown that equation ( [ 1fm.01 ] ) is of maximal symmetry , which means that it is invariant under the same group of invariance transformations ( of dimension ) as that of the black - scholes and the heat conduction equation . the detailed analysis for the lie symmetries of the three models , which were proposed by schwartz , and the generalisation to the -factor modelcan be found in .other financial models which have been studied with the use of group invariants can be found in leach05a , leach06a , naicker , sinkala08a , sinkala08b , wafo , consta , lescot , dimas2 and references therein .solution ( [ 1fm.02 ] ) is that which arises from the application of the invariant functions of the lie symmetry vector also leaves the initial condition invariant . in a realistic world parameters are not constants , but vary in time and depend upon the stock price , that is , the parameters have time and space dependence , where as space we mean the stock price parameters as an analogue to physics . in this workwe are interested in the case for which the parameters , , and are space dependent , _ ie _ , are functions of .we study the lie point symmetries of the space - dependent equation ( 1fm.01 ) . as we see in that case ,when , there does not exist any lie point symmetry which satisfies the initial condition ( 1fm.01a ) .the lie symmetry analysis of the time - dependent black - scholes - merton equations was carried out recently in , it has been shown that the autonomous , and the nonautonomous black - scholes - merton equation are invariant under the same group of invariant transformations , and they are maximal symmetric .the plan of the paper is as follows .the lie point symmetries of differential equations are presented in section [ preliminaries ] .in addition we prove a theorem which relates the lie point symmetries of space - dependent linear evolution equations with the homothetic algebra of the underlying space which defines the laplace operator . in section [ space1 ]we use these results in order to study the existence of lie symmetries of for the space - dependent one - factor model ( 1fm.01 ) and we show that the space - dependent problem is not necessarily maximally symmetric . 
the generic symmetry vector and the constraint conditions are given and we prove a corollary in with the space - dependent linear evolution equation is always maximally symmetric when we demand that there exist at least one symmetry of the form ( [ 1fm.03 ] ) which satisfies the schwartz condition ( [ 1fm.01a ] ) . furthermore in section [ proof2 ] we consider the time - dependence problem and we show that the model is always maximally symmetric . finally in section[ con ] we discuss our results and we draw our conclusions .appendix[proof1 ] completes our analysis .below we give the basic definitions and properties of lie point symmetries for differential equations and also two theorems for linear evolution equations . by definition a lie point symmetry , of a differential equation where the are the independent variables , is the dependent variable and is the generator of a one - parameter point transformation under which the differential equation is invariant .let be a one - parameter point transformation of the independent and dependent variables with the generator of infinitesimal transformations being the differential equation can be seen as a geometric object on the jet space .therefore we say that is invariant under the one - parameter point transformation with generator , , if } } \theta = 0 .\label{go.11}\]]or equivalently } } \theta = \lambda \theta ~,~{mod}\theta = 0 , \label{go.12}\]]where } $ ] is the second prolongation of in the space .it is given by the formula } = x+\eta _ { i}\partial _ { u_{,i}}+\eta _ { ij}\partial _ { u_{,ij } } , \label{go.13}\]]where , and is the operator of total differentiation , _ ie _ , .moreover , if condition ( [ go.11 ] ) is satisfied ( equivalently condition ( [ go.12 ] ) ) , the vector field is called a lie point symmetry of the differential equation . a geometric method which relates the lie and the noether point symmetries of a class of second - order differential equationshas been proposed in jgp , ijgmmp .specifically , the point symmetries of second - order partial differential equations are related with the elements of the conformal algebra of the underlying space which defines the laplace operator . similarly , for the lie symmetries of the second - order partial differentialequation, is the laplace operator , is a nondegenerate tensor ( we call it a metric tensor ) and , the following theorem arises . ] .[ theom1]the lie point symmetries of ( [ 1fm.04 ] ) are generated by the homothetic group of the metric tensor which defines the laplace operator .the general form of the lie symmetry vector is is the homothetic factor of , for the killing vector ( kv , for homothetic vector ( hv ) , and are solutions of ( [ 1fm.05 ] ) , is a kv / hv of and the following condition holds , namely, that . another important result for the linear evolution equation of the form of ( [ 1fm.04 ] ) is the following theorem which gives the dimension of the possible admitted algebra .[ theom2 ] the one - dimensional linear evolution equation can admits 0 , 1 , 3 and 5 lie point symmetries plus the homogenous and the infinity symmetries .however , as equation ( [ 1fm.04 ] ) is time independent , it admits always the autonomous symmetry . in the following we apply theorems [ theom1 ] and [ theom2 ] in order to study the lie symmetries of the space - dependent one - factor modelthe space - dependent one - factor model of commodity pricing is defined by the equation the parameters , , , and , depend upon the stock price , . 
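Before the symmetry analysis of this space-dependent model, it is worth recalling how the constant-parameter baseline behaves in simulation. The sketch below assumes the standard formulation in which the log spot price follows an Ornstein–Uhlenbeck mean-reversion, and uses the exact Gaussian transition of that process; the parameter names and values are illustrative only, and no risk adjustment of the drift is included.

```python
import numpy as np

def simulate_schwartz_one_factor(s0, kappa, alpha, sigma, dt, n_steps, seed=None):
    """Simulate the spot price S = exp(X), where the log-price X follows the
    Ornstein-Uhlenbeck mean-reversion dX = kappa*(alpha - X)dt + sigma*dW,
    using the exact Gaussian transition of the OU process."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = np.log(s0)
    decay = np.exp(-kappa * dt)
    step_sd = sigma * np.sqrt((1.0 - decay**2) / (2.0 * kappa))
    for i in range(n_steps):
        x[i + 1] = alpha + (x[i] - alpha) * decay + step_sd * rng.standard_normal()
    return np.exp(x)

# Illustrative parameters: mean-reversion speed, long-run log price, volatility.
prices = simulate_schwartz_one_factor(s0=50.0, kappa=1.5, alpha=np.log(55.0),
                                      sigma=0.3, dt=1/252, n_steps=252, seed=3)
```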
in order to simplify equation ( [ 1fm.06 ] )we perform the coordinate transformation , that is , equation ( 1fm.06 ) becomes is the laplace operator in the one - dimensional space with fundamental line element admits a two - dimensional homothetic algebra .the gradient kv is and the gradient hv is homothetic factor equation ( [ 1fm.08 ] ) is of the form of ( [ 1fm.04 ] ) where now performing any symmetry analysis we observe that , when , ( [ 1fm.08 ] ) is in the form of the heat conduction equation and it is maximally symmetric , _ ie _ , it admits symmetries . in the casefor which from ( [ 1fm.10 ] ) we have that .however , this is only a particular case whereas new cases can arise from the symmetry analysis .let be the two hvs of the space ( [ 1fm.09 ] ) with homothetic factors . as ( [ 1fm.08 ] ) is autonomous and linear , it admits the lie symmetries , where is a solution of ( 1fm.08 ) , therefore from theorem [ theom1 ] we have that the possible additional lie symmetry vector is which the following conditions hold we study two cases : a ) and b ) let . then ( [ 1fm.s05 ] ) is satisfied .hence from ( [ 1fm.s04 ] ) we have the system , _ ie _ , .this means that from any vector field we have only one symmetry .hence from theorem [ theom2 ] condition ( [ 1fm.s06 ] ) should hold for and in this case the space - dependent one - factor model admits lie point symmetries .consider that . from ( [ 1fm.s04 ] )we have that then ( [ 1fm.s05 ] ) gives consider the case for which .recall that for the space ( [ 1fm.09 ] ) , that is , from ( [ 1fm.s07 ] ) we have the conditions .we continue with the subcases : [ [ subcase - b1 ] ] subcase b1 + + + + + + + + + + let that is , . in this casethe symmetry conditions are : we have the following system if system ( [ 1fm.s11])-([1fm.s13 ] ) holds for or , then equation ( [ 1fm.08 ] ) admits lie symmetries and in the case for which conditions ( [ 1fm.s11])-([1fm.s13 ] ) hold , _ ie _ , admits lie symmetries which is the maximum for a evolution equation .[ [ subcase - b2 ] ] subcase b2 + + + + + + + + + + in the second subcase we consider that .hence , if b2.a ) , then from ( [ 1fm.s08 ] ) and ( [ 1fm.s09 ] ) it follows that from theorem [ theom2 ] these conditions must hold for and and equation ( [ 1fm.08 ] ) is maximally symmetric .b2.b ) let .then it follows that conditions hold for or .if these conditions hold for both and , then equation ( [ 1fm.08 ] ) is maximally symmetric .we collect the results in the following theorem .[ theom3]the autonomous linear equation ( [ 1fm.08 ] ) , apart from the symmetry of autonomy , the linear symmetry and the infinity symmetry , can admit : \a ) the two lie symmetries , where is a hv of the one - dimensional flat space with if and only if condition ( [ 1fm.s06 ] ) holds for and .b1 ) the two or four lie symmetries if conditions ( [ 1fm.s11])-([1fm.s13 ] ) hold for or and and , respectively , where and b2.a ) the four lie symmetries ( [ 1fm.s20a ] ) if conditions ( [ 1fm.s14])([1fm.s16 ] ) hold for and , where and is given by ( [ 1fm.s20 ] ) .b2.b ) the two or four lie symmetries ( [ 1fm.s20a ] ) if and only if conditions ( [ 1fm.s17])([1fm.s19 ] ) hold for or and and , respectively , where furthermore , we comment that theorem ( [ theom3 ] ) holds for all linear autonomous equations of the form of ( [ 1fm.04 ] ) . herewe discuss the relation among the lie symmetries and the initial condition ( [ 1fm.01a ] ) . 
in the case of constant parameters ,_ ie _ , in equation ( [ 1fm.01 ] ) the lie symmetry vector ( [ 1fm.03 ] ) is the linear combination among the linear symmetry and the symmetry which is generated by the kv of the underlying space , which is .however , for a general function , , in order for the symmetry which is generated by the kv to satisfy the initial condition or the initial condition has to change .consider now that and satisfies the conditions from theorem [ theom3 ] , b2.a , we have that is given by ( [ 1fm.11 ] ) and at the same time generates two lie point symmetries for equation ( [ 1fm.08 ] ) .the lie point symmetries are the autonomous and trivial symmetries .the symmetry vector field is the kv of the one - dimensional space .therefore , if we wish the field to satisfy an initial condition such as , then it should be which gives .from this we can see that , when , we have the initial condition ( [ 1fm.01a ] ) .let , and be constants .hence from ( [ 1fm.11 ] ) we have that for we have the solution for position let now and consider that the kv generates a lie point symmetry of equation ( 1fm.08 ) from case a of theorem [ theom3 ]. then from condition ( 1fm.s06 ) we have that is , however , in that case , equation ( [ 1fm.08 ] ) is maximally symmetric and admits point symmetries .consider reduction with the lie symmetry which keeps invariant the initial condition the application of in ( [ 1fm.08 ] ) gives as another application of theorem [ theom3 ] we select .then the kv is .let this generate a lie point symmetry for equation ( [ 1fm.08 ] ) from the case a of theorem [ theom3 ] , that is , conditions ( [ 1fm.s06 ] ) give now we can see that equation ( [ 1fm.08 ] ) is maximally symmetric and admits point symmetries . consider the lie symmetry , which leaves invariant the modified initial condition .the invariant solution which follows is we observe that , when generates a lie point symmetry for equation ( [ 1fm.08 ] ) , the functional form of , which includes and has a specific form , such that equation ( [ 1fm.08 ] ) is maximally symmetric and equivalent with the black - scholes and the heat equations . in general , for unknown function , from theorem [ theom1 ]we have the following corollary .[ cor]when the kv of the underlying space which defines the laplace operator in equation ( [ 1fm.08 ] ) generates a lie point symmetry , the functional form of is equation ( [ 1fm.08 ] ) is maximally symmetric .the symmetry vectors , among the autonomous , the homogeneous and the infinity symmetries , are: , , , and , , where and are the elements of the homothetic algebra of the underlying space .we note that corollary [ cor ] holds for all autonomous linear 1 + 1 evolution equations . in the following section we discuss the group invariants of the time - dependent problem .when the parameters of equation ( [ 1fm.01 ] ) depend upon time , the one - factor model can be written as without loss of generality we can select . 
by analysing the determining equations as provided by the sym package dimas05a , dimas06a, dimas08a we find that the general form of the lie symmetry vector is \partial _ { x } \nonumber \label{a3 } \\[1pt ] & & + \left [ f(t)+\frac{1}{4}\left ( 4xbq+x(1 - 2p)a^{\prime 2}qa^{\prime } -4x(b^{\prime } + ap^{\prime } ) \right .\right .\nonumber \\[1pt ] & & \left .+ a^{\prime \prime } ) \right ) \right ] f\partial _ { f},\end{aligned}\]]where functions are given by the system of ordinary differential equations , & & + b^{\prime } -2pb^{\prime } -2f^{\prime } + ap^{\prime } -2app^{\prime } + aq^{\prime } -\frac{a^{\prime \prime } } { 2},\end{aligned}\]] & & + 3a^{\prime } p^{\prime } -aq^{\prime } -2bq^{\prime } + 2apq^{\prime } + 2b^{\prime \prime } + 2ap^{\prime \prime } \end{aligned}\]]and in addition to the infinite number of solution symmetries .consequently the algebra is so that it is related to the classical heat equation by means of a point transformation . in the following we discuss our results .in the models of financial mathematics the parameters of the models are assumed to be constants .however , in real problems these parameters can depend upon the stock prices and upon time . in this work we considered the one - factor model of schwartzand we studied the lie symmetries in the case for which the parameters of the problem are space - dependent . in terms of lie symmetries , the one - factor model it is maximally symmetric and it is equivalent with the heat equation , but in the case where the parameters are space dependent , that is not necessary true , and we show that the model can admit 1 , 3 or 5 lie point symmetries ( except the trivial ones ) . to perform this analysis we studied the lie symmetries of the autonomous linear evolution equation and we found that there exist a unique relation among the lie symmetries and the collineations of the underlying geometry , where as geometry we define the space of the second derivatives .however , for a specific relation among the parameters of the model the system is always maximally symmetric .in particular , that holds when is an arbitrary function and are constants . in that case , the correspoding symmetry ( 1fm.03 ) becomes .consider that , and ( con.01 ) holds .then the application of the lie symmetry in ( 1fm.08 ) gives the solution in the limit , solution ( [ con.02 ] ) becomes can compared with solution ( [ 1fm.02 ] ) .consider now that is periodic around the line .let that and ( [ con.01 ] ) holds .hence the solution of the space - dependent one - factor model ( [ 1fm.08 ] ) which follows from the lie symmetry is is a periodic function of the stock price . for the taylor expansion of the static solution ( [ con.04 ] ) around the point , is in figure [ fig3 ]we give the static evolution of the solutions , ( con.03 ) and ( [ con.04 ] ) , for various values of the constant . ) ( left figures ) and solution ( [ con.04 ] ) ( right .figure ) for various values of the constant .solid line is for , dash dash line is for , and the dash dot line is for .,height=264 ] on the other hand , in section [ proof2 ] we studied the case for which the parameters of the one - factor model are time - dependent and we showed that the model is always maximally symmetric and equivalent with the heat equation , that is , the time - depedence does not change the admitted group invariants of the one - factor model ( [ 1fm.01 ] ) . 
a more general consideration will be to extend this analysis to the two - factor and three - factor models and also to study the cases for which the parameters are dependent upon the stock price and upon the time , _ ie _ , the parameters are space and time dependent .this work is in progress .finally we remark how useful are the methods which are applied in physics and especially in general relativity for the study of space - dependent problems in financial mathematics .the reason for this is that from the second derivatives a ( pseudo)riemannian manifold can be defined .this makes the use of the methods of general relativity and differential geometry essential .the research of ap was supported by fondecyt postdoctoral grant no .rmm thanks the national research foundation of the republic of south africa for the granting of a postdoctoral fellowship with grant number 93183 while this work was being undertaken .in it has been shown that for a second - order pde of the form , lie symmetries are generated by the conformal algebra of the tensor .specifically the lie symmetry conditions for equation ( [ eq.02 ] ) are by comparison of equations ( [ 1fm.04 ] ) and ( [ eq.02 ] ) we have that , _ ie _ , and . therefore the symmetry vector for equation ( [ 1fm.04 ] ) has the form continue with the solution of the symmetry conditions .when we replace in ( [ eq.05 ] ) , it follows that which means that , where is a ckv of the metric , , with conformal factor , _ ie_, and furthermore , from ( eq.04 ) the following system follows ( recall that and ) we observe that , where is a constant ; that is , is a kv / hv of . finally for the function , it holds that case i : let .then from ( [ eq.18 ] ) which means that .however , from ( eq.19 ) we have that which gives the linear symmetry . in that case from the form of the autonomous symmetry arises .sophocleuous c , leach pgl and andriopoulos k ( 2008 ) algebraic properties of evolution partial differential equations modelling prices of commodities _ mathematical methods in the applied sciences _ * 31 * 679 - 694 leach pgl , ohara jg & sinkala w ( 2006 ) symmetry - based solution of a model for a combination of a risky investment and a riskless investment _ journal of mathematical analysis and application _ * 334 * 368 - 381 sinkala w , leach pgl & ohara jg ( 2008 ) optimal system and group - invariant solutions of the cox - ingersoll - ross pricing equation _ mathematical methods in the applied sciences _ * 31 * 679 - 694 ( doi : 10.1002/maa.935 ) ibragimov nh & soh cw ( 1997 ) solution of the cauchy problem for the black - scholes equation using its symmetries , _ proceedings of the international conference on modern group analysis. mars , nordfjordeid , norway_. 
cimpoiasu r & constantinescu r , ( 2012 ) new symmetries and particular solutions for 2d black - scholes model , _ proceedings of the 7th mathematical physics meeting : summer school and conference on modern mathematical physics , belgrade , serbia _ tamizhmani km , krishnakumar k & leach pgl ( 2014 ) algebraic resolution of equations of the black scholes type with arbitrary time - dependent parameters , _ applied mathematics and computations _ * 247 * 115 - 124 paliathanasis a & tsamparlis m ( 2014 ) the geometric origin of lie point symmetries of the schrodinger and the klein - gordon equations , _ international journal of geometric methods in modern physics _ * 11 * 1450037 dimas s & tsoubelis d ( 2005 ) sym : a new symmetry - finding package for mathematica _ group analysis of differential equations _ibragimov nh , sophocleous c & damianou pa edd ( university of cyprus , nicosia ) 64 - 70
we consider the one - factor model of commodities for which the parameters of the model depend upon the stock price or on the time . for that model we study the existence of group - invariant transformations . when the parameters are constant , the one - factor model is maximally symmetric . that also holds for the time - dependent problem . however , in the case for which the parameters depend upon the stock price ( space ) the one - factor model looses the group invariants . for specific functional forms of the parameters the model admits other possible lie algebras . in each case we determine the conditions which the parameters should satisfy in order for the equation to admit lie point symmetries . some applications are given and we show which should be the precise relation amongst the parameters of the model in order for the equation to be maximally symmetric . finally we discuss some modifications of the initial conditions in the case of the space - dependent model . we do that by using geometric techniques . * keywords : * lie point symmetries ; one - factor model ; prices of commodities * msc 2010 : * 22e60 ; 35q91
lifetime distribution represents an attempt to describe , mathematically , the length of the life of a system or a device .lifetime distributions are most frequently used in the fields like medicine , engineering etc .many parametric models such as exponential , gamma , weibull have been frequently used in statistical literature to analyze lifetime data .but there is no clear motivation for the gamma and weibull distributions .they only have more general mathematical closed form than the exponential distribution with one additional parameter .+ recently , one parameter lindley distribution has attracted the researchers for its use in modelling lifetime data , and it has been observed in several papers that this distribution has performed excellently .the lindley distribution was originally proposed by lindley in the context of bayesian statistics , as a counter example of fudicial statistics which can be seen that as a mixture of exp( ) and gamma(2 , ) .more details on the lindley distribution can be found in ghitany et al .+ a random variable x is said to have lindley distribution with parameter if its probability density function is defined as : + with cumulative distribution function some of the advances in the literature of lindley distribution are given by ghitany et al . who has introduced a two - parameter weighted lindley distribution and has pointed that lindley distribution is particularly useful in modelling biological data from mortality studies .mahmoudi et . have proposed generalized poisson lindley distribution .bakouch et al . have come up with extended lindley ( el ) distribution , adamidis and loukas have introduced exponential geometric ( eg ) distribution .shanker et . have introduced a two - parameter lindley distribution .zakerzadeh et al. have proposed a new two parameter lifetime distribution : model and properties .hassan has introduced convolution of lindley distribution .ghitany et al. worked on the estimation of the reliability of a stress - strength system from power lindley distribution .elbatal et al. has proposed a new generalized lindley distribution .+ risti has introduced a new family of distributions with survival function given by in this paper we introduce a new family of distribution generated by a random variable which follows one parameter lindley distribution .the survival function of this new family is given as : where and is a cumulative distribution function(cdf ) which we use to generate a new distribution .the cdf is referred to as a transformer and the corresponding probability density function ( pdf ) is given by we consider the transformer to follow exponential distribution with cdf .hence the survival function of the new distribution is given by with corresponding density given by we refer the random variable with survival function ( 4 ) as lindley - exponential(l - e ) distribution with parameters and which we denote by l - e( ) .the aim of this paper is to study the mathematical properties of the l - e distribution and to illustrate its applicability .the contents are organized as follows .the analytical shapes of the pdfin equations ( 5 ) are established in section 2 . the quantile function presented in section 3 .the expressions for the moment generating function and moments corresponding to equation ( 5 ) are given in section 4 .limiting distribution of sample statistics like maximum and minimum has been shown in section 5 . 
in section 6 ,entropy of l - e distribution is presented .the maximum likelihood estimation procedure is considered in section 7 .the performance of the maximum likelihood estimators for small samples is assessed by simulation in section 8 .section 9 gives estimation of stress - strength parameter r by using maximum likelihood estimation method .finally we conclude the paper by showing applicability of the model to the real data sets .here , the shape of pdf ( 5 ) follows from theorem 1 . + * theorem 1 : * the probability density function of the l - e distribution is decreasing for and unimodel for . in the latter case , mode is a root of the following equation : _ proof : _ the first order derivative of is where , . for ,the function is negative .so for all .this implies that is decreasing for .also note that , and .this implies that for , has a unique mode at such that for and for .so , is unimodal function with mode at .the pdf for various values of and are shown in figure 1 . and .,scaledwidth=90.0% ] we , now , consider the hazard rate function ( hrf ) of the l - e distribution , which is given by + * proposition 1 * : for the hazard rate function follows relation .+ * proof : * the proof is straight forward and is omitted .+ in figure 2 , hazard function for different values of parameters and . and .,scaledwidth=90.0% ]the cdf , , can be obtained by using eq.([4 ] ) .further , it can be noted that is continuous and strictly increasing so the quantile function of is , . in the following theorem, we give an explicit expression for in terms of the lambert function . for more details on lambert functionwe refer the reader to jodr .* theorem 2 : * for any , the quantile function of the l - e distribution is where denotes the negative branch of the lambert w function .+ + _ proof : _ by assuming , the cdf can be written as for fixed and , the quantile function is obtained by solving .by re - arranging the above , we obtain taking exponential and multiplying on both sides , we get + by using definition of lambert - w function ( , where is a complex number ) , we see that is the lambert function of the real argument .thus , we have moreover , for any it is immediate that , and it can also be checked that since . therefore , by taking into account the properties of the negative branch of the lambert w function , we have also by substituting in cdf and solving it for , we get further the first three quantiles we obtained by substituting in equation ( 11 ) . moment generating function of the random variable follow l - e distribution is given as + where , and known as digamma function . +hence the first and second raw moments can be obtained by and respectively .+ where is eulergamma constant = 0.577216 .+ table 1 displays the mode , mean and median for l - e distribution for different choices of parameter and .it can be observed from the table that all the three measures of central tendency decrease with increase in and increase with an increase in .also for any choice of and it is observed that mean median mode , which is an indication of positive skewness .
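The two computational ingredients used above — the mixture structure of the Lindley law and quantiles obtained through the negative branch of the Lambert W function — can be checked numerically. The sketch below does this for the base one-parameter Lindley(theta) distribution for concreteness: the mixture weights and the closed-form quantile are the standard ones for that distribution (the L-E quantile of Theorem 2 is evaluated with lambertw in the same way), and the parameter value is arbitrary.

```python
import numpy as np
from scipy.special import lambertw

def rlindley(theta, size, rng):
    """Sample from the one-parameter Lindley(theta) distribution via its
    representation as a mixture of Exp(theta) (weight theta/(1+theta)) and
    Gamma(2, theta) (weight 1/(1+theta))."""
    use_exp = rng.random(size) < theta / (1.0 + theta)
    out = rng.gamma(shape=2.0, scale=1.0 / theta, size=size)
    out[use_exp] = rng.exponential(scale=1.0 / theta, size=use_exp.sum())
    return out

def qlindley(p, theta):
    """Quantile of Lindley(theta) via the negative branch of the Lambert W
    function, from inverting F(x) = 1 - (1+theta+theta*x)/(1+theta)*exp(-theta*x)."""
    arg = (1.0 + theta) * (p - 1.0) * np.exp(-(1.0 + theta))
    return -1.0 - 1.0 / theta - lambertw(arg, k=-1).real / theta

rng = np.random.default_rng(4)
theta = 1.5
sample = rlindley(theta, size=200_000, rng=rng)
for p in (0.25, 0.5, 0.75):
    # empirical and closed-form quantiles should agree closely
    print(p, np.quantile(sample, p), qlindley(p, theta))
```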
in this paper , we introduce a new distribution generated by lindley random variable which offers a more flexible model for modelling lifetime data . various statistical properties like distribution function , survival function , moments , entropy , and limiting distribution of extreme order statistics are established . inference for a random sample from the proposed distribution is investigated and maximum likelihood estimation method is used for estimating parameters of this distribution . the applicability of the proposed distribution is shown through real data sets . * keyword : * lindley distribution , entropy , stress - strength reliability model , maximum likelihood estimator . + * ams 2001 subject classification : * 60e05
with the growth of internet the relationship between online information and financial markets has become a subject of ever increasing interest .online information offers with respect to its origin and purpose and reflects either interest of some profile of users in the form of query or knowledge about certain topic in the form of news blogs or reports .financial markets are strongly information - driven and these effects can be seen by studying either search query volumes or social media sentiment .many studies have analysed the effects of search query volumes of specific terms with movements in financial markets of related items .bordino et al . show that daily trading volumes of stocks traded in nasdaq 100 are correlated with daily volumes of yahoo queries related to the same stocks , and that query volumes can anticipate peaks of trading by one or more days .dimpfl et al . report that the internet search queries for term `` dow '' obtained from google trends can help predict dow jones realized volatility .vlastakis et al . study information demand and supply using google trends at the company and market level for30 of the largest stocks traded on nyse and nasdaq 100 .chauvet et al . devise an index of investor distress in the housing market , housing distress index ( hdi ) , also based on google search query data .preis et al . demonstrate how google trends data can be used for designing a market strategy or defining a future orientation index . in principle , different effects between information sources and financial markets are expected considering news , blogs or even wikipedia articles .andersen et al . characterize the response of us , german and british stock , bond and foreign exchange markets to real - time u.s . macroeconomic news .zhang and sikena exploit blog and news and build a sentiment model using large - scale natural language processing to perform a study on how a company s media frequency , sentiment polarity and subjectivity anticipate or reflect stock trading volumes and financial returns .chen et al . investigate the role of social media in financial markets , focussing on single - ticker articles published on seeking alpha - a popular social - media platform among investors .mao et al . compare a range of different online sources of information ( twitter feeds , news headlines , and volumes of google search queries ) using sentiment tracking methods and compare their value for financial prediction of market indices such as the djia ( dow jones industrial average ) , trading volumes , and implied market volatility ( vix ) , as well as gold prices .casarin and squazzoni compute bad news index as weighted average of negative sentiment words in headlines of three distinct news sources .the idea of cohesiveness of news as a systemic financial risk indicator is related to recent works studying mimicry and co - movement in financial markets as phenomena reflecting systemic risk in financial systems .harmon et al . show that the last economic crisis and earlier large single - day panics were preceded by extended periods of high levels of market mimicry - direct evidence of uncertainty and nervousness , and of the comparatively weak influence of external news .kennet et al . 
define an index representing the balance between the stock correlations and the partial correlations after subtraction of the index contribution and study the dynamics of s&p 500 over the period of 10 years ( 1999 - 2010 ) .the idea of cohesiveness as a measure of news importance is simple : if many sources report about same events then this should reflect their importance and correlate with the main trends in financial markets .however , in order to capture the trends of systemic importance one must be able to track different topics over majority of relevant online news sources .in other words , one needs : ( i ) an access to the relevant news sources and ( ii ) a comprehensive vocabulary of terms relevant for the domain of interest .we satisfy the second prerequisite for systemic approach through the use of large vocabulary of financial terms corresponding to companies , financial institutions , financial instruments and financial glossary terms . to satisfy the first prerequisite , in our analysis we rely on financial news documents extracted by a novel text - stream processing pipeline newstream ( ` http://newstream.ijs.si/ ` ) , from a large number of web sources .these texts are then filtered and transformed into a form convenient for computing nci for the particular period of time .we show that importance of financial news can be measured in a more systemic way than via sentiment towards individual entities or number of occurrences of individual terms , and that strong cohesiveness in news reflects the trends in financial markets .there is already a strong evidence linking co - movement of financial instruments to systemic risk in financial markets .we hypothesize that cohesiveness of financial news reflects in some part this systemic risk .our news cohesiveness index ( nci ) captures average mutual similarity between the documents and entities in the financial corpus .if we represent documents as sets of entitites then there are two alternative views on similarity : ( i ) two documents are more similar than some other two documents if they share more entities and ( ii ) two entities are more similar than some other two entities if they co - occur in more documents .we construct nci so that the overall similarity in a corpus of documents is equal regardless on the view we choose to adopt .we analyse the nci in the context of different financial indices , their volatility , trading volumes , as well as google search query volumes .we show that nci is highly correlated with volatility of main us and eu stock market indices , in particular their historical volatility and vix ( implied volatility of s&p500 ) .furthermore , we demonstrate that there is a substantial difference between aggregate term occurrence and cohesiveness in their relations toward financial indices .in order to measure the herding effects in financial news we introduce a news cohesiveness index ( nci ) - a systemic indicator that quantifies cohesion in a collection of financial documents . a starting point for the calculation of nci is a _ document - entity matrix _ that quantifies occurrences of entities in each individual document collected over certain period of time .we use concept of entity ( instead of e.g. term ) to represent different lexical appearances of some concept in texts . 
in our casewe use a vocabulary of entities that includes financial glossary terms , financial institutions , companies and financial instruments .the full taxonomy of entities is available in supplementary information section 3 .we start with the definition of occurrence , which says whether some entity is present or not in some document , regardless of how many times it occurs in the document .this makes document - entity matrix a binary matrix : is an matrix , where is the number of documents published in selected time period and is the total number of entities we monitor .document - entity matrix also corresponds to a biadjacency matrix of a bipartite graph between documents and entities .an edge between document and entity exists if the entity appears in the document .the overall similarity in the collection of documents should be equal regardless whether we choose to view it as the similarity between the _ documents _ or between the _entities_. to achieve this we define the similarity as the _ scalar product _ of either document pairs or entity pairs .now we define nci as a frobenius norm of scalar similarity matrix between all pairs of documents or entities : where is either or .frobenius norms of both document - document similarity matrix and entity - entity similarity matrix are equal , therefore the cohesion is conserved whether we measure it as the _ document _ or the _ entity _ similarity : in the network representation these two similarity matrices correspond to two projections of a bipartite graph of the original document - entity matrix , as illustrated in figure [ fig : projection - matrix - network - representation ] .moreover , one can exploit properties of the frobenius norm of scalar similarity matrix and express cohesiveness as a function of singular values of document - entity matrix ( proof in the supplementary information section 2 ) : where are the largest singular values of matrix in a singular value decomposition : because singular values are calculated on the original document - entity matrix and not its document or entity projections , we claim that we capture an _ intrinsic _ property of the corresponding document - entity matrix that is invariant to projection .this can also be inferred from the fact that the eigenvalues of similarity matrices and are equal and they correspond to the singular values of document - entity matrix .this approach can be beneficial for large document - entity matrices as it is much more efficient in terms of time and memory compared to explicit calculation of similarity matrix .we can calculate just first values incrementally , until we reach the desired accuracy of nci ( see supplementary information section 1 ) . in practice ,only a small number of singular values is enough to calculate nci up to the desired precision .as the number of documents is changing each day while the number of entities stays constant , all nci indices in our analyses are normalized with the number documents in the corpus .we have statistically confirmed that the nci is largely above the level of fluctuations of cohesiveness random null model ( see supplementary information section 2 ) .sometimes it is interesting to perform detailed analysis of which groups of entities or documents contribute the most to the overall cohesiveness . 
for this purpose we can divide entities or documents into groups using any appropriate semantic criteria and calculate cohesiveness for each group separately or between pairs of groups .semantic partitions in the entity projection are created via grouping of entities in mutually disjoint groups , defined by their taxonomy labels ( hence semantic interpretation ) . on the other hand ,semantic partitions in the document projection can be created via grouping of entities either by their temporal or source membership .figure [ fig : semantic - partitioning ] illustrates the concept of partitioning in the context of different projections .we can calculate cohesiveness separately for each semantic group or a combination of semantic groups .note that even in this case we do not need to explicitly calculate similarity matrices ( see supplementary information section 1 ) .following the taxonomy of entities described in supplementary information section 3 we defined four semantic groups : companies , regions , financial instrument and euro crisis terms .figure [ fig : semantic_components_occurrences ] shows the most frequent entities in each of the semantic partitions , based on the news corpus collected over the period of analysis . in order to assesnci s utility as a systemic risk indicator , we use correlations analysis and granger causality tests against the pool of financial market and information indicators .the analysis should also provide deeper insight into the interplay between news , trends in financial markets and behaviour of investors .we adopt terminology from , and treat our news based indicators ( nci variants and entity occurence ) as indicators of the information supply in online media , while volumes of google search queries will be treated as indicators of information demand or as a proxy of investor interest .we group indicators as follows : * * inormation supply indicators : * - cohesiveness index based on all the news ( nci ) from newstream , cohesiveness index based only on filtered financial news ( nci - financial ) from newstream , total entity occurrences based on the aggregate from all news documents , and total entity occurrences based on strictly financial documents of newstream . * * information demand indicators : * - these are volumes of google search queries ( gsq ) for 4 finance / economy related categories from google finance ( from google domestic trends - finance&investment , bankruptcy , financial planning , business ) . * * financial market indicators : * - these include daily realized volatilities , historical volatilities and trading volumes of major stock market indices ( s&p 500 , dax , ftse , nikkei 225 , hang seng ) as well as implied volatilities of s&p500 ( vix ) .details on preparation of individual indicators are given in methods section .we start the analysis with a simple comparison of nci calculated on all news and nci calculated on filtered financial news .figure [ fig : nci_vs_indices ] shows dynamics of nci , and nci - financial in comparison to vix ( implied volatility of s&p 500 , the so called `` fear factor '' ) .scatter plots on the right show that correlation of vix and nci - financial is significantly higher than vix and nci .this is a first illustration of the importance of the filtering the right content for the construction of indicators from texts . 
for more details on how filtering affects correlations with other indicessee supplementary information section 3 .figure [ fig : corr_matrix ] shows pearson correlation coefficients between different information indicators and financial market indicators .corresponding p - values are calculated using a permutation test and are available in supplementary information section 5 .all correlations reported in this article have p - value unless explicitly stated .interesingly , the correlations between total entity occurrences , nci and nci - financial are relatively low , confirming that cohesiveness captures very different signal from the entity occurrences .furthermore , correlations between total entity occurrences , nci and financial indices are , on average , much lower than correlations between nci - financial and financial indices .relatively low correlation between nci - financial and nci confirms importance of filtering out strictly financial market - related articles from the newstream , rather than having all the articles that contain some of the entities from the vocabulary .we have performed a more detailed analysis of these effects by studying in parallel behavior of different variants of entity occurrences and nci - financial using different subsets of the vocabulary and the document space , independently .the main insight gained was that entity occurences become more informative when a smaller vocabulary of the most frequent entities is used , but this requires use of the whole document space .nci has proven to be much more robust to the choice of both vocabulary and document space ( details in supplementary information section 6 ) .interestingly , the nci - financial index is highly correlated with implied volatility ( , figure [ fig : corr_matrix ] ) , as well as with historical and daily realized volatilities ( , figure [ fig : corr_matrix ] ) .these correlations are much higher than the correlations of the gsq categories ( , figure [ fig : corr_matrix ] ) .in contrast to nci - financial , gsq categories exhibit relatively stronger correlations with stock trading volumes ( , figure [ fig : corr_matrix ] ) .google bankruptcy and google unemployment are significantly correlated with nci - financial ( correlation above 0.2 , figure [ fig : corr_matrix ] ) , which is most probably due to similarities in vocabulary used in constructing nci - financial and respective gsq indicators .a more in depth picture of the news cohesiveness index is obtained when observing individual semantic components of nci - financial and their correlation patterns with financial and google search query indicators .semantic components based on ` [ region ] ` and ` [ eurocrisis ] ` taxonomy categories all have similar correlation patterns to nci - financial ( with correlation above 0.7 for ` [ eurocrisis ] ` and above 0.5 for ` [ region ] ` , figure [ fig : corr_matrix ] ) ; this also shows that these categories are most important for the behavior of nci - financial . 
on the other hand , semantic components based on ` [ company ] ` and ` [ instrument ] ` exhibit quite different , in many parts , opposite correlation patterns ( with correlations close to 0 or even negative ) .it is interesting to note that both the nci - financial and gsq indicators have strong negative correlation with nikkei 225 volatility and trading volume ( up to -0.4 for nci - financial and up to -0.5 for gsq - unemployment ) .the granger - causality test ( gc test ) is frequently used to determine whether a time series is useful in forecasting another time series .the idea of the gc test is to evaluate if can be better predicted using both the histories of and rather than using only the history of ( i.e. granger - causes ) .the test is performed by regressing on its own time - lagged values and on those of included .an f - test is used in examining if the null hypothesis that is not granger - caused by can be rejected . in table[ fig : table_gc_bidirect ] we show results of pairwise g - causality tests between information supply and demand indicators and financial indicators .cells of the table give both directionality ( , or bidirectional ) and significance at two levels of f - test ( p - values ; ) . besides gc testing nci - financial and its semantic components at higher taxonomy levels ,we show also results obtained for nci ( non - filtered news nci ) and total entity occurences as a baseline . the results in table [ fig : table_gc_bidirect ] paint a much different picture than the correlation study .firstly , granger causality seems to be almost exclusively directed from financial to information world , with single bidirectional exception between ` [ region]x[eurocrisis ] ` semantic component of nci - financial and hang seng daily realized volatility .our financial news indicator nci - financial seems to be g - caused solely by ftse daily volatility .this finding is in contrast with the fact that the nci - financial is strongly correlated with several other indicators like implied volatility ( vix ) ( , table [ fig : corr_matrix ] ) .however , two of the semantic components ` [ eurocrisis]x[eurocrisis ] ` and ` [ region]x[eurocrisis ] ` are strongly g - caused by implied volatility , historical and daily volatilities of most of the major stock market indices .on the other hand , the gsq categories seem to be mostly gc driven by trading volumes , almost exclusively of us and uk financial market ( s&p 500 and ftse ) .gsq indicators seem to be divided in two groups by their g - causality : ( i ) those that are g - caused mainly by trading volumes ( business and industrial , bankruptcy , financial planning and finance and investment ) and total entity occurrences in the news , and ( ii ) those that are strongly g - caused by all other gsq categories ( unemployment ) .interestingly , total entity occurrence in the news , seem to be the strongest g - causality driver of the gsq volumes , while two of the semantic components of nci - financial are g - caused by gsq finance and investment and financial planning .this work introduces a new indicator of financial news importance based on a concept of cohesiveness of texts , from large corpora of news and blogs sources .in contrast to indicators introduced by other authors which are based on sentiment modelling , nci measures cohesiveness in the news by approximating the average similarity between texts .our correlation results confirm the main hypothesis that cohesiveness of the financial news is a signal that is strongly correlated 
with systemic financial market indicators in particular volatilities of major stock exchanges .the analysis of granger causality tests over a pool of financial and information related indicators suggests that nci - financial is mainly related with the volatility of the market . in our analysismost important semantic components of nci - financial are mainly g - caused by implied ( vix ) , historical and daily volatilities .this implies effects from both short term and long term risks in the financial market .the only exception ( bidirectional causality between _ [ region]x[eurocrisis ] _ and hang seng daily volatility ) might be plausibly explained as a time zone effect .this does not seem to be the case for gsq indicators which are mainly driven by trading volumes , with the exception of gsq unemployment , which seems to be driven mostly by search volumes of other gsq categories .similar to findings of some previous studies , in which aggregate sentiment or financial headline occurrence were used as measures of state of the financial market , information supply co - movement as measured by nci - financial , seem to be primarily caused by trends on the financial market rather than the opposite .we find that similar results holds also for the gsq categories which approximate information demand side in our case .g - causality patterns show , similarly to correlation , that cohesiveness captures quite different signal with respect to total entity occurrence ; the results also suggest the presence of somewhat circular interplay between information supply and information demand indicators .for example , total entity occurrence is g - causing three of the gsq categories ( business and industry , bankruptcy and financial planning ) , while financial planning and unemployment are g - causal for semantic components ` [ instrument]x[eurocrisis ] ` and ` [ eurocrisis]x[eurocrisis ] ` , which suggests feedback mechanisms between news and search behaviour . in comparison with the findings of studies which used simpler measures of news importance or sentiment, we find that financial news cohesiveness reflects the level of the volatility in the market and is gc driven both by current level of volatility and implied volatility , while gsq volumes are driven mainly by trading volumes .impact of news cohesiveness and gsq volumes in the reverse direction , as determined by gc tests , is only weakly implied in case of semantic components of nci - financial and hang seng index .this is not in line with previous works that report predictive utility ( mostly for gsq volumes ) with respect to certain financial instruments .however , one has to bear in mind that the results of gc tests reflect average of lagged correlations between indicators over the specific period in time ( in our case oct 2011 - jul 2013 ) .it is possible that direction of causality between information and financial indicators changes in time , but this was hard to detect in our data due to the limited length of time series .another possible reason for different results is that most of previous works were based on a limited number of google search query terms , typicaly more closely related to the particular stock market index of interest . 
in principle , this is different from volumes of gsq term categories in our case , which reflect aggregates over larger number of different query terms .gsq categories closely resemble the concept of semantic components and it is possible that the application of the concept of cohesiveness , if adapted to gsq category volumes , may produce signals more predictive with respect to financial market trends .access to structured information about the financial markets with its various instruments and indicators is available for several decades , but systematic quantification of unstructured information hidden in news from diverse web sources is of relatively recent origin .we base our analyses on a newly created text processing pipeline - newstream , designed and implemented within the scope of eu fp7 projects first ( ` http://project-first.eu/ ` ) and foc ( ` http://www.focproject.eu/ ` ) .newstream continuously downloads articles from more than 200 worldwide news sources , extracts the content and stores complete texts of articles .it is a domain independent data acquisition pipeline , but biased towards finance by the selection of news sources and the taxonomy of entities relevant for finance . for the purpose of filtering , efficientstoring and analytics , expert based financial taxonomy and vocabulary of entities and terms have been created , containing names of relevant financial institutions , companies , finance and economics specific terms , etc .the newstream pipeline has been collecting data since october 2011 . in our analyseswe use text corpora from october 2011 to june 2013 and we have filtered over 1,400,000 financially related texts stored in the form of document - entity matrices .full structure of the taxonomy is in the supplementary information section 3 , and the list of the domains from which most documents were downloaded in the supplementary information section 4 . [[ filtering - of - financial - documents ] ] filtering of financial documents + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + newstream pipeline downloads articles from more than 200 web sites of online news and blogs .moreover , despite the selection of financial news sites , there are many articles which are only indirectly related to finance , such as politics or even sport . to obtain a clean collection of strictly financial texts , we have developed a rule - based model utilizing taxonomy categories as features to describe documents , and a gold standard of human labelled documents ( 3500 documents ) . a machine learned rule - based model is used as a filter for extracting strictly finance related texts from a corpus .this model has a recall of over 50% , with precision of well over 80% .the rule - based model for filtering financial documents can be found in supplementary information section 3 .[ [ financial - indicators ] ] financial indicators + + + + + + + + + + + + + + + + + + + + we analyse nci in comparison to the financial market indicators of worldwide markets and google search query volumes . 
for that purposewe have downloaded stock market indices from yahoo finance web service ( ` http://finance.yahoo.com/ ` ) : high , low , open , close prices and volume of s&p 500 , dax , ftse , nikkei 225 and hang seng index .we also use implied volatility of s&p500 ( vix ) .implied volatily is calculated for the next 30 days by chicago board options exchange ( cboe , ` http://www.cboe.com/ ` ) using current prices of indices options .historical ( realized ) volatilities are calculated from the past prices of the indices themselves .we use daily prices of individual indices to calculate a proxy of daily realized volatility . historical ( realized ) volatilities are calculated as standard deviations of daily log returns in the appropriate time window : where are daily prices , and is time window . in our analyseswe used a window of 21 working days . [[ google - search - query - volumes ] ] google search query volumes + + + + + + + + + + + + + + + + + + + + + + + + + + + almost all previous studies used search query volumes of specific terms .instead , we used google search query volumes of predefined term categories from google finance website .we have chosen five categories from google domestic trends that are related to financial market : business and industrial , bankruptcy , financial planning , finance and investing , unemployment .we downloaded yoy ( year over year ) change values for these categories from google finance web service ( ` https://www.google.com/finance ` ) .we have used functions of the r packages _ tseries _ , _ lmtest _ , _ vars _ , _ urca _ to download and calculate indices , construct joint time series dataset , determine correlations and study granger causality relations .we have followed the methodology of toda and yamamoto for granger causality testing of non - stationary series .details of the procedure are given in supplementary information section 5 .10 url # 1`#1`urlprefix[2]#2 [ 2][]#2 , , , & . in _ _ , wsdm 12 , ( , ) ._ et al . _ . _ _ * * , ( ) . &_ _ ( ) . & ._ _ * * , ( ) . , & ( ) .http://ssrn.com/abstract=2148769 . ,_ * * ( ) . , , & ._ _ * * ( ) ._ et al . _ . __ * * ( ) . , , & ._ _ * * , ( ) . & .( ) . , , & ._ _ ( ) . http://ssrn.com/abstract=1807265 . ,_ _ ( ) ._ _ * * , ( ) . , , , &_ _ * * ( ) . , , & ._ _ * * ( ) . , & ._ _ * * ( ) ._ et al . _( ) . http://ssrn.com/abstract=1829224 .et al . _ . __ * * , ( ) . , & ._ _ ( ) .( ) . http://ssrn.com/abstract=2311964 . , &( ) . http://rady.ucsd.edu/faculty/directory/engelberg/pub/portfolios/fears.pdf .* * , ( ) .this work was supported in part by the european commission under the fp7 projects foc ( forecasting financial crises , measurements , models and predictions , grant no .255987 and foc inco , grant no .297149 ) and by the croatian ministry of science , education and sport project `` machine learning algorithms and applications '' .we would like to thank the following people for helpful discussions : stefano battiston , vinko zlati , guido caldarelli , michelangelo puliga , tomislav lipi and matej miheli .all authors contributed to the writing and editing of the manuscript .mp , naf and ts carried out the modelling and the analyses .pkn , i m and mg were involved in gathering and processing of the data .iv and ts were involved in interpretation of the results .* supplementary information * accompanies this paper . 
+* competing financial interests : * the authors declare no competing financial interests .+ * dataset availability : * all data and codes that we used in our analysis is freely available on ` http://lis.irb.hr/foc/data/data.html ` .correspondence to tomislav muc , + email : ` tomislav.smuc.hr `
motivated by recent financial crises significant research efforts have been put into studying contagion effects and herding behaviour in financial markets . much less has been said about influence of financial news on financial markets . we propose a novel measure of collective behaviour in financial news on the web , news cohesiveness index ( nci ) , and show that it can be used as a systemic risk indicator . we evaluate the nci on financial documents from large web news sources on a daily basis from october 2011 to july 2013 and analyse the interplay between financial markets and financially related news . we hypothesized that strong cohesion in financial news reflects movements in the financial markets . cohesiveness is more general and robust measure of systemic risk expressed in news , than measures based on simple occurrences of specific terms . our results indicate that cohesiveness in the financial news is highly correlated with and driven by volatility on the financial markets . = 1
space time explorer & quantum equivalence space test ( ste - quest ) is a medium - sized mission candidate for launch in 2022/2024 in the cosmic vision programme of the european space agency .after recommendation by the space science advisory committee , it was selected to be first studied by esa , followed by two parallel industrial assessment studies .this paper gives a brief summary of the assessment activities by astrium which build on and extend the preceding esa study as described in .ste - quest aims to study the cornerstones of einstein s equivalence principle ( eep ) , pushing the limits of measurement accuracy by several orders of magnitude compared to what is currently achievable in ground based experiments . on the one hand, experiments are performed to measure the gravitational red - shift experienced by highly accurate clocks in the gravitational fields of earth or sun ( space time explorer ) . on the other hand , differential accelerations of microscopic quantum particlesare measured to test the universality of free fall , also referred to as weak equivalence principle ( quantum equivalence space test ) .these measurements aim at finding possible deviations from predictions of general relativity ( gr ) , as postulated by many theories trying to combine gr with quantum theory .examples include deviations predicted by string theory , loop quantum gravity , standard model extension , anomalous spin - coupling , and space - time - fluctuations , among others .the ste - quest mission goal is summarized by the four primary science objectives which are listed in tab.[tab : mission_objectives ] together with the 4 measurement types geared at achieving them .lll primary mission objective & measurement accuracy & measurement strategy + measurement of & to a fractional frequency & ( a ) space - clock comparison + earth gravitational red - shift & uncertainty better than & to ground clock at apogee + & & ( b ) space - clock comparison + & & between apogee and perigee + measurement of & to a fractional frequency & ( c ) comparison between + sun gravitational red - shift & uncertainty better than & two ground clocks via + & , goal : & spacecraft + measurement of & to a fractional frequency & ( c ) comparison between + moon gravitational red - shift & uncertainty better than & two ground clocks via + & , goal : & spacecraft + measurement of & to an uncertainty in the & ( d ) atom interferometer + weak equivalence principle & etvs param .smaller & measurements at perigee + & & +the ste - quest mission is complex and comprises a space - segment as well as a ground segment , with both contributing to the science performance .highly stable bi - directional microwave ( x / ka - band ) and optical links ( based on laser communication terminals ) connect the two segments and allow precise time - and - frequency transfer .the space - segment encompasses the satellite , the two instruments , the science link equipment , and the precise orbit determination equipment .the ground - segment is composed of 3 ground terminals that are connected to highly accurate ground - clocks . in order to fulfil the mission objectives ,various measurement types are used that are shown in fig.[fig : measurement_principle ] .we shall briefly discuss them ._ earth gravitational red - shift measurements : _ the frequency output of the on - board clock is compared to that of the ground clocks . in order to maximize the signal , i.e. 
the relativistic frequency offset between the two clocks , a highly elliptical orbit ( heo ) is chosen .when the spacecraft is close to earth during perigee passage , there are large frequency shifts of the space - clock due to the strong gravitational field . when it is far from earth during apogee passage , there are only small gravitational fields and therefore small frequency shifts . whilst measurement type ( a ) compares the space - clock at apogee to the ground - clock , relying on space - clock accuracy , measurement type ( b ) compares the frequency variation of the space - clock over the elliptical orbit , which requires clock stability in addition to visibility of a ground terminal at perigee ( see also section [ sec : orbit_ground ] ) . , scaledwidth=100.0% ]+ _ sun gravitational red - shift measurements : _ the relativistic frequency offset between a pair of ground clocks is compared while the spacecraft merely serves as a link node between the ground clocks for a period of typically 5 to 7 hours , as described by measurement type ( c ) . in that case , the accuracy of the space - clock is of minor importance whereas link stability and performance over a long period are essential . asthis requirement is also hard to fulfil by the microwave links alone , the optical links play an important role .however , availability and performance of optical links are strongly affected by weather conditions which in turn depend on the location and altitude of the ground terminals .+ _ moon gravitational red - shift measurements : _ as in the preceding case , the relativistic frequency offset between a pair of ground clocks is compared while the spacecraft merely serves as a link node between the ground clocks .the potential signals for a violation of the eep with the moon as the source mass can be easily distinguished from those with the sun as the source mass due to the difference in frequency and phase .it is important to point out that unless the eep is violated the measured frequency difference between two distant ground clocks ( due to the sun or the moon ) is expected to be zero up to small tidal corrections and a constant offset term from the earth field .+ _ atom interferometer measurements : _ these measurements do not require contact to ground terminals but must be performed in close proximity to earth where the gravity gradients are large and the etvs parameter ( see eq.[equ : otvos ] below ) becomes small .the latter defines the magnitude of a possible violation of the weak equivalence principle ( wep ) , as required for mission objective 4 . for this reason , the atom interferometer operates during perigee passage at altitudes below 3000 km , where gravity accelerations exceed .the ste - quest payload primarily consists of two instruments , the atomic clock and the atom interferometer , as well as equipment for the science links ( microwave and optical ) and for precise orbit determination ( pod ) . 
whilst a detailed description of the proposed payload elementsis found in , here we shall only give a brief overview over the payload as summarized in table [ tab : payload ] .note that some payload elements , such as the optical links and the onboard clock , may become optional in a future implementation of ste - quest , which is discussed in the official esa assessment study report ( yellow book) .lll payload element , & subsystem , & components + instrument & unit & + & pharao ng & caesium tube , laser source + * atomic clock * & & + & microwave - optical local & laser head , frequency comb , + & oscillator ( molo ) & highly - stable cavity + & & + & mw synthesis & frequency & ultra - stable oscillator , frequency gen . ,+ & distribution ( msd ) & comparison and distribution ( fgcd ) + & & + & instr .control unit ( icu ) & control electronics , harness + & physics package & vacuum chamber , atom source , + * atom interf .* & & magnetic coils and chip , laser + & & interfaces , mu - metal shields + & & + & laser subsystem & cooling , repumping , raman , + & & detection , optical trapping lasers + & & + & electronics & ion pump control , dmu , magnetic coil + & & drive , laser control electronics + & microwave links & mw electronics , x-/ka - band antennas + * equipment * & & + & optical links & 2 laser communication terminals + & & ( lcts ) , synchronization units + & gnss equipment & gnss receiver and antennas + * equipment * & & + & laser ranging equipment & corner cube reflectors ( optional ) + + _ the atomic clock_ is essentially a rebuilt of the high - precision pharao clock used in the aces mission , with the additional provision of an optically derived microwave signal ( mw ) for superb short term stability which is generated by the molo . in pharao, a cloud of cold caesium atoms is prepared in a specific hyperfine state and launched across a vacuum tube .there , the cloud is interrogated by microwave radiation in two spatially separated ramsey zones before state transition probabilities are obtained from fluorescence detection . repeating the cyclewhile scanning the mw fields yields a ramsey fringe pattern whose period scales inversely proportional to the flight time t between ramsey interactions . in the molo , a reference laseris locked to an ultra - stable cavity , which provides superb short - term stability in addition to the long - term stability and accuracy obtained from pharao by locking the reference signal to the hyperfine transition of the caesium atoms .the reference laser output is frequency - shifted to correct for long term drifts and then fed to the femtosecond frequency comb to convert the signal from the optical to the microwave - domain .core components of the molo , such as the highly stable cavity , the frequency comb , and optical fibres are currently being developed or tested in dedicated national programs .additionally , other components of optical clocks have been investigated in the ongoing space - optical - clock program ( soc) of the european space agency .+ _ the atom interferometer_ uses interference between matter waves to generate an interference pattern that depends on the external forces acting on the atoms under test . 
in order to reject unwanted external noise sources as much as possible ,differential measurements are performed simultaneously with two isotopes of rubidium atoms , namely and .the measured phases for both isotopes are subtracted to reveal any possible differences in acceleration under free fall conditions of the particles .this way , the common mode noise which is present on both signals is rejected to a high degree . in the proposed instrument a number of the order of atoms of both isotopes are trapped and cooled by means of sophisticated magnetic traps in combination with laser cooling .thus , the residual temperature and consequently the residual motion and expansion of the atomic clouds is reduced to the nk regime , where the cloud of individual atoms makes the transition to the bose - einstein condensed state .the atoms are then released from the trap into free fall before being prepared in superpositions of internal states and split into partial waves by pulses of lasers in a well defined time sequence . by using a sequence of 3 laser pulses separated by a period of , a mach - zehnder interferometer in space - timeis formed .the distribution of the internal atomic states is then detected either by fluorescence spectroscopy or by absorption imaging .the distribution ratio of the internal atomic states provides the information about the phase shift which is proportional to the acceleration acting on the atoms .the etvs parameter ( see mission objective 4 ) depends on the ratio of inertial mass to gravitational mass for the two isotopes and can be found from the measured accelerations as follows : where subscript 1 refers to physical parameters of the first isotope ( ) and subscript 2 to those of the second isotope ( ) .+ the programmatics of european space atom interferometers , as proposed for ste - quest strongly builds on the different national and european - wide activities in this direction , in particular the space atom interferometer ( sai) and the nationally funded projects quantus and ice . 
in these activities innovative technologies are developed and tested in parabolic flights , drop tower campaigns and sounding rocket missions .+ _ the science links _ comprise optical and microwave time & frequency links to compare the on - board clock with distributed ground clocks .similarly , the links allow to compare distant ground clocks to one another using the on - board segment as a relay .the microwave link ( mwl ) is designed based on re - use and further development of existing mwl technology for the aces mission , while implementing lessons learned from aces .this applies to the flight segment and the ground terminals , as well as for science data post - processing issues down to the principles and concept for deploying and operating a network of ground terminals and associated ground clocks .digital tracking loops with digital phase - time readout are used to avoid beat - note concepts .different to aces and given the highly eccentric orbit , ka - band is used instead of ku - band , x - band instead of s - band for ionosphere mitigation , and the modulation rate is changed from 100 mchip / s to 250 mchip / s .the mwl flight segment comprises four receive channels to simultaneously support comparisons with 3 ground terminals and on - board calibration .the optical link baseline design relies on two tesat laser communication terminals ( lcts) on - board the satellite , both referenced to the space - clock molo signal , and the ground lcts to perform two time bi - directional comparisons .the lcts operate at 1064 nm with a 5 w fiber laser , similar to the edrs lct , and a 250 mhz rf modulation .a telescope aperture of 150 mm in space and 300 mm on ground was chosen .a preliminary performance estimate has been made by extrapolation from existing measurements with more detailed investigations following now in dedicated link studies .the optical links are found to be particularly sensitive to atmospheric disturbances and local weather conditions .these particularly affect common view comparisons between two ground terminals which are typically located in less favorable positions concerning local cloud coverage . + _ the precise - orbit - determination ( pod ) equipment _ is required to determine the spacecraft position and velocity , which allows calculating theoretical predictions of the expected gravitational red - shifts and the relativistic doppler , and compare them to the measured ones .our studies found that a gnss - receiver with a nadir - pointing antenna suffices to achieve the required accuracy .additional corner - cube reflectors to support laser ranging measurements , such as also used for the giove - a satellite , may optionally be included as a backup and for validation of gnss - derived results ._ the baseline orbit _ is highly elliptical with a typical perigee altitude of 700 km and an apogee altitude of 50000 km , which maximizes gravity gradients between apogee and perigee in order to achieve mission objective 1 ( earth gravitational red - shift ) .the corresponding orbit period is 16h , leading to a 3:2 resonance of the ground track as depicted in fig.[fig : orbit ] left .some useful orbit parameters are summarized in tab .[ tab : baseline_orbit ] .llllll perigee alt . & apogee alt . 
& orbit period & inclination & arg .perigee & raan drift + 700 - 2200 km & 50 000 km&16 h & & & + contrary to the old baseline orbit , where the argument of perigee was drifting from the north to the south during the mission , the argument of perigee for the new baseline orbit is frozen in the south . therefore , contact to the ground terminals located in the northern hemisphere can only be established for altitudes higher than 6000 km , which considerably impairs the modulation measurements of the earth gravitational red - shift ( measurement type b in table[tab : mission_objectives ] ) . on the pro side , for higher altitudes and in particular around apogee , common view links to several terminals and overmany hours can be achieved which greatly benefits the sun gravitational red - shift measurements ( type c ) .the baseline orbit also features a slight drift of the right - ascension of the ascending node ( raan ) due to the term of the non - spherical earth gravitational potential , which leads to a total drift of approximately 30 per year .this drift being quite small , the orbit can be considered as nearly - inertial during one year so that seasonal variations of the incident solar flux are expected ( see also fig.[fig : solar_flux ] ) .the major advantage of the new baseline orbit over the old one is a greatly reduced for de - orbiting , which allows reducing the corresponding propellant mass from 190 kg to 60 kg .the reason for that is the evolution of the perigee altitude due to the 3-body interaction with the sun and the moon .the perigee altitude starts low at around 700 km , goes through a maximum of around 2200 km at half time , and then drops back down below 700 km in the final phase , as depicted in fig.[fig : orbit ] right .the low altitude at the end of mission facilitates de - orbiting using the 6 x 22-n oct thrusters during an apogee maneuver that is fully compliant with safety regulations . + _ the ground segment _ comprises 3 ground terminals , each equipped with microwave antennas for x- and ka - band communication as well as ( optional ) optical terminals based on the laser communication terminals ( lct ) of tesat .the terminals must be distributed over the earth surface with a required minimum separation of five thousand kilometers between them .this maximizes their relative potential difference in the gravitational field of the sun in pursuit of mission objective 2 .additionally , the terminals must be in the vicinity of high - performance ground clocks ( distance km ) to which they may be linked via a calibrated glass - fibre network .this narrows the choice of terminal sites to the locations of leading institutes in the field of time - and frequency metrology .factoring in future improvements in clock technology and design , those institutes are likely able to provide clocks with the required performance of a fractional frequency uncertainty on the order of ) at the start of the mission , considering that current clock performance comes already quite close to the requirement .accounting for all the aspects mentioned above , boulder ( usa ) , torino ( italy ) , and tokyo ( japan ) were chosen as the baseline terminal locations .the spacecraft has an octagonal shape with a diameter of 2.5 m and a height of 2.7 .it consists of a payload module ( plm ) , accommodating the instruments as well as science link equipment , and a service module ( svm ) for the spacecraft equipment and propellant tanks ( see fig .[ fig : spacecraft_overview ] ) . 
the laser communication terminals ( lcts )are accommodated towards the nadir end of opposite skew panels .the nadir panel ( + z - face ) accommodates a gnss antenna and a pair of x - band / ka - band antennas for the science links . the telemetry & tele - commanding ( tt&c ) x - band low gain and medium gain antennasare accommodated on y - panels of the service module .the two solar array wings are canted at with respect to the spacecraft y - faces and allow rotations around the y - axis .radiators are primarily placed on the + x/-x faces of the spacecraft and good thermal conductivity between dissipating units and radiators is provided through the use of heat - pipes which are embedded in the instrument base - plates .a set of 2 x 4 reaction control thrusters ( rcts ) for orbit maintenance and slew maneuvers after the perigee measurement phase is used as part of the chemical propulsion system ( cps ) together with 1 x 6 orbit control thrusters ( octs ) for de - orbiting .the latter is becoming a stringent requirement for future mission designs to avoid uncontrolled growth of space debris and reduce the risk of potential collisions between debris and intact spacecraft .the micro - propulsion system ( mps ) is based on gaia heritage and primarily used for attitude control , employing a set of 2x8 micro - proportional thrusters to this purpose . as primary sensors of the attitude and orbit control subsystem ( aocs ) ,the spacecraft accommodates 3 star trackers , 1 inertial measurement unit ( imu ) , and 6 coarse sun sensors . the payload module of fig .[ fig : payload_module ] shows how the two instruments are accommodated in the well protected central region of the spacecraft , which places them close to the center - of - mass ( com ) and therefore minimizes rotational accelerations whilst also providing optimal shielding against the massive doses of radiation accumulated over the five years of mission duration .the pharao tube of the atomic clock is aligned with the y - axis .this arrangement minimizes the coriolis force which acts on the cold atomic cloud propagating along the vacuum tube axis when the spacecraft performs rotations around the y - axis .these rotations allow preserving a nadir - pointing attitude of the spacecraft as required for most of the orbit ( see also section [ sec : aocs ] ) .the sensitive axis of the atom interferometer must point in the direction of the external gravity gradient .it is therefore aligned with the z - axis ( nadir ) and suspended from the spacecraft middle plate into the structural cylinder of the service module which provides additional shielding against radiation and temperature variations .the total wet mass of the spacecraft with the launch adapter is approximately 2300 kg , including unit maturity margins and a 20% system margin .the total propellant mass ( cps+mps ) is 260 kg , out of which 60 kg are reserved for de - orbiting .the total power dissipation for the spacecraft is 2230 w , including all required margins , if all instruments and payload units operate at full power .the spacecraft is designed to be compatible with all requirements ( including launch mass , envelope , loads , and structural frequencies ) for launch with a soyuz - fregat from the main esa spaceport in kourou .the two instruments pose stringent requirements on attitude stability and non - gravitational accelerations which drive the design of the attitude and orbit control system ( aocs ) and the related spacecraft pointing strategy . 
to ensure full measurement performance, the atom interferometer requires non- gravitational center - of - mass accelerations to be below a level of . among the external perturbations acting on the spacecraft and causing such accelerations , air drag is by far the dominant one .it scales proportional to the atmospheric density and therefore decreases with increasing altitudes .figure [ fig : drag_and_rotation ] ( left ) displays the level of drag accelerations for various altitudes and under worst case assumptions , i.e. maximal level of solar activity and maximal cross section offered by the spacecraft along its trajectory .the results indicate that below altitudes of approximately 700 km drag accelerations become intolerably large .we conclude that , for the baseline orbit , drag forces are generally below the required maximum level and therefore need not be compensated with the on - board micro - propulsion system , although such an option is feasible in principle . another requirement is given by the maximal rotation rate of during perigee passage when sensitive atom interferometry experiments are performed .figure [ fig : drag_and_rotation ] ( right ) plots the spacecraft rotation rate ( black solid line ) against the time for one entire orbit , assuming a nadir - pointing strategy .the plot demonstrates that the requirement ( red broken line ) would not only be violated by the fast rotation rates encountered at perigee , even for the comparatively slow dynamics at apogee the rotation rates would exceed the requirement by more than one order of magnitude .we therefore conclude that atom interferometer measurements are incompatible with a nadir - pointing strategy throughout the orbit , requiring the spacecraft to remain fully inertial during such experiments . considering that atom interferometry is generally performed at altitudes below 3000 km, the following spacecraft pointing strategy was introduced for ste - quest ( see figure [ fig : pointing_strategy ] ) : * * perigee passage : * for altitudes below 3000 km , lasting about each orbit , the atom interferometer is operating and the spacecraft remains inertial .the atom interferometer axis is aligned with nadir at perigee and rotated away from nadir by 64 degrees at 3000 km altitude ( see figure [ fig : pointing_strategy ] ) .the solar array driving mechanism is frozen to avoid additional micro - vibrations . * * transition into apogee : * for altitudes between 3000 km and 7000 km , right after the perigee passage , the spacecraft rotates such that it is nadir pointed again ( slew maneuver ) .the solar arrays are unfrozen and rotated into optimal view of the sun . ** apogee passage : * for altitudes above 7000 km the spacecraft is nadir pointed and rotational accelerations are generally much slower than during perigee passage . the solar array is allowed to rotate continuously for optimal alignment with the sun .atom interferometer measurements are generally not performed . * * transition into perigee : * for altitudes between 7000 km and 3000 km , right after the apogee passage , the spacecraft rotates such that it is aligned with nadir at perigee .the solar arrays are frozen . during the science operations at perigee and apogee, the attitude control relies solely on the mps .the slew maneuvers , i.e. pitch rotations , are performed by means of the cps in at a rate of .the cps is also used to damp out residual rotational rates which are caused by solar array bending modes and fuel sloshing due to the fast maneuvers . 
when the perigee passage is entered , the mps is activated to further attenuate the residual rate errors coming from the minimum impulse bit and sensor bias ( ca . ) , which is shown in fig.[fig : rate_errors]b .closed - loop simulations , based on a dynamical computer model of the spacecraft in - orbit and simulated aocs equipment , demonstrated that the damping requires about 500 s. the complete transition , including maneuvers , lasts about . in order to mitigate the effects of external disturbances during the perigee passage, an aocs closed - loop control has been designed that is based on cold - gas thrusters and includes roll - off filtering to suppress sensor noise while maintaining sufficient stability margins .this allows compensating external torques from drag forces , solar radiation pressure and the earth magnetic field ( among others ) , yielding rotation rates well below the required after the initial damping time of approximately s , as can be seen in in fig.[fig : rate_errors]b .the associated mean relative pointing error ( rpe ) is found to be smaller than in a 15 s time window corresponding to one experimental cycle .note that the spacecraft rotation rates grow to intolerably large levels of several hundred , if the spacecraft is freely floating without control loops to compensate the environmental torques ( see fig.[fig : rate_errors]a ) .for orbit maintenance and de - orbiting , use of the cps is foreseen .is outside the plot scale.,scaledwidth=100.0% ] the problems associated with micro - vibrations lie at the heart of two key trades for our current spacecraft configuration .one major cause of micro - vibrations are reaction wheels . to investigate the option of using them instead of the mps for attitude control , extensive simulations were performed with a finite element model of the spacecraft which provided the transfer functions from the reaction wheels to the atom interferometer interface .based on this model and available measurement data of low - noise reaction wheels ( bradford w18e ) , the level of micro - vibrations seen at instrument interface could be compared against the requirements .the results of the analyses are summarized in fig .[ fig : micro_vibrations ] left which clearly shows a violation of the requirement for frequencies above 20 hz within the measurement band [ 1 mhz , 100 hz ] .the major contribution to the micro - vibrations comes from the h1 harmonic when the disturbance frequency corresponds to the wheel rotation rate up to the maximum wheel rate of 67 hz .further micro - vibrations are induced by the amplification of solar array flexible modes through the wheel rotation .the violation of the requirement for the maximum level of tolerable micro - vibrations when using reaction - wheels , as found from our simulations , lead us to use a micro - propulsion system for attitude control in our baseline configuration . } ] , where and are the earth radius and the radial distance to apogee , respectively .it follows that the space - clock and the science links must have a fractional frequency uncertainty of better than to allow measuring the gravitational red - shift of the earth to the specified accuracy . 
from similar deliberationsone finds that the ground clock and science link performance must be even better , on the order of , to measure the gravitational red - shift of the sun to the specified accuracy .this requires very long contact times between the spacecraft and ground terminals , on the order of , which is only achieved after integration over many orbits .+ the above requirements can be directly translated into the required stability for the link , expressed in modified allan deviation , and into the frequency uncertainty for a space - to - ground or ground - to - ground comparison .a detailed performance error budget was established to assess all contributions to potential instabilities .for the microwave links we found that , provided the challenging requirements for high thermal stability are met , the aces design with the suggested improvements can meet the specifications .the assessment of the optical links showed that a significantly better performance than the state - of - the - art is required .a major issue for further investigation is the validity of extrapolating noise power spectral densities over long integration times in presence of atmospheric turbulence , which is required for comparing the estimated link performance to the specified link performance expressed in modified allan deviation . + _ orbit uncertainty : _ as the fractional frequency uncertainty is specified to be around for the space - clock , a target value of for the orbit uncertainty does not compromise the overall measurement accuracy .although preliminarily a constant target value of 2 m in position and 0.2 mm / s in velocity accuracy has been derived in , these values can be significantly relaxed at apogee .a final analysis will have to build on a relativistic framework for clock comparison as outlined in .+ _ gnss tracking : _ the proposed method of precise orbit determination ( pod ) relies on tracking gps and galileo satellites with a gnss receiver supporting all available modulation schemes , e.g. bpsk5 , bpsk10 and mboc , and an antenna accommodated on the nadir panel . considering that gnss satellites broadcast their signal towards the earth and not into space , they become increasingly difficult to track at altitudes above their constellation ( 20200 km altitude ) , when the spacecraft can only receive signals from gnss satellites behind the earth . in order to increase the angular range of signal reception ,the receiver is required to support signal side - lobe tracking .this way , a good signal visibility for six hours during each orbit and for altitudes up to 34400 km can be achieved , with a corresponding pvt ( position - velocity - time ) error of 1.7 m at 1 hz output rate .the transition maneuvers split the visibility region ( green arch in fig .[ fig : gnss_visibility ] ) into separate segments , which must be considered separately in the orbit determination . for both regionswe find orbit errors less than 10 cm in position and less than 0.2 mm / s in velocity .extrapolation of the spacecraft trajectory from the end of the visibility arch as far as apogee introduces large orbit errors ( ) due to large uncertainties in parameters describing the solar radiation pressure .nonetheless , these errors are sufficiently small to achieve the specified frequency uncertainty for the space - clock measurement if one considers the arguments for relativistic clock comparison derived in . a brief discussion of the possibility to relax the ste - quest orbit accuracy close to apogee is also given in . 
finally , it shall be pointed out , that the accuracies obtained from gnss - based pod are sufficient to probe the flyby anomaly , an inexplicable momentum transfer which has been observed for several spacecraft careening around the earth .these investigations can be performed without further modifications to the baseline hardware or mission concept , as described in ref .the major focus of the assessment study was to define preliminary designs of the spacecraft subsystems in compliance with all mission requirements .to this end , key aspects such as pointing strategy of spacecraft and solar arrays , variability of solar flux and temperature stability , measurement performance and pointing stability , instrument accommodation and radiative shielding , and launcher specifications , among others , must be reflected in the custom - tailored design features .additional provisions for ease of access during integration and support of ground - based instrument testing on spacecraft level are also important considerations .as we intend to give a concise overview in this paper , we will limit our discussion on a few distinctive design features .+ _ thermal control subsystem ( tcs ) : _ the ste - quest payload dissipates a large amount of power which may total up to more than 1.7 kw ( including maturity and system margins ) if all payload equipment is active .this represents a major challenge for the thermal system which can only be met through dedicated heat - pipes transporting the heat from the protected accommodation region in the spacecraft center to the radiator panels .also problematic is the fact that the baseline orbit is not sun - synchronous and features a drift of the right - ascension of the ascending node ( raan ) , which leads to seasonally strongly variable thermal fluxes incident on the spacecraft from all sides .this is mitigated by optimal placement of the radiators in combination with a seasonal yaw - rotation by to minimize the variability of the external flux on the main radiators .the variation of the thermal flux during the 6 years of mission is plotted in fig.[fig : solar_flux ] ( left ) .the time when the largest temperature fluctuations occur during one orbit , defined as the maximum temperature swing , is close to the start of the mission and coincides with the time of the longest eclipse period .the additional use of heaters in proximity to the various payload units allows us to achieve temperature fluctuations of the payload units of less than , whilst average operating points consistently lie within the required range from to . during periods when only a minimum of solar flux is absorbed by the spacecraft , e.g. cold case of fig.[fig : solar_flux ], the heaters of the thermal control system may additionally contribute up to 150 w to the power dissipation .these results were found on the basis of a detailed thermal model of the spacecraft implemented with esatan - tms , a standard european thermal analysis tool for space systems .+ _ electrical subsystem & communications : _ similar to the tcs , the design of the electrical subsystem is driven by the highly variable orbit featuring a large number of eclipses ( more than 1800 during the mission with a maximum period of 66 minutes ) , the high power demand of the instruments , and the satellite pointing strategy in addition to the required stability of the spacecraft power bus . 
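to give a feeling for the sizing driver just mentioned, the following back-of-the-envelope sketch estimates the energy drawn from the battery during the longest (66-minute) eclipse; the platform power, discharge efficiency and battery capacity are assumptions introduced only for illustration, not STE-QUEST design values.

```python
# Worst-case eclipse energy sketch; battery capacity, platform power and discharge
# efficiency are assumptions introduced for illustration, not STE-QUEST design values.
ECLIPSE_MAX_H = 66.0 / 60.0     # longest eclipse quoted in the text [h]
P_PAYLOAD_W = 1700.0            # payload dissipation incl. margins, from the text [W]
P_PLATFORM_W = 500.0            # assumed platform (bus) consumption [W]
ETA_DISCHARGE = 0.95            # assumed battery discharge/regulation efficiency
BATTERY_WH = 6000.0             # assumed battery capacity [Wh]

energy_wh = (P_PAYLOAD_W + P_PLATFORM_W) * ECLIPSE_MAX_H / ETA_DISCHARGE
print(f"energy drawn during the longest eclipse: {energy_wh:.0f} Wh")
print(f"depth of discharge of the assumed battery: {energy_wh / BATTERY_WH:.1%}")
print("eclipse cycles over the mission (from the text): > 1800")
```

the resulting depth of discharge, repeated over more than 1800 eclipse cycles, is the quantity that ultimately drives the battery sizing.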
in order to meet the power requirements , two deployable solar arrays at a cant angle of 45 degrees are rotated around the spacecraft y - axis .this configuration , in combination with two yaw - flips per year , ensures a minimum solar flux of approximately on the solar panels ( see fig.[fig : solar_flux ] right ) , which generates a total power of 2.4 kw , including margins .the option of using a 2-axis sadm has also been investigated in detail and could be easily implemented .however , it is currently considered too risky as such mechanisms have not yet been developed and qualified in europe and they might cause problems in relation to micro - vibrations . as far as telemetry and tele - commanding ( tt&c ) is concerned , contact times to a northern ground station ( cebreros ) are close to 15 h per day on average for the baseline orbit , which is easily compliant with the required minimum of 2 h per day .this provides plenty of margin to download an estimated 4.4 gbit of data using a switched medium gain antenna ( mga ) and low gain antenna ( lga ) x - band architecture which is based on heritage from the lisa pathfinder mission .the switched tt&c architecture allows achieving the required data volume at apogee on the one hand , whilst avoiding excessive power flux at perigee to remain compliant with itu regulations on the other hand .+ _ mechanical subsystem , assembly integration and test ( ait ) and radiation aspects : _ the spacecraft structure is made almost entirely from panels with an aluminum honeycomb structure ( thickness 40 mm ) and carbon - fibre reinforced polymer ( cfrp ) face sheets , which is favorable from a mechanical and mass - savings perspective .a detailed analysis based on a finite - element - model ( fem ) of the spacecraft structure revealed a minimum eigenfrequency of 24 hz in transverse and 56 hz in axial direction , well above the launcher requirements of 15 hz and 35 hz , respectively .the first eigenmode of the atom interferometer physics package , corresponding to a pendulum mode in radial direction , was found at 67 hz .the spacecraft structure is designed to optimize ait and aspects of parallel integration . to this end, the two instruments can be integrated and tested separately before being joined on payload module level without the need to subsequently remove any unit or harness connection .the heat - pipe routing through the spacecraft structure allows functional testing of both instruments on ground by operating the heat - pipes either in nominal mode or re - flux mode ( opposite to the direction of gravity ) . as an important feature in the integration process, payload and service module components are completely separated in their respective modules up to the last integration step , when they are finally joined and their respective harness connected on easily accessible interface brackets .the second major aspect in structural design and instrument accommodation has been to minimize radiation doses of sensitive instrument components .as the spacecraft crosses the van allen belt twice per orbit and more than 5000 times during the mission , it sustains large radiation doses from trapped electrons and solar protons .dedicated design provisions and protected harness routing ensure that the total ionizing doses ( tids ) are below 30 krad for all instrument units and below 5 krad for the sensitive optical fibres . 
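the telemetry figures quoted above can be turned into a quick consistency check of the downlink budget. the sketch below derives the net data rate needed to return the estimated daily volume within the minimum and the average contact times; the packetisation/coding overhead factor is an assumption for illustration only.

```python
# Quick TT&C downlink consistency check; the coding/packetisation overhead factor is
# an assumption for illustration only.
DATA_VOLUME_BIT = 4.4e9        # estimated daily data volume from the text [bit]
MIN_CONTACT_S = 2 * 3600.0     # required minimum daily contact time [s]
AVG_CONTACT_S = 15 * 3600.0    # average daily contact time for the baseline orbit [s]
OVERHEAD = 1.3                 # assumed overhead on the raw data volume

rate_min = OVERHEAD * DATA_VOLUME_BIT / MIN_CONTACT_S
rate_avg = OVERHEAD * DATA_VOLUME_BIT / AVG_CONTACT_S
print(f"net rate needed with the minimum 2 h/day contact: {rate_min / 1e3:.0f} kbit/s")
print(f"net rate needed with the average 15 h/day contact: {rate_avg / 1e3:.0f} kbit/s")
print(f"contact-time margin with respect to the requirement: {AVG_CONTACT_S / MIN_CONTACT_S:.1f}x")
```

even in the worst case the required rate remains modest for an x-band link, which is consistent with the statement that the switched mga/lga architecture comfortably achieves the required data volume at apogee.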
the non - ionizing energy loss , which is primarily caused by low - energy protons , is found to be well below , expressed in terms of 10 mev equivalent proton fluence .these results were obtained from the total dose curves accumulated over the orbit during the mission in combination with a dedicated sectoring analysis performed with a finite - element - model of the spacecraft .ste - quest aims to probe the foundations of einstein s equivalence principle by performing measurements to test its three cornerstones , i.e. the local position - invariance , the local lorentz - invariance and the weak equivalence principle , in a combined mission .it complements other missions which were devised to explore the realm of gravity in different ways , including the soon - to - be - launched aces and microscope missions and the proposed step mission .the ste - quest measurements are supported by two instruments .the first instrument , the atomic clock , benefits from extensive heritage from the aces mission , which therefore reduces associated implementation risks .the other instrument , the atom interferometer , would be the first instrument of this type in space and poses a major challenge which is currently met by dedicated development and qualification programs .the mission assessment activities summarized in this paper yielded a spacecraft and mission design that is compliant with the challenging demands made by the payload on performance and resources .several critical issues have been identified , including low maturity of certain payload components , potential unavailability of sufficiently accurate ground clocks , high sensitivity of the optical links to atmospheric distortions and associated performance degradation , and high energy dissipation of the instruments in addition to challenging temperature stability requirements .however , none of these problems seems unsurmountable , and appropriate mitigation actions are already in place .the work underlying this paper was performed during the mission assessment and definition activities ( phase 0/a ) for the european space agency ( esa ) under contract number 4000105368/12/nl / hb .the authors gratefully acknowledge fruitful discussions and inputs from the esa study team members , in particular martin gehler ( study manager ) , luigi cacciapuoti ( project scientist ) , robin biesbroek ( system engineer ) , astrid heske , ( payload manager ) , florian renk ( mission analyst ) , pierre waller , ( atomic clock support ) , and eric wille ( atom interferometer support ) .we also thank the astrium team members for their contributions to the study , in particular felix beck , marcel berger , christopher chetwood , albert falke , jens - oliver fischer , jean - jacques floch , sren hennecke , fabian hufgard , gnter hummel , christian jentsch , andreas karl , johannes kehrer , arnd kolkmeier , michael g. lang , johannes loehr , marc maschmann , mark millinger , dirk papendorf , raphael naire , tanja nemetzade , bernhard specht , francis soualle and michael williams .the authors are grateful for important contributions from our project partners mathias lezius ( menlo systems ) , wolfgang schfer , thorsten feldmann ( timetech ) , and sven schff ( astos solutions ) . 
finally , we are indebted to rdiger gerndt and ulrich johann ( astrium ) for regular support and many useful discussions .martin gehler , luigi cacciapuoti , astrid heske , robin biesbroek , pierre waller , and eric wille .the esa ste - quest mission study - space mission design to test einstein s equivalence principle . in _ proceedings of the aiaa space2013 conference , paper number : 2013.5464 , doi : 10.2514/6.2013 - 5464_. american institute of aeronautics and astronautics , 2013 .s. schiller and et al .pace time explorer and quantum equivalence principle space test ( ste - quest ) : proposal in response to esa m3 call 2010 for the cosmic vision programme , 2010 .available at _ http://www.exphy.uni - duesseldorf.de / publikationen/2010/ste - quest_final.pdf_. mp plattner , u hugentobler , d voithenleitner , m hseinze , v klein , k kemmerle , and s bedrich . optical clock technology for optimized satellite navigation . in _eftf 2010 , the 24th european frequency and time forum , 13 - 16 april 2010 _ , volume 1 , page 2 , 2010 .m lezius , k predehl , w stower , a turler , m greiter , ch hoeschen , p thirolf , w assmann , d habs , alexander prokofiev , et al .radiation induced absorption in rare earth doped optical fibers ., 59(2):425433 , 2012 .c schubert , j hartwig , h ahlers , k posso - trujillo , n gaaloul , u velte , a landragin , a bertoldi , b battelier , p bouyer , et al .differential atom interferometry with rb87 rb85 for testing the uff in ste - quest . , 2013 .d aguilera , h ahlers , b battelier , a bawamia , a bertoldi , r bondarescu , k bongs , p bouyer , c braxmaier , l cacciapuoti , et al .-test of the universality of free fall using cold atom interferometry . , 2013 .fiodor sorrentino , kai bongs , philippe bouyer , luigi cacciapuoti , marella de angelis , hansjoerg dittus , wolfgang ertmer , antonio giorgini , jonas hartwig , matthias hauth , et al . a compact atom interferometer for future space missions ., 22(4):551561 , 2010 .guillaume stern , baptiste battelier , rmi geiger , gal varoquaux , andr villing , frdric moron , olivier carraz , nassim zahzam , yannick bidel , w chaibi , et al .light - pulse atom interferometry in microgravity ., 53(3):353357 , 2009 .mp hess , j kehrer , m kufner , s durand , g hejc , h fruhauf , l cacciapuoti , r much , and r nasca .status and test results . in _frequency control and the european frequency and time forum ( fcs ) , 2011 joint conference of the ieee international _ , pages 18 .ieee , 2011 .a seidel , mp hess , j kehrer , w schafer , m kufner , m siccardi , l cacciapuoti , i aguilar sanches , and s feltham .the aces microwave link : instrument design and test results . in _frequency control symposium , 2007 joint with the 21st european frequency and time forum .ieee international _, pages 12951298 .ieee , 2007 .m gregory , f heine , h kmpfner , r meyer , r fields , and c lunde .laser communication terminal performance results on 5.6 gbit coherent inter satellite and satellite to ground links . in _ international conference on space optics _ , volume 4 , page 8 , 2010 .gerald hechenblaikner , jean - jacques floch , francis soualle , and marc - peter hess . -based precise orbit determination for a highly eccentric orbit in the ste - quest mission . in _ proceedings of the 26th international technical meeting of the satellite division of the institute of navigation ( ion gnss 2013 ) , nashville , september 2013_. 
institute of navigation , 2013 .c urschl , g beutler , w gurtner , u hugentober , and m ploner .orbit determination for giove - a using slr tracking data . in_ extending the range .proceedings of the 15th international workshop on laser ranging _ , pages 4046 , 2008 . till rosenband , db hume , po schmidt , cw chou , a brusch , l lorini , wh oskay , re drullinger , tm fortier , je stalnaker , et al . frequency ratio of al+ and hg+ single - ion optical clocks ; metrology at the 17th decimal place . , 319(5871):18081812 , 2008 .g matticari , g noci , l ceruti , l fallerini , a atzei , c edwards , g morris , and k pfaab .old gas micro propulsion for european science missions : status on development and realization activities for lisa pathfinder , microscope and forthcoming euclid , following the successful delivery of the first fm for gaia program .49th aiaa / asme / sae / asee joint propulsion conference & exhibit .t. j. sumner , j. anderson , j .-blaser , a. m. cruise , t. damour , h. dittus , c. w. f. everitt , b. foulon , y. jafry , b. j. kent , n. lockerbie , f. loeffler , g. mann , j. mester , c. pegrum , r. reinhardt , m. sandford , a. scheicher , c. c. speake , r. torii , s. theil , p. touboul , s. vitale , w. vodel , and p. w. worden . . ,39:254258 , 2007 .
ste-quest is a fundamental science mission considered for launch within the cosmic vision programme of the european space agency (esa). its main scientific objectives relate to probing various aspects of einstein's theory of general relativity by measuring the gravitational red-shift of the earth, the moon and the sun, as well as by testing the weak equivalence principle to unprecedented accuracy. in order to perform the measurements, the system features a spacecraft equipped with two complex instruments, an atomic clock and an atom interferometer; a ground segment encompassing several ground terminals collocated with the best available ground atomic clocks; and clock comparison between space and ground via microwave and optical links. the baseline orbit is highly eccentric and exhibits strong variations of incident solar flux, which poses challenges for the thermal and power subsystems in addition to the difficulties encountered in precise orbit determination at high altitudes. the mission assessment and definition phase (phase-a) has recently been completed, and this paper gives a concise overview of some system-level results.
the paper is concerned with the spectral and computational analysis of effective preconditioners for linear systems arising from finite element approximations to the elliptic convection - diffusion problem with domain of .we consider a model setting in which the structured finite element partition is made by equi - lateral triangles .the interest of such a partition relies on the observation that automatic grid generators tend to construct equi - lateral triangles when the mesh is fine enough .the analysis is performed having in mind two popular preconditioned krylov methods .more precisely , we analyze the performances of the preconditioned conjugate gradient ( pcg ) method in the case of the diffusion problem and of the preconditioned generalized minimal residual ( pgmres ) in the case of the convection - diffusion problem .we define the preconditioner as a combination of a basic ( projected ) toeplitz matrix times diagonal structures .the diagonal part takes into account the variable coefficients in the operator of ( [ eq : modello ] ) , and especially the diffusion coefficient , while the ( projected ) toeplitz part derives from a special approximation of ( [ eq : modello ] ) when setting the diffusion coefficient to and the convective velocity field to . under such assumptions ,if the problem is coercive , and the diffusive and convective coefficients are regular enough , then the proposed preconditioned matrix sequences have a strong clustering at unity , the preconditioning matrix sequence and the original matrix sequence are spectrally equivalent , and the eigenvector matrices have a mild conditioning .the obtained results allow to show the optimality of the related preconditioned krylov methods .it is important to stress that interest of such a study relies on the observation that automatic grid generators tend to construct equi - lateral triangles when the mesh is fine enough .numerical tests , both on the model setting and in the non - structured case , show the effectiveness of the proposal and the correctness of the theoretical findings .the outline of the paper is as follows . in section [ sez : fem ]we report a brief description of the fe approximation of convection - diffusion equations and the preconditioner definition .section [ sez : clustering ] is devoted to the spectral analysis of the underlying preconditioned matrix sequences , in the case of structured uniform meshes . in section [ sez : numerical_tests ] , after a preliminary discussion on complexity issues , selected numerical tests illustrate the convergence properties stated in the former section and their extension under weakened assumption or in the case of unstructured meshes . a final section [ sez : conclusions ] deals with perspectives and future works .problem ( [ eq : modello ] ) can be stated in variational form as follows : where is the space of square integrable functions , with square integrable weak derivatives vanishing on .we assume that is a polygonal domain and we make the following hypotheses on the coefficients : the previous assumptions guarantee existence and uniqueness for problem ( [ eq : formulazione_variazionale ] ) and hence the existence and uniqueness of the ( weak ) solution for problem ( [ eq : modello ] ) . for the sake of simplicity , we restrict ourselves to linear finite element approximation of problem ( [ eq : formulazione_variazionale ] ) . to this end , let be a usual finite element partition of into triangles , with and .let be the space of linear finite elements , i.e. 
the finite element approximation of problem ( [ eq : formulazione_variazionale ] ) reads : for each internal node of the mesh , let be such that , and if .then , the collection of all s is a base for . we will denote by the number of the internal nodes of , which corresponds to the dimension of .then , we write as and the variational equation ( [ eq : formulazione_variazionale_fe ] ) becomes an algebraic linear system : according to these notations and definitions , the algebraic equations in ( [ eq : modello_discreto ] ) can be rewritten in matrix form as the linear system where and represent the approximation of the diffusive term and approximation of the convective term , respectively .more precisely , we have where suitable quadrature formula are considered in the case of non constant coefficient functions and .as well known , the main drawback in the linear system resolution is due to the asymptotical ill - conditioning ( i.e. very large for large dimensions ) , so that preconditioning is highly recommended .hereafter , we refer to a preconditioning strategy previously analyzed in the case of fd / fe approximations of the diffusion problem and recently applied to fd / fe approximations of ( [ eq : modello ] ) with respect to the preconditioned hermitian and skew - hermitian splitting ( phss ) method .more precisely , the preconditioning matrix sequence is defined as where , i.e. , the suitable scaled main diagonal of and clearly equals .the computational aspects of this preconditioning strategy with respect to krylov methods will be discussed later in section [ sez : complexity_issues ] . here , preliminarily we want to stress as the preconditioner is tuned only with respect to the diffusion matrix : in other words , we are implicity assuming that the convection phenomenon is not dominant , and no stabilization is required in order to avoid spurious oscillations into the solution .moreover , the spectral analysis is performed in the non - hermitian case by referring to the hermitian and skew - hermitian ( hss ) decomposition of ( that can be performed on any single elementary matrix related to by considering the standard assembling procedure ) . according to the definition, the hss decomposition is given by where since by definition , the diffusion term is a hermitian matrix and does not contribute to the skew - hermitian part of .notice also that if .more in general , the matrix is symmetric and positive definite whenever . indeed , without the condition , the matrix does not have a definite sign in general : in fact , a specific analysis of the involved constants is required in order to guarantee the nonnegativity of the term . moreover, the lemma below allows to obtain further information regarding such a spectral assumption , where indicate both the usual vector norms and the induced matrix norms . _ _ [ lemma : normae ] let be the matrix sequence defined according to _ ( [ eq : def_e_nb])_. 
under the assumptions in _ ( [ eq : ipotesi_coefficienti ] ) _ , then we find with absolute positive constant only depending on and .the claim holds both in the case in which the matrix elements in _ ( [ eq : def_psi ] ) _ are evaluated exactly and whenever a quadrature formula with error is considered for approximating the involved integrals .hereafter , we will denote by , , the matrix sequence associated to a family of meshes , with decreasing finesse parameter .as customary , the whole preconditioning analysis will refer to a matrix sequence instead to a single matrix , since the goal is to quantify the difficulty of the linear system resolution in relation to the accuracy of the chosen approximation scheme .the aim of this section is to analyze the spectral properties of the preconditioned matrix sequences in the case of some special domains partitioned with structured uniform meshes , so that spectral tools derived from toeplitz theory can be successfully applied .the applicative interest of the considered type of domains will be motivated in the short .indeed , let be a lebesgue integrable function defined over ^ 2 ] , related to a bigger parallelogram shaped domain containing ( see figure [ fig : esagono_esagoni_parallelogrammi ] ) , i.e. , , with number of the internal nodes . here , , is a proper projection matrix simply cutting those rows ( columns ) referring to nodes belonging to , but not to .thus , the matrix is full rank , i.e. , , and .in the same way , we can also consider the toeplitz matrix with the same generating function arising when considering a smaller parallelogram shaped domain contained in , i.e. , , with , proper projection matrix and such that , and .these embedding arguments allow to bound , according to the min - max principle ( see for more details on the use of this proof technique ) , the minimal eigenvalue of the matrices as follows a first technical step in our spectral analysis concerns relationships between this generating function and the more classical generating function arising in the case of fe approximations on a square with friedrichs - keller meshes ( see figure [ fig : esagono_esagoni].b ) , or standard fd discretizations .it can be easily observed that these two function are equivalent , in the sense of the previously reported definition , since we have ^ 2.\ ] ] thus , because is a matrix - valued linear positive operator ( lpo ) for every , the toeplitz matrix sequences generated by this equivalent functions result to be spectrally equivalent , i.e. , for any furthermore , due to the strict positivity of every ( see for the precise definition ) , since is not identically zero , we have that is positive definite , and since is not identically zero , we find that is also positive definite . here , we are referring to the standard ordering relation between hermitian matrices , i.e. , the notation , with and hermitian matrices , means that is nonnegative definite .an interesting remark pertains to the fact that the function is the most natural one from the fe point of view , since no contribution are lost owing to the gradient orthogonality as instead in the case related to ; nevertheless its relationships with the function can be fully exploited in performing the spectral analysis .more precisely , from ( [ eq : lmin_proiezione ] ) and ( [ eq : rel_toeplitz ] ) and taking into account ( see e.g. 
) that we deduce following the very same reasoning , we also find that since , by the embedding argument , both and are asymptotic to , it follows that . finally ,following the same analysis for the maximal eigenvalue , we find where , by , we know that and hence the spectral condition number of grows as i.e. where the constant hidden in the previous relation is mild and can be easily estimated .it is worth stressing that the same matrix can also be considered in a more general setting .in fact , the matrix sequence can also be defined as , since again each internal node in is a vertex of the same constant number of triangles .therefore , by referring to projection arguments , the spectral analysis can be equivalently performed both on the matrix sequence and .no matter about this choice , we make use of a second technical step , which is based on standard taylor s expansions .[ lemma : taylor_expansion_esagoni ] let , with and hexagonal domain partitioned as in figure [ fig : esagono_esagoni].a .for any such that there exists a proper such that the taylor s expansions centered at a proper have the form where and are constants independent of .+ -1.0 cm in the cases at hand , the validity of this claim is just a direct check : the key point relies in the symmetry properties induced by the structured uniform nature of the considered meshes , both in the case of and .hereafter , we give some detail with respect to the case of the hexagonal domain partitioned as in figure [ fig : esagono_esagoni].a , see for the proof in the case .thus , let s consider the taylor s expansion centered at a proper .by calling and similarly denoting the derivatives of the diffusion coefficient , it holds that with ^t \| ] well separated from zero _ [ spectral equivalence property]_. 
since , with such that , and , we have ^{-1 } a_n(\a ) \\ % & = & [ \pi \ , d_n^{\frac{1}{2}}(\a ) \ , \pi^t \ , \pi a_n(1 ) \ , \pi^t \ ,\pi \ , d_n^{\frac{1}{2}}(\a ) \ , \pi^t ] ^{-1 } \pi \ , a_n(\a ) \ ,\pi^t \\ % & = & [ \pi \ , \pi^t \ , \pi \ , d_n^{\frac{1}{2}}(\a ) a_n(1 ) d_n^{\frac{1}{2}}(\a ) \ , \pi^t \ ,\pi \ , \pi^t ] ^{-1 } \pi \ , a_n(\a ) \ ,\pi^t \\ % & = & [ \pi \ , d_n^{\frac{1}{2}}(\a ) a_n(1 ) d_n^{\frac{1}{2}}(\a ) \ , \pi^t ] ^{-1 } \pi \ , a_n(\a ) \ ,\pi^t \\ % & = & [ \pi p_n^{-1}(\a ) \ , \pi^t ] ^{-1 } \pi \ , a_n(\a ) \ , \pi^t.\end{aligned}\ ] ] since is full rank , it is evident that the spectral behavior of ^{-1}\pi a_n(\a ) \pi^t\ ] ] is in principle better than the one of , to which we can address the spectral analysis .thus , the proof technique refers to and the very key step is given by the relations outlined in ( [ eq : lmin_toeplitz_bottom ] ) and ( [ eq : lmin_toeplitz_top ] ) .it is worth stressing that the previous claims can also be proved by considering the sequence , instead of , simply by directly referring to the quoted asymptotical expansions .the interest may concern the analysis of the same uniform mesh on more general domains .moreover , the same spectral properties has been proved in the case of uniform structured meshes as in figure [ fig : esagono_esagoni].b in .the natural extension of the claim in theorem [ teo : clu+se_a_esagoni ] , in the case of the matrix sequence with , can be proved under the additional assumptions of lemma [ lemma : normae ] , in perfect agreement with theorem 5.3 in .[ teo : clu+se_rea_esagoni ] let and be the hermitian positive definite matrix sequences defined according to _ ( [ eq : def_rea ] ) _ and _ ( [ eq : def_p ] ) _ in the case of the hexagonal domain partitioned as in figure [ fig : esagono_esagoni].a . under the assumptions in _ ( [ eq : ipotesi_coefficienti ] ) _ , the sequence is properly clustered at . moreover , for any all the eigenvalues of belong to an interval ] , with , independent of the dimension .it is worth stressing that the existence of a proper eigenvalue cluster and the aforementioned localization results in the preconditioned spectrum can be very important for fast convergence of preconditioned gmres iterations ( see , e.g. , ) .finally , we want to give notice that the pcg / pgmres numerical performances do not get worse in the case of unstructured meshes , despite the lack of a rigorous proof . 
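the clustering and spectral-equivalence statements above can be illustrated numerically. the sketch below uses a one-dimensional analogue of the two-dimensional fe setting (p1 elements for -(a(x)u')' on the unit interval with dirichlet conditions): it assembles A_n(a), builds the preconditioner P_n(a) = D_n(a)^{1/2} A_n(1) D_n(a)^{1/2} with D_n(a) the suitably scaled main diagonal of A_n(a), and reports the extreme eigenvalues of the preconditioned matrix together with the number of eigenvalues away from unity. the coefficient function, the dimensions, the helper names and the 1d reduction itself are all illustrative choices, not the paper's construction.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse import diags

def stiffness_1d(a_fun, n):
    """P1 finite element stiffness matrix for -(a(x)u')' on (0,1) with homogeneous
    Dirichlet conditions, n interior nodes, coefficient sampled at element midpoints."""
    h = 1.0 / (n + 1)
    a_mid = a_fun((np.arange(n + 1) + 0.5) * h)
    main = (a_mid[:-1] + a_mid[1:]) / h
    off = -a_mid[1:-1] / h
    return diags([off, main, off], [-1, 0, 1]).toarray()

def preconditioner(a_fun, n):
    """P_n(a) = D_n(a)^{1/2} A_n(1) D_n(a)^{1/2}, with D_n(a) the main diagonal of
    A_n(a) scaled by the constant diagonal of A_n(1), so that D_n(1) = I."""
    A1 = stiffness_1d(lambda x: np.ones_like(x), n)
    Aa = stiffness_1d(a_fun, n)
    d = np.diag(Aa) / np.diag(A1)
    Dh = np.diag(np.sqrt(d))
    return Dh @ A1 @ Dh, Aa

a = lambda x: 1.0 + 0.5 * np.sin(3 * np.pi * x) + x ** 2   # smooth, strictly positive
for n in (50, 100, 200, 400):
    P, A = preconditioner(a, n)
    lam = eigh(A, P, eigvals_only=True)       # generalized eigenvalues = spec(P^{-1} A)
    away = int(np.sum(np.abs(lam - 1.0) > 0.05))
    print(f"n={n:4d}  spec(P^-1 A) in [{lam.min():.3f}, {lam.max():.3f}], "
          f"eigenvalues farther than 0.05 from 1: {away}")
```

for smooth, strictly positive coefficients one expects the preconditioned spectrum to remain in a bounded interval, with the bulk of the eigenvalues collapsing towards unity as the dimension grows.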
before analyzing the selected numerical results in detail, we give some technical information on the numerical experiments. we apply the pcg or the pgmres method, in the symmetric and non-symmetric case respectively, with the preconditioning strategy described in section [sez:fem], to fe approximations of problem ([eq:modello]). whenever required, the involved integrals have been approximated by means of the barycentric quadrature rule (the nodal quadrature rule gives rise to similar results; indeed, both are exact when applied to linear functions). the domains of integration are those reported in figure [fig:esagono_esagoni], and we assume dirichlet boundary conditions. all the reported numerical experiments are performed in matlab, by employing the available `pcg` and `gmres` library functions; the iterative solvers start with a zero initial guess, and the stopping criterion is considered. the case of unstructured meshes is also discussed and compared, together with various types of regularity of the diffusion coefficient. in fact, we consider the case of a coefficient function satisfying the assumptions ([eq:ipotesi_coefficienti]). more precisely, the second columns in table [tab:it-es123esagono_triangoliequilateri_m5] report the number of iterations required to achieve convergence for increasing values of the coefficient matrix size when considering the fe approximation with structured uniform meshes as in figure [fig:esagono_esagoni] and with template function (see section [sez:diffusion_equation]). the numerical results are reported in table [tab:it_perturbed_mesh]. as expected, the number of iterations is constant with respect to the dimension when the preconditioner is applied. nevertheless, for large enough, the same seems to be true also for the preconditioner. indeed, for increasing dimensions, the unstructured partitioning becomes more and more similar to the partitioning into equilateral triangles sketched in fig. [fig:esagono_esagoni].a. a theoretical ground supporting these observations is still missing and would, in our opinion, be worth studying and developing.
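a scipy-based analogue of these experiments (again in a 1d setting and therefore only indicative of the 2d results reported in the tables, and not the matlab code used in the paper) can be sketched as follows: conjugate gradient iteration counts with and without the preconditioner are compared for increasing matrix size, using the default library tolerance; the coefficient function is an arbitrary smooth, strictly positive choice.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator, factorized

def stiffness_1d(a_fun, n):
    """Sparse P1 stiffness matrix for -(a(x)u')' on (0,1), Dirichlet conditions."""
    h = 1.0 / (n + 1)
    a_mid = a_fun((np.arange(n + 1) + 0.5) * h)
    main = (a_mid[:-1] + a_mid[1:]) / h
    off = -a_mid[1:-1] / h
    return diags([off, main, off], [-1, 0, 1], format="csc")

def cg_iterations(a_fun, n, use_preconditioner):
    A = stiffness_1d(a_fun, n)
    b = np.ones(n)
    count = {"it": 0}
    def cb(xk):                                   # called once per CG iteration
        count["it"] += 1
    M = None
    if use_preconditioner:
        A1 = stiffness_1d(lambda x: np.ones_like(x), n)
        dinv = 1.0 / np.sqrt(A.diagonal() / A1.diagonal())
        solve_A1 = factorized(A1)                 # reusable sparse factorization of A_n(1)
        # apply P^{-1} = D^{-1/2} A_n(1)^{-1} D^{-1/2}
        M = LinearOperator((n, n), matvec=lambda r: dinv * solve_A1(dinv * r))
    cg(A, b, M=M, callback=cb)                    # default library tolerance
    return count["it"]

a = lambda x: np.exp(x) * (1.0 + 0.5 * np.sin(3 * np.pi * x))
for n in (100, 400, 1600, 6400):
    print(f"n={n:5d}   cg iterations: plain = {cg_iterations(a, n, False):5d}, "
          f"preconditioned = {cg_iterations(a, n, True):3d}")
```

the expected qualitative outcome mirrors the tables below: the unpreconditioned iteration count grows with the dimension, while the preconditioned count stays essentially constant.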
& & & + 37 & 3 & 4 & 5 + 169 & 3 & 4 & 5 + 721 & 3 & 4 & 4 + 2977 & 3 & 4 & 4 + 12097 & 3 & 4 & 4 + 48769 & 3 & 4 & 4 + & & & + 37 & 4 & 5 & 5 + 169 & 4 & 5 & 5 + 721 & 4 & 5 & 5 + 2977 & 4 & 5 & 5 + 12097 & 4 & 5 & 5 + 48769 & 4 & 5 & 5 + & & & + 81 & 3 & 4 & 4 + 361 & 3 & 4 & 5 + 1521 & 3 & 4 & 5 + 6241 & 3 & 4 & 5 + 25281 & 3 & 4 & 5 + 128881 & 3 & 4 & 5 + & & & + 81 & 4 & 5 & 5 + 3611 & 4 & 5 & 5 + 1521 & 4 & 5 & 5 + 6241 & 4 & 5 & 5 + 25281 & 4 & 5 & 5 + 128881 & 4 & 5 & 5 + & & & + 28 & 5 & 5 & 5 + 73 & 4 & 4 & 5 + 265 & 4 & 4 & 5 + 1175 & 4 & 4 & 5 + 4732 & 4 & 4 & 5 + 19288 & 4 & 4 & 5 + 76110 & 4 & 4 & 4 + & & & + 28 & 4 & 5 & 5 + 73 & 4 & 5 & 5 + 265 & 4 & 5 & 5 + 1175 & 4 & 5 & 5 + 4732 & 4 & 5 & 5 + 19288 & 4 & 5 & 5 + 76110 & 4 & 5 & 5 + & & & + 24 & 4 & 5 & 5 + 109 & 4 & 5 & 5 + 465 & 4 & 5 & 5 + 1921 & 4 & 5 & 5 + 7809 & 4 & 5 & 5 + 31489 & 4 & 5 & 5 + & & & + 24 & 4 & 5 & 5 + 109 & 4 & 5 & 5 + 465 & 4 & 5 & 5 + 1921 & 4 & 5 & 5 + 7809 & 4 & 5 & 5 + 31489 & 4 & 5 & 5 + [ tab : it - es123esagono_perturbed ] & & + 37 & 4 & 8 + 169 & 4 & 10 + 721 & 4 & 11 + 2977 & 4 & 11 + 12095 & 4 & 13 + 48769 & 4 & 16 + & & + 37 & 4 & 8 + 169 & 4 & 9 + 721 & 4 & 9 + 2977 & 4 & 10 + 12095 & 4 & 11 + 48769 & 4 & 13 + -0.3 cm + -4.5 cm-0.5 cm -3.5 cm -0.5 cm -0.5 cm + -0.5 cm -0.5 cm + -0.5 cm -0.5 cmas emphasized in the introduction it is clear that the problem in figure [ fig : esagono_esagoni].a is just an academic example , due to the perfect structure made by equi - lateral triangles .however , it is a fact that a professional mesh generator will produce a partitioning , which `` asymptotically '' , that is a for a mesh fine enough , tends to the one in figure [ fig : esagono_esagoni].a .the latter fact has a practical important counterpart , since the academic preconditioner is optimal for the real case with nonconstant coefficients and with the partitioning in b ) .a theoretical ground supporting these observations is still missing and would be worth in our opinion to be studied and developed in three directions : a ) giving a formal notion of convergence of a partitioning to a structured one , b ) proving spectral and convergence results in the case of an asymptotically structured partitioning , c ) extending the spectral analysis in the case of weak regularity assumptions .other possible developments include the case of higher order finite element spaces : it would be intriguing to find an expression of the underlying toeplitz symbol as a function of the different parameters in the considered finite element space ( degrees of freedom , polynomial degree , geometry of the mesh ) , and this could be done uniformly in dimension , i.e. for equation ( [ eq : modello ] ) with , .10 o. axelsson , m. neytcheva , _ the algebraic multilevel iteration methods theory and applications ._ in _ proceedings of the second international colloquium on numerical analysis _( plovdiv , 1993 ) , 1323 , vsp , 1994 . z. bai , g.h . golub , m.k .ng , _ hermitian and skew - hermitian splitting methods for non - hermitian positive definite linear systems ._ siam j. matrix anal .( 2003 ) , no .3 , 603626 .d. bertaccini , g.h .golub , s. serra capizzano , c. tablino possio , _ preconditioned hss methods for the solution of non - hermitian positive definite linear systems and applications to the discrete convection - diffusion equation .99 ( 2005 ) , no .3 , 441484 .d. bertaccini , g.h .golub , s. 
serra - capizzano , _ spectral analysis of a preconditioned iterative method for the convection - diffusion equation ._ siam j. matrix anal .( 2007 ) , no .1 , 260278 .d. bertaccini and michael k. ng , band - toeplitz preconditioned gmres iterations for time - dependent pdes , bit , 40 ( 2003 ) , pp . 901914 .a. bttcher , b. silbermann , _ introduction to large truncated toeplitz matrices ._ springer - verlag , new york , 1998 .b. buzbee , f. dorr , j. george , g.h .golub , _ the direct solutions of the discrete poisson equation on irregular regions ._ siam j. numer( 1971 ) , 722736. f. dorr , _ the direct solution of the discrete poisson equation on a rectangle ._ siam rev . 12 ( 1970 ) , 248263 .w. hackbusch , _ multigrid methods and applications . _ springer verlag , berlin , germany , ( 1985 ) .a. russo , c. tablino possio , _ preconditioned hss method for finite element approximations of convection - diffusion equations_. siam j. matrix anal .( 2009 ) , no .3 , 9971018 .s. serra capizzano , _ on the extreme eigenvalues of hermitian ( block)toeplitz matrices . _ linear algebra appl .270 ( 1998),109129 . s. serra capizzano , _ the rate of convergence of toeplitz based pcg methods for second order nonlinear boundary value problems .81 ( 1999 ) , no .3 , 461495 .s. serra capizzano , _ some theorems on linear positive operators and functionals and their applications . _ computers and mathematics with applications 39 , no .7 - 8 ( 2000 ) , 139167 .s. serra capizzano , _ convergence analysis of two grid methods for elliptic toeplitz and pdes matrix sequences .92 - 3 ( 2002 ) , 433 - 465 . s. serra capizzano , c. tablino possio , _ spectral and structural analysis of high precision finite difference matrices for elliptic operators ._ linear algebra appl . 293( 1999 ) , no . 1 - 3 , 85131 .s. serra capizzano , c. tablino possio , _ high - order finite difference schemes and toeplitz based preconditioners for elliptic problems ._ electron .11 ( 2000 ) , 5584 .s. serra capizzano , c. tablino possio , _ finite element matrix sequences : the case of rectangular domains .algorithms 28 ( 2001 ) , no . 1 - 4 , 309327. s. serra capizzano , c. tablino possio , _ preconditioning strategies for 2d finite difference matrix sequences . _anal . 16 ( 2003 ) , 129 .s. serra capizzano , c. tablino possio , _ superlinear preconditioners for finite differences linear systems ._ siam j. matrix anal .appl . 25 ( 2003 ) , no. 1 , 152164 . j.r .shewchuk , _ a two - dimensional quality mesh generator and delaunay triangulator .( version 1.6 ) _ , www.cs.cmu.edu/ quake/ triangle.html p. swarztrauber,_the method of cyclic reduction , fourier analysis and the facr algorithm for the discrete solution of poisson s equation on a rectangle . _ siam rev .19 ( 1977 ) , 490501 .u. trottenberg , c.w .oosterlee , and a. schller , multigrid , academic press , london , 2001 .tyrtyshnikov , _ a unifying approach to some old and new theorems on distribution and clustering . _ linear algebra appl. 232 ( 1996 ) , 143 .
the paper is devoted to the spectral analysis of effective preconditioners for linear systems obtained via a finite element approximation to diffusion-dominated convection-diffusion equations. we consider a model setting in which the structured finite element partition is made of equilateral triangles. under such assumptions, if the problem is coercive and the diffusive and convective coefficients are regular enough, then the proposed preconditioned matrix sequences exhibit a strong clustering at unity, the preconditioning matrix sequence and the original matrix sequence are spectrally equivalent, and the eigenvector matrices have a mild conditioning. the obtained results allow us to show the optimality of the related preconditioned krylov methods. the interest of such a study lies in the observation that automatic grid generators tend to construct equilateral triangles when the mesh is fine enough. numerical tests, both in the model setting and in the non-structured case, show the effectiveness of the proposal and the correctness of the theoretical findings. keywords: matrix sequences, clustering, preconditioning, non-hermitian matrix, krylov methods, finite element approximations. msc: 65f10, 65n22, 15a18, 15a12, 47b65
the design and characterization of strategies for controlling quantum dynamics is vital to a broad spectrum of applications within contemporary physics and engineering .these range from traditional coherent - control settings like high - resolution nuclear and molecular spectroscopy , to a variety of tasks motivated by the rapidly growing field of quantum information science .in particular , the ability to counteract decoherence effects that unavoidably arise in the dynamics of a real - world quantum system coupled to its surrounding environment is a prerequisite for scalable realizations of quantum information processing ( qip ) , as actively pursued through a variety of proposed device technologies .active decoupling techniques offer a conceptually simple yet powerful control - theoretic setting for quantum - dynamical engineering of both closed - system ( unitary ) and open - system ( non - unitary ) evolutions .inspired by the idea of _ coherent averaging _ of interactions by means of tailored pulse sequences in nuclear magnetic resonance ( nmr ) spectroscopy , decoupling protocols consist of repetitive sequences of control operations ( typically drawn from a finite repertoire ) , whose net effect is to coherently modify the natural target dynamics to a desired one . in practice , a critical decoupling task is the selective removal of unwanted couplings between subsystems of a fully or partially controllable composite quantum system .historically , a prototype example is the elimination of unwanted phase evolution in interacting spin systems via trains of -pulses ( the so - called hahn - echo and carr - purcell sequences ) . for open quantum systems, this line of reasoning motivates the question of whether removing the coupling between the system of interest and its environment may be feasible by a control action restricted to the former only .such a question was addressed in for the paradigmatic case of a single qubit coupled to a bosonic reservoir , establishing the possibility of decoherence suppression in the limit of rapid spin flipping via the echo sequence mentioned above .the study of dynamical decoupling as a general strategy for quantum coherent and error control has since then attracted a growing interest from the point of view of both model - independent decoupling design and optimization , and the application to specific physical systems .representative contributions include the extension to arbitrary finite - dimensional systems via dynamical - algebraic , geometric , and linear - algebraic formulations ; the construction of fault - tolerant eulerian and concatenated decoupling protocols , as well as efficient combinatorial schemes ; the connection with quantum zeno physics ; proposed applications to the compensation of specific decoherence mechanisms ( notably , magnetic state decoherence and 1/ noise ) and/or the removal of unwanted evolution within trapped - ion and solid - state quantum computing architectures .these theoretical advances have been paralleled by steady experimental progress .beginning with a proof - of - principle demonstration of decoherence suppression in a single - photon polarization interferometer , dynamical decoupling techniques have been implemented alone and in conjunction with quantum error correction within liquid - state nmr qip , and have inspired charge - based and flux - based echo experiments in superconducting qubits .recently , dynamic decoherence control of a solid - state nuclear quadrupole qubit has been reported .all the 
formulations of dynamical decoupling mentioned so far share the feature of involving purely _ deterministic _ control actions . in the simplestsetting , these are arbitrarily strong , effectively instantaneous rotations ( so - called _ bang - bang controls _ ) chosen from a discrete group .decoupling according to is then accomplished by sequentially cycling the control propagator through _ all _ the elements of .if denotes the separation between consecutive control operations , this translates into a minimal averaging time scale , of length proportional to the size of .the exploration of decoupling schemes incorporating _stochastic _ control actions was only recently undertaken .a general control - theoretic framework was introduced by viola and knill in ( see also ) , based on the idea of seeking faster convergence ( with respect to an appropriately defined metric ) by _ randomly sampling _ rather than systematically implementing control operations from .based on general lower bounds for pure - state error probabilities , the analysis of indicated that random schemes could outperform their cyclic counterpart in situations where a large number of elementary control operations is required or , even for small control groups , when the interactions to be removed vary themselves in time over time scales long compared to but short compared to .furthermore , it also suggested that advantageous features of pure cyclic and random methods could be enhanced by appropriately merging protocols within a hybrid design .the usefulness of randomization in the context of actively suppressing _ coherent _ errors due to residual static interactions was meanwhile independently demonstrated by the so - called _ pauli random error correction method _ ( parec ) , followed by the more recent _ embedded dynamical decoupling method _ both due to kern and coworkers .both protocols may be conceptually understood as following from randomization over the pauli group , used alone or , respectively , in conjunction with a second set of deterministic control operations .our goal in this work is twofold : first , to develop a quantitative understanding of _ typical _ randomized control performance for both _ coherent and decoherent phase errors _ , beginning from the simplest scenario of a single qubit already investigated in detail in the deterministic case ; second , to clarify the _ physical _ picture underlying random control , by devoting , in particular , special attention to elucidate the control action and requirements in rotating frames associated with different dynamical representations .the fact that the controlled dynamics remains exactly solvable in the bang - bang ( bb ) limit makes the single - qubit pure - dephasing setting an ideal test - bed for these purposes . from a general standpoint ,since spin - flip decoupling corresponds to averaging over the smallest ( nontrivial ) group , with , this system is not yet expected to show the full advantage of the random approach .remarkably , however , control scenarios can still be identified , where randomized protocols indeed represent the most suitable choice .the content of the paper is organized as follows . 
after laying out the relevant system and control settings in sect .ii , we begin the comparison between cyclic and randomized protocols by studying the task of phase refocusing in a qubit evolving unitarily in sect .control of decoherence from purely dephasing semiclassical and quantum environments is investigated in the main part of the paper , sects .iv and v. we focus on the relevant situations of decoherence due to random telegraph noise and to a fully quantum bosonic bath , respectively . both exact analytical and numerical results for the controlled decoherence process are presented in the latter case .we summarize our results and discuss their significance from the broader perspective of constructively exploiting randomness in physical systems in sect .vi , by also pointing to some directions for future research .additional technical considerations are included in a separate appendix .our target system is a single qubit , living on a state space .the influence of the surrounding environment may be formally accounted for by two main modifications to the isolated qubit dynamics .first , may couple to effectively _ classical _ degrees of freedom , whose net effect may be modeled through a deterministic or random time - dependent modification of the system parameters . additionally , may couple to a _quantum _ environment , that is , a second quantum system defined on a state space with which may become entangled in the course of the evolution . for the present purposes , will be schematized as a bosonic reservoir consisting of independent harmonic modes .let denote the identity operator on , respectively . throughout the paper, we will consider different dynamical scenarios , corresponding to special cases of the following total drift hamiltonian on : where here , we set , and ( ) , and denote pauli spin matrices , and canonical creation and annihilation bosonic operators of the environmental mode with frequency , respectively . and are real and complex functions that account for an effectively time - dependent frequency of the system and its coupling to the reservoir mode , respectively .we shall write for an appropriate choice of central values , and modulation functions , , respectively .the adimensional parameter is introduced for notational convenience , allowing to include ( ) or not ( ) the coupling to as desired .physically , because and commute at all times , the above hamiltonian describes a _ purely decohering _ coupling between and , which does not entail energy exchange . while in general dissipation might also occur , focusing on pure decoherence is typically justified for sufficiently short time scales and , as we shall see , has the advantage of making exact solutions available as benchmarks .control is introduced by adjoining a classical controller acting on , that is by adding a time - dependent term to the above target hamiltonian , in our case , will be designed so as to implement appropriate sequences of bb pulses .this may be accomplished by starting from a rotating radiofrequency field ( or , upon invoking the rotating wave approximation , by a linearly - polarized oscillating field ) , described by the following amplitude- and phase - modulated hamiltonian : \sigma_x + \sin[\omega t + \varphi_j ( t ) ] \sigma_y \hspace*{-.5mm}\big ) \,,\ ] ] with \:.\ ] ] here , denotes the heaviside step function ( defined as for and for ) , and are positive parameters , and denotes the instants at which the pulses are applied . 
if the carrier frequency is tuned on resonance with the central frequency , , and the phase for each , the above hamiltonian schematizes a train of _ identical _ control pulses of amplitude and duration in the physical frame . under the bb requirement of impulsive switching ( ) with unbounded strength ( ) ,it is legitimate to neglect ( including possible off - resonant effects ) within each pulse , effectively leading to qubit rotations about the -axis . in particular, a rotation corresponds to ( see also appendix a ) . in what follows ,we shall focus on using trains of bb -pulses to effectively achieve a net evolution characterized by the identity operator ( the so - called _ no - op _ gate ) .this requires averaging unwanted ( coherent or decoherent ) evolution generated by either or or both , by subjecting the system to repeated spin - flips . in group - theoretic termssuch protocols have , as mentioned , a transparent interpretation as implementing an average over the group , represented on as .the quantum operation effecting such group averaging is the projector on the space of operators commuting with , leading to essentially , in _ cyclic _ decoupling schemes based on the above symmetrization is accomplished through a _ time _ average of the effective hamiltonian determining the evolution over a cycle , ; in _ random _ schemes , it emerges from an _ ensemble _ average over different control histories , taken with respect to the uniform probability measure over .neither deterministic nor stochastic sequences of pulses achieve an exact implementation of for a fully generic hamiltonian as in eqs .( [ drift0])-([drift ] ) , except in the ideal limit of arbitrarily fast control where the separation between pulses approaches zero .therefore , it makes sense to compare the performance attainable by different control sequences for realistic control rates . in this paperwe shall focus on the following options . * asymmetric cyclic protocol ( a ) * , fig .[ fig : scheme](a ) .this is the protocol used in , corresponding to repeated spin - echoes .cyclicity is ensured by subjecting the system to an even number of equally spaced -pulses , applied at , , in the limit .the elementary cycle consists of two pulses : the first one , applied after the system evolved freely for an interval , reverses the qubit original state and the second one , applied a time later , restores its original state . * symmetric cyclic protocol ( s ) * , fig .[ fig : scheme](b ) .this protocols , which is directly inspired to the carr - purcell sequence of nmr , is obtained from ( a ) by rearranging the two -pulses within each cycle in such a way that the control propagator is symmetric with respect to the middle point .the first pulse is applied at and the next ones at , with .both the a- and s - protocols have a cycle time , and lead to the same averaging in the limit . for finite , however , the symmetry of the s - protocol guarantees the cancellation of lowest - order corrections , resulting in superior averaging . * long symmetric cyclic protocol ( ls ) * , fig .[ fig : scheme](c ) .this is basically an s - protocol with a doubled control interval , .equivalently , note that this scheme corresponds to alternating a -pulse with the identity after every .the cycle time becomes . for this amount of time , twice as many pulses would be used by protocols ( a , s ) .still , in certain cases , the ls - protocol performs better than the a - protocol ( see sect .v.d ) , which motivates its separate consideration here . 
* naive random protocol ( r ) * , fig .[ fig : scheme](d ) .random decoupling is no longer cyclic , meaning that the control propagator does _ not _ necessarily effect a closed path ( see also for a discussion of acyclic deterministic schemes ) .the simplest random protocol in our setting corresponds to having , at each time , an equal probability of rotating or not the qubit that is , at every the control action has a 50% chance of being a -pulse and a 50% chance of being the identity . in order not to single out the first control slot , it is convenient to explicitly allow the value ( equivalently , to consider a fictitious pulse in the a- , s- and ls - protocols ) . for pure phase errorsas considered , such a protocol may be interpreted as a simplified parec scheme . while we will mostly focus on this naive choice in our discussion here, several variants of this protocol may be interesting in principle , including unbalanced pulse probabilities and/or correlations between control operations . * hybrid protocol ( h ) * , fig .[ fig : scheme](e ) .interesting control scenarios arise by combining deterministic and random design .the simplest option , which we call `` hybrid '' protocol here , consists of alternating , after every , a -pulse with a random pulse , instead of the identity as in the ls - protocol . for our system , in the embedded decoupling language of , this may be thought of as nesting the a- and r - protocols . in group - theoretic terms, the h - protocol may be understood as _randomization over cycles _ .a complete asymmetric cycle may be constructed in two ways , say and .cycle corresponds to traversing in the order that is , free evolution for ; first pulse ; free evolution for ; second pulse the cycle being completed right after the second pulse .cycle corresponds to the reverse group path , .thus , we have : pulse ; free evolution for ; second pulse ; and another free evolution for the system should be observed at this moment before any other pulse .the h - protocol consists of uniformly picking at random one of the two cycles at every instant , where single - qubit evolving according to unitary dynamics in eq .( [ drift ] ) ) provides a pedagogical yet illustrative setting to study dynamical control .since the goal here is to refocus the underlying phase evolution , the analysis of this system provides a transparent picture for the differences associated with deterministic and random pulses .it also simplifies the comprehension of the results for the more interesting case of a single - qubit interacting with a decohering semiclassical or quantum environment , where the control purpose becomes twofold : phase refocusing and decoherence suppression .we begin by considering the standard case of a time - independent target dynamics , for all .for all the control protocols illustrated above , the system evolves freely between pulses , with the propagator whereas , during a pulse , it is only affected by the control hamiltonian .the propagator for an instantaneous pulse applied at time will be indicated by .let denote the qubit density operator in the computational basis , with and . 
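before turning to the explicit phase computation, the two basic ingredients used above, the bang-bang pi-pulse and the group average over the two-element group, can be checked with a few lines of numpy/scipy. in the sketch below the residual drift strength kept on during the pulse is an illustrative assumption; the check shows (i) that a fixed-area pi pulse approaches the ideal rotation proportional to sigma_x as its duration shrinks, and (ii) that averaging over G = {I, sigma_x} removes a sigma_z dephasing generator while leaving operators commuting with sigma_x untouched.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# (i) bang-bang limit: a pi-area pulse with a residual sigma_z drift left on during
# the pulse approaches the ideal rotation -i*sigma_x as the pulse duration shrinks.
omega0 = 1.0                              # illustrative drift strength (hbar = 1)
target = expm(-1j * 0.5 * np.pi * sx)     # ideal instantaneous pi-pulse: -i*sigma_x
for tau_p in (1.0, 0.1, 0.01):
    H_pulse = (np.pi / (2.0 * tau_p)) * sx + 0.5 * omega0 * sz
    err = np.linalg.norm(expm(-1j * H_pulse * tau_p) - target, 2)
    print(f"tau_p = {tau_p:5.2f}   ||U_pulse - (-i sigma_x)|| = {err:.2e}")

# (ii) averaging over the group {I, sigma_x} projects out the dephasing generator.
project = lambda X: 0.5 * (X + sx @ X @ sx)
print("Pi_G(sigma_z) = 0:      ", np.allclose(project(sz), 0))
print("Pi_G(sigma_x) = sigma_x:", np.allclose(project(sx), sx))
```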
the relevant phase information is contained in the off - diagonal matrix element .if is the initial qubit state , the time evolution after control intervals under either deterministic or randomized protocols , is dictated by a propagator of the form \ , du \right\ } = p_m u_0(t_m , t_{m-1})p_{m-1 } u_0(t_{m-1},t_{m-2 } ) \ldotsp_1 u_0(t_1,t_0)p_0 \nonumber \\ & & = \underbrace{(p_m p_{m-1 } \ldots p_1 p_0 ) } ( p_{m-1 } \ldots p_1 p_0)^{\dagger } u_0(t_m , t_{m-1 } ) ( p_{m-1 } \ldots p_1 p_0)\ldots u_0(t_2,t_1)(p_1 p_0 ) p_0^{\dagger } u_0(t_1,t_0 ) p_0\ : , \label{u_single } \\ & & \hspace{1.5 cm } u_c(t_m ) \nonumber\end{aligned}\ ] ] where indicates , as usual , time ordering .recall the basic idea of deterministic phase refocusing .for the a - protocol , and , [ see appendix a ] .exact averaging is then ensured after a single control cycle , thanks to the property thus , the total phase that the qubit would accumulate in the absence of control is fully compensated , provided that complete cycles are effected ( that is , an even number of spin flips is applied ) .the overall evolution implements a stroboscopic no - op gate , , , as desired .notice that the identity operator is also recovered with the s- and ls - protocols after their corresponding cycle is completed . in preparation to the randomized protocols ( r , h ) , it is instructive to look at the system dynamics in a different frame .in particular , a formulation which is inspired by nmr is the so - called _ toggling - frame _ or _ logical - frame _ picture , which corresponds to a time - dependent interaction representation with respect to the applied control hamiltonian .let denote the control propagator associated to .then the transformed state is defined as with tilde indicating henceforth logical - frame quantities . at the initial time , the two frames coincide and . the evolution operator in the logical frame is immediately obtained from eqs .( [ rho_single ] ) and ( [ rho_logical ] ) , with \ , du \right\ } \:.\ ] ] that is , the control field is _ explicitly removed _ from the effective logical hamiltonian .because , for bb multipulse control , the expression for the logical frame propagator may simply be read off eq .( [ u_single ] ) , yielding in terms of the composite rotations for cyclic protocols , ( even ) that is , the logical and physical frames overlap stroboscopically in time .thus , and _ phase refocusing in the logical frame is equivalent to phase refocusing in the physical frame . _ now consider the evolution under the randomized protocols .the first pulse occurs at , so after a time interval has elapsed , pulses have been applied . since the final goal is to compare random with cyclic controls, we shall take _ even _ henceforth . at time , population inversion may have happened in general in the physical frame .this makes it both convenient and natural to consider the logical frame , where inversion does _ not _ happen , as the _ primary _ frame for control design .the evolution operator in this frame may be expressed , using eq .( [ u_logical ] ) , as where is a bernoulli random variable which accounts for the history of spin flips up to in a given realization . 
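As a quick numerical check of the deterministic refocusing property recalled above (the randomized case is taken up next), the snippet below multiplies out the A-protocol propagator for a static pure-dephasing Hamiltonian of the assumed form H0 = (omega0/2) sigma_z with ideal pi-pulses about x, and verifies that every completed cycle implements the identity up to a global phase. Parameter values are arbitrary.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
P = -1j * sx        # exp(-i*(pi/2)*sigma_x): ideal, instantaneous pi-pulse about x

def U0(dt, omega0):
    """Free evolution exp(-i*H0*dt) with H0 = (omega0/2)*sigma_z (hbar = 1)."""
    return np.diag(np.exp(-1j * omega0 * dt / 2 * np.array([1.0, -1.0])))

def a_protocol_propagator(n_cycles, dt, omega0):
    U = I2
    for _ in range(2 * n_cycles):       # two pulses per cycle
        U = P @ U0(dt, omega0) @ U
    return U

if __name__ == "__main__":
    U = a_protocol_propagator(n_cycles=3, dt=0.3, omega0=1.7)
    phase = U[0, 0] / abs(U[0, 0])      # remove the irrelevant global phase
    print(np.allclose(U / phase, I2))   # -> True: stroboscopic no-op gate
```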
for each , if a spin flip occurs at time , then and , otherwise and .equivalently , will take the values or with equal probability , depending on whether the composite pulse is the identity or a -pulse .let be an index labelling different control realizations .for a fixed , the qubit coherence in the logical frame is given by this expression provides the starting point for analyzing control performance . for the a - protocol , the only possible realization has and leads to the trivial result . for the r - protocol ,realizations corresponding to different strings of s filling up places give , in general , different phases and an ensemble average should be considered . if the statistical ensemble is large enough , the _ average performance _ may be approximated by the _ expected performance _ , which is obtained by averaging over _ all _ possible control realizations and will be denoted by .the calculation of the expectation value is straightforward in the _ unbiased _ setting considered here . since , for each realization , or independently of the value of its predecessor , notice that , among the realizations existing in the logical frame , there are pairs , say corresponding to labels and , with opposite phases .they have and , while with .taking such a pairing into account , the number of independent realizations to consider is .we then find \ , , \label{e_log_cos}\ ] ] where and .such a pairing may be iterated .suppose we next consider realizations with and , while with .this gives \,,\ ] ] where and , and so on .the successive pairing of realizations finally leads to the following simple expression the following expression is found : ^m \ , .\label{e_logical}\ ] ] several remarks are in order . under random pulses , the phase accumulated during the interval is , on average , completely removed , regardless of the value . an important distinction with respect to the deterministic controls , however , is that now the different phase factors carried by each stochastic evolution may interfere among themselves , causing the ensemble average to introduce an effective _ phase damping_. in general , let us write the ensemble expectation in the form for real functions . here , , whereas ] .this implies the appearance of a time scale requirement which is not present when dealing with deterministic controls , nor with the h - protocol . in the physical frame ,state - independent conclusions regarding the system behavior may be drawn conditionally to specific subsets of control realizations .overall , the h - protocol emerges as an alternative of intermediate performance , which partially combines advantages from determinism and randomness .we now consider the more interesting case where the qubit frequency is time dependent , , being a deterministic ( but potentially unknown ) function .this could result , for example , from uncontrolled drifts in the experimental apparatus .while all protocols become essentially equivalent in the limit , searching for the best protocol becomes meaningful in practical situations where pulsing rates are necessarily finite . 
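Before moving to the time-dependent case, the pairing argument for the static ensemble average can be checked numerically. Whatever the precise per-interval phase is in the stripped notation of eq. ([e_logical]), the argument reduces to the elementary identity E[exp(i*phi*sum_j s_j)] = cos(phi)^m for m independent equiprobable signs s_j = +/-1; the short Monte Carlo below confirms it and illustrates the induced ensemble phase damping (the averaged phase itself vanishes by symmetry).

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_coherence(phi, m, n_realizations):
    """Monte Carlo estimate of E[exp(i*phi*sum_j s_j)] with s_j = +/-1 equiprobable."""
    s = rng.choice([-1, 1], size=(n_realizations, m))
    return np.exp(1j * phi * s.sum(axis=1)).mean()

if __name__ == "__main__":
    phi, m = 0.4, 20                    # arbitrary per-interval phase and number of intervals
    est = ensemble_coherence(phi, m, 200_000)
    exact = np.cos(phi) ** m            # result of the successive pairing of realizations
    print(abs(est), exact)              # magnitude is damped, phase is removed on average
```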
under these conditions ,the deterministic protocols described so far will no longer be able , in general , to completely refocus the qubit .this would require a very specific sequence of spin flips for each particular function , which would be hard to construct under limited knowledge about the latter .on the other hand , the average over random realizations does remove the phase accumulated for _ any _ function , making randomized protocols ideal choices for phase refocusing . as a drawback ,however , ensemble dephasing may be introduced .thus , the selection of a given protocol will be ultimately dictated by the resulting tradeoffs .the propagator in the logical frame now reads \:,\ ] ] which reduces to eq .( [ u_logical_chi ] ) when .some assumptions on both the amplitude and frequency behavior of are needed in order to draw some general qualitative conclusions .first , if , the analysis developed in the previous section will still approximately hold . in the spirit of regarding as a central frequency, we will also discard the limit , and restrict our analysis to cases where . if is dominated by frequency components which are very fast compared to , the effect of may effectively self - average out over a time interval of the order or longer than . in the opposite limit , where the time dependence of is significantly slower than ,deterministic controls are expected to be most efficient in refocusing the qubit , improving steadily as decreases . in intermediate situations , however , the deterministic performance may become unexpectedly poor for certain , in principle , unknown values of .these features may be illustrated with a simple periodic dependence .suppose , for example , that , and .for a fixed time interval , a significant reduction of the accumulated phase is already possible with few deterministic pulses .however , care must be taken to avoid unintended `` resonances '' between the natural and the induced sign change .for the a - protocol , this effect is worst at , in which case the control pulses exactly occur at the moment the function changes sign itself , hence precluding any cancellation of . with the r - protocol ,ensemble dephasing becomes the downside to face .the ensemble average now becomes \bigg\}\,.\ ] ] in the absence of time dependence , phase damping is minimized as long as eq .( [ first_scale ] ) holds . under the above assumptions on , the condition remains essentially unchanged , in agreement with the fact that the accuracy of random averaging only depends on .refocusing is also totally achieved with the h - protocol . 
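To make the "resonance" between the drift's own sign changes and the pulse-induced sign changes concrete (the comparison with the H-protocol continues below), here is a minimal toggling-frame sketch. It accumulates, up to constant prefactors, the phase integral of y(s)*omega(s), where y(s) = +/-1 is the sign history imposed by the pulses; the specific drift omega0 + A*sin(nu*t) and all parameter values are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def toggling_phase(omega_fn, m_slots, dt, flips, y0=1, n_fine=200):
    """Toggling-frame phase (up to constant prefactors): sum over control slots of
    y_j times the integral of omega(s) over the slot; the sign y_j starts at y0
    and flips whenever flips[j] marks a pi-pulse at the end of slot j."""
    phase, y = 0.0, y0
    for j in range(m_slots):
        s = (j + (np.arange(n_fine) + 0.5) / n_fine) * dt   # midpoints of slot j
        phase += y * omega_fn(s).mean() * dt
        if flips[j]:
            y = -y
    return phase

def omega_drift(t, omega0=1.0, amp=0.5, nu=np.pi):
    """Illustrative drift omega(t) = omega0 + amp*sin(nu*t)."""
    return omega0 + amp * np.sin(nu * t)

if __name__ == "__main__":
    m, dt = 40, 1.0        # with nu = pi and dt = 1 the drift changes sign exactly at the pulses
    a_phase = toggling_phase(omega_drift, m, dt, flips=np.ones(m, dtype=bool))
    print("A-protocol phase:", round(a_phase, 3))   # static part cancels, the drift does not
    # R-protocol (fictitious pulse at t0 included): phase close to zero on average
    r_phases = [toggling_phase(omega_drift, m, dt, flips=rng.random(m) < 0.5,
                               y0=rng.choice([-1, 1])) for _ in range(2000)]
    print("R-protocol mean phase:", round(float(np.mean(r_phases)), 3))
```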
however , unlike in the case of the r - protocol , the ensemble average no longer depends on the time independent part of the hamiltonian , but only on the function , making the identification of precise requirements on harder in the absence of detailed information on the latter .we have \bigg\}\,.\ ] ] fig .[ fig : fixedm ] illustrates the points discussed so far .the sinusoidal example is considered , and we contrast the two aspects to be examined : the top panels show the phase magnitude , which is optimally eliminated with random pulses , while the bottom ones give the dephasing rate , which is inexistent for deterministic controls .the interval between pulses is fixed , , and the protocols are compared for two arbitrary , but relatively close values of the oscillation frequency rate : and .the deterministic control is very sensitive to slight changes of the drift and at certain instants may behave worse than if pulses were completely avoided . similarly , the h - protocol , even though more effective than the r - protocol in this example , also suffers from uncertainties related to . on the contrary ,deviations in the performance of the r - protocol are practically unnoticeable , making it _ more robust against variations in the system parameters_. , and under the a- [ ( blue ) stars ] , r- [ ( black ) circles ] , and h- [ ( purple ) plus ] protocols in the logical frame , for and .left panels : ; right panels : .average taken over realizations . in this and all simulations that will follow , we set . ] as a further illustrative example , we consider in fig .[ fig : w_sin ] the following time dependence for the qubit : d(t).\ ] ] the left panels have , as before , , while for the right panels a fixed time is now divided into an increasing number of intervals . here , selecting the most appropriate protocol depends on our priorities concerning refocusing and preservation of coherence .we may , however , as the right upper panel indicates , encounter _ adversarial _ situations where the time dependence of the qubit frequency is such that not acting on the system is comparatively better than using the a - protocol .clearly , depending on the underlying time dependence and the pulse separation , such poor performances are also expected to occur with other deterministic protocols .in addition , notice that , consistent with its hybrid nature , the h - protocol may perform worse for values of where the deterministic control becomes inefficient ( compare right upper and lower panels ) . in similar situations , from the point of view of its enhanced stability , _the r - protocol turns out to be the method of choice_. , and under the a- [ ( blue ) stars ] , r- [ ( black ) circles ] , and h- [ ( purple ) plus ] protocols in the logical frame . the time interval considered is ; ; and , where .the drift in the right panels includes .average taken over all possible realizations . ]_ to summarize : _ an isolated qubit with time - dependent parameters provides the simplest setting where advantages of randomization begin to be apparent , in terms of enhanced stability against parameter variations . on average ,phase is fully compensated , and ensemble dephasing may be kept very small for sufficiently fast control .similar features will appear for a qubit interacting with a time - varying classical or quantum environment , as we shall see in sects .iv.c and v.e .qubit coherence is limited by the unavoidable influence of noise sources . 
within a semiclassical treatment , which provides an accurate description of decoherence dynamics wheneverback - action effects from the system into the environment can be neglected , noise is modeled in terms of a classical stochastic process , effectively resulting in randomly time - dependent systems . typically , external noise sources , which in a fully quantum description are well modeled by a _ continuum _ of harmonic modes ( see sect .v ) , are represented by a gaussian process . here , we focus on _ localized _ noise sources , which may be intrinsic to the physical device realizing the qubit notably , localized traps or background charges , leading to a quantum _ discrete _ environment . in this case , non - gaussian features become important , and are more accurately represented in terms of noise resulting from a single or a collection of classical bistable fluctuators leading to so - called random telegraph noise ( rtn ) or -noise , respectively . beside being widely encountered in a variety of different physical phenomena ,such noise mechanisms play a dominant role in superconducting josephson - junction based implementations of quantum computers .recently , it has been shown that rtn and -noise may be significantly reduced by applying cyclic sequences of bb pulses .we now extend the analysis to randomized control . as it turns out , random decoupling is indeed viable and sometimes more stable than purely deterministic protocols . while a detailed analysis of randomized control of genuine noise would be interesting on its own , we begin here with the case of a single fluctuator .this provides an accurate approximation for mesoscopic devices where noise is dominated by a few fluctuators spatially close to the system .let the time - dependent hamiltonian describing the noisy qubit be given by eqs .( [ drift0])-([drift ] ) , where and characterizes the stochastic process , randomly switching between two values , .we shall in fact consider a _semi_random telegraph noise , that is , we assume that the fluctuator initial state is always .the switching rate from to is denoted by , with .we shall also assume for simplicity that , corresponding to a symmetrical process .the number of switching events in a given time interval is poisson - distributed as semiclassically , dephasing results from the ensemble average over different noise realizations .this leads to the decay of the average of the coherence element , here , the average over rtn realizations is represented by and should be distinguished from the average over control realizations , which , as before , is denoted by .the dephasing factor and the phase have distinctive properties depending on the ratio , where ( ) corresponds to a fast ( slow ) fluctuator . 
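The telegraph-noise model just described is easy to sample directly. The sketch below draws realizations of a semirandom telegraph process that starts in a fixed state and switches between +v and -v with rate gamma (exponential waiting times, hence Poisson-distributed switch counts), and estimates the uncontrolled dephasing by averaging exp(-i * integral of xi) over noise realizations. The coupling convention (no extra factor 1/2 from sigma_z/2) and the parameter values are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def rtn_phase(t_final, v, gamma, x0=+1):
    """Phase integral int_0^t xi(s) ds for one realization of a telegraph process
    xi(s) in {+v, -v}, starting at x0*v and switching with rate gamma."""
    t, state, phase = 0.0, x0, 0.0
    while t < t_final:
        dwell = rng.exponential(1.0 / gamma)
        seg = min(dwell, t_final - t)
        phase += state * v * seg
        state = -state
        t += seg
    return phase

def free_coherence(t_final, v, gamma, n_real=20_000):
    """Ensemble-averaged coherence factor <exp(-i*phase)> over noise realizations."""
    phases = np.array([rtn_phase(t_final, v, gamma) for _ in range(n_real)])
    return np.exp(-1j * phases).mean()

if __name__ == "__main__":
    # small v/gamma: fast fluctuator (weak, motionally narrowed dephasing);
    # large v/gamma: slow fluctuator (strong dephasing)
    for v in (0.2, 5.0):
        c = free_coherence(t_final=4.0, v=v, gamma=1.0)
        print(f"v/gamma = {v}: |<coherence>| = {abs(c):.3f}")
```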
given the initial condition for the fluctuator , where stands for the fluctuatorinitially in state , may be calculated as , where and is the equilibrium population difference .note that for a symmetrical telegraph process , the only difference between the results for a fluctuator initially in state or is a sign in the above phase .the decoherence rate for a slow fluctuator is much more significant than for a fast fluctuator .this has been discussed in detail elsewhere , and has been reproduced for later comparison with the controlled case in fig .[ fig : tele_nopulse ] , where several values of are considered .a fast fluctuator behaves equivalently to an appropriate environment of harmonic oscillators and noise effects are smaller for smaller values of , whereas for a slow fluctuator the decoherence function saturates and becomes . from a symmetrical bistable fluctuator .several values of are considered , resulting from changing the coupling strength at fixed switching rate a.u.,width=230 ] here we compare the reduction of rtn under the action of the a- , h- , and r - protocols . in order to isolate the effects of the noise ,it is convenient to first carry out the analysis in the interaction picture which removes the free dynamics .the density operator becomes where and the superscript will refer to the interaction picture henceforth .the free propagator between pulses is now with ; while at , we have [ see appendix a ] \exp \left[-i\lambda _ j \frac{\pi}{2 } \sigma _ x\right ] \exp \left [ -i\frac{\omega_0 t_j}{2}\sigma _ z \right]\ : . \label{pclas_ip}\ ] ] a second canonical transformation into the logical frame is also considered , so that ( as before ) realizations with an even or an odd number of total spin flips are treated on an equal footing .we will refer to the combination of the two transformations as the logical - ip frame .similarly to eq .( [ correspondence ] ) , the interaction and the logical - ip frame propagators are related as this leads to the following propagators at , where our goal is to compute the ratio where labels , as before , different control realizations .note that interchanging the order of the averages does _ not _ modify the results if all pulse realizations are considered and the number of rtn realizations is large enough . with switch realizationsno significant variations were found by interchanging the averages .the decoherence rate for the three selected protocols is shown in fig .[ fig : tele_dephas ] , where a time was fixed and divided into an increasing number of intervals .the left panels are obtained for three slow fluctuators , , and the right panels for .these are the six different noise regimes considered in ref . , where the a - protocol was studied .the authors concluded that once , scales with , while for , bb pulses are still capable of partially reducing noise due to a fast fluctuator , but are mostly inefficient against slow fluctuators .here , we verified that among all possible realizations of pulses separated by the same interval , the realization corresponding to the a - protocol yields the largest value of , whereas absence of pulses gives , as expected , the smallest value .this justifies why , in terms of average performance for finite , we have in decreasing order : a- , h- , and r - protocol ; while for , different protocols are expected to become equivalent . 
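For the controlled case, bang-bang pulses in the logical-IP frame amount, for pure dephasing, to multiplying the noise by the pulse-modulation sign y(s) = +/-1 before integrating. The sketch below compares the ensemble-averaged coherence magnitude with no control, under the A-protocol, and under the R-protocol for a slow fluctuator at fixed dt; it is only meant to reproduce the qualitative ordering stated above (A best and R worst among the pulsed schemes, both improving on free decay once dt is small enough), and all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def rtn_trajectory(t_grid, v, gamma, x0=+1):
    """Piecewise-constant telegraph noise xi(t) in {+v, -v} sampled on t_grid."""
    xi = np.empty_like(t_grid)
    state, next_switch = x0, rng.exponential(1.0 / gamma)
    for i, t in enumerate(t_grid):
        while t >= next_switch:
            state = -state
            next_switch += rng.exponential(1.0 / gamma)
        xi[i] = state * v
    return xi

def modulation(t_grid, dt, scheme):
    """Pulse-modulation sign y(t): flips at every pi-pulse."""
    slots = (t_grid // dt).astype(int)
    n_slots = slots.max() + 1
    if scheme == "none":
        flips = np.zeros(n_slots, dtype=int)
    elif scheme == "A":
        flips = np.ones(n_slots, dtype=int)            # pulse at the end of every slot
    else:                                              # "R": random pulse or identity
        flips = rng.integers(0, 2, size=n_slots)
    signs = (-1) ** np.concatenate(([0], np.cumsum(flips[:-1])))
    return signs[slots]

def avg_coherence(scheme, t_final=4.0, dt=0.25, v=5.0, gamma=1.0, n_real=2000, n_fine=400):
    t = np.linspace(0.0, t_final, n_fine, endpoint=False)
    step = t_final / n_fine
    vals = [np.exp(-1j * np.sum(rtn_trajectory(t, v, gamma) * modulation(t, dt, scheme)) * step)
            for _ in range(n_real)]
    return abs(np.mean(vals))

if __name__ == "__main__":
    for scheme in ("none", "A", "R"):
        print(scheme, round(avg_coherence(scheme), 3))
```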
.( green ) solid lines represent the analytical results from in the absence of control .( blue ) stars : a - protocol ; ( black ) circles : r - protocol ; and ( purple ) plus : h - protocol .averages are taken over rtn realizations and all possible pulse realizations .the time interval considered is , thus ., width=326 ] .( green ) solid line : analytical results in the absence of control pulses ; ( blue ) stars : a - protocol ; both r- and h - protocols have phase equal to zero .the time interval considered is , so .averages computed as in fig .[ fig : tele_dephas ] ., width=326,height=288 ] in terms of refocusing the unwanted phase evolution , the above randomized protocols are optimal , since , while the phase magnitude for the a - protocol is eventually compensated as increases .this is shown in fig .[ fig : tele_phase ] . notice also that the absolute phase is very small for fast fluctuators . instead of fixing a time , an alternative picture of the performance of different protocolsmay also be obtained by fixing the number of intervals , as in figs .[ fig : fixed ] ( left panels ) . as expected , larger leads to coherence preservation for longer times .still another option is to fix the interval between pulses , as in fig .[ fig : fixed ] ( right panel ) .as before , the a - protocol shows the best performance , followed by the h- and r - protocols . .left panels : ( top ) , ( bottom ) .right panel : .( green ) solid line : analytical results in the absence of control .( blue ) stars : a - protocol ; ( black ) circles : r - protocol ; and ( purple ) plus : h - protocol .averages are taken over rtn realizations and pulse realizations.,width=326,height=187 ] if the interaction picture is not taken into account , complete refocusing is again guaranteed , on average , when either the r- or h - protocols are used .however , for the r - protocol , the qubit frequency plays a delicate role in the resulting dephasing process .we now have which may be further simplified as follows . among the pulse realizations existing in the logical frame ,there are pairs , say corresponding to labels and , where and , while with , which leads to . besides , since we are considering a semirandom telegraph noise , a pulse at is equivalent to switching the fluctuator from the initial state to , whose net effect is simply a change in the sign of the phase .therefore , we may write e^{-\gamma^{(k)}(t_m , t_0 ) } \right\ } \ : , \nonumber\end{aligned}\ ] ] where and . in the physical frame ,we find correspondingly e^{-\gamma^{(k)}(t_m , t_0 ) } \right\}\ : .\nonumber\end{aligned}\ ] ] contrary to the result obtained in the absence of noise , eq . ( [ e_logical ] ) , the additional realization - dependent phase shift now remains . while , on average , this phase is removed in the limit where , for finite control rates may destructively interfere with the phase gained from the free evolution , potentially increasing the coherence loss .identifying specific values of where such harmful interferences may happen for the given rtn process is not possible , which makes the results for the r - protocol with finite unpredictable in this case .while the above feature is a clear disadvantage , it is avoided by the h - protocol . for each realization ,the phase accumulated with the free evolution is completely canceled , so the result in the logical - ip frame is equal to that in the logical frame : . 
if access to a classical register that records the total number of spin flips is also available, this equivalence between frames may be further extended to the physical frame . additionally , as already found in sect .iiib , randomized protocols tend to offer superior stability .let us illustrate the above statement through an example where the noisy dynamics of the system is slightly perturbed .suppose that , moving back to the interaction picture , the noise process is now where physically , describes a sequence of six instantaneous switches , equally separated by the interval , restarting again at every instant , being an odd number .this process may be viewed as bursts of switches of duration followed by an interval of dormancy . the resulting behavior for in the logical - ip frameis depicted in fig .[ fig : tele_devil ] . subjected to a disturbance as given in eq .( [ dist ] ) . left panels : fixed , so .right panels : fixed .( green ) solid line : results in the absence of control pulses ; ( blue ) stars : a - protocol ; ( black ) circles : r - protocol ; and ( purple ) plus : h - protocol .the average phase for both r- and h - protocols is zero .left panels : averages are taken over rtn realizations and all possible pulse realizations .right panels : averages are taken over rtn realizations and pulse realizations.,width=326 ] with deterministic control , the rate of noise suppression quickly improves as the separation between pulses shrinks ( left upper panel ) , until a certain value , , where it suddenly shows a significant recoil , becoming almost as bad as simply not acting on the system at all .equivalently , by fixing , the performance of the a - protocol becomes very poor for ( right upper panel ) . in practice , detailed knowledge of the system dynamicsmight be unavailable , making it impossible to predict which values of might be adverse .randomized schemes , on the other hand , are by their own nature more stable against such interferences . as seen from the figure , the r - protocol shows a slower , but also more consistent improvement as decreases , and might therefore be safer in such conditions . notice also that , in terms of coherence preservation and stability , the h - protocol shows ( as intuitively expected ) an intermediate performance between the a- and r - protocols ._ to summarize : _ in the logical - ip frame , the effects of the rtn can be reduced not only under deterministic pulses , but also with a randomized control , though a comparatively shorter pulse separation is needed in the latter case . in the logical and physical frames , the r - protocol , besides showing the poorest performance among the three considered schemes , may also lead to dangerous interferences between the qubit frequency and the phase gained from the free evolution . such problem , however , does not exist for the h - protocol .the benefits of randomization are most clear when limited knowledge about the system dynamics is available and deterministic control sequences may be inefficient in avoiding unwanted `` resonances '' . combining protocols , where we gain stability from randomness , but also avoid the free phase evolution , is desirable especially when working in the physical frame . in this sense ,the h - protocol emerges as a promising compromise .we now analyze the case of a genuine quantum reservoir , where decoherence arises from the entanglement between the qubit and the environment .the relevant hamiltonian is given by eqs .( [ drift0])-([drift ] ) with and . 
in the semiclassical limit, the effects of the interaction with the bosonic degrees of freedom may be interpreted in terms of an external noise source whose fluctuations correspond to a gaussian random process .a detailed analysis of deterministic decoherence suppression for this model was carried out in ( see also for an early treatment of the driven spin - boson model in a nonresonant monochromatic field and for related discussions of dynamically modified relaxation rates ) . here ,we discuss how randomized decoupling performs . as in sect .iv , we first focus on understanding the controlled dynamics in a frame that explicitly removes both the control field and the free evolution due to . let us recall some known results related to the uncontrolled dynamics .we have \right\}\ : , \label{u_ip}\ ] ] where under the standard assumptions that the qubit and the environment are initially uncorrelated , and that the environment is in thermal equilibrium at temperature [ the boltzmann constant is set = 1 ] the trace over the environment degrees of freedom may be performed analytically , leading to the following expression for the qubit coherence : \ } \nonumber \\ & & = \rho_{01}(t_0)\exp [ -\gamma ( t , t_0 ) ] \ : .\label{decay}\end{aligned}\ ] ] here , is the harmonic displacement operator of the bath mode , and the decoherence function is explicitly given by in the continuum limit , substituting by the spectral density , one finds }{\omega^2 } \ : .\label{g_nopulse}\ ] ] for frequencies less than an ultraviolet cutoff , may be assumed to have a power - law behavior , the parameter quantifies the overall system - bath interaction strength and classifies different environment behaviors : corresponds to the ohmic case , to the super - ohmic and to the sub - ohmic case .remarkably , the dynamics remains exactly solvable in the presence of randomized bb kicks .we focus first on the r - protocol viewed in the logical - ip frame . between pulsesthe evolution is characterized by eq .( [ u_ip ] ) , while at eq .( [ pclas_ip ] ) applies . using eq .( [ relation ] ) , the propagator in the logical - ip frame , apart from an irrelevant overall phase factor , may be finally written as \hspace*{-.5 mm } \right\},\ ] ] where under the uncorrelated initial conditions specified above and thermal equilibrium conditions , the qubit reduced density matrix is exactly computed as \big\ } \nonumber \\ & & = \rho_{01}(t_0 ) e^{-\gamma_{r } ( t_m , t_0)}\ : . \label{decoherence}\end{aligned}\ ] ] because in eq .( [ func ] ) can be at random , each element in the sum corresponds to a vector in the complex plane with a different orientation at every step .thus , the displacement operator above may be suggestively interpreted as a random walk in the complex plane .the decoherence function is now given by which , in the continuum limit , becomes \ : .\label{main_result}\end{aligned}\ ] ] the decoherence behavior under the a - protocol is obtained by letting .we then recover the result of deterministically controlled decoherence , which may be further simplified as , , }{\omega^2 } \tan ^2 \left ( \frac{\omega \delta t}{2 } \right)\ : .\label{determ}\end{aligned}\ ] ] before proceeding with a numerical comparison between eqs .( [ main_result])-([determ ] ) , some insight may be gained from an analytical lower bound for the average ) ] in the limit of high temperature , , for a system evolving under the a- and r - protocols . 
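In the continuum limit, the uncontrolled decoherence function quoted in eq. ([g_nopulse]) has the standard pure-dephasing form Gamma(t) = integral dw J(w) coth(w/2T) [1 - cos(w t)] / w^2, up to overall numerical prefactors that are stripped from the text here. The sketch below evaluates it for an ohmic spectral density with an exponential ultraviolet cutoff; the cutoff shape, the prefactor convention, and all parameter values are assumptions made for illustration.

```python
import numpy as np

def spectral_density(omega, alpha=0.25, omega_c=10.0, s=1.0):
    """Power-law spectral density with exponential cutoff:
    s = 1 ohmic, s > 1 super-ohmic, s < 1 sub-ohmic."""
    return alpha * omega_c ** (1 - s) * omega ** s * np.exp(-omega / omega_c)

def gamma_free(t, temperature, n_omega=4000):
    """Uncontrolled decoherence function (assumed form, up to prefactors):
    integral dw J(w) coth(w/2T) (1 - cos(w t)) / w^2."""
    w = np.linspace(1e-4, 200.0, n_omega)
    coth = 1.0 / np.tanh(w / (2.0 * temperature)) if temperature > 0 else np.ones_like(w)
    integrand = spectral_density(w) * coth * (1.0 - np.cos(w * t)) / w ** 2
    return np.sum(integrand) * (w[1] - w[0])

if __name__ == "__main__":
    for T in (0.1, 10.0):       # low vs high temperature (arbitrary units)
        decay = [round(np.exp(-gamma_free(t, T)), 3) for t in (0.1, 0.5, 1.0, 2.0)]
        print(f"T = {T}: |rho_01(t)| / |rho_01(0)| =", decay)
```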
for the fixed times chosen , ( upper panel ) and ( lower panel ) , the coherence element has already practically disappeared and can not be seen in the figure , while the a - protocol is able to recover it even for very few cycles .the values of ] for different realizations are not so spread and the standard deviations are narrower than in the high temperature limit .in addition , the average over realizations is very close to the lower bound , to the point that they can not be distinguished in the figure . .upper panel : ; lower panel : .( green ) solid line : no control ; ( blue ) stars : a - protocol ; ( black ) circles : average over realizations and respective standard deviations for the r - protocol ; ( black ) squares : expectation value for the r - protocol.,width=268 ] we now extend our comparison to the three remaining protocols of fig . [fig : scheme ] , see fig .[ fig : sym ] .we choose a high temperature bath with ( upper panel ) [ low temperature , in this case , leads to similar results ] , and a low temperature bath with ( lower panel ) .the s - protocol shows the best performance , which is evident in the upper panel , but hardly perceptible in the lower one . due to the different rearrangement of the time interval between pulses for this protocol, it does not correspond to any of the realizations of random pulses as considered here and represent a special scheme separated from the others .the performance of the ls - protocol , which has half the number of -pulses used in the a - protocol , turns out to be better in all cases of a high temperature bath , but worse for a fully quantum bath with large .this explains why the h - protocol , which combines symmetrization and randomness , also performs better than the a- in the high temperature limit . , and .lower panel : low temperature ohmic bath , , and fixed time .( green ) solid line : no control ; ( blue ) stars : a - protocol ; ( purple ) plus : h - protocol ; ( orange ) diamonds : ls - scheme ; and ( black ) dot - dashed line : s - protocol .average for the h - protocol is taken over realizations ., width=268 ] _ to summarize : _ in terms of performance , we have in decreasing order : s- , ls- , h- , a- , r - protocols for high temperature ; and s- , a- , h- , ls- , r - protocols for low temperature once the number of pulses are sufficient to start slowing down decoherence .different protocols become again , as expected , essentially equivalent in the limit . for finite pulse separations , in the considered case of a time - independent hamiltonian ,it is always possible to identify a deterministic protocol showing the best performance . however ,if a balance is sought between good performance and protocols minimizing the required number of pulses , then the h - protocol again emerges as an interesting compromise .note , in particular , that the latter outperforms the standard a - protocol in some parameter regimes .we now investigate under which conditions decoherence suppression is attainable in the physical frame , when the system is subjected to randomized control . because , in this frame , the qubit natural frequency plays an important role , random decoupling also depends on how small can be made with respect to .the reduced density matrix is obtained following the same steps described so far , but in order to retain the effects of the system hamiltonian , the transformation into the interaction picture is now done with respect to the environment hamiltonian only hence the superscript . 
upon tracing over the environment degrees of freedom ,we are left with the reduced density operator in the schrdinger picture .the unitary operator between pulses is \bigg\}\ : , \label{u_ipbath } \nonumber\end{aligned}\ ] ] while at it is given by \ : .\label{p_ipbath}\ ] ] by additionally moving to the logical frame we get \\ & & \hspace{0.5 cm } \exp \bigg\ { \frac{\sigma_z}{2 } \otimes \sum_k \left [ b^{\dagger}_k e^{i \omega_k t_0 } \eta ^r_k ( m,\delta t ) - { \rm { \rm h.c . } } \right ] \bigg\ } \ : .\nonumber\end{aligned}\ ] ] tracing over the environment and taking the expectation over control realizations leads to the coherence element in the logical frame : e^{-\gamma_{r}^{(k ) } ( t_m , t_0)}\:,\ ] ] where is given by eq .( [ large_chi ] ) .thus , in addition to the decoherence described as before by eq .( [ main_result ] ) , we now have ensemble dephasing due to the fact that each realization carries a different phase factor proportional to . the results for the ratios in the logical and logical - ip frames for the system ,respectively , are summarized in fig .[ fig : ipbath ] , where is fixed and the system is observed at different times . both a high temperature and a low temperature scenario are considered .the phase for each realization in the logical frame is mostly irrelevant when .the outcomes of the average over all realizations in both frames are then comparable , independently of the bath temperature .the situation changes dramatically when the spin - flip energy becomes large , the worst scenario corresponding to , with odd . here , because is an even number , each realization makes a positive or a negative contribution to the average , which may therefore be very much reduced .such destructive quantum interference is strongly dependent on the bath temperature . and in the logical and logical - ip frames , respectively .a fixed time interval is taken .upper panel : high temperature ohmic bath , , lower panel : low temperature ohmic bath , .( green ) solid line : no control ; ( blue ) stars : a - protocol ; ( purple ) plus : h - protocol ; ( black ) squares : r - protocol in the logical - ip frame ; ( red ) up triangles : r - protocol in the logical frame with small frequency ; ( red ) down triangles : r - protocol in the logical frame with large frequency .average performed over all realizations.,width=259,height=297 ] among all random pulse realizations , the most effective at suppressing decoherence are those belonging to the smaller ensemble of the h - protocol .none of them carries a phase , so they always make large positive contributions to the total ensemble average . in a high temperature bath , the realizations that can make negative contributions have often tiny values of , which explains why even in the extreme case of the r - protocol can still lead to some decoherence reduction . in a low temperature bath , on the other hand , decoherence is slower and for the time considered here , the values of for all realizations are very close , which justifies their cancellation when . in the physical frame ,the average for the density matrix depends on the initial state of the system as e^{-\gamma_{r}^{(k ) } ( t_m , t_0)}\:.\end{aligned}\ ] ] as already discussed in sect .iii , the problem associated with population inversion may be avoided if a classical register is used to record the actual number of spin flips . 
_ to summarize : _ two conditions need to be satisfied for the r - protocol to become efficient in reducing decoherence : and also .notice , however , that when randomness and determinism are combined in a more elaborated protocol , such as the h - protocol , no destructive interference due to occurs .in addition , the hybrid scheme is still capable to outperform the a - protocol in appropriate regimes . as a final example, imagine that the coupling parameters between the system and the environment are time dependent and let us for simplicity work again in the logical - ip frame .the total hamiltonian is given by eqs .( [ drift0])-([drift ] ) with and .two illustrative situations are considered : changes sign after certain time intervals , or it periodically oscillates in time .suppose that , where describes instantaneous sign changes of the coupling parameter after every interval .for a high temperature bath and a fixed time , fig .[ fig : devil_quantum ] shows that the results for the a - protocol exhibit a drastic drop when and .this is due to the fact that some of sign changes happen very close to or coincide with some of -pulses of the deterministic sequence , canceling their effect .in contrast , the occurrence of spin flips in randomized schemes is irregular , so that the latter are more protected against such `` resonances '' and steadily recover coherence as decreases , even though at a slower pace . , with alternating couplings .( blue ) stars : a - protocol ; ( black ) circles : r - protocol ; ( purple ) plus : h - protocol .the interval considered is .averages taken over all possible realizations . ,width=259 ] note that when dealing with the s- or ls - protocols , the same sort of recoil should be expected for different time dependences and different values of .assume that the coupling parameter is given by , where and is small .this function has two superposed periodic behaviors , one with a long period and the other fast oscillating .the fast oscillations are shown in the left upper panel of fig .[ fig : time_bath ] . , for and .upper right panel : decoherence rate in the absence of control .lower panel : decoherence rate for a fixed time interval . a high temperature ohmic bath , , is considered .( green ) solid line : no control ; ( blue ) stars : a - protocol ; ( black ) circles : r - protocol ; ( purple ) plus : h - protocol .averages taken over realizations .standard deviations for the r - protocol are shown ., width=297 ] we consider a high temperature bath , .the right upper panel shows the qubit decoherence in the absence of pulses .the oscillations in the decay rate are related to the oscillations in the interaction strength between the system and the bath . in the lower panel ,we fix a time and compare the decoherence rate for the cases of absence of control , a- , h- and r - protocols . when the result for the a - protocol suddenly becomes even worse than not acting on the system .random pulses , on the contrary , do not show any significant recoil .the reason for the inefficiency of the a - protocol when becomes evident from the left upper panel of fig .[ fig : time_bath ] . vertical dashed lines indicate where the pulses occur .they mostly coincide with the instants where also changes sign . for the ls - protocol, similar unfavorable circumstances happen for different values of and similar behaviors should be expected for other deterministic protocols and functions . 
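The "resonance" between a sign-switching coupling and the deterministic pulse train can be illustrated without the full bath: in the toggling frame the surviving interaction is governed by the modulated coupling y(t) g(t). The sketch below integrates it for a square-wave g(t) that flips sign after every interval tau, comparing dt = tau (pulses coincide with the sign changes, so the A-protocol averages nothing out) with a detuned dt and with the R-protocol; the functional form and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def square_coupling(t, tau):
    """Coupling g(t) that flips sign instantaneously after every interval tau."""
    return (-1.0) ** np.floor(t / tau).astype(int)

def modulated_integral(dt, tau, t_total, scheme, n_fine=8000):
    """Running toggling-frame integral of y(t)*g(t): the smaller its magnitude,
    the better the interaction is averaged out."""
    t = np.linspace(0.0, t_total, n_fine, endpoint=False)
    step = t_total / n_fine
    slots = (t // dt).astype(int)
    n_slots = slots.max() + 1
    flips = np.ones(n_slots, dtype=int) if scheme == "A" else rng.integers(0, 2, size=n_slots)
    y = (-1.0) ** np.concatenate(([0], np.cumsum(flips[:-1])))[slots]
    return np.sum(y * square_coupling(t, tau)) * step

if __name__ == "__main__":
    tau, t_total = 1.0, 40.0
    for dt in (1.0, 0.8):   # dt = tau: every pulse coincides with a sign change of g(t)
        a_val = abs(modulated_integral(dt, tau, t_total, "A"))
        r_val = np.mean([abs(modulated_integral(dt, tau, t_total, "R")) for _ in range(500)])
        print(f"dt = {dt}: |A integral| = {a_val:.2f}   mean |R integral| = {r_val:.2f}")
```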
_ to summarize : _ the above examples again reinforce the idea of enhanced stability of randomized controls , and suggest that randomization might represent a safer alternative in reducing decoherence when limited knowledge about the system - bath interaction is available .a quantitative comparison between deterministic and randomized control for the most elementary target system , consisting of a single ( isolated or open ) qubit , was developed in different frames .the main conclusions emerging from this study may be summarized as follows .first , it is always possible to identify conditions under which purely random or hybrid schemes succeed at achieving the desired level of dynamical control .frame considerations play an important role in specifying such conditions , satisfactory performance in a given frame being ultimately determined by a hierarchy of time scales associated with all the dynamical components in the relevant hamiltonian .while all protocols become essentially equivalent in the limit of arbitrarily fast control , the behavior for finite pulse separation is rich and rather sensitive to the details of the underlying dynamics . as a drawback of pure random design , ensemble average tends to introduce , in general , additional phase damping , which may be however circumvented by combining determinism and randomness within a hybrid design .second , for time - independent control settings in this simple system , it was always possible to identify a deterministic protocol with best performance .while deterministic schemes ensuring accurate averaging of a known interaction always exist in principle , such a conclusion remains to be verified under more general circumstances , in particular access to a _ restricted _ set of control operations .the hybrid protocol proved superior to the pure random schemes , as well as to standard asymmetric schemes in certain situations .third , for time - varying systems , randomized protocols typically allow for enhanced stability against parameter variations , which may severely hamper the performance of deterministic schemes .pure random design tends to perform better than hybrid in this respect , both choices , however , improving over purely cyclic controls under appropriate conditions .overall , hybrid design emerges as a preferred strategy for merging advantageous features from different protocols , thereby allowing to better compromise between conflicting needs . from a conceptual standpoint , it is intriguing to realize that complete suppression of decoherence remains possible , in principle , by purposefully introducing a probabilistic component in the underlying control , and perhaps surprising to identify cases where this leads to improved efficiency over pure deterministic methods . in a broader context, however , it is worth mentioning that the philosophy of recognizing a beneficial role of randomness in physical processes has a long history . within nmr , the stochastic averaging of intermolecular interactions in gases and isotropic liquids due to random translational and re - orientational motionsmay be thought of as a naturally occurring random self - decoupling process .in spectroscopic applications of so - called stochastic nmr and stochastic magnetic - resonance imaging , spin excitation via trains of weak rf pulses randomly modulated in amplitude , phase , and/or frequency are used to enhance decoupling efficiencies over a broader frequency bandwidth than attainable otherwise . 
even more generally , the phenomenon of stochastic resonance is paradigmatic in terms of pointing to a constructive role of noise in the transmission of physical signals . within qip ,strategies aimed at taking advantage of noise and/or stochasticity have been considered in contexts ranging from quantum games , to quantum walks , dissipation - assisted quantum computation , as well as specific coherent - control and quantum simulation scenarios . yet another suggestive example is offered by the work of prosen and nidari , who have shown how static perturbations characterizing faulty gates may enhance the stability of quantum algorithms .more recently , as mentioned , both pure random and hybrid active compensation schemes for static coherent errors have been proposed . while it is important to stress that none of the above applications stem from a general _ control - theoretic framework _ as developed in , it is still rewarding to fit such different examples within a unifying perspective . our present analysis should be regarded as a first step toward a better understanding and exploitation of the possibilities afforded by randomization for coherent and decoherent error control . as such, it should be expanded in several directions , including more realistic control systems and settings , and fault - tolerance considerations . while we plan to report on that elsewhere , it is our hope that our work will stimulate fresh perspectives on further probing the interplay between the field of coherent quantum control and the world of randomness .l. v. warmly thanks manny knill for discussions and feedback during the early stages of this project .the authors are indebted to an anonymous referee for valuable suggestions .l. f. s. gratefully acknowledges support from constance and walter burke through their special projects fund in quantum information science .the control hamiltonian is designed according to the intended modification of the target dynamics in a desired frame . throughout this work, our goal has been to freeze the system evolution by removing any phase accumulated due to the unitary evolution , as well as avoiding nonunitary ensemble dephasing and decoherence . as clarified below, this requires the use of _ identical -pulses in the physical frame_. this condition may be relaxed at the expense of no longer refocusing the unitary evolution .let the control of the system be achieved via the application of an external alternating field ( e.g. , a radiofrequency magnetic field ) , \sigma_x \ : , \label{hc_cos}\ ] ] where the carrier is tuned on resonance with the qubit central frequency , . as described in the text , ] and ] . from the above expression ,the interaction - picture propagators for an instantaneous -pulse applied at are found , respectively , as \exp \left[-i \frac{\pi}{2 } \sigma_x \right ] \exp \left[-i \frac{\omega_0 t_j}{2 } \sigma_z \right ] , \nonumber \\ { \rm ( ii ) } &\ : & p_j^i = \exp \left[-i \frac{\pi}{2 } \sigma_x \right ] .\nonumber \end{aligned}\ ] ] we can then return to the schrdinger picture using the relation ^i \exp [ i \omega_0 t_j \sigma_z /2]\:,\ ] ] leading to the propagators , \label{p_varphi } \\ { \rm ( ii ) } & \ : & p_j = \exp \left[-i \frac{\omega_0 t_j}{2 } \sigma_z \right ] \exp \left[-i \frac{\pi}{2 } \sigma_x \right ] \exp \left[i \frac{\omega_0 t_j}{2 } \sigma_z \right]\ : . 
\nonumber \end{aligned}\ ] ] thus , pulses with are confirmed to be translationally invariant in time in the physical frame , as is directly clear from the dependence in ( [ hc_cos ] ) . the difference between the two choices for the control purposes becomes evident by considering the a - protocol on the isolated qubit . from ( [ p_varphi ] ) , the propagators in the physical frame are , respectively , , which leads to the conclusion that refocusing in the physical frame may only be achieved with identical pulses , that is , if . clearly , for the choice , the accumulated phase may only be disregarded in the frame rotating with the frequency . both choices are equally useful if decoherence suppression becomes the primary objective in the open - system case .
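The appendix's conclusion, that refocusing in the physical frame requires identical pulses, can be verified numerically: with H0 = (omega0/2) sigma_z, one A-cycle built from fixed X pi-pulses is compared against one built from pulses conjugated by the free z-rotation at the pulse time (the conjugated form displayed above). Only the former returns the identity up to a global phase; the latter reduces to the free evolution. Parameter values are arbitrary.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)

def rot_z(theta):
    """exp(-i*theta*sigma_z/2), diagonal so no general matrix exponential is needed."""
    return np.diag(np.exp(-1j * theta / 2 * np.array([1.0, -1.0])))

def pulse_fixed(_t):
    """Identical pi-pulse about x: exp(-i*(pi/2)*sigma_x) = -i*sigma_x."""
    return -1j * sx

def pulse_tracking(omega0):
    """Pulse conjugated by the free z-rotation at the pulse time t_j."""
    return lambda t_j: rot_z(omega0 * t_j) @ (-1j * sx) @ rot_z(-omega0 * t_j)

def a_cycle(omega0, dt, pulse_at):
    U0 = rot_z(omega0 * dt)
    return pulse_at(2 * dt) @ U0 @ pulse_at(dt) @ U0

if __name__ == "__main__":
    omega0, dt = 2.3, 0.4
    for name, pulse in (("identical pulses", pulse_fixed),
                        ("conjugated pulses", pulse_tracking(omega0))):
        U = a_cycle(omega0, dt, pulse)
        U = U / (U[0, 0] / abs(U[0, 0]))          # drop the global phase
        print(name, "-> no-op in the physical frame:", np.allclose(U, np.eye(2)))
```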
we revisit the problem of switching off unwanted phase evolution and decoherence in a single two - state quantum system in the light of recent results on random dynamical decoupling methods [ l. viola and e. knill , phys . rev . lett . * 94 * , 060502 ( 2005 ) ] . a systematic comparison with standard cyclic decoupling is carried out for a variety of dynamical regimes , including both semiclassical and fully quantum decoherence models . in particular , exact analytical expressions are derived for randomized control of decoherence from a bosonic environment . we quantitatively investigate control protocols based on purely deterministic , purely random , as well as hybrid design , and identify their relative merits and weaknesses at improving system performance . we find that for time - independent systems , hybrid protocols tend to perform better than purely random ones and may improve over standard asymmetric schemes , whereas random protocols can be considerably more stable against fluctuations in the system parameters . besides shedding light on the physical requirements underlying randomized control , our analysis further demonstrates the potential for explicit control settings where the latter may significantly improve over conventional schemes .
evolutionary algorithms are general problem solvers that can be applied to many difficult optimization problems . because of their generality , these algorithms act like a swiss army knife : a handy set of tools that can be used to address a variety of tasks . in general , a specific task can be performed better with an associated specialized tool . however , in the absence of this tool , the swiss army knife may be a suitable substitute . for example , to cut a piece of bread the kitchen knife is more suitable , but when traveling the swiss army knife is fine . similarly , when a problem to be solved comes from a domain where problem - specific knowledge is absent , evolutionary algorithms can be applied successfully . evolutionary algorithms are easy to implement and often provide adequate solutions . the origin of these algorithms is found in the darwinian principles of natural selection . in accordance with these principles , only the fittest individuals survive the struggle for existence and reproduce their good characteristics into the next generation . as illustrated in fig . [ pic:1 ] , evolutionary algorithms operate on a population of solutions . at first , the solution needs to be defined within the evolutionary algorithm . usually , this definition can not be given directly in the original problem context . instead , the solution is defined by data structures that describe the original problem context indirectly and thus determine the search space of the evolutionary search ( optimization process ) . there is an analogy in nature , where the genotype encodes the phenotype . consequently , a genotype - phenotype mapping determines how the genotypic representation is mapped to the phenotypic property . in other words , the phenotypic property determines the solution in the original problem context . before the evolutionary process actually starts , the initial population needs to be generated , most often randomly . the basis of an evolutionary algorithm is an evolutionary search in which the selected solutions undergo reproduction operations , i.e. , crossover and mutation . as a result , new candidate solutions ( offspring ) are produced that compete , according to their fitness , with the old ones for a place in the next generation . the fitness is evaluated by an evaluation function ( also called a fitness function ) that defines the requirements of the optimization ( minimization or maximization of the fitness function ) . in this study , minimization of the fitness function is considered . as the population evolves , the solutions become fitter and fitter . the evolutionary search is iterated until a solution with sufficient quality ( fitness ) is found or the predefined number of generations is reached . note that some steps in fig . [ pic:1 ] can be omitted ( e.g. , mutation , survivor selection ) . an evolutionary search is characterized by two terms : exploration and exploitation . the former is connected with discovering new solutions , the latter with searching in the vicinity of known good solutions . both , however , interweave in the evolutionary search .
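To fix ideas, the loop sketched in fig. [pic:1] can be written compactly: initialize a population, evaluate fitness, select parents, apply crossover and mutation, and let the offspring compete for survival. The sketch below is a generic minimizer for a toy real-valued problem (the sphere function); it is not the hsa-ea developed later in the chapter, and the operator choices (tournament selection, one-point crossover, gaussian mutation, plus-style survivor selection) are illustrative assumptions.

```python
import random

def evaluate(x):                      # toy fitness: sphere function, to be minimized
    return sum(v * v for v in x)

def tournament(pop, fits, k=3):
    """Parent selection: the best out of k randomly chosen individuals."""
    idx = min(random.sample(range(len(pop)), k), key=lambda i: fits[i])
    return pop[idx]

def crossover(a, b, pc=0.9):
    """One-point crossover applied with probability pc."""
    if random.random() < pc and len(a) > 1:
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]
    return a[:]

def mutate(x, pm=0.1, sigma=0.3):
    """Gaussian mutation applied gene-wise with probability pm."""
    return [v + random.gauss(0.0, sigma) if random.random() < pm else v for v in x]

def evolve(dim=10, pop_size=50, generations=200):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        fits = [evaluate(x) for x in pop]
        offspring = [mutate(crossover(tournament(pop, fits), tournament(pop, fits)))
                     for _ in range(pop_size)]
        # survivor selection: offspring compete with the old population
        pop = sorted(pop + offspring, key=evaluate)[:pop_size]
    return min(pop, key=evaluate)

if __name__ == "__main__":
    random.seed(0)
    print("best fitness:", round(evaluate(evolve()), 6))
```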
the evolutionary search acts correctly when a sufficient diversity of population is present .the population diversity can be measured differently : the number of different fitness values , the number of different genotypes , the number of different phenotypes , entropy , etc .the higher the population diversity , the better exploration can be expected .losing of population diversity can lead to the premature convergence .exploration and exploitation of evolutionary algorithms are controlled by the control parameters , for instance the population size , the probability of mutation , the probability of crossover , and the tournament size . to avoid a wrong setting of these, the control parameters can be embedded into the genotype of individuals together with problem variables and undergo through evolutionary operations .this idea is exploited by a self - adaptation .the performance of a self - adaptive evolutionary algorithm depends on the characteristics of population distribution that directs the evolutionary search towards appropriate regions of the search space . , however , widened the notion of self - adaptation with a generalized concept of self - adaptation .this concept relies on the neutral theory of molecular evolution .regarding this theory , the most mutations on molecular level are selection neutral and therefore , can not have any impact on fitness of individual . consequently ,the major part of evolutionary changes are not result of natural selection but result of random genetic drift that acts on neutral allele .an neutral allele is one or more forms of a particular gene that has no impact on fitness of individual .in contrast to natural selection , the random genetic drift is a whole stochastic process that is caused by sampling error and affects the frequency of mutated allele . on basis of this theory igel and toussaintascertain that the neutral genotype - phenotype mapping is not injective .that is , more genotypes can be mapped into the same phenotype . by self - adaptation , a neutral part of genotype ( problem variables ) that determines the phenotype enables discovering the search space independent of the phenotypic variations . on the other hand ,the rest part of genotype ( control parameters ) determines the strategy of discovering the search space and therefore , influences the exploration distribution .although evolutionary algorithms can be applied to many real - world optimization problems their performance is still subject of the no free lunch ( nfl ) theorem . according to this theoremany two algorithms are equivalent , when their performance is compared across all possible problems .fortunately , the nfl theorem can be circumvented for a given problem by a hybridization that incorporates the problem specific knowledge into evolutionary algorithms . in fig .2 some possibilities to hybridize evolutionary algorithms are illustrated . 
at first, the initial population can be generated by incorporating solutions of existing algorithms or by using heuristics , local search , etc .in addition , the local search can be applied to the population of offsprings .actually , the evolutionary algorithm hybridized with local search is called a memetic algorithm as well .evolutionary operators ( mutation , crossover , parent and survivor selection ) can incorporate problem - specific knowledge or apply the operators from other algorithms .finally , a fitness function offers the most possibilities for a hybridization because it can be used as decoder that decodes the indirect represented genotype into feasible solution . by this mapping , however , the problem specific knowledge or known heuristics can be incorporated to the problem solver . in this chapter the hybrid self - adaptive evolutionary algorithm ( hsa - ea ) is presented that is hybridized with : * construction heuristic , * local search , * neutral survivor selection , and * heuristic initialization procedure .this algorithm acts as meta - heuristic , where the down - level evolutionary algorithm is used as generator of new solutions , while for the upper - level construction of the solutions a traditional heuristic is applied .this construction heuristic represents the hybridization of evaluation function .each generated solution is improved by the local search heuristics .this evolutionary algorithm supports an existence of neutral solutions , i.e. , solutions with equal values of a fitness function but different genotype representation .such solutions can be arisen often in matured generations of evolutionary process and are subject of neutral survivor selection .this selection operator models oneself upon a neutral theory of molecular evolution and tries to direct the evolutionary search to new , undiscovered regions of search space .in fact , the neutral survivor selection represents hybridization of evolutionary operators , in this case , the survivor selection operator .the hybrid self - adaptive evolutionary algorithm can be used especially for solving of the hardest combinatorial optimization problems .the chapter is further organized as follows . in the sect .2 the self - adaptation in evolutionary algorithms is discussed . 
there , the connection between neutrality and self - adaptation is explained .3 describes hybridization elements of the self - adaptive evolutionary algorithm .4 introduces the implementations of hybrid self - adaptive evolutionary algorithm for graph 3-coloring in details .performances of this algorithm are substantiated with extensive collection of results .the chapter is concluded with summarization of the performed work and announcement of the possibilities for the further work .optimization is a dynamical process , therefore , the values of parameters that are set at initialization become worse through the run .the necessity to adapt control parameters during the runs of evolutionary algorithms born an idea of self - adaptation , where some control parameters are embedded into genotype .this genotype undergoes effects of variation operators .mostly , with the notion of self - adaptation evolutionary strategies are connected that are used for solving continuous optimization problems .typically , the problem variables in evolutionary strategies are represented as real - coded vector that are embedded into genotype together with control parameters ( mostly mutation parameters ) .these parameters determine mutation strengths that must be greater than zero .usually , the mutation strengths are assigned to each problem variable . in that case , the uncorrelated mutation with step sizes is obtained . here , the candidate solution is represented as .the mutation is now specified as follows : where and denote the learning rates . to keep the mutation strengths greater than zero, the following rule is used frequently , a crossover operator is used in the self - adaptive evolutionary strategies .this operator from two parents forms one offsprings .typically , a discrete and arithmetic crossover is used .the former , from among the values of two parents and that are located on -th position , selects the value of offspring randomly .the later calculates the value of offspring from the values of two parents and that are located on -th position according to the following equation : where parameter captures the values from interval ] ) are observed by the original hsa - ea .conversely , the hsa - ea with the random initialization procedure ( ) gained the worst results by the instances the nearest to the threshold , while these were better while the edge density was raised regarding the original hsa - ea .the turning point represents the instance of graph with .after this point is reached the best results were overtaken by the hsa - ea with the random initialization procedure ( ) . in contrary ,the best results by the instances the nearest to the threshold according to the was observed by the hsa - ea without local search heuristics ( ) . here ,the turning point regarding the performance of the hsa - ea ( ) was observed as well .after this point results of the hsa - ea without local search heuristics becomes worse .conversely , the hsa - ea with random initialization procedure that was the worst by the instances before the turning point becomes the best after this . 
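Stepping back to the self-adaptive machinery described earlier in this section: the uncorrelated mutation with one step size per variable rescales each sigma_i by a log-normal factor, keeps it above a small threshold so the mutation strengths stay greater than zero, and then perturbs x_i by sigma_i' * N_i(0,1); the discrete crossover copies each position from one of the two parents at random, while the arithmetic crossover blends the two parents with a weight drawn from [0,1]. The sketch below implements these textbook rules; the learning-rate constants and the threshold value are standard conventions assumed here, since the exact symbols are stripped from the text.

```python
import math
import random

def self_adaptive_mutation(x, sigma, eps=1e-6):
    """Uncorrelated mutation with n step sizes: each sigma_i is rescaled by a
    log-normal factor and floored at eps, then x_i is perturbed accordingly."""
    n = len(x)
    tau_prime = 1.0 / math.sqrt(2.0 * n)           # common learning rate (assumed convention)
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))      # coordinate-wise learning rate
    common = tau_prime * random.gauss(0.0, 1.0)
    new_x, new_sigma = [], []
    for xi, si in zip(x, sigma):
        s = max(si * math.exp(common + tau * random.gauss(0.0, 1.0)), eps)
        new_sigma.append(s)
        new_x.append(xi + s * random.gauss(0.0, 1.0))
    return new_x, new_sigma

def discrete_crossover(a, b):
    """For each position, copy the value from one of the two parents at random."""
    return [random.choice(pair) for pair in zip(a, b)]

def arithmetic_crossover(a, b):
    """Blend the two parents position-wise with a weight drawn from [0, 1]."""
    w = random.random()
    return [w * ai + (1.0 - w) * bi for ai, bi in zip(a, b)]

if __name__ == "__main__":
    random.seed(1)
    print(self_adaptive_mutation([0.5, -1.2, 3.0], [1.0, 1.0, 1.0]))
    print(discrete_crossover([1, 2, 3], [4, 5, 6]))
    print(arithmetic_crossover([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
```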
as illustrated by fig .[ fig : sub_3].f , all versions of the hsa - ea leaved in average less than uncolored vertices by the 3-coloring .the bad result by the original hsa - ea coloring the graph with was caused because of the ordering by saturation local search heuristic that got stuck in the local optima .nevertheless , note that most important measure is .+ + _ the impact of the neutral survivor selection _ + in this experiments the impact of the neutral survivor selection on results of the hsa - ea was analyzed . in this context , the hsa - ea with deterministic survivor selection was developed with the following characteristic : * the equation [ eq : fit1 ] that prevents the generation of neutral solutions was used instead of the equation [ eq : fit ] . *the deterministic survivor selection was employed instead of the neutral survivor solution .this selection orders the solutions according to the increasing values of the fitness function . in the next generationthe first solutions is selected to survive . before starting with the analysis, we need to prove the existence of neutral solution and to establish they characteristics .therefore , a typical run of the hsa - ea with neutral survivor selection is compared with the typical run of the hsa - ea with the deterministic survivor selection .as example , the 3-coloring of the equi - partite graph with was taken into consideration .this graph is easy to solve by both versions of the hsa - ea .characteristics of the hsa - ea by solving it are presented in fig .[ fig : sub_4 ] . in the fig .[ fig : sub_4].a the best and the average number of uncolored nodes that were achieved by the hsa - ea with neutral and the hsa - ea with deterministic survivor selection are presented .the figure shows that the hsa - ea with the neutral survivor selection converge to the optimal solution very fast . to improve the number of uncolored nodes from 140 to 10 only 10,000 solutions to evaluationwere needed .after that , the improvement stagnates ( slow progress is detected only ) until the optimal solution is found .the closer look at the average number of uncolored nodes indicates that this value changed over every generation .typically , the average fitness value is increased when the new best solution is found because the other solutions in the population try to adapt itself to the best solution .this self - adaptation consists of adjusting the step sizes that from larger starting values becomes smaller and smaller over the generations until the new best solution is found .the exploring of the search space is occurred by this adjusting of the step sizes .conversely , the average fitness values are changed by the hsa - ea in the situations where the best values are not found as well .the reason for that behavior is the stochastic evaluation function that can evaluate the same permutation of vertices always differently .more interestingly , the neutral solution occurs when the average fitness values comes near to the best ( fig .[ fig : sub_4].b ) . as illustrated by this figure ,the most neutral solutions arise in the later generations when the population becomes matured .in example from fig . [fig : sub_4].b , the most neutral solutions occurred after 20,000 and 30,000 evaluations of fitness function , where almost of neutral solution occupied the current population . in contrary ,the hsa - ea with deterministic survivor selection starts with the lower number of uncolored vertices ( fig .[ fig : sub_4].c ) than the hsa - ea with neutral selection. 
however , the convergence of this algorithm is slower than by its counterpart with the neutral selection . a closer look at the average fitness value uncovers that the average fitness value never come close to the best fitness value . a falling and the rising of the average fitness valuesare caused by the stochastic evaluation function .in the fig . [fig : sub_4].d a diversity of population as produced by the hsa - ea with different survivor selections is presented .the diversity of population is calculated as a standard deviation of the vector consisting of the mean weight values in the population .both hsa - ea from this figure lose diversity of the initial population ( close to value 8.0 ) very fast .the diversity falls under the value 1.0 . over the generationsthis diversity is raised until it becomes stable around the value 1.0 . here , the notable differences between curves of both hsa - ea are not observed . to determine what impact the neutral survivor selection has on results of the hsa - ea, a comparison between results of the hsa - ea with neutral survivor selection ( ) and the hsa - ea with deterministic survivor selection ( ) was done .however , both versions of the hsa - ea run without local search heuristics . results of these are represented in the fig .[ fig : sub_5 ] . as reference point, the results of the original hsa - ea hybridized with the swap local search heuristic ( ) that obtains the overall best results are added to the figure .the figure is divided in two graphs where the first graph ( fig .[ fig : sub_5].a ) presents results of the hsa - ea with heuristic initialization procedure and the second graph ( fig .[ fig : sub_5].b ) results of the hsa - ea with random initialization procedure according to the . as shown by the fig .[ fig : sub_5].a the hsa - ea with neutral survivor selection ( ) exposes better results by the instances near to the threshold ( ] ) are presented in table [ tab : summary ] , where these are arranged according to the applied selection ( column ) , the local search heuristics ( column ) and initialization procedure ( column ) . in column average results of the corresponding version of the hsa - ea are presented .additionally , the column denotes the averages of the hsa - ea using both kind of initialization procedure .finally , the column represents the average results according to that are dependent on the different kind of survivor selection only ..average results of various versions of the hsa - ea according to the sr [ cols="<,<,<,^,^,^",options="header " , ] as shown by the table [ tab : summary ] , results of the hsa - ea with deterministic survivor selection without local search heuristics and without random initialization procedure ( , denoted as ) were worse than results or its counterpart with neutral survivor selection ( , denoted as ) in average for more than 10.0% . moreover , the local search heuristics improved results of the hsa - ea with neutral survivor selection and random initialization procedure from ( denoted as ) to ( denoted as ) that amounts to almost 10.0% . finally , the heuristic initialization improved results of the hsa - ea with neutral selection and with local search heuristics from ( denoted as ) to ( denoted as ) , i.e. for 1.5% .note that the represents the best result that was found during the experimentation . 
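Before turning to the summary, the uncorrelated self-adaptive mutation with step sizes and the discrete/arithmetic crossover described in Sect. 2 can be made concrete with a short sketch. This is an illustrative reconstruction in Python, not the chapter's actual implementation; the learning rates tau0 and tau are set to the values commonly recommended in the evolution strategy literature, and the lower bound eps0 on the mutation strengths is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_adaptive_mutation(x, sigma, tau0, tau, eps0):
    """Uncorrelated mutation with one step size per problem variable.

    Each step size is perturbed log-normally and floored at eps0 so that
    mutation strengths stay strictly positive, as required in Sect. 2.
    """
    n = len(x)
    common = tau0 * rng.normal()                      # one draw shared by all step sizes
    sigma_new = sigma * np.exp(common + tau * rng.normal(size=n))
    sigma_new = np.maximum(sigma_new, eps0)           # keep mutation strengths > 0
    x_new = x + sigma_new * rng.normal(size=n)
    return x_new, sigma_new

def discrete_crossover(p1, p2):
    """Pick each component of the single offspring from one of the two parents at random."""
    mask = rng.random(len(p1)) < 0.5
    return np.where(mask, p1, p2)

def arithmetic_crossover(p1, p2, w=0.5):
    """Weighted average of the two parents; the weight w is assumed to lie in [0, 1]."""
    return w * p1 + (1.0 - w) * p2

# toy usage on a 10-dimensional genotype (assumed parameter values)
n = 10
x, sigma = rng.normal(size=n), np.full(n, 1.0)
tau0, tau, eps0 = 1.0 / np.sqrt(2 * n), 1.0 / np.sqrt(2 * np.sqrt(n)), 1e-3
child_x, child_sigma = self_adaptive_mutation(x, sigma, tau0, tau, eps0)
```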
in summary ,the construction heuristics has the most impact on results of the hsa - ea .that is , the basis of the graph 3-coloring represents the self - adaptive evolutionary algorithm with corresponding construction heuristic .however , to improve results of this base algorithm additional hybrid elements were developed . as evident , the local search heuristicsimproves the base algorithm for , the neutral survivor selection for another and finally the heuristic initialization procedure additionally .evolutionary algorithms are a good general problem solver but suffer from a lack of domain specific knowledge .however , the problem specific knowledge can be added to evolutionary algorithms by hybridizing different parts of evolutionary algorithms . in this chapter , the hybridization of search and selection operators are discussed . the existing heuristic function that constructs the solution of the problem in a traditional way can be used and embedded into the evolutionary algorithm that serves as a generator of new solutions . moreover, the generation of new solutions can be improved by local search heuristics , which are problem specific . to hybridized selection operatora new neutral selection operator has been developed that is capable to deal with neutral solutions , i.e. , solutions that have the different genotype but expose the equal values of objective function .the aim of this operator is to directs the evolutionary search into new undiscovered regions of the search space , while on the other hand exploits problem specific knowledge . to avoid wrong setting of parameters that control the behavior of the evolutionary algorithm , the self - adaptation is used as well .such hybrid self - adaptive evolutionary algorithms have been applied to the the graph 3-coloring that is well - known np - complete problem .this algorithm was applied to the collection of random graphs , where the phenomenon of a threshold was captured .a threshold determines the instanced of random generated graphs that are hard to color .extensive experiments shown that this hybridization greatly improves the results of the evolutionary algorithms . furthermore , the impact of the particular hybridization is analyzed in details as well . in continuation of workthe graph -coloring will be investigated . on the other hand, the neutral selection operator needs to be improved with tabu search that will prevent that the reference solution will be selected repeatedly .barnett , l. 1998 . ruggedness and neutrality - the nkp family of fitness landscapes , _ in _ c. adami , r. belew , h. kitano c. taylor ( eds ) , _ alife vi : sixth international conference on articial life _ , mit press , pp . 1827 .culberson , j. luo , f. 2006 . exploring the k - colorable landscape with iterated greedy , _ in _d. johnson m. trick ( eds ) , _ cliques , coloring and satisfiability : second dimacs implementation challenge _ , american mathematical society , rhode island , pp .245284 .doerr , b. , eremeev , a. , horoba , c. , neumann , f. theile , m. 2009 .evolutionary algorithms and dynamic programming , _ gecco 09 : proceedings of the 11th annual conference on genetic and evolutionary computation _ , acm , new york , ny , usa , pp. 771778 .eberhart , r. kennedy , j. 1995 . a new optimizer using particle swarm theory , _ proceedings of 6th international symposium on micro machine and human science _ , ieee service center , piscataway , nj , nagoya , pp .3943 .fleurent , c. ferland , j. 
1994 .genetic hybrids for the quadratic assignment problems , _ in _p. pardalos h. wolkowicz ( eds ) , _ quadratic assignment and related problems _, dimacs series in discrete mathematics and theoretical computer science , ams : providence , rhode island , pp . 190206 .ganesh , k. punniyamoorthy , m. 2004 .optimization of continuous - time production planning using hybrid genetic algorithms - simulated annealing , _ international journal of advanced manufacturing technology _ * 26 * : 148154 .grosan , c. abraham , a. 2007 .hybrid evolutionary algorithms : methodologies , architectures , and reviews , _ in _ c. grosan , a. abraham h. ishibuchi ( eds ) , _ hybrid evolutionary algorithms _ , springer - verlag , berlin , pp .117 .herrera , f. lozano , m. 1996 .adaptation of genetic algorithm parameters based on fuzzy logic controllers , _ in _f. herrera j. verdegay ( eds ) , _ genetic algorithms and soft computing _ , physica - verlag hd , pp . 95125 .lee , m. takagi , h. 1993 .dynamic control of genetic algorithms using fuzzy logic techniques , _ in _s. forrest ( ed . ) , _ proceedings of the 5th international conference on genetic algorithms _ , morgan kaufmmann , san mateo , pp .7683 .liu , s .- h . ,mernik , m. bryant , b. 2009 . to explore or to exploit : an entropy - driven approach for evolutionary algorithms , _ international journal of knowledge - based and intelligent engineering systems _ * 13 * : 185206 .meyer - nieberg , s. beyer , h .-self - adaptation in evolutionary algorithms , _ in _f. lobo , c. lima z. michalewicz ( eds ) , _ parameter setting in evolutionary algorithms _ , springer - verlag , berlin , pp .
Evolutionary algorithms are good general problem solvers but suffer from a lack of domain-specific knowledge. However, problem-specific knowledge can be added to an evolutionary algorithm through hybridization, and in principle every element of the algorithm can be hybridized. In this chapter the hybridization of three elements of evolutionary algorithms is discussed: the objective function, the survivor selection operator and the parameter settings. As the objective function, an existing heuristic that constructs a solution of the problem in the traditional way is used; this function is embedded into the evolutionary algorithm, which serves as a generator of new solutions. In addition, the objective function is improved by local search heuristics. A new neutral survivor selection operator has been developed that is able to deal with neutral solutions, i.e., solutions that have different representations but expose equal values of the objective function. The aim of this operator is to direct the evolutionary search into new, undiscovered regions of the search space. To avoid incorrect settings of the parameters that control the behavior of the evolutionary algorithm, self-adaptation is used as well. Finally, the resulting hybrid self-adaptive evolutionary algorithm is applied to two real-world NP-hard problems: graph 3-coloring and the optimization of markers in the clothing industry. Extensive experiments show that these hybridizations improve the results of the evolutionary algorithm considerably. Furthermore, the impact of each particular hybridization is analyzed in detail. _ To cite this chapter: Iztok Fister, Marjan Mernik and Janez Brest (2011). Hybridization of evolutionary algorithms, in: Evolutionary Algorithms, Eisuke Kita (ed.), ISBN: 978-953-307-171-8, InTech. Available from: http://www.intechopen.com/books/evolutionary-algorithms/hybridization-of-evolutionary-algorithms_
signal reconstruction is usually formulated as a linear inverse problem where is the true signal , is an real matrix , is the observed data and represents the random noise . over the last decade, sparse reconstruction has received great attentions . in general ,the task of sparse reconstruction is to find a sparse from .obviously , the most natural measure of sparsity is the -norm , defined at as where stands for the cardinality of the set and denotes the support of , that is .when the noise level of is known , we consider the following constrained regularization problem where denotes the -norm , and is the noise level . specially , in the noiseless case , i.e. , when , problem reduces to seeking for the sparsest solutions from a linear system as follows finding a global minimizer of problem is known to be combinational and np - hard in general as it involves the -norm . the number of papers dealing with the -norm is large and several types of numerical algorithms have been adopted to approximately solve the problem . in this paper , we mainly focus on the penalty methods . before making a further discussion on the penalty methods , we briefly review other two types of approaches , greedy pursuit methods and relaxation of the -norm .greedy pursuit methods , such as orthogonal matching pursuit and cosamp , adaptively update the support of by exploiting the residual and then solve a least - square problem on the support in each iteration .this type of algorithms are widely used in practice due to their relatively fast computational speed . however , only under very strict conditions , can they be shown theoretically to produce desired numerical solutions to problem .relaxation methods first replace the -norm by other continuous cost functions , including convex functions and nonconvex functions , such as the bridge functions with , capped- functions , minmax concave functions and so on .then the majorization - minimization strategy is applied to the relaxation problem , which leads to solving a constrained reweighted or subproblems in each step .the cost functions of the iterates are shown to be descent , while the convergence of the iterates themselves are generally unknown .the penalty method is a classical and important approach for constrained optimization . for an overview on the penalty method, we refer the readers to the book .an extensively used penalty method for problem is to solve the following least square penalty problem where is a penalty parameter . as problemstill involve the -norm , the greedy pursuit and relaxation methods mentioned before are applicable for approximately solving the problem .however , it has relatively better structure than problem from the standpoint of algorithmic design .more precisely , the proximal operator of -norm has a closed form , while the least square penalty term in problem is differentiable with a lipschitz continuous gradient .therefore , it can be directly and efficiently solved by proximal - gradient type algorithms or primal - dual active set methods .one specific proximal - gradient type algorithm for problem is the iterative hard thresholding ( iht ) .in general , it is shown in that for arbitrary initial point , the iterates generated by the algorithm converge to a local minimizer of problem . under certain conditions, the iht converges with an asymptotic linear rate .the structure of the least square penalty problem makes it easier to develop convergent numerical algorithms . 
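As a concrete illustration of the proximal-gradient viewpoint just mentioned, a minimal iterative hard thresholding (IHT) iteration for the least squares penalty problem can be sketched as follows. This is a generic textbook-style sketch rather than code from the paper; the step size is assumed to satisfy the usual condition step < 1/||A||^2 (squared spectral norm), under which the objective is non-increasing along the iterates.

```python
import numpy as np

def iht_l0_penalty(A, b, lam, step=None, iters=500, x0=None):
    """Iterative hard thresholding for min_x 0.5*||A x - b||^2 + lam*||x||_0.

    One gradient step on the least squares term, followed by the proximal map
    of the l0 penalty, which is hard thresholding at level sqrt(2*lam*step).
    """
    m, n = A.shape
    if step is None:
        step = 0.99 / np.linalg.norm(A, 2) ** 2   # assumed safe step size
    x = np.zeros(n) if x0 is None else x0.copy()
    thresh = np.sqrt(2.0 * lam * step)
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - b)              # gradient step on 0.5*||Ax-b||^2
        x = np.where(np.abs(z) > thresh, z, 0.0)      # prox of step*lam*||.||_0
    return x

# toy usage on a random instance (illustrative only; all values are assumptions)
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 50))
x_true = np.zeros(50); x_true[[3, 17, 40]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=20)
x_hat = iht_l0_penalty(A, b, lam=0.05)
```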
the penalty parameter balances the magnitude of the residual and the sparsity of solutions to problem .an important issue is that wether problem and share a common global minimizer for some penalty parameter .it seems intuitive that there exists certain connections between these two problems .however , to the best of our knowledge , there is very little theoretical results on penalty methods for problem in general .recently , chen , lu and pong study the penalty methods for a class of constrained non - lipschitz optimization .the non - lipschitz cost functions considered in does not contain the -norm and the corresponding results can not be generalized to the constrained regularization problems .nikolova in gives a detailed mathematical description of optimal solutions to problem and further in establishes their connections with optimal solutions to the following optimization problem where is a given positive integer .problem also involves the -norm and is usually called the sparsity constrained minimization problem .however , optimal solutions of problem and their connections with problem have not been studied in these works .this paper is devoted to describing global minimizers of a class of constrained regularization problems and the corresponding penalty methods . as in , we use `` optimal solutions '' for global minimizers and `` optimal solution set '' for the set of all global minimizers in the remaining part of this paper .the class of constrained regularization problems we considered has the following form where or . obviously , problem has two cases , the constrained problem and the constrained problem as follows the analysis of these two constrained problems and their penalty methods share many common points and thus we shall discuss them in a unified way .moreover , we introduce a class of scalar penalty functions satisfying the following blanket assumption , where .[ hypo:1 ] the function is continuous , nondecreasing with .then the penalty problems for can be in general formulated as where is a penalty parameter . in the case of , when we set the penalty function for , the general penalty problem reduces to the specific problem. we shall give a comprehensive study on connections between optimal solutions of problem and problem .our main contributions are summarized below .* we study in depth the existence , stability and cardinality of solutions to problems and .in particular , we prove that the optimal solution set of problem is piecewise constant with respect to .more precisely , we present a sequence of real number where . we claim that for all , problems with different share completely the same optimal solution set . *we clarify the relationship between optimal solutions of problem and those of problem .we find that , when problem probably * never * has a common optimal solution with problem for any .this means that the penalty problem may fail to be an exact penalty formulation of problem .a numerical example is presented to illustrate this fact in section [ sec : exp ] .* we show that under a mild condition on the penalty function , problem has exactly the same optimal solution set as that of problem once for some . in particular , for we present a penalty function with which the penalty term in problem is differentiable with a lipschitz continuous gradient .this penalty function enables us to design innovative numerical algorithms for problem .the remaining part of this paper is organized as follows . 
in section [ sec : exist ] we show the existence of optimal solutions to problems and .the stability for problems and are discussed in section [ sec : sta ] .we analyze in section [ sec : relation ] the relationship between optimal solutions of problems and .section [ sec : equal ] establishes a sufficient condition on , which ensures problems and have completely the same optimal solution set when the parameter is sufficiently large .we further investigate the cardinality and strictness of optimal solutions to problems and in section [ sec : minimizer ] .the theoretical results are illustrated by numerical experiments in section [ sec : exp ] .we conclude this paper in section [ sec : con ] .this section is devoted to the existence of optimal solutions to problems and .the former issue is trivial , while the latter issue requires careful analysis . after a brief discussion on the existence of optimal solutions to problem, we shall focus on a constructive proof of the existence of optimal solutions to problem .since the objective function of problem is proper , lower semi - continuous and has finite values , it is clear that problem has optimal solutions as long as the feasible region is nonempty .we present the result in the following theorem . for simplicity , we denote by the feasible region of problem , that is [ thm : existconst ] the optimal solution set of problem is nonempty if and only if , where is defined by . in the rest of this section , we dedicate to discussing the existence of optimal solutions to problem .since the objective function of consists of two terms , we begin with defining several notations by alternating minimizing each term . to this end , for any positive integer , we define [ def : srhoomega ] given satisfying h[hypo:1 ] and , the integer , the sets , , are defined by the following iteration we first show that definition [ def : srhoomega ] is well defined .it suffices to prove both the following two optimization problems and have optimal solutions for any and .clearly , problem has an optimal solution since the objective function is piecewise constant and has finite values .the next lemma tells us that the optimal solution set of problem is always nonempty for any .we denote by the -dimensional vector with as its components .for an matrix and , let formed by the columns of with indexes in .similarly , for any and , we denote by the vector formed by the components of with indexes in .[ lema : l0constsolu ] given satisfying h[hypo:1 ] and , for any , the optimal solution set of problem is nonempty. as function is nondecreasing in , it suffices to show that the optimization problem has an optimal solution .we first prove that for any , the optimization problem has an optimal solution .obviously , is an optimal solution to problem if and only if and in the case of , it is well - known that is an optimal solution to problem , where is the pseudo - inverse matrix of the matrix .when , the optimal solution set of problem is nonempty since is a piecewise linear function and bounded below , see proposition 3.3.3 and 3.4.2 in .therefore , problem has an optimal solution for any . with the help of problem, problem can be equivalently written as clearly , the optimal solution set of problem is nonempty , therefore , problem has an optimal solution .we then complete the proof . 
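The support-restricted least squares subproblem that underlies the preceding proof (and reappears in the exhaustive combinatorial search of the numerical section) can be sketched as follows. This is an illustrative helper, not code from the paper; it simply uses the fact that the minimizer on a fixed support is given by the Moore-Penrose pseudo-inverse of the corresponding column submatrix.

```python
import numpy as np
from itertools import combinations

def restricted_lsq(A, b, support):
    """Minimize ||A x - b||_2 over vectors supported on the given index set.

    On the support S, the minimizer is A_S^+ b, where A_S collects the columns
    of A indexed by S and ^+ denotes the Moore-Penrose pseudo-inverse.
    """
    n = A.shape[1]
    x = np.zeros(n)
    S = list(support)
    if S:
        x[S] = np.linalg.pinv(A[:, S]) @ b
    return x, np.linalg.norm(A @ x - b)

def best_k_sparse(A, b, k):
    """Exhaustive search over all supports of size at most k (small n only)."""
    n = A.shape[1]
    best = (np.zeros(n), np.linalg.norm(b))   # empty support as the starting candidate
    for size in range(1, k + 1):
        for S in combinations(range(n), size):
            x, res = restricted_lsq(A, b, S)
            if res < best[1]:
                best = (x, res)
    return best
```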
by lemma [ lema : l0constsolu ] , definition [ def : srhoomega ]is well defined .we present this result in the following proposition .given satisfying h[hypo:1 ] and , the integer and for can be well defined by definition [ def : srhoomega ] .we then provide some properties of and for defined by definition [ def : srhoomega ] in the following proposition .[ prop : rhosomega ] given satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] .then the following statements hold : * and , where denotes the rank of .* .* .* , and , for , . * for .items ( ii ) , ( iii ) , ( iv ) and ( v ) are clear by definition [ def : srhoomega ] .we dedicate to proving item ( i ) following .let . from the proof of lemma [ lema : l0constsolu ] ,since is nondecreasing , for any . let , then .let denote the column of , . since the rank of is , without loss of generality , we assume is the maximal linearly independent group of . then there exists such that .we define as for and for .it is obvious that and , implying .therefore .then by the definition of , we have . from the procedure of the iteration in definition [ def : srhoomega ] and , we obtain . with the help of definition [ def :srhoomega ] , the euclid space can be partitioned into sets : , , .therefore , in order to establish the existence of optimal solutions to problem , it suffices to prove the existence of optimal solutions to problem and for all .we present these desired results in the following lemma .[ lema : partitionsolution ] given satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] . then for any ,the following statements hold : * the optimal solution sets of problems and for are not empty . *the optimal value of problem is and the optimal solution set to problem is . *the optimal value of problem is and the optimal solution set of problem is , for . we only need to prove items ( ii ) and ( iii ) since they imply item ( i ) .we first prove item ( ii ) .it is obvious that restricted to the set , the minimal value of the first term is and can be attained at any . by definition [ def : srhoomega ] , the minimal value of is and can be attained at any .therefore , the optimal value of problem is .let be the optimal solution set of problem. clearly , .we then try to prove .it suffices to prove for any .if not , there exists and , then .then the objective function value at is no less than due to the definition of , contradicting the fact that is an optimal solution of problem .then we get item ( ii ) .item ( iii ) can be obtained similarly , we omit the details here . for convenient presentation ,we define , for , at as now , we are ready to show the existence of optimal solutions to problem .[ thm : exist ] given satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] .let for be defined by .then , the optimal solution set of problem is nonempty for any .furthermore , the following statements hold for any fixed : * the optimal value of problem is .* is the optimal solution set of problem , where .we omit the proof here since it is a direct result of the fact that and lemma [ lema : partitionsolution ] . 
[ remark : l=0 ] according to definition [ def : srhoomega ] , if , then and .then , by theorem [ thm : exist ] , if , the optimal value and the optimal solution set of problem are and respectively for any .in this section , we study the stability for problems and , including behaviors of the optimal values and optimal solution sets with respect to changes in the corresponding parameters .we prove that , the optimal value of problem and that of problem change piecewise constantly and piecewise linearly respectively as the corresponding parameters vary .moreover , we prove that the optimal solution set of problem is piecewise constant with respect to the parameter .we begin with introducing the notion of marginal functions , which plays an important role in optimization theory , see for example .set where . by lemma [ lema : l0constsolu ] , we have is well defined .let defined at as where is defined by .clearly , for a fixed , is the optimal value of problem .thus is well defined due to theorem [ thm : existconst ] .the function is also called the marginal function of problem .similarly , we can define the marginal function of problem .given satisfying h[hypo:1 ] and , we define at as clearly , is well defined due to theorem [ thm : exist ] . in order to study the stability of the optimal solution sets of problems and, we define at as the optimal solution set of the constrained problem , that is , where is defined by and collects all the subsets of . by theorem [ thm : existconst ] , is well defined .we also define at as for a fixed , is the optimal solution set of problem , therefore , is also well defined due to theorem [ thm : exist ] .then , our task in this section is establishing the properties of functions and as well as properties of mappings and .this subsection is devoted to the stability for problem .we explore the behaviors of the marginal function and optimal solution set with respect to .we begin with a lemma which is crucial in our discussion .[ lemma : lambdastrict ] given satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] . for any , , let .then the following hold : * has full column rank . * if , then is the unique minimizer of , that is for any , .we first prove item ( i ) .we prove it by contradiction .if not , there exists such that and such that .therefore , .this contradicts the definition of . then we get item ( i ) .we next prove item ( ii ) .since and is nondecreasing , it follows that .thus , for any .the problem is equivalent to the problem then from item ( i ) , the objective function of the convex optimization problem is strictly convex .therefore , we obtain item ( ii ) .we next show an important proposition which reveals the relationship between and the optimal solution set of problem . 
clearly , when , .thus , we only discuss cases as .[ prop : partialeauvalence ] given satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] .let , and be defined by , and respectively .suppose is strictly increasing .then , the following statements hold : * for any , there must exist such that , and is a subset of , that is moreover , for any .* particularly , if the in item ( i ) satisfies , then is equal to , that is * further , if and the in item ( i ) satisfies , then is a real subset of , that is we first prove item ( i ) .since is strictly increasing , and , there must exist such that due to item ( iii ) of proposition [ prop : rhosomega ] .we then prove .let , we will show .since , and is strictly increasing , we have . in order to prove , we need to show that for any there holds .if not , there exists such that . by the definition of , we have , which contradicts the fact that .therefore , implies .moreover , since we have , where is the marginal function of problem , defined by . therefore , we have for any .we then get item ( i ) .next , we try to prove item ( ii ) . by item ( i ) of this proposition , we have for any . in order to prove , we only need to show for any due to item ( v ) of proposition [ prop : rhosomega ] .we next show it by contradiction .if there exists such that , then due to , the strictly increasing property of and .since , by the definition of , we have , contradicting .thus we get item ( ii ) of this proposition .finally , we prove item ( iii ) .we only need to show that there exists such that .let , we have .let .since is continuous and , there exists such that as long as , .let .then . set .set .obviously , , and .since is strictly increasing , we have . by item ( i ) of this proposition , due to and .we then show .it amounts to showing by item ( v ) of proposition [ prop : rhosomega ] . by item ( ii ) of lemma [ lemma : lambdastrict ] , we have since and share the same support .then by the strictly increasing property of we get .then item ( iii ) of this proposition follows .[ remark : same ] one can easily check that definition [ def : srhoomega ] produces the same and for for different choices of once they are all strictly increasing .therefore , it is reasonable that has connections with for , as shown in proposition [ prop : partialeauvalence ] .now , we are ready to establish the main results of this subsection .[ thm : staconst ] given defined at as and , let and for be defined by definition [ def : srhoomega ] .let and be defined by and respectively .then the following statements hold : * * in particular , if , we have for , . * is a piecewise constant function and has at most values .further , is lower semicontinuous , right - continuous and nonincreasing . * for any satisfying , . items ( i ) and ( ii ) follow from proposition [ prop : partialeauvalence ] .then item ( iii ) of this theorem follows immediately . by item ( i ) and for any satisfying , we obtain item ( iv ) .this subsection is for the stability of problem .we shall study the behaviors of the marginal function and optimal solution set with respect to .according to theorem [ thm : exist ] , with defined by .obviously , each for is a line with slop and intercept .therefore , by items ( ii ) and ( iii ) of proposition [ prop : rhosomega ] , it is easy to deduce that is continuous , piecewise linear and nondecreasing .next we propose an iteration procedure to find the minimal value of for , i.e. 
, the marginal function , for .[ def : lambda_i ] given satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] .then the integer , are defined by the following iteration we present an example to illustrate the iterative procedure in definition [ def : lambda_i ] . [ exam:3.6 ] in this example , we set .by proposition , we set and . then for can be obtained by equation [ def : fi ] .we depict the plots of for in figure [ fig : ex1 ] .we observe from figure [ fig : ex1 ] that , when , .further , the first and largest critical parameter can be obtained by the largest intersection point of and for all . here, we have that , which collects all the indexes that except . in order to search for the second critical parameter value , we first find that on the second interval , , where .thus , can be obtained by the largest intersection point of and for . then .thus , in this case , , . on interval , we have . in this example , for any . [fig : ex1 ] the following proposition provides some basic properties of , and for by definition [ def : lambda_i ] .[ prop : lambda_i ] given satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] .let , and for be defined by definition [ def : lambda_i ] . then the following statements hold : * , in particular , if then .* .* . * and , for all , .the proof is outlined in appendix 8.1 . the main results of this subsection are presented in the following theorem . [ thm : stability ] given satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] .let , and for be defined by definition [ def : lambda_i ] .let for , and be defined by , and respectively . by remark [ remark : l=0 ] , if , then and . if , then the following statements hold : * ,\\ f_{t_i } , & \mathrm{~~if~~}\lambda\in ( \lambda_i , \lambda_{i-1 } ] , i\in\mathbb{n}_{k-1},\\ f_0(\lambda ) , & \mathrm{~~if~~}\lambda\in(\lambda_0 , + \infty ) .\end{cases}\ ] ] * * is continuous , piecewise linear and nondecreasing , particularly , is strictly increasing in . to improve readability , the proof of theorem [ thm : stability ] and two auxiliary lemma and proposition are given in appendix 8.2 .a direct consequence of theorem [ thm : stability ] is stated below .[ crol : stability ] under the assumptions of theorem [ thm : stability ] , the following statements hold : * if or , , then and hold for any and any . * if , then and hold for any and any . item ( i ) can be obtained by item ( ii ) of theorem [ thm : stability ] as well as item ( v ) of proposition [ prop : rhosomega ] .similarly , item ( ii ) follows from item ( ii ) of theorem [ thm : stability ] , items ( ii ) , ( iii ) , ( v ) of proposition [ prop : rhosomega ] as well as item ( ii ) of proposition [ prop : lambda_i ] . from theorem [ thm : stability ] , the optimal value of problem changes piecewise linearly while the optimal solution set of problem changes piecewise constantly as the parameter varies .in addition , by corollary [ crol : stability ] , the optimal values of both the first and second terms of are piecewise constant with respect to changes in the parameter . 
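Numerically, the marginal function of the penalty problem is the lower envelope of finitely many lines, one per sparsity level, and the critical parameter values of Definition [def:lambda_i] are the points where the optimal line of this envelope changes. The following sketch illustrates this; it is not code from the paper, and it assumes (following Example 3.6) that each line has the sparsity-related quantity as its slope and the residual-related quantity as its intercept.

```python
import numpy as np

def marginal_value(lam, slopes, intercepts):
    """Evaluate F(lambda) = min_i (intercepts[i] + lambda * slopes[i]) and return the argmin index."""
    lines = np.asarray(intercepts) + lam * np.asarray(slopes)
    i_star = int(np.argmin(lines))
    return lines[i_star], i_star

def envelope_breakpoints(slopes, intercepts):
    """Positive lambda values at which the minimizing line of the lower envelope changes.

    Brute force over pairwise intersections; adequate for the small examples
    considered here, where the number of lines is modest.
    """
    s = np.asarray(slopes, dtype=float)
    r = np.asarray(intercepts, dtype=float)
    cands = []
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            if s[i] != s[j]:
                cands.append((r[j] - r[i]) / (s[i] - s[j]))  # intersection of lines i and j
    breaks = []
    for lam in sorted(c for c in cands if c > 0):
        _, left = marginal_value(lam - 1e-9, slopes, intercepts)
        _, right = marginal_value(lam + 1e-9, slopes, intercepts)
        if left != right:
            breaks.append(lam)
    return breaks

# toy usage with hypothetical (slope, intercept) pairs
print(envelope_breakpoints(slopes=[0, 1, 2, 3], intercepts=[5.0, 2.0, 0.5, 0.0]))
```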
we observe from theorem [ thm : staconst ] and [ thm : stability ] that , when , problem share the same optimal solution set as problem if is chosen to be small enough .thus , in the remaining part of this paper , we only focus on the nontrivial case when , where is defined by .in this section we explore the relationship between the constrained problem with and a corresponding special penalty problem . in the case of shall study the relationship between problem and the least square penalty problem . in the case of , we shall discuss the connections between problem and the following -penalty problem in fact , both problems and can be cast into the general penalty problem with defined at , for , as we exhibit the main results of this section in the following theorem .[ thm : leasequv ] for defined by and , let and for be defined by definition [ def : srhoomega ] .let , and for be defined by definition [ def : lambda_i ] .let and .let , and be defined by , and respectively . for ,let be the integer such that . * if and suppose , then further , if , we have for and for . * if , and suppose , then * if , then for any .we first prove item ( i ) . from item ( ii ) of theorem[ thm : stability ] , we have for . by item ( i ) of proposition [ prop : partialeauvalence ] , we get that as .we also obtain that if , for from item ( ii ) of proposition [ prop : partialeauvalence ] . when or , by item ( ii ) of theorem [ thm : stability ] and , we have and for any .since and for all , we get for or . clearly , if , then for or . when $ ] , for and thus .we then prove item ( ii ) .by item ( i ) of proposition [ prop : partialeauvalence ] , for any . from item ( ii ) of theorem [ thm : stability ] , we have due to . then we get item ( ii ) by the fact that for any and for any as .we finally prove item ( iii ) .similarly , we have for any .however , by theorem [ thm : stability ] for any and since .thus we get item ( iii ) .[ remark : p=2 ] in the case of , by proposition [ prop : partialeauvalence ] , if and , then we have when .if , then we have . in such case , item ( i ) of theorem [ thm : leasequv ] holds clearly . according to item ( iii ) of theorem [ thm : leasequv ] , in generalit is possible that problem has * no * common optimal solutions with problem for any .this means that in general problem may not be an exact penalty method for problem .however , in the noiseless case , we show in the next corollary that problem has the same optimal solution set as problem if is large enough .[ coro : least ] let be defined by and . assume the feasible region of problem is nonempty .then , there exists such that problems and with have the same optimal solution set .since the feasible region of problem is nonempty , by definition [ def : srhoomega ] . clearly ,problem is exactly problem with .then , in this case .let and , for be defined by definition [ def : lambda_i ] . from item ( ii ) of proposition[ prop : lambda_i ] , and thus .according to item ( i ) of theorem [ thm : leasequv ] , problems and have the same optimal solution set when .set , we then obtain the desired results .in this section , we first establish exact penalty conditions under which problem have the same optimal solution set as problem , provided that for some . based on the conditions developed , we propose several exact penalty formulations for problem . 
according to the definition of and in definition [ def : srhoomega ], the penalty term attains its minimal value at any .we also know that attains its minimal value at any when by item ( ii ) of theorem [ thm : stability ] , where is defined by definition [ def : lambda_i ] .since is nondecreasing , we have for any . then we obtain the following theorem .[ thm : equivalence ] let satisfy h[hypo:1 ] and .suppose the feasible region of problem is nonempty , that is , where is defined by .if then there exists such that problems and with have the same optimal solution set . since holds , by the definition of and in definition [ def : srhoomega ] , .thus is the optimal solution set of problem .let , where is defined in definition [ def : lambda_i ] . then , from item ( ii ) of theorem [ thm : stability ] , is also the optimal solution set of problem with .then we have that problems and with have the same optimal solution set . in general, it is not easy to verify condition .the next corollary states a simple and useful condition which also ensures the exact penalty property once is large enough .[ crol : equv ] let satisfy h[hypo:1 ] and .assume that the feasible region of problem is nonempty .if satisfies then there exists such that problems and with have the same optimal solution set .the desired results follows from the fact that condition implies condition .next we present several examples of satisfying condition . for ,we denote by the positive part of , that is , given .let scalar function be defined at as .then satisfies h1 and condition . withthe choice of , the penalty term in problem becomes the penalty term with is well exploited in penalty method for quadratically constrained minimization with in .one can easily check that the penalty term is convex but non - differentiable for . given .let be defined at as then satisfies h1 and condition .with the choice of , the penalty term in problem becomes by choosing , for problem reads as according to corollary [ crol : equv ] , the following proposition concerning the exact penalization of problem holds .[ prop : env ] suppose that the feasible region of problem is nonempty , then the penalty problem shares the same optimal solution set with the constrained problem , provided that for some .in particular , when , problem reduces to the least square penalty problem . from proposition[ prop : env ] , we deduce that has the same optimal solution set as the linear constrained problem when is sufficiently large .this result coincides with that obtained in corollary [ coro : least ] .in addition to the advantage of exact penalization , we prove in the next proposition that the penalty term in is differentiable with a lipschitz continuous gradient . for simplicity, we define at as we denote by the largest singular value of . we also need to recall the notion of firmly nonexpansive .operator is called firmly nonexpansive ( resp . ,nonexpansive ) if for all obviously , a firmly nonexpansive operator is nonexpansive .now , we are ready to show is differentiable with a lipshitz continuous gradient . [ prop : diff ] let be defined by .then function is differentiable with a lipschitz continuous gradient . in particular , for * * is lipschitz continuous with the lipschitz constant .we define , at as obviously , for any .one can check that for any , thus and item ( i ) follows .we next prove item ( ii ) .we first show is nonexpansive .let be the projection operator onto the set .we have . 
since the set is nonempty closed convex , operator is firmly nonexpansive , by proposition 12.27 in .thus , is nonexpansive . for any , we have that the above inequality leads to item ( ii ) .as mentioned in the introduction , the differentiability of the penalty term has important effect on the algorithmic design for problem .according to proposition [ prop : diff ] , by choosing , the penalty term is differentiable with a lipschitz continuous gradient in problem for . as a result, we are able to develop efficient numerical algorithms with theoretical convergence guarantee for solving this problem , e.g. , proximal - gradient type algorithms .in this section , we investigate the cardinality of optimal solution sets of problems and .we also discuss the strictness of their optimal solutions as a byproduct . as it is shown in section [ sec : sta ] , optimal solution sets of these two problems are closely related to , for , defined by definition [ def : srhoomega ] .therefore , we shall first consider the cardinality of .the cases of and are discussed in separate subsections , since the analysis and results are different for these two cases . for the case of , we have the following proposition concerning the cardinality of .[ prop : cardip2 ] for satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] . for , if , then we have by the definition of , we have for any .let .we will first prove that for any , , where . for ,if there doest not exist such that , then we have , that is . otherwise , if there exists such that , we then prove . by the definition of we have for any . by item ( ii ) of lemma [ lemma : lambdastrict ], we have that for all .then , we get for all due to .thus , , that is .therefore , we get for any .further , since and , we get of this proposition .the next corollary is obtained immediately .[ coro : cardip2 ] for satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] .if is strictly increasing , then for any .we first present an important lemma . to this end , we define , for any and , . for convenience ,we adopt an assumption for a vector in .[ hypo:2 ] for a vector , there holds for all .[ lema : l1 ] let an real matrix and .let satisfy . then * if and satisfies h[hypo:2 ] , then , where ; * if and , then for any , there exists such that . for simplicity , we set .we first prove item ( i ) .let .obviously . then for any , we set .it is clear that due to h[hypo:2 ] .then we have . therefore , .we get item ( i ) .we next prove item ( ii ) . since , we set , where is the component - wise signum function ( is assigned to ) . set .we have .then there exists such that for any , there holds for any .let and . clearly , .therefore , we have .obviously , is an euclid linear space with dimension no less then .thus , for any , there exists such that .set .then , we have , for and .thus , we have . then , we get this lemma . now , we are ready to present a proposition on the cardinality of for . [ prop : cardip1 ] for satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] . for ,the following statements hold : * for , . * for , if , all the columns of satisfy h[hypo:2 ] and , then . * for with or ,if there exists such that , then for any there exists such that , therefore , . item ( i ) follows from the fact that .we next prove item ( ii ) . by the definition of and , we have and for any . since , then we assume satisfies .let for .then . by item ( i ) of lemma [ lema : l1 ] , we have for . 
then we get item ( ii ) of this proposition .we finally prove item ( iii ) .let and .since , we have and . set . for any , set .then by item ( ii ) of lemma [ lema : l1 ] , there exists such that and .then we have and by the definition of .thus . item ( iii ) follows .this subsection is devoted to the cardinality of the optimal solution set of problem when .we begin with recalling the notion of strict minimizers . for a function and a set , is a strict minimizer of the problem if there exists such that for any there holds .it is clear that as , and is the unique minimizer of problem .we next only consider the case when .[ thm : strictp2 ] for defined at as and , let and for be defined by definition [ def : srhoomega ] .let , where is defined by .let be defined by .then the following statements are equivalent : * there exits such that .* for any . *any is a strict minimizer of problem .* is finite .we first prove item ( i ) implies item ( iv ) . by item ( ii ) of theorem [ thm : staconst ] , we have since .then we have is finite due to corollary [ coro : cardip2 ] .thus , we get item ( iv ) .it is obvious that item ( iv ) implies item ( iii ) .we next prove item ( iii ) implies item ( ii ) by contradiction .suppose there exists such that .we will show is not a strict minimizer of problem .let .then there exists such that for any there holds .let .set .then for any we have for and .thus , for any . clearly , .then , for any , there exists , such that , contradicting the fact that is a strict minimizer of problem .therefore , we obtain that item ( iii ) implies item ( ii ) .we finally show that item ( ii ) implies item ( i ) .we also prove it by contradiction .suppose item ( i ) doest not hold .then by item ( i ) of proposition [ prop : partialeauvalence ] , there exists , where and due to item ( iii ) of proposition [ prop : partialeauvalence ] . by item ( ii ) , for any , contradicting the fact that due to the definition of .thus we get item ( ii ) implies item ( i ). then we complete the proof .the next corollary follows from theorem [ thm : strictp2 ] . for defined at as and , let and for be defined by definition [ def : srhoomega ] . for , , the following hold : * there exits such that is not a strict minimizer of problem .* . in this subsection , we discuss cardinality of optimal solution set of problem when .we state the main results in the following theorem .[ thm : penacarp2 ] for satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] .let , and for be defined by definition [ def : lambda_i ] .let , for and defined by .then for , the following statements hold : * if , then for any , is finite and any is a strict minimizer of problem . * if and for all , then is finite and any is a strict minimizer of problem .we first prove item ( i ) .since , is finite . by ( ii ) of theorem [ thm : stability ] , we have for any . then we get item ( i ) .we then prove item ( ii ) .since for all , is finite for all . then , by item ( ii ) of theorem [ thm : stability ] , is finite. then we get item ( ii ) of this theorem .a direct and useful consequence of theorem [ thm : penacarp2 ] is presented below .[ coro : penacarp2 ] let satisfy h1 and .if is strictly increasing , then is finite and every is a strict minimizer of problem for any . 
with the help of the above corollary, we characterize the cardinality of optimal solution set of problem in the next proposition .our result coincides with that obtained in .the cardinality of the optimal solution set of problem is finite for any and every optimal solution is a strict minimizer .problem is a special case of problem with for . since in this case is strictly increasing , we obtain the conclusions by corollary [ coro : penacarp2 ] .this subsection provides results on cardinality of optimal solution sets of problems and when .[ thm : probcarp1 ] for satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] .let , and for be defined by definition [ def : lambda_i ] .let .let and be defined by and respectively .then the following statements hold : * for and . * for with or , if there exists such that , then we have for any and is not a strict minimizer of problem .* suppose for , then for all , ; further , for and satisfying with or , if there exists such that , then we have , and is not a strict minimizer of problem . by item ( ii ) of theorem [ thm : stability ] andthe definition of , item ( i ) follows immediately . by theorem [ thm : stability ] , for . then we have item ( ii ) due to item ( iii ) of proposition [ prop : cardip1 ] . for , , from item ( ii ) of theorem [ thm : staconst ] , . then any satisfies .similar to the proofs of theorem [ thm : strictp2 ] , for any , there exits such that and .thus , . therefore , for . for , we have .then by item ( iii ) of proposition [ prop : cardip1 ] we obtain item ( iii ) of this theorem .in this section , we provide numerical illustrations for some of our main theoretical findings , including : * the marginal function of problem is piecewise constant , while that of problem is piecewise linear ; * it is possible that the least square penalty problem has no common optimal solutions with the constrained problem for any ; * there exists such that problem is an exact penalty formulation of the constrained problem for .all the optimization problems involved are solved by an exhaustive combinational search . for better readability ,all numerical digits are round to four decimal places .we present in detail a concrete example for sparse signal recovery with as follows ,\\ \label{eq : x } x^*=\left [ \begin{array}{ccccc } 0 & 1 & 1 & 0 & 0\\ \end{array } \right]^\top,\\ \label{eq : b } b = ax+\eta= \left[\begin{array}{ccccc } 14.43 & 7.21 & 4.49 & 13.02 \end{array}\right]^\top,\end{aligned}\ ] ] where denotes random noise following a nearly normal distribution .the coefficients of , , and in , , and are exact .the noise level of is .as is sparse , we naturally expect recovering by solving problem with , in , and . to this end, we shall first verify that is an optimal solution to problem .let , and . then according to definition [ def : srhoomega ], we have that , and by theorem [ thm : staconst ] , the marginal function of problem has the form of specially , we find that when .this together with implies that is an optimal solution to problem with .next we consider the least square penalty problem with , in and . by remark [remark : same ] , in this case the marginal function of problem is where , , , are given in and . then , , for , and for are produced by the iterative procedure in definition [ def : lambda_i ] .we obtain that , the plots of for and their intersection points are depicted in figure [ fig : ex2 ] . by theorem [ thm : stability ] , in reads as we note that for any . 
by theorem [ thm : stability ], we deduce that for any and any .therefore , given , in , , problem has no common optimal solution as problem with for any .in other words , problem fails to recover or approximate the true signal . in the next numerical test ,we discuss penalty problem with , in , and . as shown in section [ sec : equal ] , problem has the same optimal solution set with problem , provided that for some .we shall find the exact numerical value of .set as in . from the iteration in definition [ def : srhoomega ], we have that , according to the proof of theorem [ thm : equivalence ] , actually equals to produced by the iteration procedure in definition [ def : lambda_i ] . given , in , , , it holds that therefore , we conclude that for , in , and , penalty problem share the same optimal solution set as constrained problem when the penalty parameter .the quadratically constrained regularization problem arises in many applications where sparse solutions are desired .the least square penalty method has been widely used for solving this problem .there is little results regarding properties of optimal solutions to problems and , although these two problems have been well studied mainly from a numerical perspective . in this paper, we aim at investigating optimal solutions of a more general constrained regularization problem and the corresponding penalty problem .we first discuss the optimal solutions of these two problems in detail , including existence , stability with respect to parameter , cardinality and strictness .in particular , we show that the optimal solution set of the penalty problem is piecewise constant with respect to the penalty parameter .then we clarify the relationship between optimal solutions of the two problems .it is proven that , when the noise level there does not always exist a penalty parameter , such that problem and the constrained problem share a common optimal solution . under mild conditions on the penalty function , we prove that problem has the same optimal solution set with problem if the penalty parameter is large enough . with the help of the conditions , we further propose exact penalty problems for the constrained problem .we expect our theoretical findings can offer motivations for the design of innovative numerical schemes for solving constrained regularization problems .[ lemma : lam ] given satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] .let , and for be defined by definition [ def : lambda_i ] . then conversely , for any , for . by the definition of , for and there exists such that .thus , and for . by the definition of , one has .we then prove item ( iii ) . from the definition of , for .then by lemma [ lemma : lam ] , it follows that for any due to . then we have for .according to the definition of , for . item ( iii ) follows immediately .[ lema : lambdai ] given satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] .let , and for be defined by definition [ def : lambda_i ] . for ,next we prove item ( iii ) .since , and , it holds that this means that holds for .we assume for , holds when .then we try to show holds as , that is one finds that then follows .therefore , we obtain item ( iii ) immediately . next , we prove item ( iv ) . item ( ii ) implies that for all .thus , we only need to prove for , which is equivalent to for . by item ( ii ) , for , it holds that for . from the fact that and , we obtain holds for . 
by items ( i ) and( iii ) , for .this together with and item ( i ) implies that holds for .[ prop : fi ] given satisfying h[hypo:1 ] and , let and for be defined by definition [ def : srhoomega ] .let , and for be defined by definition [ def : lambda_i ] .let for .then the following statements hold : * for , for all .* for . * for any and any . * for any and any . * for ,there holds for any and any .we first prove items ( i ) and ( ii ) . from the definition of and , for any . item ( i )is obtained immediately by the definition of for in . item ( ii )follows from .we next prove item ( iii ) .thanks to items ( ii ) and ( iii ) of proposition [ prop : rhosomega ] , item ( iii ) amounts to for all .it is clear that holds by the definition of in definition [ def : lambda_i ] and . finally , we prove item ( v ) . by item ( iv ) of lemma [ lema : lambdai ] , for and .thus , for any and any . by the definition of , we have for and .then , it follows that for any and any . by item ( ii ) of theorem [ thm : exist ] and item ( i ) of proposition [ prop : fi ] , .using this fact , it suffices to show for any . from items ( i ) and ( iv ) of lemma [ lema : lambdai ] , for . by the definition of and , we have for , which implies for .hence for any . , _ convergence of descent methods for semi - algebraic and tame problems : proximal algorithms , forward - backward splitting , and regularized gauss - seidel methods _ , mathematical programming , 137 ( 2013 ) , pp .91129 . height 2pt depth -1.6pt width 23pt , _ relationship between the optimal solutions of least squares regularized with -norm and constrained by k - sparsity _ , applied and computational harmonic analysis , ( 2016 ) .
The constrained regularization problem plays an important role in sparse reconstruction. A widely used approach to this problem is the penalty method, of which the least-squares penalty problem is a special case. However, the connections between global minimizers of the constrained problem and of its penalty counterpart have not previously been studied in a systematic way. This work provides a comprehensive investigation of the optimal solutions of these two problems and of their connections. We give detailed descriptions of the optimal solutions of both problems, covering existence, stability with respect to the parameter, cardinality, and strictness. In particular, we find that the optimal solution set of the penalty problem is piecewise constant with respect to the penalty parameter. We then analyze in depth the relationship between the optimal solutions of the two problems. It is shown that in the noisy case the least-squares penalty problem may have no optimal solution in common with the constrained problem for any penalty parameter. Under a mild condition on the penalty function, we establish that the penalty problem has the same optimal solution set as the constrained problem when the penalty parameter is sufficiently large. Based on these conditions, we further propose exact penalty problems for the constrained problem. Finally, we present a numerical example to illustrate our main theoretical results. Keywords: optimal solutions, constrained regularization, penalty methods, stability.
in many applications such as complex system design or hyperparameter calibration for learning systems , the goal is to optimize some output value of a non - explicit function with as few evaluations as possible .indeed , in such contexts , one has access to the function values only through numerical evaluations by simulation or cross - validation with significant computational cost .moreover , the operational constraints generally impose a sequential exploration of the solution space with small samples .the generic problem of sequentially optimizing the output of an unknown and potentially _ non - convex _ function is often referred to as _ global optimization _ ( ) , black - box optimization ( ) or derivative - free optimization ( ) .there are several algorithms based on various heuristics which have been introduced in order to address complicated optimization problems with limited regularity assumptions , such as genetic algorithms , model - based algorithms , branch - and - bound methods ... see for a recent overview .this paper follows the line of the approaches recently considered in the machine learning literature ( ) .these approaches extend the seminal work on lipschitz optimization of and they led to significant relaxations of the conditions required for convergence , _e.g. _ only the existence of a local _ smoothness _ around the optimum is required ( ) .more precisely , in the work of and , specific conditions have been identified to derive a finite - time analysis of the algorithms .however , these guarantees do not hold when the unknown function is not assumed to be locally smooth around ( one of ) its optimum . in the present work ,we propose to explore concepts from ranking theory based on overlaying estimated level sets ( ) in order to develop global optimization algorithms that do not rely on the smoothness of the function .the idea behind this approach is simple : even if the unknown function presents arbitrary large variations , most of the information required to identify its optimum may be contained in its induced ranking rule , _i.e. _ how the level sets of the function are included one in another . to exploit this idea ,we introduce a novel optimization scheme where the complexity of the function is characterized by the underlying pairwise ranking which it defines .our contribution is twofold : first , we introduce two novel global optimization algorithms that learn the ranking rule induced by the unknown function with a sequential scheme , and second , we provide mathematical results in terms of statistical consistency and convergence to the optimum .moreover , the algorithms proposed lead to efficient implementation and display good performance on the classical benchmarks for global optimization as shown at the end of the paper . the rest of the paper is organized as follows . in section [ sec : setup ]we introduce the framework and give the main definitions . in section [ sec : rankopt ] we introduce and analyze the rankopt algorithm which requires a prior information on the ranking structure underlying the unknown function . in section [ sec : adarank ] , an adaptive version of the algorithm is presented .companion results which establish the equivalence between learning algorithms and optimization procedures are discussed in section [ sec : equivalence ] as they support implementation choices . 
finally , the adaptive version of the algorithm is compared to other global optimization algorithms in section [ sec : implementation ] .all proofs are postponed to the appendix section .* setup . *we consider the problem of sequentially maximizing an unknown real - valued function where is a compact and convex set . the objective is to identify some point with a minimal amount of function evaluations .the setup we consider is the following : at each iteration , an algorithm selects an evaluation point which depends on the previous evaluations and receives the evaluation of the unknown function at this point .after iterations , the algorithm returns the argument of the highest value observed so far : the analysis provided in the paper considers that the number of evaluation points is not fixed and it is assumed that function evaluations are noiseless .* notations . * for any , we define the standard -norm as , we denote by the corresponding inner product and we denote by the -ball centered in of radius . for any bounded set , we define its inner - radius as , its diameter as and we denote by its volume where stands for the lebesgue measure .we denote by the set of continuous functions defined on taking values in and we denote by the set of ( multivariate ) polynomial functions of degree defined on .finally , we denote by the uniform distribution over a bounded measurable domain and we denote by the indicator function taking values in . in this section, we introduce the ranking structure as a complexity characterization for a general real - valued function to be optimized .first , we observe that real - valued functions induce an order relation over the input space , and the underlying ordering induces a ranking rule which records pairwise comparisons between evaluation points .( induced ranking rule ) the ranking rule induced by a function is defined by : for all .the key argument of the paper is that the optimization of any weakly regular real - valued function only depends on the nested structure of its level sets .hence there is an equivalence class of real - valued functions that share the same induced ranking rule as shown by the following proposition .( ranking rule equivalence ) [ prop : rankingequivalence ] let be any continuous function .then , a function shares the same induced ranking rule with , i.e. , , if and only if there exists a strictly increasing ( not necessary continuous ) function such that . [ pics : sameranking ] proposition [ prop : rankingequivalence ] states that even if the unknown function admits non - continuous or large variations , up to a transformation , there might exist a simpler function that shares the same induced ranking rule .figure [ pics : sameranking ] gives an example of three functions that share the same ranking while they display highly different regularity properties . as a second example, we may consider the problem of maximizing the function if and otherwise over ] which presents two local maxima . * a perturbed version of the styblinski - tang function which has four local maxima over ^ 2 ] .the levi function has a strong global structure but presents about 100 local maxima . * the himmelblau function over ^ 2 ] where .this function has 17 local maxima and presents a large discontinuity around its unique optimum . for this problem ,the convex rankings were temporally used . * the hlder table function over the domain ^ 2 ] .the griewank function has many widespread and regulary distributed local minima . 
* the function over ^{10} ] defined by : we start to show that there exists a strictly increasing function such that . to properly define the function ,we show by contradiction that the function is constant over the iso - level set . fix any , pick any and assume , without loss of generality , that .the equality of the rankings implies that ( i ) , and ( ii ) . putting ( i ) and ( ii ) altogether and using the continuity of leads to the contradiction then , denoting by the unique value of the function over , we can now introduce the function defined by since , , the continuity of implies that , .therefore , where is any strictly increasing extension of the function over . reproducing the same steps, one can show that there exists a strictly increasing function such that .hence where is a strictly increasing function . ( adarankopt process ) [ prop : adarankopt ]let be any compact and convex set , let be any sequence of nested ranking structures , fix any and let be any function such that .then , the adarankopt , , , algorithm evaluates the function on a sequence of random variables defined by : where , is a bernoulli random variable of parameter independent of and and are as defined as in the algorithm .[ prop : rankconcentration ] ( from ) let be a sequence of independent copies of , let be any ranking structure , let be any real - valued function and let be the empricial ranking loss taken over . then , denoting by the rademacher average of and by the true ranking loss defined in section [ sec : adarank ] , for any , with probability at least , * proof of proposition [ prop : cons_adarank ] * fix any and let be the corresponding level set .we show by induction that , since and , we have that and the result holds when .assume that the statement holds for a given and let be the sequence of random variables defined in proposition [ prop : adarankopt ] .conditioning on gives that .\end{aligned}\ ] ] since ( see proposition[prop : adarankopt ] ) , we have that therefore , ( [ eq : consada ] ) is proved by plugging the induction assumption into the previous equation .we finally get that using the fact that and that ( condition [ cond : id ] ) . + * proof of proposition [ prop : model ] * fix any , let where is the integer part of the upper bound and let be the sequence of random variables defined in proposition [ prop : adarankopt ] . then , denoting by the empirical ranking loss taken over the first samples and since forms a nested sequence , we have that .hence we start to lower bound the empirical risk by only keeping the first ( i.i.d . )explorative samples : where .conditioning on , we know that is a sequence of independent random variables , uniformly distributed over ( see proposition[prop : adarankopt ] ) .therefore , on the event , the right hand term of inequality ( [ eq : model1 ] ) has the same distribution as where is a sequence of independent copies of , also independent of .hence where the empirical ranking loss is taken over .for any we have that and so , by proposition [ prop : concentration ] , we have with probability at least that hence , noticing that the right hand term of the previous inequality is stricly positive , due to the definition of , gives that .finally , by hoeffding s inequality we have that and we deduce that . 
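Two quantities recur throughout the analysis above: the ranking rule induced by a function, and the empirical ranking loss of a candidate rule over the evaluated sample, which is the quantity the concentration bound controls. The sketch below illustrates both; the quadratic objective, the exponential transformation and the linear candidate rule are illustrative assumptions, not objects taken from the paper.

```python
# Induced ranking rule r_f(x, y) = sign(f(x) - f(y)), its invariance under a
# strictly increasing transformation of f, and the empirical pairwise ranking loss.
import itertools
import numpy as np

def induced_ranking(f):
    """Pairwise ranking rule induced by f."""
    return lambda x, y: np.sign(f(x) - f(y))

def empirical_ranking_loss(r, X, y):
    """Fraction of pairs (i, j), i < j, that r orders differently from the observed values."""
    pairs = list(itertools.combinations(range(len(X)), 2))
    bad = sum(1 for i, j in pairs if np.sign(r(X[i], X[j])) != np.sign(y[i] - y[j]))
    return bad / len(pairs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: -np.sum((x - 0.5) ** 2)             # toy objective on [0, 1]^2
    g = lambda x: np.exp(3.0 * f(x))                  # g = h(f) with h strictly increasing
    X = rng.uniform(size=(40, 2))
    y = np.array([f(x) for x in X])
    r_f, r_g = induced_ranking(f), induced_ranking(g)
    print(all(r_f(a, b) == r_g(a, b) for a, b in itertools.combinations(X, 2)))  # True
    w = np.array([1.0, -1.0])                         # a deliberately crude linear rule
    print(empirical_ranking_loss(lambda a, b: float(w @ (a - b)), X, y))
```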
+ * proof of theorem [ coro : upper_ada ] * fix any , let be the integer part of the upper bound of proposition [ prop : model ] ( with probability ) and let be the upper bound of the theorem [ coro : upperbound ] ( with probability ) .now , fix any and let be the sequence of random variables defined in proposition [ prop : adarankopt ] .then , denoting , applying the bayes rules gives that due to the definition of , we have that by proposition [ prop : model ] .moreover , on the event , the true ranking structure is identified for any .therefore , the distance can be bounded using the last samples with a similar technique as the one used in the proof of theorem [ coro : upperbound ] ( where the ranking structure is assumed to be known ) and we get that . finally , noticing that ends the proof . pick any . by definition of the -ball, there exists such that and ( resp . where and ) .the convexity of implies that , hence and we deduce that is a convex set . [ lem : equi ] let be any continuous ranking structure and let be any sample satisfying .then , if denotes the empirical ranking loss taken over the sample , we have the following equivalence : be any ranking rule satisfying . since is a continuous ranking , there exists a function such that , .pick any and assume , without loss of generality , that . using the function , we have that hence , , and so .* proof of proposition [ prop : binary ] * assume that there exists satisfying . since is a polynomial ranking , there exists a polynomial function where such that , . since , applying lemma [ lem : equi ] gives that , hence , there exists s.t . , .+ assume that there exists s.t . , .pick any constant , define the polynomial function and let be its induced polynomial ranking rule . then , , we have that and applying lemma [ lem : equi ] gives that . + * proof of lemma [ lem : zero ] * note that .therefore , one can assume without loss of generality that , by replacing by .+ assume that there exists such that , and assume by contradiction that . since , we know by proposition [ def : cvx ] that there exists such that , and , and it gives the contradiction assume that . since and are finite , is a closed , compact and convex set and exists and the condition implies that . now , let be the ( unique ) point of the convex hull satisfying .we show by contradiction that , .assume that there exists s.t .the convexity of the convex hull implies that the whole line also belongs to the convex hull .however , since and , the line is not tangent to the ball and intersects it .hence , there exists s.t .since belongs to the convex hull , it leads us to the contradiction we deduce that , .finally , since , there exists such that , . + * proof of corollary [ coro : lp ] * combining proposition [ prop : binary ] and lemma [ lem : zero ] gives the next equivalences : by proposition [ def : cvx ] , we know that if and only if there exists such that , and , .putting those constraints into matricial form gives the result . + * proof of proposition [ prop : binary_cvx ] * assume that there exists such that and let be the sequence of classifiers defined by . by lemma [ lem : equi ] , we know that , and so . 
on the other hand , by definition of the convex rankings of degree , we know that all the classifiers are of the form .+ that there exists a sequence of classifiers satisfying : ( i ) , ( ii ) and ( iii ) , .define the non - continuous function and observe that , and so .now , let be an approximation of the function where \\ 1 - \frac{x - u}{\epsilon } & \ \ \\text{if}\ \\ x \in ] u , u + \epsilon ] \\ 0 & \ \ \\text{otherwise}. \end{cases}\ ] ] first , by continuity of we know that is continuous .second , note that and , we have that and so . finally , using the same decomposition , it is easy to see that , the level sets of are unions of at most segements ( convex sets ) . hence , there exists satisfying . now , fix any and observe that the set is a convex set by convexity of the ranking rule .however , since is the smallest convex set that contains ( see proposition [ def : cvx ] ) , we necessarily have that .therefore , putting the previous statements altogether gives that , by proposition [ def : cvx ] , it implies that , there does not exist any such that , and .putting those constraints into matricial form gives the result .+ assume that all the polyhedrons are empty .reproducing the same ( inverse ) steps as in the first equivalence gives that , and so now , define the non - continuous function , observe that and let be an approximation of the function , where , if and otherwise .first , note that for any convex set , the function is continuous and so is the function .second , note that and , we have that and so . finally , since the -ball of any convex set is also a convex set ( see lemma [ lem : epsiball ] ) , and , the level set is a convex set .hence , there exists satisfying . ( rankopt process ) [ prop : chain ] let be any compact and convex set , let be any ranking structure and let be any function such that . then , the rankopt algorithm evaluates the function on a random sequence defined by : where denotes the sampling area and and are defined as in the algorithm .the result is proved by induction . since ,the result trivially holds when .assume that the satement holds for a given .by definition of the algorithm and using the induction assumption , we know that the rankopt algorithm evaluates the function on a sequence of random variables where rankopt , and is a sequence of independant random variables uniformly distributed over and independant of . therefore , for any borelian , and the result is proved by induction .noticing that is a subset of gives the first inclusion . to state the second inclusion ,pick any and observe that . since always perfectly ranks the sample ( _ i.e. , _ ) , there exists such that and we deduce that . 
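As a complement to the RankOpt process of Proposition [prop:chain], here is a deliberately simplified, centralized stand-in for the loop: a linear pairwise ranker is refitted on the evaluations seen so far by a plain perceptron (a crude surrogate for selecting, from the prescribed ranking structure, a ranker that perfectly ranks the sample), and the next point is drawn uniformly from the region where that ranker predicts an improvement over the current best, falling back to uniform exploration when the rejection budget is exhausted. The objective, the budgets and the dimension are arbitrary choices.

```python
# Simplified RankOpt-style loop: fit a linear pairwise ranker, then sample the
# next evaluation point from the area the ranker predicts to improve on the record.
import numpy as np

def fit_linear_ranker(X, y, epochs=20):
    """Perceptron on pairwise differences: find w with sign(<w, xi - xj>) ~ sign(yi - yj)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in range(len(X)):
            for j in range(len(X)):
                s = np.sign(y[i] - y[j])
                if s != 0 and s * (w @ (X[i] - X[j])) <= 0:
                    w += s * (X[i] - X[j])             # standard perceptron update
    return w

def rankopt_sketch(f, lower, upper, n_iter=40, n_candidates=200, seed=0):
    rng = np.random.default_rng(seed)
    X = [rng.uniform(lower, upper)]
    y = [f(X[0])]
    for _ in range(n_iter - 1):
        w = fit_linear_ranker(np.array(X), np.array(y))
        x_best = X[int(np.argmax(y))]
        new = None
        for _ in range(n_candidates):                  # rejection sampling of the sampling area
            z = rng.uniform(lower, upper)
            if w @ (z - x_best) > 0:                   # ranker predicts an improvement
                new = z
                break
        if new is None:
            new = rng.uniform(lower, upper)            # fallback: uniform exploration
        X.append(new)
        y.append(f(new))
    i = int(np.argmax(y))
    return X[i], y[i]

if __name__ == "__main__":
    f = lambda x: -np.sum((x - 0.7) ** 2)              # toy concave objective on [0, 1]^2
    x_star, val = rankopt_sketch(f, np.zeros(2), np.ones(2))
    print(x_star, val)
```

The fallback step loosely mimics the Bernoulli exploration of AdaRankOpt; the actual algorithms differ in how the ranker is selected and in the guarantees that selection provides.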
[ def : pas ] ( pure adaptive search process , from ) .let be any compact and convex set and let be any function such that .we say that the sequence is distributed as a pure adaptive search process pas if it has the same distribution as the markov process defined by : where is the level set of the highest value observed so far .[ lem : fasteruniform ] let be a sequence of random variables distributed as a pas process .then , for any ] and denotes the sampling area of the pas process at time .note that if the result trivially holds for any and any .therefore , we assume without loss of generality that and we set some additional notations : ] , , we are ready to prove the lemma by induction .+ fix any ] be a random variable independent of .since and , we have that and the result holds when .assume that the statement holds for a given , fix any ] be a random variable independent of . since is uniformly distributed over ] . taking the logarithm on both sidesgives that . since ) ] , there exists which satisfies . fix any ] defined by which returns the value of over the segment ] , applying the intermediate value theorem gives us that there exists ] .we deduce that which proves the second inclusion .+ we use a similar technique to prove the first inclusion .assume that there exists s.t . . by introducing the function , one can show that there exists s.t .hence , . on the other hand ,it leads to a similar contradiction and we deduce that .introduce the similarity transformation defined by : and let be the image of by the similarity transformation . by definitionwe have that .hence , the convexity of implies that and we have that .now , since is a similarity transformation ( and conserves the ratios of the volumes before / after transformation ) we have that noticing that , where stands for the standard gamma function gives the result .* proof of proposition [ th : fasterprs ] * the statement is proved by induction . since ,the result trivially holds when .assume that the statement holds for a given and let be a sequence of random variables distributed as a rankopt process . since the result trivially holds , fix any ] and let be the corresponding level set .then , denoting by the sampling area of and by the level set of the highest value observed so far , reproducing the same steps as in the proof of proposition [ th : fasterprs ] gives that . \end{aligned}\ ] ] lemma [ lem : rep ] gives the following inclusion .hence .\end{aligned}\ ] ] for any random variable ] .we finally get that by lemma [ prop : concentration ] .
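The pure adaptive search process of Definition [def:pas] can be contrasted with pure random search in a few lines; the 1-d objective below is chosen only so that its level sets are explicit intervals that can be sampled directly, and the iteration and trial counts are arbitrary.

```python
# Pure random search (PRS) versus pure adaptive search (PAS): PAS resamples
# uniformly from the current level set {x : f(x) > best so far}.
import numpy as np

f = lambda x: 1.0 - (x - 0.3) ** 2            # maximum 1.0 attained at x = 0.3 on [0, 1]

def level_set_interval(t):
    """{x in [0, 1] : f(x) > t} is an interval around 0.3 whenever t < 1."""
    half = np.sqrt(max(1.0 - t, 0.0))
    return max(0.0, 0.3 - half), min(1.0, 0.3 + half)

def pure_random_search(n, rng):
    return max(f(rng.uniform(0.0, 1.0)) for _ in range(n))

def pure_adaptive_search(n, rng):
    best = f(rng.uniform(0.0, 1.0))
    for _ in range(n - 1):
        lo, hi = level_set_interval(best)     # resample uniformly above the current record
        best = f(rng.uniform(lo, hi))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, trials = 10, 500
    print("mean gap, PRS:", np.mean([1.0 - pure_random_search(n, rng) for _ in range(trials)]))
    print("mean gap, PAS:", np.mean([1.0 - pure_adaptive_search(n, rng) for _ in range(trials)]))
```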
In this paper we consider the problem of maximizing an unknown function over a compact and convex set using as few observations as possible. We observe that optimizing the function essentially relies on learning the bipartite ranking rule it induces. Based on this idea, we relate global optimization to bipartite ranking, which makes it possible to address problems with high-dimensional input spaces as well as functions with weak regularity properties. The paper introduces novel meta-algorithms for global optimization that can be built on any bipartite ranking method. Theoretical properties are provided, along with convergence guarantees, and equivalences between various optimization methods are obtained as a by-product. Finally, numerical evidence shows that the main algorithm of the paper, which adapts empirically to the underlying ranking structure, essentially outperforms existing state-of-the-art global optimization algorithms on typical benchmarks. Keywords: global optimization, ranking, statistical analysis, convergence rate bounds.
of agent and the cooperation between agents in multi - agent system ( mas ) , a framework , named distributed constraint optimization problem ( dcop ) in terms of constraints that are known and enforced by distinct agents comes into being with it .in last decade , the research effort of dcop has been dedicated on the following three directions : 1 ) the development of dcop algorithms which are able to better balance the computational complexity and the accuracy of solution , such as large neighborhood search method ; markov chain monte carlo sampling method and distributed junction tree based method 2 ) the extension of classical dcop model in order to make it more flexible and effective for practical application , such as expected regret dcop model , multi - variable agent decomposition model and dynamic dcop model 3 ) the application of dcop in modeling environmental systems , such as sensor networks , disaster evacuation , traffic control and resource allocation . in this paper ,we take more attention to the application of dcop .more precisely , we leverage dcop to solve user association problem in the downlink of multi - tier heterogeneous networks with the aim to assign mobile users to different base stations in different tiers while satisfying the qos constraint on the rate required by each user .is generally regarded as a resource allocation problem in which the resource is defined by the resource blocks ( rbs ) . in this case , the more rbs allocated to a user , the larger rate achieved by the user .the methods to solve the user association problem are divided into centralized controlled and distributed controlled . with regard to the centralized way , a central entityis set up to collect information , and then used to decide which particular bs is to serve which user according to the collected information .a classical representation of centralized method is max - sinr .distributed controlled methods attract considerable attention in last decade since they do not require a central entity and allow bss and users to make autonomous user association decisions by themselves through the interaction between bss and users . among all available methods ,the methods based on lagrange dual decomposation ( ldd ) and game theory have better performance .hamidreza and vijay put forward a unified distributed algorithm for cell association followed by rbs distribution in a -tier heterogeneous network .with aid of ldd algorithm , the users and bss make their respective decisions based on local information and a global qos , expressed in terms of minimum achievable long - term rate , is achieved .however , the constraint relaxation and the backtrack in almost each iteration are needed to avoid overload at the bss .in addition , as we will show later , the number of out - of - service users will increase since a user always selects a best - rate thereby taking up a large number of rbs and leaving less for others .nguyen and bao proposed a game theory based method in which the users are modeled as players who participate in the game of acquiring resources .the best solution is the one which can satisfy nash equilibrium ( ne ) . 
loosely speaking ,such solution is only a local optima .in addition , it is difficult to guarantee the quality of the solution .there is no research of modeling user association problem as mas .however , some similar works have been done focusing on solving the resource management problem in the field of wireless networks or cognitive radio network by dcop framework .these methods can not be directly applied to user association problem mainly due to the scale of the models for these practical applications is relatively small .for instance , monteiro formalized the channel allocation in a wireless network as a dcop with no more than 10 agents considered in the simulation parts .however , the amount of users and resource included in a hetnet is always hundreds and thousands . in this case ,a good modeling process along with a suitable dcop algorithm is necessary . according to the in - depth analysis above, it motivates us to explore a good way to solve user association problem by dcop .the main contributions of this paper are as follows : * an ecav ( each connection as variable ) model is proposed for modeling user association problem using dcop framework .in addition , we introduce a parameter with which we can control the scale ( the number of variables and constriants ) of the ecav model . * a dcop algorithm based on markov chain ( mc ) is proposed which is able to balance the time consumption and the quality of the solution . *the experiments are conducted which show that the results obtained by the proposed algorithm have superior accuracy compared with the max - sinr algorithm .moreover , it has better robustness than the ldd based algorithm when the number of users increases but the available resource at base stations are limited .the rest of this paper is organized as follows . in section[ preliminary ] , the definition of dcop and the system model of user association problem along with its mixed integer programming formulation are briefly introduced . in section [ formulation_with_dcop ], we illustrate the ecav- model .after that , a mc based algorithm is designed in section [ markov_chain_algorithm ] .we explore the performance of the dcop framework by comparing with the max - sinr and ldd methods in section [ experimental_evaluation ] .finally , section [ conclusion ] draws the conclusion .this section expounds the dcop framework and system model of user association problem along with its mixed integer programming formulation . the definitions of dcop have a little difference in different literatures . 
in this paper, we formalize the dcop as a four tuples model where consists of a set of agents , is the set of variables in which each variable only belongs to an agent .each variable has a finite and discrete domain where each value represents a possible state of the variable .all the domains of different variables consist of a domain set , where is the domain of .a constraint is defined as a mapping from the assignments of variables to a positive real value : the purpose of a dcop is to find a set of assignments of all the variables , denoted as , which maximize the utility , namely the sum of all constraint rewards : consider a -tier hetnet where all the bss in the same tier have the same configurations .for example , a two - tier network including a macro bs ( ) and a femto bs , ( ) , is shown in fig.[ecav_instance ] .the set of all bss is denoted as where is the total number of bss .all the bss in the tier transmit with the same power .the total number of users is denoted by and the set of all users is . with ofdma technology in lte - advanced networks , the resource , time - frequency , is divided into blocks where each block is defined as a resource block ( rb ) including a certain time duration and certain bandwidth . in this paper ,the resource configured at each bs is in the format of rb so that its available rbs are decided by the bandwidth and the scheduling interval duration allocated to that bs .we assume the bss in the hetnet share the total bandwidth such that both intra- and inter - tier interference exist when the bss allocate rbs to the users instantaneously . assuming the channel state information is available at the bss , the experienced by user , served by in the tier is given by in ( [ sinr ] ) , is the channel power gain between and , represents all the bss in except , is the bandwidth and is noise power spectral density .the channel power gain includes the effect of both path loss and fading .path loss is assumed to be static and its effect is captured in the average value of the channel power gain , while the fading is assumed to follow the exponential distribution . from the above ,the efficiency of user powered by bs , denoted as , is calculated as given the bandwidth , time duration and the scheduling interval configured at each rb , we attain the unit rate at upon one rb as follows on the basis of formula ( [ rate_rb ] ) , the rate received at with rbs provided by in the tier is associated with each user is a quality - of - service ( qos ) constraint .this is expressed as the minimum total rate the user should receive . denoting the rate requiremnt of the user by , the minimum number of rbs required to satisfy is calculated by : which is a ceiling function .the formulations of user association problem by mixed linear programming are similar in a series of papers ( see the survey literature ) . in this paper, we present a more commonly used formulation as follows the first constraint ensures the rate qos requirement from users . constraint ( [ formula_rb_constraint ] ) indicates that the amount of rbs consumed at the same bs is no more than the total rbs configurated at the bs . 
constraint ( [ formula_link_constraint ] )guarantees one user associated with a unique bs .constraint ( [ formula_nrb_constraint ] ) guarantees the number of rbs a bs allocates to a user falls within the range from zero and .the last constraint ( [ formula_vector_constraint ] ) guarantees the connection between a user and a bs has two states denoted by a binary variable .the objective function ( [ formula_objective ] ) refers to the sum of rate rather than a function acted on the rate such as ( e.g. ) in some references .generally , two phases are needed to gain the solution including : 1 ) transforming original problem into a satisfied one through relaxing constraint ( [ formula_nrb_constraint ] ) by ; 2 ) the left rbs in each bs will be allocated to users in order to maximize the objective function .in this section , we expound and illustrate the ecav model along with its modified version ecav- . before giving the formulation based on dcop, we firstly introduce the definition of candidate bs : [ candidate_bs ] we declare is a candidate bs of if the rate at is above the threshold with rbs provided by .simultaneously , should be less than the total number of rbs ( ) configurated at . after confirming the set of candidate bss of , denoted by , , sends messages to its candidate bss so that each gets knowledge of its possible connected users .we define each possible connection between and its candidate bs as a variable , denoted by . in this case , all the variables are divided into groups according to the potential connection between users and different bss .the domain of each variable , denoted by , where if no rb is allocated to , otherwise , .we define each group as an agent .thus , an -ary constraint exists among variables ( intra - constraint ) to guarantee that there is no overload at .note that a user may have more than one candidate bs , there are constraints ( inter - constraints ) connecting the variables affiliated to different agents on account of the assumption that a unique connection exists between a user and a bs .generally speaking , the utility ( objective ) function in the dcop model is the sum of constraint rewards which reflects the degree of constraint violations .we define the reward of inter- and intra - constraints in the ecav model as follows . for r(c)= [ ecav_constraint_1_b ] - , & + [ ecav_constraint_1_a ] 0 , &otherwise for r(c)= [ ecav_constraint_2_a ] - , & + [ ecav_constraint_2_b ] , & otherwise in constraint ( [ ecav_constraint_1_b ] ) , is the subset of variables connected by constraint . represents the assignment of .a reward ( we use in this paper ) is assigned to the constraints if there at least two variables are non - zero at the same time ( unique connection between a user and a bs ) .otherwise , the reward is equal to zero . in constraint ( [ ecav_constraint_2_a ] ), the reward is once there is a overload at the bs . 
otherwise , the reward is the sum of the rates achieved at users .it is easy to find that a variable in the ecav model with non - zero assignment covers constraint ( [ formula_rate_constraint ] ) and ( [ formula_nrb_constraint ] ) in the mixed integer programming formulation .moreover , intra and inter - constraints respectively cover constraint ( [ formula_rb_constraint ] ) and constraint([formula_link_constraint ] ) .therefore , the global optimal solution obtained from the ecav model is consistent with the one obtained from the mixed integer programming formulation , denoted as .is consistent with when the total rate calculated by objective function ( [ formula_optimization_function ] ) and ( [ formula_objective ] ) is equal .this is because there may be no more than one optimal solution . ] to better understand the modeling process , we recall the instance in fig.[ecav_instance ] where the candidate bss of and are the same , denoted as , while the candidate bss of and are respectively and .we assume the total rbs configurated at and is 8 and 10 . for simplicity, we assume the rate of each user served by one rb provided by is 0.8 bit / s . and 1 bit / s of each user is served by . then , the ecav model is shown in fig.[ecav_instance : b ] .there are two agents named and .the variables in are and where refers to a connection between user and .similarly , the variables in are and . assuming the threshold rate is 3 bit / s, we can calculate that at least rbs needed for the users served by , thus the domain of each variable in is . also , the domain of each variable in is .the black lines in each agent are two 3-nry intra - constraints , thus .the red lines connecting two agents are two intra - constraints , thus .we use and to illustrate how the reward of constraint works in different conditions . considering ,the reward is when all the variables associated with have the same assignment 4 .thus the total number of rbs consumed by three users is 12 which is more than 8 rbs configurated at .otherwise , the reward is 0.8 4 3 = 9.6 ( bit / s ) calculated according to ( [ rate ] ) . considering ,the reward is when the assignment of is 3 and the assignment of is 4 because it means will connect with more than one bss ( and ) , which violates the assumption of unique connection .otherwise , the reward is 0 ( [ ecav_constraint_1_a ] ) ) .if there is no constraint violated , the final utility calculated by the objective function is the total rate in the whole hetnet ( constraint ( [ ecav_constraint_2_b ] ) ) .the scale of an ecav model , referring to the number of agents and constraints , is related to the number of users , bss and the candidate bss hold at each user .however , some candidate bss of the user can be ignored because these bss are able to satisfy the requirement of the user but with massive rbs consumed .it can be illustrated by the number of rbs a bs allocate to a user is inversely proportional to the geographical distance between them . 
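A small sketch of the link-level quantities behind this pruning, anticipating the parameter introduced next: per-user SINR with co-channel interference, the per-RB Shannon rate, the ceiling giving the minimum number of RBs for the rate QoS, and a top-k candidate-BS list ordered by how few RBs each feasible BS would need. The RB bandwidth, the thermal-noise figure and all numbers are assumptions, not the paper's exact parameters.

```python
# Link-level quantities and top-k candidate-BS pruning (illustrative values only).
import math

def sinr(serving, user, tx_power, gain, noise=7.2e-16):
    """SINR of `user` served by `serving`; every other BS is a co-channel interferer."""
    signal = tx_power[serving] * gain[(serving, user)]
    interference = sum(p * gain[(b, user)] for b, p in tx_power.items() if b != serving)
    return signal / (interference + noise)          # noise: thermal noise over one 180 kHz RB

def rate_per_rb(sinr_value, rb_bandwidth=180e3):
    """Achievable rate (bit/s) on one resource block, Shannon bound."""
    return rb_bandwidth * math.log2(1.0 + sinr_value)

def rbs_needed(rate_threshold, sinr_value):
    """Minimum integer number of RBs meeting the user's rate requirement."""
    return math.ceil(rate_threshold / rate_per_rb(sinr_value))

def top_k_candidates(user, tx_power, gain, rb_capacity, rate_threshold, k):
    """Keep at most k BSs that can actually serve `user`, cheapest in RBs first."""
    feasible = []
    for bs in tx_power:
        needed = rbs_needed(rate_threshold, sinr(bs, user, tx_power, gain))
        if needed <= rb_capacity[bs]:
            feasible.append((needed, bs))
    feasible.sort()
    return [bs for _, bs in feasible[:k]]

if __name__ == "__main__":
    tx_power = {"macro": 39.8, "pico": 3.16, "femto": 0.1}     # 46 / 35 / 20 dBm in watts
    gain = {("macro", "u1"): 1e-9, ("pico", "u1"): 3e-9, ("femto", "u1"): 2e-7}
    rb_capacity = {"macro": 200, "pico": 100, "femto": 50}
    print(top_k_candidates("u1", tx_power, gain, rb_capacity, rate_threshold=1e6, k=2))
    # -> ['macro', 'femto']: the pico BS is pruned even though it could also serve u1
```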
in this way , we introduce a parameter with which we limit the number of candidate bss of each user is no more than .the following algorithms present the selection of top candidate bss ( denoted by ) and the modeling process of ecav- .[ alg1allcsstart ] [ alg1allcsend ] bubblesort ( ) [ alg1sortstart ] [ alg1sortbysinrstart ] exchane and [ alg1sortbysinrend ] algorithm [ alg1 ] is the pseudo code for determining .it is executed by each user distributely .more precisely , a user estimates its total candidate bss by the procedure from line [ alg1allcsstart ] to [ alg1allcsend ] .based on [ unit_efficiency ] to [ unit_rb ] , the candidate bss of a user is ordered according to the unit number of rbs consumed at such user served by different bss ( from line [ alg1sortbysinrstart ] to [ alg1sortbysinrend ] ) .the time consumption of algorithm [ alg1 ] mainly consists of two parts .one is the dermination of with time complexity .the other is the ordering operation with time complexity . as a result ,the total time expended of algorithm [ alg1 ] is . with ,we present the pseudo code in relation to the building of ecav- model . [ ecav_agent ] [ ecav_user_start ] [ ecav_user_end ] [ ecav_intra_c_start ] [ ecav_intra_c_end ] as for algorithm [ alg2 ] , it firstly sets up the agents distributely ( line [ ecav_agent ] ) .it takes .after that , each user determines variables , domains as well as inter - constraints from line [ ecav_user_start ] to [ ecav_user_end ] .this is also carried out in parallel with .finally , the intra - constraints are constructed by each agent with ( line [ ecav_intra_c_start ] to [ ecav_intra_c_end ] ) .the total time complexity is .dcop , to some degree , is a combinatorial optimization problem in which the variables select a set of values to maximize the objective function without or with the minimum constraint violation .we use to denote the set of all possible combination of assignments of variables .also , we call each element as a * candidate solution*. considering an ecav- model in which the four tuples are as follows : * * a connection between and * * we are able to rewrite the model in the following way : [ transformation_1 ] after that , a convex log - sum - exp approximation of ( [ transformation_1_objective_function ] ) can be made by : where is a positive constant .we then estimate the gap between log - sum - exp approximation and ( [ transformation_1_objective_function ] ) by the following proposition in : [ gap ] given a positive constant and nonnegative values , we have + in addition , the objective function ( [ transformation_1_objective_function ] ) has the same optimal value with the following transformation : in which is the reward with a candidate solution .for simplicity , we use . hence , on the basis of formulations ( [ log_sum_exp_approximation ] ) and ( [ gap_equation ] ) , the estimation of ( [ transformation_1_objective_function ] ) can be employed by evaluating in the following way : assuming and are the primal and dual optimal points with zero duality gap . by solving the karush - kuhn - tucker ( kkt ) conditions , we can obtain the following equations : [ kkt ] then we can get the solution of as follows : on the basis of above transformation , the objective is to construct a mc with the state space being and the stationary distribution being the optimal solution when mc converges . 
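A centralized Metropolis-style sketch of such a chain is given below: it moves over ECAV assignments and is reversible with respect to the Gibbs distribution p(s) proportional to exp(beta * U(s)), which is the stationary distribution targeted by the log-sum-exp approximation. The paper's algorithm is distributed, works with explicit transition rates and adds message passing to avoid constraint violations; none of that is reproduced here, and the penalty constant, beta and the toy instance (the two-BS example of the text with hypothetical per-RB rates) are illustrative.

```python
# Metropolis sampler over ECAV assignments with Gibbs stationary distribution.
import math
import random

PENALTY = -1e6   # stands in for the large negative reward of a violated constraint

def utility(state, capacity, rate_per_rb):
    """Sum of intra-constraint (no overload) and inter-constraint (unique BS) rewards."""
    total = 0.0
    for bs, cap in capacity.items():
        used = sum(rbs for (u, b), rbs in state.items() if b == bs)
        total += PENALTY if used > cap else sum(
            rbs * rate_per_rb[(u, b)] for (u, b), rbs in state.items() if b == bs)
    for u in {u for (u, b) in state}:
        if sum(1 for (uu, b), rbs in state.items() if uu == u and rbs > 0) > 1:
            total += PENALTY
    return total

def mc_solver(domains, objective, beta=2.0, n_steps=20000, seed=0):
    rng = random.Random(seed)
    state = {var: dom[0] for var, dom in domains.items()}   # start with all connections off
    u = objective(state)
    best_state, best_u = dict(state), u
    for _ in range(n_steps):
        var = rng.choice(list(domains))                     # pick one connection variable
        proposal = dict(state)
        proposal[var] = rng.choice(domains[var])            # resample its RB allocation
        u_new = objective(proposal)
        if u_new >= u or rng.random() < math.exp(beta * (u_new - u)):
            state, u = proposal, u_new                      # Metropolis acceptance
        if u > best_u:
            best_state, best_u = dict(state), u
    return best_state, best_u

if __name__ == "__main__":
    capacity = {"BS1": 8, "BS2": 10}
    rate_per_rb = {("u1", "BS1"): 0.8, ("u2", "BS1"): 0.8, ("u3", "BS1"): 0.8,
                   ("u3", "BS2"): 1.0, ("u4", "BS2"): 1.0}
    # domain of each variable: 0 RBs or exactly the RBs meeting the 3 bit/s threshold
    domains = {("u1", "BS1"): [0, 4], ("u2", "BS1"): [0, 4], ("u3", "BS1"): [0, 4],
               ("u3", "BS2"): [0, 3], ("u4", "BS2"): [0, 3]}
    best, value = mc_solver(domains, lambda s: utility(s, capacity, rate_per_rb))
    print(best, value)   # should recover u1, u2 on BS1 and u3, u4 on BS2, utility 12.4
```

In practice the objective callable would be assembled from the intra- and inter-constraint rewards of the actual ECAV-k instance rather than the hard-coded example above.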
in this way ,the assignments of variables will be time - shared according to and the system will stay in a better or best solution with most of the time .another important thing is to design the nonnegative transition rate between two states and . according to , a series of methodsare provided which not only guarantee the resulting mc is irreducible , but also satisfy the balance equation : . in this paper, we use the following method : ^{-1}\label{transit_rate}\ ] ] the advantage of ( [ transit_rate ] ) is that the transition rate is independent of the performance of . a distributed algorithm , named wait - and - hp in , is used to get the solution after we transform dcop into a mc .however , as the existence of inter- and intra- constriants in , a checking through the way of message passing is made in order to avoid constraint violation .in this section , we test the performance of the mc based algorithm with different assginments of in the ecav model .a simulated environment including a three - tiers hetnet created within a square is considered . in the system , there is one macro bs , 5 pico bss and 10 femto bss with their transmission powers respectively 46 , 35 , and 20 dbm .the macro bs is fixed at the center of the square , and the other bss are randomly distributed .the path loss between the macro ( pico ) bss and the users is defined as , while the pass loss between femto bss and users is .the parameter represents the euclidean distance between the bss and the users in meters .the noise power refers to the thermal noise at room temperature with a bandwidth of 180khz and equals to -111.45 dbm .one second scheduling interval is considered . without special illustration ,200 rbs are configured at macro bs , as well as 100 and 50 rbs are configured at each pico and femto bs . in addition , all the results are the mean of 10 instances .we firstly discuss the impact of different assignments of on the performance of ecav model from the point of view of the runtime and the quality of solution .more precisely , we generate different number of users ranging from 20 to 100 with the step interval of 10 . the time consumed by the mc based algorithmis displayed in fig.[runtime ] .it is clear to see that more time is needed when the number of users increases .also , the growth of runtime is depended on the value of .specially , there is an explosive growth of runtime when we set from four to five .as previously stated , this is caused by more candidate bss considered by each user .however , the quality of the solutions with different values of is not obviously improved according the results in table [ table_model_variables ] . for instance , the average rate achieved at each user is only improved no more than 0.1 bit / s when the number of users are 100 with the values of are 3 and 5 .it is difficult to make a theoretical analysis of the realationship between and the quality of the solution .we leave this research in future works . 
from above analysis , we set in the following experiments in order to balance the runtime and performance of the solution .in addition , we test the performance of the mc based algorithm comparing with its counterparts max - sinr and ldd based algorithms ..the average rate ( bit / s ) achieved at each user [ cols="^,^,^,^,^,^",options="header " , ] ( ) with different number of users in the hetnet [ runtime ] ] ] ] in fig.[distributed_200 ] , we check the connection state between 200 users and bss in different tiers .a phenomenon we can observe from the figure is that there are more or less some users out of service even we use different kinds of algorithms .it is not only caused by the limited resource configured at each bss , but also related to the positions of such kinds of users .they are located at the edge of the square and hardly served by any bs in the system .further , more users are served by macro bs in max - sinr algorithm because a larger sinr always eixsts between the users and macro bs . as a result , the total non - served users in mmax - sinr algorithms are more than the other two if there is no scheme for allocating the left resource . on the other hand , the number of non - served users in mc are less than ldd when since the user will select a bs with the maximal in each iteration of the ldd algorithm . in other words ,the users prefer to connect with a bs which can offer better qos even when more resources are consumed .therefore , some bss have to spend more rbs which leads to the resource at these bss being more easily used up . ] ] in fig.[non served ] , we produce a statistic of the number of non - served users when we change the total number of users configured in the hetnet .the average number of non - served users for each algorithm along with the standard deviation is presented in the figure . compared with fig.[distributed_200 ] , a more clear results imply that more than 60 ( at worst , around 70 ) non - served users in the max - sinr algorithm . the ldd based algorithm comes the second with approximate 20 users .the best resutls are obtained by the mc algorithm with no more than 20 users even the total users in the hetnet is 240 . in fig.[cdf_200 ] , we compare the cumulative distribution function ( cdf ) of the rate . the rate of the users seldomly drops below the threshold ( 3 bit / s ) when we use the distributed algorithms ( ldd and mc based algorithms ) , while max - sinr algorithm is unable to satisfy the rate qos constraints .moreover , the rate cdfs of the mc based algorithm never lie above the corresponding cdfs obtained by implementing the max - sinr algorithm ( the gap is between ) .likewise , at worst gap eixts between the mc based algorithm and ldd when we set . at last ,another intesest observation is made by configurating different number of rbs at macro bs ( fig.[number of rb in mac ] ) .when we change the number of rbs from 150 to 250 at macro bs , it is clear to see that the total rate obtained by ldd is not sensitive to the variation of the resource hold by macro bs .this result is also related to the solving process in which two phases are needed when employing a ldd based algorithm . as we have discuss in the introduction section , a solution which can satisfy the basic qos requirementwill be accepted by the ldd based algorithm .it finally affects the allocation of left resource at marco bs . 
as a result, the algorithm easily falls into the local optima .this problem , to some degree , can be overcome by the ecav model since there is only one phase in the model . with the ecav model, a constraint satisfied problem is transformed into a constraint optimizaiton problem . andthe advantage of dcop is successfully applied into solving user assocation problem .an important breakthrough in this paper is that we take the dcop into the application of hetnet .more preisely , we propose an ecav model along with a parameter to reduce the number of nodes and constraints in the model .in addition , a markov basesd algorithm is applied to balance the quality of solution and the time consumed . from experimental results, we can draw a conclusion that the quality of the solution obtained by the ecav-3 model solved with the mc based algorithm is better than the centralized algorithm , max - sinr and distributed one ldd , especially when the number of users increases but they are limited to the available rbs . in future work, we will extend our research to the following two aspects : in some algorithms , like k - opt and adopt for dcop , there are already a theoretial analysis on the completeness of solution .however , it is still a chanllenge job in most research of dcop algorithm , like the mc based algorithm proposed in this paper .thus , we will explore the quality of the solution assoicated with different values of . in practice ,the bss in small cells ( like pico / femto bss ) have properties of plug - and - play .they are generally deployed in a home or small business where the environment is dynamic . in this way, we should design a dcop model which is fit for the variations in the environment such as the mobility of users and different states ( active or sleep ) of bss . to this end, a stochastic dcop model can be considered like the one in .10 f. fioretto , f. campeotto , a. dovier , e. pontelli , and w. yeoh , large neighborhood search with quality guarantees for distributed constraint optimization problems , in _ proceedings of the 2015 international conference on autonomous agents and multiagent systems_.1em plus 0.5em minus 0.4eminternational foundation for autonomous agents and multiagent systems , 2015 , pp .18351836 .f. fioretto , w. yeoh , and e. pontelli , a dynamic programming - based mcmc framework for solving dcops with gpus , in _ international conference on principles and practice of constraint programming_.1em plus 0.5em minus 0.4emspringer , 2016 , pp .813831 .t. le , f. fioretto , w. yeoh , t. c. son , and e. pontelli , er - dcops : a framework for distributed constraint optimization with uncertainty in constraint utilities , in _ proceedings of the 2016 international conference on autonomous agents & multiagent systems_.1em plus 0.5em minus 0.4eminternational foundation for autonomous agents and multiagent systems , 2016 , pp . 606614 .w. yeoh , p. varakantham , x. sun , and s. koenig , incremental dcop search algorithms for solving dynamic dcops , in _ the 10th international conference on autonomous agents and multiagent systems - volume 3_.1em plus 0.5em minus 0.4eminternational foundation for autonomous agents and multiagent systems , 2011 , pp .10691070 .g. mao and b. d. anderson , graph theoretic models and tools for the analysis of dynamic wireless multihop networks , in _ 2009 ieee wireless communications and networking conference_.1em plus 0.5em minus 0.4emieee , 2009 , pp .a. a. kannan , b. fidan , and g. 
mao , robust distributed sensor network localization based on analysis of flip ambiguities , in _ ieee globecom 2008 - 2008 ieee global telecommunications conference_.1em plus 0.5em minus 0.4emieee , 2008 , pp .k. kinoshita , k. iizuka , and y. iizuka , effective disaster evacuation by solving the distributed constraint optimization problem , in _ advanced applied informatics ( iiaiaai ) , 2013 iiai international conference on_.1em plus 0.5em minus 0.4emieee , 2013 , pp .399400 .f. amigoni , a. castelletti , and m. giuliani , modeling the management of water resources systems using multi - objective dcops , in _ proceedings of the 2015 international conference on autonomous agents and multiagent systems_.1em plus 0.5em minus 0.4eminternational foundation for autonomous agents and multiagent systems , 2015 , pp .821829 .p. rust , g. picard , and f. ramparany , using message - passing dcop algorithms to solve energy - efficient smart environment configuration problems , in _ international joint conference on artificial intelligence _ ,n. guan , y. zhou , l. tian , g. sun , and j. shi , qos guaranteed resource block allocation algorithm for lte systems , in _ proc .conf . wireless and mob ., netw . and commun .( wimob ) _ , shanghai , china , oct .2011 , pp . 307312 .d. liu , l. wang , y. chen , m. elkashlan , k .- k .wong , r. schober , and l. hanzo , user association in 5 g networks : a survey and an outlook , _ ieee communications surveys & tutorials _ , vol . 18 , no . 2 , pp .10181044 , 2016 . q. ye , b. rong , y. chen , c. caramanis , and j. g. andrews , towards an optimal user association in heterogeneous cellular networks , in _ global communications conference ( globecom ) , 2012 ieee_.1em plus 0.5em minus 0.4emieee , 2012 , pp .41434147 .t. l. monteiro , m. e. pellenz , m. c. penna , f. enembreck , r. d. souza , and g. pujolle , channel allocation algorithms for wlans using distributed optimization , _ aeu - international journal of electronics and communications _ , vol .66 , no . 6 , pp .480490 , 2012 .t. l. monteiro , g. pujolle , m. e. pellenz , m. c. penna , and r. d. souza , a multi - agent approach to optimal channel assignment in wlans , in _ 2012 ieee wireless communications and networking conference ( wcnc)_.1em plus 0.5em minus 0.4emieee , 2012 , pp .26372642 .m. vinyals , j. a. rodriguez - aguilar , and j. cerquides , constructing a unifying theory of dynamic programming dcop algorithms via the generalized distributive law , _ autonomous agents and multi - agent systems _ , vol . 22 , no . 3 , pp .439464 , 2011 .
A multi-agent system (MAS) can characterize the behavior of individual agents and the interactions between them. This motivates us to leverage the distributed constraint optimization problem (DCOP), a framework for modeling MAS, to solve the user association problem in heterogeneous networks (HetNets). Applying DCOP to HetNets raises two issues: (i) how to build an effective DCOP model that accounts for the negative impact of a growing number of users on the modeling process, and (ii) which kind of algorithm is best suited to balancing time consumption against solution quality. To overcome these issues, we first propose an ECAV (each connection as variable) model in which a parameter with an adequate assignment (3 in this paper) controls the scale of the model. We then propose a Markov chain (MC) based algorithm built on the log-sum-exp function. Experimental results show that the solution obtained with the DCOP framework is better than the one obtained by the max-SINR algorithm. Compared with the Lagrange dual decomposition (LDD) based method, solution quality is improved because there is no need to first transform the original problem into a constraint-satisfaction problem. In addition, the DCOP-based method is more robust than LDD when the number of users increases while the resources available at the base stations remain limited.
the accumulated experience in helioseismology of inverting mode frequency data provides a good starting point for asteroseismic inversion . in some rather superficial waysthe circumstances of the two may seem very different : helioseismologists can use many more mode frequencies than will ever be possible for a more distant star , so that the resolution that can be achieved in helioseismic inversion is beyond the grasp of asteroseismology ( an exception may be situations such as where modes are trapped in a narrow range of depth within the star ) .another difference is that global parameters of the sun such as its mass , radius and age are much better known than they are for other stars ; hence the structure of the sun is constrained _ a priori _ much more accurately than it is for a distant star , even if the input physics had been known precisely , which of course is not the case for the sun or for other stars . in more fundamental ways , though , helioseismic and asteroseismic inversion are much more similar than they are different .helioseismology has not always been blessed with such a wealth of data , and helioseismologists in the early days of their subject learned the value of model calibration and asymptotic description ( in particular of low - degree modes ) for making inferences about the sun .they also learned some of the dangers and limitations of drawing inferences from real data .the modal properties in many asteroseismic targets will be similar to the low - degree solar modes .moreover , the principles of inversion are the same in both fields : obtaining localized information about the unseen stellar interior ; assessing resolution ; taking account of the effects of data errors ; assessing what information the data really contain about the object of study ; judiciously adding additional constraints or assumptions in making inferences from the data .this paper touches on a few of these points with regard to inverting asteroseismic data , including the usefulness of optimally localized averaging ( ola ) kernels and model calibration using large and small frequency separations for solar - type oscillation spectra .much more detail on those two topics can be found in the papers by basu et al .( 2001 ) and monteiro et al .( 2001 ) , both in these proceedings .the inversion problem predicates that one is able to perform first the forward problem . in the present context ,that means the computation of the oscillation data ( the mode frequencies ) from the assumed structure of the star . solvingthe forward problem always involves approximations or simplifying assumptions , because we can not model the full complexities of a real star .the inverse procedure , inferring the structure ( or dynamics ) of the star from the observables , is ill - posed because , given one solution , there will almost invariably formally be an infinite number of solutions that fit the data equally well .the art of inverse theory is in no small part concerned with how to select from that infinity of possibilities .actually , in various asteroseismic applications to date , e.g. to boo and certain scuti stars , the problem is rather that one has so far been unable to find even a single solution that fits the data ( see christensen - dalsgaard , bedding & kjeldsen 1995 , pamyatnykh et al .this may indicate that the some of the approximations made in the forward problem are inappropriate ; it could also indicate that the errors in the data have been assessed incorrectly . 
in the present work ,unless otherwise stated , we assume both that the forward problem can be solved correctly and that the statistical properties of the data errors are correctly known .a relatively straightforward approach to the inverse problem is model calibration . at its conceptually simplest ,this entails computing the observables for a set of models , possibly a sequence in which one or more physical parameters vary through a range of values , and choosing from among that set the one that best fit the data . `best ' here is often taken to mean that model which minimizes the chi - squared value of a least - squares fit to the data .given today s fast computers , searches through large sets of models are feasible , either blindly or by using some search algorithm such as genetic algorithm or monte carlo . approaches similar to this have been undertaken for white dwarf pulsators ( metcalfe , nather & winget 2000 ; metcalfe 2001 ) scuti stars ( pamyatnykh et al .1998 ) , and pulsating sdb stars ( charpinet 2001 ) .depending on the way in which the method is applied , the resulting model may be constrained to be a member of the discrete set of calibration models , or could lie `` between '' them if interpolation in the set of models is permitted .model calibration is powerful and dangerous .it is powerful because it allows one to incorporate prejudice into the search for a solution and , suitably formulated , it can always find a best fit .it is dangerous for the same reaons : perhaps unwittingly on the part of the practitioner , it builds prejudice into the space of solutions that is considered . also , even a satisfactory fit to the data does not mean that the solution model is necessarily like the real star : one can make a one - parameter model calibration to a single datum , but it is unclear what aspects of the resulting stellar model the datum is actually able to constrain .examples of prejudice that model calibration may incorporate , for good or ill , are the choice of physics used to construct the models , and possible assumptions about the smoothness of the structure of the star .an insidious problem is that , if the approximations in the forward modelling introduce errors into the computed model observables , this can introduce a systematic error into the result of the model calibration .the sun and solar - type stars provide a good example . herethe near - surface structure and the mode physics in that region are poorly modelled at present , and the simple approximations made introduce a systematic shift in the computed frequencies : low - frequency p modes have their upper turning point relatively deep in the star and are almost unaffected by the treatment of the surface layers , whereas modes of higher frequency have turning points closer to the surface and the error in their frequencies grows progressively bigger . in these circumstances ,it is preferable not to calibrate to the frequency data themselves but rather to data combinations chosen to be relatively insensitive to the known deficiency in the forward modelling . 
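as a concrete illustration of the brute-force calibration just described, the following sketch scans a precomputed grid of models and picks the one minimizing the chi-squared misfit to the observed frequencies; the model grid, "observed" frequencies and error bars are invented placeholders rather than the output of a real stellar-evolution or pulsation code.

import numpy as np

def chi_squared(obs_freqs, model_freqs, sigma):
    # least-squares misfit between observed and model mode frequencies
    return float(np.sum(((obs_freqs - model_freqs) / sigma) ** 2))

def calibrate(obs_freqs, sigma, model_grid):
    # model_grid is a list of (params, predicted_frequencies) pairs, e.g. produced
    # by varying mass and age through a range of values; return the member
    # minimizing chi-squared together with its misfit
    chi2 = [chi_squared(obs_freqs, f, sigma) for _, f in model_grid]
    best = int(np.argmin(chi2))
    return model_grid[best][0], chi2[best]

# purely illustrative data: a fake p-mode ladder and a toy one-parameter grid
rng = np.random.default_rng(1)
true_freqs = 1000.0 + 50.0 * np.arange(20)
obs = true_freqs + rng.normal(0.0, 0.3, size=20)
grid = [({"mass": m}, true_freqs * (1.0 + 0.01 * (m - 1.0)))
        for m in np.linspace(0.9, 1.1, 21)]
params, chi2_min = calibrate(obs, 0.3, grid)
print(params, chi2_min)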
in the near - surface layers , the eigenfunctions of solar p modes of low or intermediate degree are essentially independent of ,so the error in the frequencies introduced by incorrectly modelling this region is just a function of frequency , scaled by the inverse of the mode inertia .this suggests that one should calibrate frequencies of solar - type stars to data combinations chosen to be insensitive to such an error .asymptotic analysis suggests other data combinations which can be used for model calibration and which are more discriminating than the raw frequency data .monteiro et al . ( 2001 ) consider in some detail the use of the so - called large and small separations : respectively . as monteirodemonstrate , the results of such a calibration , even for such global characteristics as the mass and age of the star , depend on the other physics assumed .model calibration is just one approach to the inverse problem of inferring the stellar structure , which we may indicate schematically as , from the frequency data .how else may the nonlinear dependence be inverted ?a way which then allows application of a variety of techniques is to linearize about a reference model : inversion takes as input , the difference between the observed data and the corresponding values that model predicts , and produces as output , the estimated difference in structure between the star and . since the structure of is known , the structure of the star can then be reconstructed . of course , that is a naive hope , because of the inherent nonuniqueness discussed earlier , quite apart from issues of data errors .but one may at least produce a refinement on the initial model and indeed this new model can then be used as reference for a subsequent inversion as the next step of an iterative approach .depending on the technique adopted , it is helpful and may be essential to have a reasonable starting guess in the form of the reference model .model calibration using the large and small separations is a reasonable way to find such a model for solar - type stars .model calibration produces a solution that is in the span ( suitably defined ) of the calibration set of models , illustrated schematically as a surface in fig .[ fig : modelspace ] .inversion of the kind just described may then be used to proceed from to a new model which may be ` close ' to but outside the span of the original calibration models .one hopes that the final model depends only weakly on the initial model , so that if by some other calibration ( assuming different physics when calibrating the large and small separations , for example ) one produces some other model , then the inversion step takes one close once more to the same final model .one can imagine making all structural quantities in both the reference model and the target star dimensionless by taking out appropriate factors of the gravitational constant and the stars masses and radii .then the frequency differences between the same mode of order and degree in the two stars can be related to the differences in dimensionless structural quantities by an equation such as where here the choice has been made to express the structural differences in terms of , the ratio of pressure to density , and , the helium abundance by mass : for discussion of this choice , see basu et al .( 2001 ) . 
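the large and small separations introduced earlier in this passage are, in the usual convention, \Delta\nu_{n,\ell}=\nu_{n+1,\ell}-\nu_{n,\ell} and \delta\nu_{n,\ell}=\nu_{n,\ell}-\nu_{n-1,\ell+2}; the sketch below computes both from a table of mode frequencies indexed by radial order n and degree l. the toy asymptotic spectrum is invented for illustration only.

import numpy as np

def separations(freqs):
    # freqs maps (n, l) -> frequency; in the usual convention
    #   large[(n, l)] = nu(n+1, l) - nu(n, l)
    #   small[(n, l)] = nu(n, l) - nu(n-1, l+2)
    large, small = {}, {}
    for (n, l), nu in freqs.items():
        if (n + 1, l) in freqs:
            large[(n, l)] = freqs[(n + 1, l)] - nu
        if (n - 1, l + 2) in freqs:
            small[(n, l)] = nu - freqs[(n - 1, l + 2)]
    return large, small

# toy asymptotic spectrum: nu ~ dnu * (n + l/2 + eps) minus a small l(l+1) term
dnu, eps = 135.0, 1.45
freqs = {(n, l): dnu * (n + l / 2 + eps) - 1.5 * l * (l + 1) / n
         for n in range(15, 25) for l in range(0, 3)}
large, small = separations(freqs)
print(np.mean(list(large.values())), np.mean(list(small.values())))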
here, is the difference in dimensionless between the two stars , the differences being evaluated at fixed fractional radius in the star .the first term on the right - hand side is a constant and just reflects the homologous dependence of the frequencies .the final term is some function of frequency , divided by mode inertia , which absorbs uncertainties from the near - surface layers .for some details of the construction of the kernel functions and , which are known functions derivable from the reference model , see the appendix . in an inversion, the left - hand side will be known ; all terms on the right - hand side are to be inferred , including the difference in between the reference model and target star .to motivate the rest of the paper we consider first the structural differences between a few stellar models .these will already indicate that rather small uncertainties in global parameters of stars , e.g. mass or age , can lead to large uncertainties in their structure .of course the positive side of that is that the stellar structure is sensitive to those parameters and so , if the observable mode frequencies are in turn sensitive to those aspects of the structure then we may have some hope of using the observations to constrain e.g. the mass and age of the star rather precisely .-0.2 cm panel ( a ) of figure [ fig : prhodiffs ] shows the relative differences in pressure and density , at fixed fractional radius , between two zams stars , of masses and .we note that even for two stars with rather similar masses the differences are large , of order unity .panel ( b ) shows the corresponding differences after the homology scaling has been taken out : this scaling is assumed taken out in the formulation presented in eq .( [ eq : diffeq ] ) .the effect is essentially to shift the two curves by a constant : although the differences are smaller , they are still large .these changes arise from nonhomologous differences in the surface layers which change the entropy of the convection zone . indeed , writing in the stars convective envelopes ( in this context only , denotes polytropic index ) , and noting that sound speed there is essentially determined by surface gravity , one finds that in that region possibly of more direct relevance for inversion of eq .( [ eq : diffeq ] ) is the difference in adiabatic sound speed ( , where is the first adiabatic exponent ) .[ fig : fig3 ] shows , the relative differences in are smaller than the differences in pressure and density , but still quite substantial for stars that differ in mass by as little 10 per cent .homology scaling has little effect on the differences in this case .for a reasonable range of masses , these differences scale linearly with the mass difference , so for example the corresponding scaled differences between and zams stars are half those illustrated here , to a very good approximation .a number of different linearized inversion methods have been developed in helioseismology , geophysics and diverse other areas of inversion applications .two flexible approaches are optimally localized averages ( ola ) and regularized least squares ( see e.g. 
christensen - dalsgaard et al .1990 for a description ) .the ola method explicitly constructs a linear combination of kernels that is localized at some location in the star and is small elsewhere : the corresponding linear combination of relative frequency differences is then a measure of the localized average of the structural differences .the application of sola in the asteroseismology of solar - type stars is discussed and illustrated by basu et al .we emphasize here that ola reveals the true extent to which the seismic data alone can resolve the aspect of the stellar interior under study .methods which on specific classes of problems appear to have superior performance to ola in resolving aspects of the stellar interior are introducing nonseismic information or assumptions in addition to the frequency data .the form of least - squares method used most extensively in helioseismology is regularized least - squares : the idea is to represent the solution with a set of basis functions more finely than can be resolved by the data and with ideally no bias about the form of the solution built into the basis ; but then to minimize the sum of chi - squared fit to the data and a penalty term which is large if the solution has undesirable characteristics .the most used penalty function is the integral over the star of the squared second derivative of the function under study with respect to radius . a different way of regularizing the least - squares solution , without introducing a penalty term , is to choose a drastically smaller set of basis functions .this alternative , which we do not claim is intrinsically superior , may give apparently better results from few data if the basis functions are chosen with appropriate intuition or good fortune .the reason is that one can introduce a huge amount of prejudice into the solution by forcing it to have a form determined by the basis functions . such a basis could , for example , force the buoyancy frequency to be zero in a convective core and permit a single discontinuity at the core boundary but not elsewhere : such assumptions may be reasonable , but it should be realized that they are additional to the seismic data .if the star actually had a second discontinuity , or a more gradual variation at the core boundary , such an inversion would not generally reveal those features .a helioseismic example is in finding the location of the base of the convective envelope , where remarkable precision ( taking into account uncertainties in abundance profiles , if the only uncertainty comes from data noise ) has been claimed ( basu & antia 1997 ) : this is credible only insofar as the base of the convection zone has precisely the form assumed in the inversion , because the true resolution at that location is much coarser than that .the true resolution ( e.g. 
vertically ) is essentially limited by the reciprocal of the largest vertical wavenumber of the eigenfunctions corresponding to the available data ( thompson 1993 ) .as a simple illustration of the apparent ability of least - squares inversion with a limited basis to infer structure even where the mode set has little resolving power , we take a basis of five functions chosen for algebraic convenience .they are approximately ( but only approximately ) able to represent the difference in between zams models of solar - like stars .the basis represents a function that is quadratic in for with zero derivative at , piecewise constant between radii , and , quadratic between and and zero for .the values of and , which are initially set to and respectively , are adjusted by hand to find a minimum of the chi - squared fit . in the first of two illustrative applications, we consider the inversion of frequency data ( 65 modes : , ; , , , , , ) from a zams star . in a real applicationwe would first calibrate the star by computing large and small separations from the data and using the approach of monteiro et al .( 2001 ) to arrive at a reference model ; in fact we simply took a reference model which was a zams star . before discussing the results of the inversion ,we first consider the data , i.e. the relative frequency differences , shown in fig .[ fig : fig4 ] .( for clarity , no noise has been added to the data , though it was added for the inversions . )the dominant trends are that the values are negative , around , because of the difference in between the two stars ( in fact , for these two stellar models ) ; and there is a roughly linear trend with frequency , in this frequency range , coming from near - surface differences .these two contributions can be estimated and removed by fitting an expression of the form ( [ eq : diffeq ] ) : for illustration , we show in fig . [ fig : fig5 ] what would remain : this is the signal from the interior , which contains the information that the inversion will use to infer conditions inside the star .the impression now is that the data contain a signal which has some oscillatory component ( from the rather abrupt change in the structural differences at the base of the convective envelope ) but is otherwise only weakly a function of frequency and increases with decreasing , indicating that the more deeply penetrating ( i.e. lower- modes ) sense increasing in the deep interior as one gets closer to the centre of the star .these features are indeed revealed by the inversion ( fig .[ fig : fig6 ] ) , which compares the exact -differences with the least - squares solutions for noise - free data and for data with gaussian noise with zero mean and uniform standard deviations , , and .the noise realization in the two panels is different , but within each panel the noise differs from case to case only by a multiplicative scaling .it can be seen that for low noise levels this very small basis enables the differences to be recovered rather well , including the structure beneath the convection zone and the variation of in the core . 
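to make the discussion of regularized and basis-restricted least squares in this and the preceding passage concrete, here is a minimal tikhonov-style sketch: a toy linear forward problem d = K m + noise is inverted by minimizing the error-weighted misfit plus a squared-second-derivative smoothness penalty. the kernel shapes, error bars and regularization weight are invented for illustration and are not the kernels or mode sets discussed in the text.

import numpy as np

def second_derivative(npts):
    # discrete second-derivative operator; its squared norm penalizes rough solutions
    d2 = np.zeros((npts - 2, npts))
    for i in range(npts - 2):
        d2[i, i:i + 3] = [1.0, -2.0, 1.0]
    return d2

def regularized_lsq(K, d, sigma, lam):
    # minimize ||(K m - d) / sigma||^2 + lam * ||D2 m||^2 via a stacked least squares
    A = np.vstack([K / sigma[:, None], np.sqrt(lam) * second_derivative(K.shape[1])])
    b = np.concatenate([d / sigma, np.zeros(K.shape[1] - 2)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# toy forward problem: 40 "frequency differences", 60 model points
rng = np.random.default_rng(0)
r = np.linspace(0.05, 0.95, 60)
m_true = 0.02 * np.exp(-((r - 0.3) / 0.1) ** 2)          # fake structural difference
K = np.array([np.sin((k + 1) * np.pi * r) ** 2 for k in range(40)]) / len(r)
sigma = np.full(40, 1e-4)
d = K @ m_true + rng.normal(0.0, 1e-4, size=40)
m_est = regularized_lsq(K, d, sigma, lam=1e4)            # lam tuned by hand
print(float(np.max(np.abs(m_est - m_true))))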
for larger noise levels the artificiality of the basis functionsbecomes more apparent ; also the solution for the higher noise levels is rather different for the two noise realizations , which gives some indication of the uncertainty even in this highly constrained solution .a second example is an application to the same mode set but with data from model of monteiro et al .( 2001 ) and gaussian data errors with .this star is slightly more massive and more evolved than the sun .again , we omitted the calibration step and inverted relative to a reference model of the present - age sun .the results are shown in fig .[ fig : fig7 ] .again the qualitative behaviour of the differences is recovered reasonably , including the downturn in the core .the discrepancies are perhaps partly attributable to the fact that the basis functions are not so well suited for representing this case as the zams case .it is a prejudice of some stellar astrophysicists ( it was indeed expressed a few times in cordoba during the workshop ) that helioseismic experience with the sun provides a poor example when it comes to asteroseismic inversion .but the similarities between the two applications are much more significant than their differences .an extremely important aspect of inversion in any context is to assess what the data really tell you and what information is being introduced by other assumptions or constraints .ola kernels ( basu et al . 2001 ) indicate what resolution can really be achieved without additional assumptions .but other approaches can advance our knowledge by allowing introduction of reasonable prejudices : e.g. , looking for signatures of sharp features , or introducing specific basis functions .the best apparent results are likely to be achieved if those functions are physically motivated , because the solution will accord with our physical intuition ( prejudice ). this may of course be dangerous .our example of a highly constrained least - squares inversion illustrates that qualitatively reasonable results can be obtained throughout a star by introducing assumptions about the form of the solution , even in regions where in fact the mode set provides little or no localized information ( cf .basu et al .one may have other grounds on which to believe that such a solution is plausible , but on the basis of the data alone it should be viewed sceptically .model calibration is a useful tool in its own right and for obtaining possible starting models for linearized asteroseismic inversions .carefully used , model calibration allows one to build in some prejudice ; and the combination of calibration and inversion extends the space of solutions that one explores . calibrating on large and small separations can be effective ( monteiro et al . 
2001 ) , always assuming that one has not neglected some important aspect of the physics : in that regard , the effects of rotation and magnetic fields need to be borne in mind ( see dziembowski & goupil 1998 ) , in solar - type stars as in many other pulsating stars .finally we note that mode identification may be a problem , even in solar - type stars .the asymptotic pattern of high - order p - mode frequencies may allow to be determined , but there may be some uncertainty in .such an uncertainty can be allowed for in the inversion by adding to the right - hand side an extra term where is some function of frequency .we have verified that , with our zams example , even restricting to be a constant function removes quite satisfactorily the effect of a misidentification of .more generally , could judiciously be chosen to reflect the variation of the large separation with frequency .this work was supported by the danish national research foundation through the establishment of the theoretical astrophysics center , and by the uk particle physics and astronomy research council .basu , s. , antia , h. m. , 1997 , mnras , 287 , 189 basu , s. , christensen - dalsgaard , j. , thompson , m. j. , 2001 , these proceedings charpinet , s. , 2001 , in proc .iau colloq .185 , radial and nonradial pulsations as probes of stellar physics , eds aerts , c. , bedding , t. r. & christensen - dalsgaard , j. , in press christensen - dalsgaard , j. , bedding , t. r. , kjeldsen , h. , 1995 , apj , 443 , l29 christensen - dalsgaard , j. , schou , j. , thompson , m. j. , 1990 , mnras , 242 , 353 dziembowski , w. a. , goupil , m .- j . , 1998 , in proc .`` workshop on science with a small space telescope '' , eds kjeldsen h. & bedding t.r . , aarhus university , p. 69gough , d. o. , 1993 , in astrophysical fluid dynamics , les houches session xlvii , eds zahn , j .- p . & zinn - justin , j. , elsevier , amsterdam , p. 399gough , d. o. , thompson , m. j. , 1991 , in solar interior and atmosphere , eds cox , a. n. , livingston , w. c. & matthews , m. , space science series , university of arizona press , p. 519 kosovichev , a. g. , 1999 , journal of computational and applied mathematics , 109 , 1 metcalfe , t. s. , 2001 , computational asteroseismology , ph.d .thesis , university of texas at austin , usa metcalfe , t. s. , nather , r. e. , winget , d. e. , 2000 , apj , 545 , 974 monteiro , m. j. p. f. g. , christensen - dalsgaard , j. , thompson , m. j. , 2001 , these proceedings pamyatnykh , a. a. , dziembowski , w. a. , handler , g. , pikall , h. , 1998 , a&a , 333 , 141 thompson , m. j. , 1993 , in proc .gong 1992 : seismic investigation of the sun and stars , ed .brown , t. m. , asp conference series , san francisco , vol .the derivation of kernels relating the linearized differences in structure to the differences in frequency ( cf .eq . [ eq : diffeq ] ) has been discussed by , e.g. 
, gough & thompson ( 1991 ) , gough ( 1993 ) , and kosovichev ( 1999 ) .as all those authors show , the kernels for either of the pairs of variables or are quite straightforward to derive from the equations of linear adiabatic oscillations together with the linearized equation of hydrostatic support .obtaining kernels for various other pairs , including the pair used in this paper , can be accomplished by first obtaining kernels for one of the other two pairs and then using the following piece of manipulation .it is sufficiently ubiquitous ( occurring often when one wishes to transform from kernels for a pair including to some other variable pair ) that we write it rather generally .let be a solution of where prime denotes differentiation with respect to ( or if everything including is expressed in dimensionless variables ) , for a given function ( so is a functional of ) , with boundary conditions and .then , provided and , where is the mass interior to radius , and letting denote integration from to ( to ) , note that holds in general ; and in our present application , we scale all structural quantities by , and to make them dimensionless , so is forced to be true .these two conditions also mean that is zero , so any multiple of may be added to a kernel multiplying .note that such an additional contribution to makes no change to the right - hand side of eq .( [ eq : a1 ] ) and hence the contribution to from such an addition is zero .obtaining kernels for additionally requires an assumption about the equation of state through , since the oscillations do not know directly about the chemical abundances . in the following we write for convenience we record the following transformations : with ; with ; and with .
some issues of inverting asteroseismic frequency data are discussed , including the use of model calibration and linearized inversion . an illustrative inversion of artificial data for solar - type stars , using least - squares fitting of a small set of basis functions , is presented . a few details of kernel construction are also given .
homodyne detection is an experimental method that is used to reconstruct quantum states of coherent light by repeatedly measuring a discrete set of field quadratures .usually , a very high detection efficiency and ad - hoc designed apparatuses with low electronic noise are required .new methods capable of discriminating between different quantum states of light , even with low detection efficiencies , will pave the road to the application of quantum homodyne detection for studying different physical systems embedded in a high noise environment .for this purpose , specific quantum statistical methods , based on minimax and adaptive estimation of the wigner function , have been developed in .these approaches allow for the efficient reconstruction of the wigner function under any noise condition , at the price of acquiring larger amounts of data .hence , they overcome the limits of more conventional pattern function quantum tomography . the important consequence of this novel statistical approach is that the detection efficiency threshold can be overcome and quantum tomography is still practicable when the signals are measured with appropriate statistics .the scope of this paper is to report the results of this method tested by performing numerical experiments .indeed , we consider a linear superposition of two coherent states and numerically generate homodyne data according to the corresponding probability distribution distorted by an independent gaussian noise simulating efficiencies lower than . by properly expanding the set of numerically generated data ,we are able to reconstruct the wigner function of the linear superposition within errors that are compatible with the theoretical bounds .our results support the theoretical indications that homodyne reconstruction of linear superposition of quantum states is indeed possible also at efficiencies lower than 0.5 .let us consider a quantum system with one degree of freedom described by the hilbert space of square integrable functions over the real line .the most general states of such a system are density matrices , namely convex combinations of projectors onto normalised vector states any density matrix can be completely characterised by the associated wigner function on the phase - space ; namely , by the non positive - definite ( pseudo ) distribution defined by =\frac{1}{2\pi}\int_{\mathbb{r}}{\rm d}u\,{\rm e}^{i\,u\,p}\,\left < q - v/2\vert\hat{\rho}\vert q+v/2\right>\ . \label{wigner}\ ] ] here and are the position and momentum operators obeying the commutation relations =i ] provides a complete characterization of the signal state . using the annihilation and creation operators one constructs position and momentum - like operators , and . with respect to the latter ,the quadrature operator reads : quadrature operators have continuous spectrum extending over the whole real line , ; given a generic one - mode photon state associated with a density matrix , its diagonal elements with respect to the ( pseudo ) eigenvectors represent the probability distribution over the quadrature spectrum . 
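several symbols of the wigner-function definition above have been lost in transcription; for reference, the standard textbook form that the garbled display appears to correspond to is (with \hbar = 1)

W_\rho(q,p)\;=\;\frac{1}{2\pi}\int_{\mathbb{R}}{\rm d}v\,{\rm e}^{\,i\,v\,p}\,\Big\langle q-\frac{v}{2}\Big|\,\hat{\rho}\,\Big|q+\frac{v}{2}\Big\rangle\,,

and, in the same convention, the quadrature operator measured in homodyne detection is \hat{X}_\phi=\hat{Q}\cos\phi+\hat{P}\sin\phi, whose probability density p_\rho(x,\phi) the homodyne samples follow.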
in homodyne detection experiments the collected data consist of pairs of quadrature amplitudes and phases :these can be considered as independent , identically distributed stochastic variables .given the probability density , one could reconstruct the wigner function by substituting the integration with a sum over the pairs for a sufficiently large number of data .however , the measured values are typically not the eigenvalues of , rather those of where is a normally distributed random variable describing the possible noise that may affect the homodyne detection data and parametrizes the detection efficiency that increases from to with increasing from to . the noise can safely be considered gaussian and independent from the statistical properties of the quantum state , that is , can be considered as independent from .as briefly summarised in appendix a , the wigner function is reconstructed from a given set of measured homodyne pairs , , by means of an estimator of the form -\frac{x_\ell}{\sqrt{\eta}}\right)\ , \\ \label{estimator2 } & & k_h^\eta\left([(q , p);\phi_\ell]-\frac{x_\ell}{\sqrt{\eta}}\right)=\int_{-1/h}^{1/h}{\rm d}\xi\,\frac{|\xi|}{4\pi}\ , { \rm e}^{i\xi(q\cos\phi_\ell+p\sin\phi_\ell - x_\ell/\sqrt{\eta})}\ , { \rm e}^{\gamma\xi^2}\ .\end{aligned}\ ] ] this expression is an approximation of the wigner function in by a sum over homodyne pairs .the parameter serves to control the divergent factor , while , through the characteristic function of a circle of radius around the origin , restricts the reconstruction to the points such that .both parameters have to be chosen in order to minimise the reconstruction error which is conveniently measured by the -distance between the true wigner function and the reconstructed one , . since such a distance is a function of the data through , the -norm has to be averaged over different sets , , of quadrature data : \equiv e\left[\int_{\mathbb{r}^2}{\rm d}q{\rm d}p\ , \left|w_\rho(q , p)-w^{\eta , r}_{h , n}(q , p)\right|^2\right]\ , \ ] ] where denotes the average over the data samples , each sample consisting of quadrature pairs corresponding to measured values of with ] .] the quadrature probability distribution can be conveniently related to the wigner function by passing to polar coordinates , , such that and : \\ \nonumber & = & \int_0^\pi{\rm d}\phi\int_{-\infty}^{+\infty}{\rm d}\xi\frac{|\xi|}{(2\pi)^2}\int_{-\infty}^{+\infty}{\rm d}x\ , { \rm e}^{i\xi(q\cos\phi+p\sin\phi - x)}\,p_\rho(x,\phi)\\ \label{fourier } & = & \int_0^\pi{\rm d}\phi\int_{-\infty}^{+\infty}{\rm d}\xi\frac{|\xi|}{(2\pi)^2}\int_{-\infty}^{+\infty}{\rm d}x\ , { \rm e}^{i\xi(q\cos\phi+p\sin\phi)}\,f[p_\rho(x,\phi)](\xi)\ , \end{aligned}\ ] ] where (\xi) ] into , one can finally write the wigner function in terms of the noisy probability distribution : reconstruction is particularly useful to expose quantum interference effects that typically spoil positivity of the wigner function : it is exactly these effects that are claimed not to be accessible by homodyne reconstruction in presence of efficiency lower than , namely when in is smaller than .however , in it is theoretically shown that only requires increasingly larger data sets for achieving small reconstruction errors .however , this claim was not put to test in those studies as the values of in the considered numerical experiments were close to . 
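the kernel estimator defined earlier in this passage can be evaluated numerically along the following lines. this is a sketch under stated assumptions: the overall 1/n normalization and the way the cut-off chi_r is applied are our reading of the truncated displays (chosen so that the estimator is consistent with the inversion formula), the simulated noise model is the usual y = sqrt(eta) x + sqrt((1-eta)/2) z convention matching gamma = (1-eta)/(4 eta), and the test state is simply the vacuum.

import numpy as np

def kernel(t, h, eta, nxi=500):
    # K_h^eta(t) = int_{-1/h}^{1/h} dxi |xi|/(4 pi) exp(i xi t) exp(gamma xi^2),
    # gamma = (1 - eta)/(4 eta); by symmetry only the cosine part survives, and
    # the xi integral is evaluated with a simple trapezoidal rule
    gamma = (1.0 - eta) / (4.0 * eta)
    xi = np.linspace(0.0, 1.0 / h, nxi)
    vals = xi / (2.0 * np.pi) * np.exp(gamma * xi ** 2) * np.cos(np.outer(t, xi))
    dxi = xi[1] - xi[0]
    return (vals.sum(axis=-1) - 0.5 * (vals[:, 0] + vals[:, -1])) * dxi

def wigner_estimate(q, p, x, phi, eta, h, r):
    # assumed form: (1/n) sum_l K_h^eta(q cos(phi_l) + p sin(phi_l) - x_l/sqrt(eta)),
    # set to zero outside the disc q^2 + p^2 <= r^2 (the cut-off chi_r)
    if q * q + p * p > r * r:
        return 0.0
    t = q * np.cos(phi) + p * np.sin(phi) - x / np.sqrt(eta)
    return float(np.mean(kernel(t, h, eta)))

# toy data: vacuum state (quadratures ~ N(0, 1/2)) measured with efficiency eta < 0.5
rng = np.random.default_rng(2)
n, eta = 10000, 0.45
phi = rng.uniform(0.0, np.pi, n)
x = np.sqrt(eta) * rng.normal(0.0, np.sqrt(0.5), n) \
    + np.sqrt((1.0 - eta) / 2.0) * rng.normal(0.0, 1.0, n)    # assumed noise model
print(wigner_estimate(0.0, 0.0, x, phi, eta, h=0.33, r=3.0))  # ~ 1/pi = 0.318, up to
                                                              # bias and monte carlo error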
instead , we here consider values and reconstruct the wigner function of the following superposition of coherent states with any complex number .the wigner function corresponding to the pure state is shown in figure [ true ] for .its general expression together with that of its fourier transform and of the probability distributions and are given in appendix b. ( ; ).,scaledwidth=45.0% ] in appendix c , a derivation is provided of the -errors and of the optimal dependence of and on the number of data and on a parameter that takes into account the fast decay of both the wigner function and its fourier transform for large values of their arguments .the following upper bound to the mean square error in is derived : with and as given in . as explained in appendix c ,the quantities do not depend on , and . by taking the derivatives with respect to and ,one finds that the upper bound to the mean square deviation is minimised , for large , by choosing + we generated samples of quadrature data distributed according to the noisy probability density explicitly given in of appendix b , considering an efficiency lower than ( ) .starting from each set of simulated quadrature data we reconstructed the associated wigner function by means of and .the averaged reconstructed wigner functions ] over samples of noisy quadrature data ( efficiency ) .two different values of are considered.,scaledwidth=65.0% ] the mean square error of the reconstructed wigner functions has been computed as in and compared with the mathematically predicted upper bounds .the dependence of the upper bound reconstruction error on the parameter is discussed at the end in appendix c. in table [ tab2 ] , we compare the reconstruction errors with their upper bound for two significant values of ..calculated for samples of noisy quadrature data ( ) for two different values of .comparison with the mathematical prediction of the upper bound .[ cols="^,^,^",options="header " , ] + despite common belief , the interference features clearly appear in the reconstructed wigner function also for efficiencies lower than and the reconstruction errors are compatible with the theoretical predictions . in the next section ,we make a quantitative study of the visibility of the interference effects .the interference effects in the state can be witnessed by an observable of the form with respect to an incoherent mixture of the two coherent states , its mean value is given by therefore , from it follows that the phase - space function associated to is for details see and in appendix b. let us denote by , the estimated wigner function in for the -th set of collected quadrature data .it yields a reconstructed mean value of which one can compute mean , , and standard deviation , , with respect to the sets of collected data : we computed and with simulated sets of noisy data with for two different numbers of simulated quadrature data ( see figure [ ovsbeta ] ) .we repeated the procedure for different values of the parameter .the results are presented in figure [ ovsbeta ] , where the error bars represent the computed . as a function of .the error bars represent . 
for each , set of noisy quadrature data have been considered .the square markers refer to ( blue marker and green markers ) while the round ones refer to ( ) .the error bars for have been multiplied by in order to make them more visible.,scaledwidth=60.0% ] + in order to be compatible with the interference term present in , the reconstructed wigner functions should yield an average incompatible with the incoherent mean value in , namely such that we thus see that the condition in is verified for , that is the reconstructed wigner functions are not compatible with incoherent superpositions of coherent states , if enough data are considered .we also notice that the same behavior is valid for the high efficiency .the dependence of the errors on can be understood as follows : when decreases the integration interval in becomes larger and approaches the exact interval ]. moreover , its fourier transform reads \right](w)&=&\int_{-\infty}^{+\infty}{\rmd}q\int_{-\infty}^{+\infty}{\rm d}p\,{\rm e}^{-i(qw_1+pw_2)}\,e\left[w^{\eta}_{h , n}(q , p)\right]\\ \label{favrec } & = & \chi_{[-1/h,1/h]}(\|w\|)\,f\left[w_\rho\right](w)\ , \qquad w=(w_1,w_2)\ , \end{aligned}\ ] ] where }(\|w\|) ]. then , by means of plancherel equality , one gets -w_\rho(q , p)\right|^2\leq \int_{\mathbb{r}^2}{\rm d}q{\rm d}p\,\left| e\left[w^{\eta}_{h , n}(q , p)\right]-w_\rho(q , p)\right|^2\\ \nonumber & & \hskip 0.5cm=\big\|e\left[w^\eta_{h , n}\right]-w_\rho\big\|_2 ^ 2=\frac{1}{4\pi^2 } \big\|f\left[e\left[w^\eta_{h , n}\right]\right]-f\left[w_\rho\right]\big\|_2 ^ 2\\ \label{error4 } & & \hskip 0.5 cm = \frac{1}{4\pi^2}\big\|f\left[w_\rho\right]\,\chi_{[-1/h,1/h]}-f\left[w_\rho\right]\big\|_2 ^ 2=\frac{1}{4\pi^2}\int_{\|w\|\geq1/h}{\rm d}w\ , \big|f\left[w_\rho\right](w)\big|^2\ .\end{aligned}\ ] ] the variance contribution can be estimated as follows : firstly , by using and , one recasts it as -\left| e\left[w^{\eta}_{h , n}(q , p)\right]\right|^2\right)= \nonumber\\ & & \hskip-2.8 cm = \frac{1}{\pi^2\,n}\left\{e\left[\left\|k^{\eta}_{h}\left([\,(q , p)\,;\phi]-\frac{x } { \sqrt{\eta}}\right)\,\chi_r(q , p)\right\|^2\right ] \,-\,\left\|e\left[k^{\eta}_{h}\left([\,(q , p)\,;\phi]-\frac{x}{\sqrt{\eta}}\right)\,\chi_r(q , p)\right]\right\|^2\right\ } \ . \label{varrec}\end{aligned}\ ] ] then , a direct computation of the first contribution yields the upper bound -\frac{x } { \sqrt{\eta}}\right)\,\chi_r(q , p)\right\|^2\right]\leq \sqrt{\frac{\pi}{\gamma}}\frac{r^2}{16\,h}\,{\rm e}^{\frac{2\gamma}{h^2}}\,(1+o(1))\ , \ \gamma:=\frac{1-\eta}{4\eta}\ , \ ] ] with denoting a quantity which vanishes as when . on the other hand ,the second contribution can be estimated by extending the integration over the whole plane and using together with : -\frac{x}{\sqrt{\eta}}\right)\,\chi_r(q ,p)\right]\right\|^2\leq\frac{1}{4\pi^2}\left\|f\left[w_\rho\right]\right\|^2= \left\|w_\rho\right\|^2\leq \frac{1}{2\pi}\ .\ ] ] let us consider now the specific case of , the superposition of coherent states defined in .the auxiliary parameter labelling the class of density matrices in footnote [ foot1 ] with can be used to further optimize the reconstruction error . 
in particular , since , we get the upper bounds (w_1,w_2)\right|\leq 2 \\\nonumber \left|w_\alpha(q , p)\right|^2&\leq&\frac{3}{4\pi^2}\left ( { \rm e}^{-2(q-\sqrt{2}\alpha_1)^2 - 2(p-\sqrt{2}\alpha_2)^2}\,+\ , { \rm e}^{-2(q+\sqrt{2}\alpha_1)^2 - 2(p+\sqrt{2}\alpha_2)^2}\right.\\ \label{bound2 } & & \left .+ 4\,{\rm e}^{-2(q^2+p^2)}\right)\leq \frac{3}{2\pi^2}\left({\rm e}^{-(\sqrt{2}r-|\alpha|)^2}+2{\rm e}^{-2r^2}\right ) \ , \\\nonumber \left|f\left[w_\alpha\right](w_1,w_2)\right|^2&\leq & \frac{3}{4}\left ( { \rm e}^{-\frac{(w_1 + 2\sqrt{2}\alpha_2)^2+(w_2 - 2\sqrt{2}\alpha_1)^2)}{2}}\,+\ , { \rm e}^{-\frac{(w_1 - 2\sqrt{2}\alpha_2)^2+(w_2 + 2\sqrt{2}\alpha_1)^2)}{2}}\right .\\ \label{bound3 } & & \left .+ 4\,{\rm e}^{-\frac{w_1 ^ 2+w_2 ^ 2}{2}}\right)\leq\frac{3}{2}\left({\rm e}^{-(s/\sqrt{2}-2|\alpha|)^2}+2{\rm e}^{-s^2/2}\right)\ , \end{aligned}\ ] ] where in and in .then , one derives the upper bounds (w_1,w_2)\right|^2\ , { \rm e}^{2\beta(w_1 ^ 2+w_2 ^ 2)}\\ & & \hskip+2.5 cm \leq\frac{6\pi\left(1+{\rm e}^{16\beta|\alpha|^2/(1 - 4\beta)}\left(1+\frac{2\sqrt{\pi}|\alpha|}{\sqrt{1 - 4\beta}}-{\rm e}^{-4|\alpha|^2/(1 - 4\beta)}\right)\right)}{1 - 4\beta}\ , \end{aligned}\ ] ] which simultaneously hold for .then , by means of the cauchy - schwartz inequality , one can estimate the contribution to the error , and similary for , (w_1,w_2)\right|^2\leq{\rm e}^{-\beta / h^2}\,\delta_3(h)\ , \end{aligned}\ ] ] where if , otherwise , and altogether , the previous estimates provide the following upper bound to the mean square error in - : where do not depend on , and and is the leading order term in . by setting the derivatives with respect to and of the right hand side equal to , one finds whenever is such that the arguments of the logarithms are much smaller than the number of data , to leading order in the upper bound to the mean square deviation is minimised by the range of possible values of is .however , the upper bound becomes loose when and . in the first case , it is the quantity which diverges , in the second one , it is the variance contribution which diverges as the logarithm of the number of data .it thus follows that the range of values $ ] where the numerical errors are comparable with their upper bounds is roughly between and for as indicated by the following figure [ upperbound ] .smithey , d. t. and beck , m. and raymer , m. g. and faridani , a. , measurement of the wigner distribution and the density matrix of a light mode using optical homodyne tomography : application to squeezed states and the vacuum , " _ phys .lett . _ * 70 * , 1244 ( 1993 ) n. b. grosse , n. owschimikow , r. aust , b. lingnau , a. koltchanov , m. kolarczik , k. ldge , and u. woggon , pump - probe quantum state tomography in a semiconductor optical amplifier , " _ opt .express _ * 22 * , 32520 ( 2014 ) m. esposito , f. benatti , r. floreanini , s. olivares , f. randi , k. titimbo , m. pividori , f. novelli , f. cilento , f. parmigiani , and d. fausti , pulsed homodyne gaussian quantum tomography with low detection efficiency " , _new j. phys ._ , * 16 * , 043004 ( 2014 ) .
standard quantum state reconstruction techniques indicate that a detection efficiency of 0.5 is an absolute threshold below which quantum interferences cannot be measured. however, alternative statistical techniques suggest that this threshold can be overcome at the price of increasing the statistics used for the reconstruction. in the following we present numerical experiments showing that quantum interferences can be measured even with a detection efficiency smaller than 0.5. at the same time we provide a guideline for the tomographic reconstruction of quantum states from homodyne data collected by low-efficiency detectors.
the term ` stochastic resonance ' was introduced in the early 80s ( see and ) in the study of periodic advance of glaciers on earth .the stochastic resonance is the effect of nonmonotone dependence of the response of a system on the noise when this noise ( for instance the temperature ) is added to a periodic input signal ( see e.g. , in which the author explains also differences and similarities with the notion of stochastic filtering ) .an extensive review on stochastic resonance and its presence in different fields of applications can be found in .following , as stochastic resonance we intend the phenomenon in which the transmission of a signal can be improved ( in terms of statistical quantities ) by the addition of noise . from the statistical point of viewthe problem is to estimate a signal transmitted through a channel .this signal has to be detected by a receiver that can reveal signals louder than a threshold .if is bounded from above by , the signal is not observable and the problem has not a solution .but , if some noise is added to the signal , the perturbed signal may be observable and inference can be done on .too few noise is not sufficient to give good estimates and too much noise deteriorates excessively the signal .the optimal in some sense level of the noise will be called stochastic resonance in this framework .usually ( see ) the criterion applied to measure optimality of estimators are the shannon mutual information or the kullback divergence .more recently the fisher information quantity have been also proposed ( see and ) .here we are concerned with the fisher information quantity .it happens that this quantity , as a function of the noise , can be maximized for certain noise structures . if there is only one global maximum , the corresponding noise level is the value for which we have stochastic resonance , if several local maxima are present , the phenomenon is called stochastic multi - resonance . in this paperwe study the problem of estimation and hypotheses testing for the following model : we suppose to have a threshold and a subthreshold constant and non negative signal , .we add , in continuous time , a noise that is a trajectory of a diffusion process and we observe the perturbed signal where is the level of the noise .we propose two schemes of observations : _i ) _ we observe only the proportion of time spent by the perturbed signal over the threshold and _ ii ) _ we measure the energy of the perturbed signal when it is above the threshold .the asymptotic is considered as time goes to infinity .this approach differs from the ones in the current statistical literature basically for two reasons : the noise structure is an ergodic diffusion process and not a sequence of independent and identically distributed random variables and data are collected in continuous time .this second aspect is a substantial difference but it is not a problem from the point of view of applications for the two schemes of observations proposed if one thinks at analogical devices .we propose two different estimators for the schemes and we study their asymptotic properties .we present an example where , in both cases , it emerges the phenomenon of stochastic resonance . 
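as a concrete picture of the two observation schemes, the sketch below simulates an euler-discretized ornstein-uhlenbeck noise path (the example used later in the text), adds it to a subthreshold constant signal, and records both the proportion of time the perturbed signal spends above the threshold and a time-averaged "energy" above it; the numerical values of the signal, threshold and noise level, and the precise definition used for the energy, are illustrative assumptions.

import numpy as np

def observe(S, tau, sigma, T=500.0, dt=0.01, rng=None):
    # simulate X_t = S + sigma * xi_t with OU noise d(xi) = -xi dt + dW (stationary
    # law N(0, 1/2)); return (fraction of time above tau, time-averaged energy above tau)
    rng = np.random.default_rng() if rng is None else rng
    n = int(T / dt)
    dW = rng.normal(0.0, np.sqrt(dt), n)
    xi = np.empty(n)
    xi[0] = 0.0
    for k in range(1, n):
        xi[k] = xi[k - 1] * (1.0 - dt) + dW[k]
    x = S + sigma * xi
    above = x > tau
    time_frac = float(above.mean())                    # scheme (i)
    energy = float(np.sum(x[above] ** 2) * dt / T)     # scheme (ii), our reading
    return time_frac, energy

# a subthreshold signal (S < tau) is never seen without noise ...
print(observe(0.3, 0.5, 0.0, rng=np.random.default_rng(3)))
# ... but once noise is added, the time spent above the threshold and the
# energy above it both carry information about S
for sigma in (0.2, 0.5, 1.0, 3.0):
    print(sigma, observe(0.3, 0.5, sigma, rng=np.random.default_rng(3)))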
for the same model we also solve the problem of testing the simple hypothesis against the simple alternative by applying the bayesian maximum a posterior probability criterion .it emerges that the overall probability of error is nonmonotonically dependent on .we show again that there exists a non trivial local minimum of this probability that is again the effect of stochastic resonance .the presence of stochastic resonance in this context is noted for the first time here . the paper is organized as follows . in section[ sec : model ] we set up the regularity assumptions of the model . in sections[ sec : time ] and [ sec : energy ] we prove some asymptotic properties estimators for the two schemes and we calculate numerically the points where the fisher information quantity attains its maximum for both models .it turns out that the estimators proposed are asymptotically equivalent to the maximum likelihood estimators .section [ sec : test ] is devoted to the problem of hypotheses testing .all the figures are collected at the end of the paper .let be the threshold and a constant signal . taking will not influence the calculations that follows butmay improve the exposition , so we use this assumption .let be a given diffusion process solution to the following stochastic differential equation with non random initial value .the process is supposed to have the ergodic property with invariant measure and invariant distribution function )$ ] as .the functions and satisfy the global lipschitz condition where is the lipschitz constant . under condition [ cond : c1 ], equation has a unique strong solution ( see e.g. ) but any equivalent condition to [ cond : c1 ] can be assumed because we do not use explicitly it in the sequel .the following conditions are needed to ensure the ergodicity of the process .if and then there exists the stationary distribution function and it takes the following form again , any other couple of conditions that imply the existence of can be used instead of [ cond : c3 ] and [ cond : c3 ] .we perturb the signal by adding , proportionally to some level , the trajectory diffusion process into the channel. the result will be the perturbed signal .this new signal will be detectable only when it is above the threshold .moreover , is still ergodic with trend and diffusion coefficients respectively and and initial value , but we will not use directly this process . we denote by the observable part of the trajectory of , being the indicator function of the set .we consider two possibile schemes of observation : * we observe only the proportion of time spent by over the threshold * we measure the energy of the signal in the next sections , for the two models we establish asymptotic properties of estimators given by the generalized method of moments . in different properties of the generalized method of moments for ergodic diffusion processes are studied . in this notewe follows the lines given in the paper of for the i.i.d . setting .these results are interesting in themselves independently from the problem of stochastic resonance .we give an example of stochastic resonance based on the process where the phenomenon of stochastic resonance appears pronounced and in which results in a closed form can be written down .the random variable can be rewritten in terms of the process as by the ergodic property of we have that where has as distribution function . from itderives that so that is a one - to - one continuous function of . from the glivenko - cantelli theorem ( see e.g. 
) for the empirical distribution function ( edf ) defined by follows directly that is a -consistent estimator of thus also is a -consistent estimator for .we can calculate the asymptotic variance of this estimator .it is known that ( see and ) the edf is asymptotically gaussian and in particular where is the inverse of the analogue of the fisher information quantity in the problem of distribution function estimation : where and .the quantity is also the minimax asymptotic lower bound for the quadratic risk associated to the estimation of , so that is asymptotically efficient in this sense .the asymptotic variance of can be derived by means of the so - called -method ( see e.g. ) : thus where is the density of .the quantity can also be derived from the asymptotic minimal variance of the edf estimator .in fact , with little abuse of notation , by putting and we have that we now show that also maximizes the approximate likelihood of the model .in fact , for the central limit theorem for the edf we have where is a standard gaussian random variable .thus where is the distribution function of .we approximate the likelihood function of by that is maximal when thus , the maximum likelihood estimator of ( constructed on the approximated likelihood ) reads so if the approximation above is acceptable , one can infer the optimality property of of having minimum variance from being also the maximum likelihood estimator . to view the effect of stochastic resonance on the fisher information we consider a particular example . by setting and the noise become a standard process solution to the stochastic differential equation in such a case , the ergodic distribution function is the gaussian law with zero mean and variance 1/2 .the asymptotic variance assumes the following form where is the classical error function .in figure [ fig1 ] it is shown that for this model there exists the phenomenon of stochastic resonance . for a fixed level of noise the fisher information increases as the signal is closer to the threshold .for a fixed value of the signal , the fisher information , as a function of , has a single maximum , that is the optimal level of noise .for example , if then the optimal level is and for it is .suppose that it is possibile to observe not only the time when the perturbed process is over the threshold but also its trajectory above , say , .we now show how it is possibile to estimate the unknown signal from the equivalent of the energy of the signal for : literally from the quantity we use the following general result from on the estimation of functionals of the invariant distribution functions for ergodic diffusion processes . _let and be such that .then is a -consistent estimator for where is distributed according to . in our case and .the estimator can be rewritten as and it converges to the quantity that is a continuous and increasing function of , .its inverse allows us to have again . by applying the -method once again, we can obtain the asymptotic variance of from the asymptotic variance of . where the asymptotic variance of is given by ( see ) where and its inverse is also the minimal asymptotic variance in the problem of estimation of functionals for ergodic diffusion .thus , the asymptotic variance of is given by * remark : * _ by the asymptotic normality of follows that is also the value that maximizes the approximate likelihood function .in fact , as in the previous example , if we approximate the density function of with it is clear that is its maximum . 
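a monte carlo version of the moment-type estimator discussed above can be sketched as follows: the observed fraction of time over the threshold estimates P(S + sigma*xi > tau), where xi has the stationary N(0, 1/2) law of the OU example, and inverting this relation in S gives the estimate; repeating the experiment at several noise levels makes the non-monotone dependence of the estimator's spread on the noise level (stochastic resonance) visible. all numerical settings are illustrative, and for very small noise the signal is almost never seen, so the estimate is dominated by the clipping of the empty-observation case.

import numpy as np
from statistics import NormalDist

def estimate_S(time_frac, tau, sigma):
    # invert time_frac ~ P(S + sigma*xi > tau) = 1 - Phi((tau - S)/(sigma*sqrt(1/2)))
    # for S, Phi being the standard normal cdf (the OU noise is N(0, 1/2) in equilibrium)
    p = min(max(time_frac, 1e-6), 1.0 - 1e-6)          # guard against 0 or 1
    return tau - sigma * np.sqrt(0.5) * NormalDist().inv_cdf(1.0 - p)

def time_above(S, tau, sigma, T=200.0, dt=0.05, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = int(T / dt)
    dW = rng.normal(0.0, np.sqrt(dt), n)
    xi = np.empty(n)
    xi[0] = 0.0
    for k in range(1, n):
        xi[k] = xi[k - 1] * (1.0 - dt) + dW[k]
    return float(np.mean(S + sigma * xi > tau))

# spread of the estimator as a function of the noise level sigma
rng = np.random.default_rng(4)
S_true, tau, reps = 0.3, 0.5, 40
for sigma in (0.1, 0.3, 0.6, 1.0, 2.0, 4.0):
    est = [estimate_S(time_above(S_true, tau, sigma, rng=rng), tau, sigma)
           for _ in range(reps)]
    print(sigma, round(float(np.mean(est)), 3), round(float(np.std(est)), 3))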
_ as before , we put in evidence the phenomenon of stochastic resonance by using the process as noise .the quantities involved ( and ) transform into the following ( from which it appears that is an increasing function of ) and in figure [ fig1 ] it is plotted the fisher information of the model as a function of and .also in this case there is evidence of stochastic resonance . for a fixed value of then possibile to find the optimal noise level .for example , taking then we have stochastic resonance at and for , .we now study a problem of testing two simple hypotheses for the model discussed in the previous section . as in , we apply the maximum _ a posteriori _ probability ( map ) criterion .we will see that the decision rules for our model are similar to the one proposed by chapeau - blondeau in the i.i.d . setting .given the observation we want to verify the null hypothesis that the unknown constant signal is against the simple alternative , with : suppose that , before observing , we have a _prior _ information on the parameter , that is and .the map criterion uses the following likelihood ratio and the decision rule is to accept whenever ( decision ) or refuse it otherwise ( decision ) .the overall probability of error is let now be then , the likelihood appears as to write explicitly the decision rule and then study the effect of stochastic resonance we have to distinguish three cases : , and . 1 .let it be , then put then , if accept and .if , then if reject otherwise accept it . in both cases 2 .let it be , then put then , if reject and .if , then if accept otherwise reject it . in both cases 3 .let it be , then put then , if reject otherwise accept it . in both cases as before , we apply this method to the model . in this casethe variance , for a fixed threshold and noise level , is a non decreasing function of being only a scale factor ( figure [ fig2 ] gives a numerical representation of this statement ) .thus , the is , in general , given by formula .what is amazing is the behavior of . in figure [ fig3 ]it is reported the graph of as a function of and given and . for around 1/2the shows the effect of stochastic resonance .so it appears that in some cases the noise level can reduce sensibly the overall probability of making the wrong decision .this kind of behavior is non outlined in the work of .* remark : * _ following the same scheme , similar results can be obtained for the model _ii ) _ when we observe the energy . in this caseit is sufficient to replace in the values of and with the quantities and with in the decision rule . _the use of ergodic diffusions as noise in the problem of stochastic resonance seems quite powerful .characterizations of classes of ergodic process that enhance the stochastic resonance can be done ( see e.g. ) but not in a simple way as in the i.i.d .case as calculations are always cumbersome .the problem of a parametric non constant signal can also be treated while the full nonparametric non constant signal requires more attention and will be object for further investigations . for i.i.d .observations , and considered the problem of non parametric estimation for regression models of the form , . their approach can be applied in this context. other criterion of optimality than the fisher information quantity can be used as it is usually done in information theory ( e.g. 
shannon mutual information or kullback divergence ) .the analysis of the overall probability of error seems to put in evidence something new with respect to the current literature ( see e.g. ) .it is worth noting that in a recent paper models driven by ergodic diffusions have also been used but the effect of stochastic resonance is not used to estimate parameters .barndorff - nielsen , o. , cox , d.r . , _asymptotic techniques for use in statistics_. chapman & hall , 1989 , bristol .benzi , r. , sutera , a. , vulpiani , a. , the mechanism of stochastic resonance ._ j. phys .a _ , v * 14 * , ( 1981 ) , l453-l458 .berglund , n. , gentz , b. , a sample - path approach to noise - induced synchronization : stochastic resonance in a double - well potential .wias preprint no .627 , ( 2000 ) .chapeau - blondeau , f. , nonlinear test statistic to improve signal detection in non - gaussian noise ._ ieee sig ._ , v * 7 * , n 7 , ( 2000 ) , 205 - 207 .chapeau - blondeau , f. , rojas - varela , j. , information - theoretic measures improved by noise in nonlinear systems .14th int . conf . on math .theory of networks and systems _ , perpignane , france , ( 2000 ) , 79 - 82 .greenwood , p.e . ,ward , l.m . , wefelmeyer , w. , statistical analysis of stochastic resonance in a simple setting ._ phys . rev ., v * 60 * , n 4 , ( 2000 ) , 4687 - 4695 .gammaitoni , l. , hnggi , p. , jung , p. , marchesoni , f. , stochastic resonance ._ rev . of modern phys ._ , 70 , ( 1998 ) , 223 - 288 .klimontovich , yu.l ., what are stochastic filtering and stochastic resonance ?_ physics - uspecky _ , v * 42 * , n 1 , ( 1999 ) , 37 - 44 . kutoyants , yu.a . ,efficiency of the empirical distribution function for ergodic diffusion ._ bernoulli _ , v * 3 * , n 4 , ( 1997a ) , 445 - 456 . kutoyants , yu.a . , on semiparametric estimation for ergodic diffusion .a. razmadze math ._ , 115 , ( 1997b ) , 45 - 58 .kutoyants , yu.a . , _ statistical inference for ergodic diffusion processes_. monography to appear ( 2000 ) .ibragimov , i.a . ,khasminskii , r.z . , _ statistical estimation ( asymptotic theory)_. springer , 1981 , new york .liptser , r.s . ,shiryayev , a.n ., _ statistics of random processes _ , part .springer - verlag , 1977 , new york .mller , u. u. , nonparametric regression for threshold data ._ canadian j. statist _ , * 28 * , ( 2000 ) , to appear .mller , u. u. , ward , l. m. , stochastic resonance in a statistical model of a time - integrating detector _e ( 3 ) _ , * 61 * , ( 2000 ) , to appear .negri , i. ( 1998 ) .stationary distribution function estimation for ergodic diffusion processes . _ statistical inference for stochastic processes _ , v * 1 * , n 1 , 61 - 84 .nicolis , c. , stochastic aspects of climatic transitions response to a periodic forcing ._ tellus _ , * 34 * , n 1 , ( 1982 ) , 1 - 9 .[ cols="^ " , ] as a function of for different values of given and . from left - topto right - bottom : = 0.1 , 0.25 , 0.5 , 0.75 , 0.9 , 0.95 .the bottom plot is to show the effect of stochastic resonance for = 0.5 . ]
a subthreshold signal is transmitted through a channel and may be detected when some noise with known structure and proportional to some level is added to the data . there is an optimal noise level , called stochastic resonance , that corresponds to the highest fisher information in the problem of estimation of the signal . as noise we consider an ergodic diffusion process and the asymptotic is considered as time goes to infinity . we propose consistent estimators of the subthreshold signal and we solve further a problem of hypotheses testing . we also discuss evidence of stochastic resonance for both estimation and hypotheses testing problems via examples . * keywords : * stochastic resonance , diffusion processes , unobservable signal detection , maximum a posterior probability . * msc : * 93e10 , 62m99 .
while most readers will be familiar with the notion of _ feedback control _, for completeness we begin by defining this term .feedback control is the process of monitoring a physical system , and using this information as it is being obtained ( in real time ) to apply forces to the system so as to control its dynamics .this process , which is depicted in figure [ fig1 ] , is useful if , for example , the system is subject to noise . since quantum mechanical systems , including those which are continually observed , are dynamical systems , in a broad sense the theory of feedback control developed for classical dynamical systems applies directly to quantum systems . however , there are two important caveats to this statement .the first is that most of the exact results which the theory of feedback control provides , especially those regarding the optimality and robustness of control algorithms , apply only to special subclasses of dynamical systems . in particular , most apply to linear systems driven by gaussian noise . since observed quantum systems in general obey a non - linear dynamics , an important question that arises is whether exact results regarding optimal control algorithms can be derived for special classes of quantum systems .in addition to the need to derive results regarding optimality which are specific to classes of quantum systems , there is a property that sets feedback control in quantum systems apart from that in other systems .this is the fact that in general the act of measuring a quantum system will alter it .that is , measurement induces dynamics in a quantum system , and this dynamics is noisy as a result of the randomness of the measurement results .thus , when considering the design of feedback control algorithms for quantum systems , the design of the algorithm is not independent of the measurement process . in general different ways of measuringthe system will introduce different amounts of noise , so that the search for an optimal feedback algorithm must involve an optimization over the manner of measurement . in what followswe will discuss a number of explicit examples of feedback control in a variety of quantum systems , and this will allow us to give specific examples of the dynamics induced by measurement . before we examine such examples however , it is worth presenting the general equations which describe feedback control in quantum systems , in analogy to those for classical systems . 
in classical systems ,the state - of - knowledge of someone observing the system is given by a probability density over the dynamical variables ( a phase - space probability density ) .let us consider for simplicity a single particle , whose dynamical variables are its position , and momentum , .if the observer is continually monitoring the position of the particle , then her stream of measurement results , is usually described well by where in each time interval , is a gaussian random variable with variance and a mean of zero .such a gaussian noise process is called wiener noise .the constant determines the relative size of the noise , and thus also the _ rate _ at which the measurement extracts information about ; when is increased , the noise decreases , and it therefore takes the observer less time to obtain an accurate measurement of .as the observer obtains information , her state - of - knowledge regarding the system , , evolves .the evolution is given by the kushner - stratonovich ( k - s ) equation .this is p dt + \sqrt{\gamma } ( x - \langle x(t ) \rangle ) p dw , \ ] ] where is the mass of the particle , is the force on the particle , is the expectation value of at time , and turns out to be a wiener noise , uncorrelated with the probability density . because of this we can alternatively write the stream of measurement results as the above will no doubt be familiar to the majority of the readership . for linear dynamical systems the k - s equation reduces to the equations of the well - known kalman - bucy filter .the k - s equation is the essential tool for describing feedback control ; it tells us what the observer knows about the system at each point in time , and thus the information that he or she can use to determine the feedback forces at each point in time .in addition , when we include these forces in the system dynamics , the resulting k - s equation , in telling us the observer s state - of - knowledge is also telling us how effective is our feedback control : the variance of this state - of - knowledge , and the fluctuations of its mean ( note that these are two separate things ) tell us the remaining uncertainty in the system .the k - s equation thus allows us to design and evaluate feedback algorithms .the description of dynamics and continuous measurement in quantum mechanics is closely analogous to the classical case described above . in quantum mechanics , however , the observer s state - of - knowledgemust be represented by a matrix , rather than a probability density .this matrix is called the _ density matrix _ , and usually denoted by .the dynamical variables are also represented by matrices .if the position is represented by the matrix , then the expectation value of the particle s position at time is given by ] for a given matrix called the hamiltonian ( is planck s constant ) . 
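The two displayed formulas above lost their symbols in transcription. A standard way of writing them, consistent with the surrounding definitions (m the mass, F the force, gamma the measurement rate, P the conditional phase-space density) but offered here as a hedged reconstruction with convention-dependent constants rather than a quotation of the original, is:

```latex
% measurement record and Kushner-Stratonovich equation
\mathrm{d}y(t) = \langle x(t)\rangle\,\mathrm{d}t + \frac{\mathrm{d}W(t)}{\sqrt{\gamma}},
\qquad
\mathrm{d}P = \Big[-\frac{p}{m}\,\partial_x P - F(x)\,\partial_p P\Big]\mathrm{d}t
            + \sqrt{\gamma}\,\big(x-\langle x(t)\rangle\big)\,P\,\mathrm{d}W .
```

For linear dynamics and Gaussian initial knowledge this reduces, as noted above, to the Kalman-Bucy filter equations.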
if an observer makes a continuous measurement of a particle s position , then the full dynamics of the observer s state - of - knowledge is given by the quantum equivalent of the kushner - stratonovich equation .this is - ( \gamma/8 ) [ x , [ x , \rho]]dt \nonumber\\ & & + \frac{\sqrt{\gamma}}{2 } ( ( x - \langle x(t ) \rangle)\rho + \rho ( x - \langle x(t ) \rangle ) ) dw , \label{sme}\end{aligned}\ ] ] where the observer s stream of measurement results is this is referred to as the _ stochastic master equation _ (sme ) , and was first derived by belavkin .this is very similar to the k - s equation , but has the extra term $ ] that describes the noisy dynamics ( or _ quantum back - action _ ) which is introduced by the measurement . in the quantum mechanicsliterature the measurement rate is often referred to as the _ measurement strength_. the sme is usually derived directly from quantum measurement theory without using the mathematical machinery of filtering theory . a recent derivation for people familiar with filtering theorymay be found in .armed with the quantum equivalent of the k - s equation , we can proceed to consider feedback control in quantum systems . interpreting `` applications of feedback control in quantum systems '' in a broad sense, it would appear that one can break most such applications into three general classes . while these classes may be somewhat artificial , they are a useful pedagogical tool , and we will focus on specific examples from each group in turn in the following three sections .the first group is the application of results and techniques from classical control theory to the general theory of the control of quantum systems .this includes the application control theory to obtain optimal control algorithms for special classes of quantum systems .an example of this is the realization that classical lqg control theory can be applied directly to obtain optimal control algorithms for observed linear quantum systems .we will discuss this and other examples in section [ systems ] .it is worth noting at this point that while the direct application of classical control theory to quantum systems is very useful , it is not the only approach to understanding the design of feedback algorithms in quantum systems .another approach is to try to gain insights into the relationship between measurement ( information extraction ) and disturbance in quantum mechanics which are relevant to feedback control .such questions are also of fundamental interest to quantum theorists since they help to elucidate the information theoretic structure of quantum mechanics .references and take this approach , elucidating the information - disturbance trade - off relations for a two - state system ( the simplest non - linear quantum system ) , and exploring the effects of this on the design of feedback algorithms .the second group is the application of feedback control to classes of control _ problems _ which arise in quantum systems , some of which are analogous to those in classical systems , and some of which are peculiar to quantum systems .a primary example of this is in adaptive measurement , where feedback control is used during the measurement process to change the properties of the measurement .this is usually for the purpose of increasing the information which the measurement obtains about specific quantities , or increasing the rate at which information is obtained .we will discuss such applications in section [ problems ] .the third group is the design and application of 
feedback algorithms to control specific quantum systems .examples are applications to the cooling of a nano - mechanical resonator and the cooling of a single atom trapped in an optical cavity .we will discuss these in section [ physical ] .most readers of this article will certainly be familiar with the classical theory of optimal linear - quadratic - gaussian ( lqg ) control .this provides optimal feedback algorithms for linear systems driven by gaussian noise , and in which the observer monitors some linear combination of the dynamical variables . in lqg control ,the control objective is the minimization of a quadratic function of the dynamical variables ( such as the energy ) .it turns out that for a restricted class of observed quantum systems ( those which are linear in a sense to be defined below ) , this optimal control theory can be applied directly .this was first realized by belavkin in , and later independently by yanagisawa and kimura and doherty and jacobs .quantum mechanical systems whose hamiltonians are no more than quadratic in the dynamical variables are referred to as linear quantum systems , since the equations of motion for the matrices representing the dynamical variables are linear .further , these linear equations are precisely the same as those for a classical system subject to the same forces .the simplest example is the harmonic oscillator . if we denote the matrices for position and momentum as and respectively , then the hamiltonian is where is the mass of the particle , and is the angular frequency with which it oscillates .the resulting equations for and are which are of course identical to the classical equations for the dynamical variables and in a classical harmonic oscillator .further , it turns out that if an observer makes a continuous measurement of any linear combination of the position and momentum , then the sme for the observer s state of knowledge of the quantum system reduces to the kushner - stratonovich equation , which in this case , because the system is linear , is simply the kalman - bucy filter equations .however , there is one twist .the kalman - bucy equations one obtains for the quantum system are those for a classical harmonic oscillator driven by gaussian noise of strength .this comes from the extra term in the sme which describes the `` quantum back - action '' noise generated by the measurement .since the observer s state of knowledge evolves in precisely the same way as for the equivalent linear classical system , albeit driven by noise , we can apply classical lqg theory to these quantum systems .if we apply linear feedback forces , then , for a fixed measurement strength , the quantum mechanics will tell us how much noise the system is subject to , and lqg theory will tell us the resulting optimal feedback algorithm for a given quadratic control objective .note however , that since the noise driving the system depends upon the strength of the measurement , then the performance of the feedback algorithm will also depend upon the strength of the measurement .moreover , the performance of the algorithm will be influenced by two competing effects : as the measurement gets stronger , we can expect the algorithm to do better as a result of the fact that the observer is more rapidly obtaining information .however , as the measurement gets stronger , the induced noise also increases , which will reduce the effectiveness of the algorithm .we can therefore expect that there will be an optimal measurement strength at which the feedback is 
most effective . in a linear quantum system onetherefore must first find the optimal feedback algorithm using lqg control theory , and then perform a second optimization over the measurement strength .this is not the case in classical control .an explicit example of optimizing measurement strength may be found in .the application of lqg theory to linear quantum systems will be useful when we examine the control of a nanomechanical resonator in section [ physical ] . the close connection between linear quantum and classical systems allows one to apply other results from classical control theory for linear systems to quantum linear systems .transfer function techniques have been applied to linear quantum systems by yanagisawa and kimura , and dhelon and james have elucidated how the small gain theorem can be applied to linear quantum optical networks .finally , it is possible to obtain exact results for the control of linear quantum systems for at least one case beyond lqg theory : james has extended the theory of risk - sensitive control to linear quantum systems . for nonlinear quantum systems ,naturally many of the approaches developed for classical non - linear systems can be expected to be useful .a few specific applications of methods developed for classical nonlinear systems have been explored to date .one example is the use of linearization to obtain control algorithms , and another is the application of a classical guidance algorithm to the control of a quantum system .a third example is the application of the projection filter technique to obtain approximate filters for continuous state - estimation of nonlinear quantum optical systemsthe bellman equation has also been investigated for a two - state quantum system in ; it is not possible to obtain a general analytic solution to this equation for such a system , and as yet no - one has attempted to solve this problem numerically .as quantum systems become increasingly important in the development of technologies , no doubt many more techniques and results from non - linear control theory will be applied in such systems .the objective of lqg control is to use feedback to minimize some quadratic function of the dynamical variables , and this is natural if one wishes to maintain a desired behavior in the presence of noise .while stabilization and noise reduction in dynamical systems is a very important application of feedback control , it is by no means the only application .another important class of applications is _ adaptive measurement_. 
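The competition described above between information gain and measurement-induced noise can be sketched with a purely classical steady-state Kalman filter in which the process noise grows linearly with the measurement strength gamma while the measurement noise shrinks like 1/gamma. The constants c_ba, c_meas and D_thermal below, and the identification of the residual energy with the energy stored in the conditional covariance (that is, assuming ideal, essentially free feedback), are illustrative assumptions rather than the quantitative model of the cited work.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

m, w = 1.0, 1.0                      # oscillator mass and angular frequency (arbitrary units)
A = np.array([[0.0, 1.0 / m],        # state (x, p):  dx/dt = p/m,  dp/dt = -m w^2 x
              [-m * w**2, 0.0]])
C = np.array([[1.0, 0.0]])           # the position x is monitored

D_thermal = 1.0                      # momentum diffusion from the environment (gamma-independent)
c_ba, c_meas = 0.25, 0.25            # illustrative back-action and measurement-noise constants

def residual_energy(gamma):
    """Steady-state energy stored in the conditional covariance for strength gamma."""
    Q = np.diag([0.0, D_thermal + c_ba * gamma])   # process noise: thermal + back-action
    R = np.array([[c_meas / gamma]])               # measurement noise shrinks with gamma
    # filter Riccati equation  A P + P A^T - P C^T R^-1 C P + Q = 0
    P = solve_continuous_are(A.T, C.T, Q, R)
    # with strong, cheap feedback the conditional mean is pinned near zero, so the
    # leftover energy is roughly the energy of the conditional covariance
    return 0.5 * m * w**2 * P[0, 0] + P[1, 1] / (2.0 * m)

gammas = np.logspace(-2, 3, 200)
energies = np.array([residual_energy(g) for g in gammas])
g_opt = gammas[np.argmin(energies)]
print(f"residual energy is minimised near gamma = {g_opt:.2f}")
# Weak measurement: the filter cannot track the thermally driven motion.
# Strong measurement: back-action heating dominates.  Hence an optimum in between.
```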
an adaptive measurement is one in which the measurement is altered as information is obtained .that is , a process of feedback is used to alter the measurement as it proceeds , rather than altering the system the primary distinction between adaptive measurement and more traditional control objectives however is that the goal of the former is usually to optimize some property of the information obtained in the measurement process rather than to control the dynamics of the system .adaptive measurement has , even at this relatively early stage in the development of the field of quantum feedback control , found many potential applications in quantum systems .the reason for this is due to the interplay of the following things .the first is that unlike classical states the majority of quantum states are not fully distinguishable from each other , even in theory , and only carefully chosen measurements will optimally distinguish between a given set of states .the second is that because quantum measurements generally disturb the system being measured , one must also choose one s measurements very carefully in order to extract the maximal information about a given quantity ( lest the measurement disturb this quantity ) . combining these two thingswith the fact that one is usually limited in the kinds of measurements one can perform , due to the available physical interactions between a given system and measuring devices , it is frequently impossible to implement optimal measurements in quantum systems .the use of adaptive measurement increases the range of possible measurements one can make in a given physical situation , and in some cases allows optimal measurements to be constructed where they could not be otherwise .as far as the author is aware , the first application of quantum adaptive measurement was introduced by dolinar in 1973 . herethe problem involves communicating with a laser beam , where each bit is encoded by the presence or absence of a pulse of laser light .quantum effects become important when the average number of photons in each pulse is small ( e.g. ) .in fact , it is not possible to completely distinguish between the presence or absence of a pulse .the reason for this is that the quantum nature of the pulse of laser light is such that there is always a finite probability that there are no photons in the pulse .it turns out that the optimal way of distinguishing the two states is by mixing the pulse with another laser beam at a beam splitter , and detecting the resulting combined beam . in this caseboth input states will produce photon clicks .the optimal procedure is to vary the amplitude of the mixing beam with time , and in particular to use a process of feedback to change this amplitude after the detection of each photon .this feedback procedure distinguishes the states maximally well within the limits imposed by quantum mechanics ( referred to as the helstrom bound ) .the objective of dolinar s adaptive measurement scheme is to discriminate maximally _ well _ between two states .alternatively one may wish to discriminate maximally _fast_. to put this another way , one may wish to maximize the amount of information which is obtained in a specified time , even if it is not possible to obtain all the information in that time .such considerations can be potentially useful in optimizing information transmission rates when the time taken to prepare the states is significant . 
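For the on-off keying example above, the quantum limit on the discrimination error (the Helstrom bound, which the Dolinar feedback receiver attains) can be compared with naive direct photon counting in a few lines. Equal prior probabilities for the two symbols are assumed here for simplicity.

```python
import numpy as np

nbar = np.array([0.1, 0.5, 1.0, 2.0])    # mean photon number of the "pulse present" state

# Overlap between the vacuum and a coherent state:  |<0|alpha>|^2 = exp(-|alpha|^2) = exp(-nbar).
overlap_sq = np.exp(-nbar)

# Helstrom bound for two equiprobable pure states (attained by the Dolinar receiver).
p_helstrom = 0.5 * (1.0 - np.sqrt(1.0 - overlap_sq))

# Naive direct detection errs only when the pulse is present but no photon is
# counted, which happens with probability exp(-nbar); with prior 1/2 this gives:
p_direct = 0.5 * np.exp(-nbar)

for n, ph, pd in zip(nbar, p_helstrom, p_direct):
    print(f"nbar = {n:3.1f}:  Helstrom bound {ph:.3f}   direct counting {pd:.3f}")
```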
in itis shown that adaptive measurement can be used to increase the speed of state - discrimination .of particular interest in quantum control theory are situations which reveal differences between measurements on quantum systems and those on their classical counterparts .the rapid - discrimination adaptive measurement scheme of is one such example .the reason for this is that in the case considered in it is only possible to use an adaptive algorithm to increase the speed of discrimination if quantum mechanics forbids perfect discrimination between them . since all classical states are completely distinguishable ( at least in theory ) ,this adaptive measurement is only applicable to quantum systems .the question of whether or not there are more general situations which provide classical analogues of this adaptive measurement is an open question , however .the problem of quantum state - discrimination is a special case of _parameter estimation_. in parameter estimation , the possible states that a system could have been prepared in ( or alternatively the possible hamiltonians that may describe the dynamics of a system ) are parametrized in some way . the observer then tries to determine the value of the parameter by measuring the system .when discriminating two states the parameter has only two discrete values : in the above case it is the amplitude of the laser pulse , which is either zero or non - zero . in a more general case one wishes to determine a continuous parameter .an example of this is the detection of a force . in this casea simple system which feels the force , such as a quantum harmonic oscillator , is monitored , and the force is determined from the observed dynamics .two examples of this are the atomic force microscope ( afm ) and the detection of gravitational waves .adaptive measurement is useful in parameter estimation because measurements which are not precisely tailored will disturb the system so as to degrade information about the parameter . to the author s knowledgeno - one has yet investigated adaptive measurement in force estimation , although the subject is discussed briefly in .however , the use of adaptive measurement for the estimation of a magnetic field using a cold atomic cloud has been investigated in , and for the estimation of a phase shift imparted on a light beam has been explored in references . 
in the authors show that adaptive measurement outperforms other known kinds of measurements for estimation of phase shifts on continuous beams of light .it turns out that there are certain kinds of measurements which it is very difficult to make because the necessary interactions between the measurement device and the system are not easily engineered .one such example in quantum mechanics is a measurement of what is referred to as the `` pegg - barnett '' or `` canonical '' phase .there are subtleties in defining what one means by the phase of a quantum mechanical light beam ( or , more precisely , in obtaining a definition with all the desired properties ) .astonishingly , the question was not resolved until 1988 , when pegg and barnett constructed a definition which has the desired properties for all practical purposes .this is called the _ canonical phase _ of a light beam .now , the most practical method of measuring light is to use a photon counter .however , it turns out that it is not possible to use a photon counter , even indirectly , to make a measurement which measures precisely canonical phase nevetheless , in ( see also ) wiseman showed that the use of an adaptive measurement process allows one to more closely approximate a canonical phase measurement .for a light pulse which has at most one photon , this adaptive phase measurement measures precisely canonical phase .wiseman s adaptive phase measurement has now been realized experimentally .one application of feedback control is in preparing quantum systems in well - defined states . due to noise from the environment ,the state of quantum systems which have been left to their own devices for an appreciable time will contain considerable uncertainty .many quantum devices , for example those proposed for information processing , require that the quantum system be prepared in a state which is specified to high precision . naturally one can prepare such states by using measurement followed by control - that is , using a measurement to determine the state to high precision , and then applying a control field to move the system to the desired point in phase space .since the rate at which a given measuring device will extract information is always finite , one can ask whether it is possible to increase the rate at which information is extracted by using a process of adaptive measurement .that is , to adaptively change the measurement as the observer s state - of - knowledge changes , so that the uncertainty ( e.g. the entropy ) of the observer s state - of - knowledge reduces faster .we will refer to the process of reducing the entropy as _ purifying _ the qubit ( a term taken from the jargon of quantum mechanics ) .it turns out that it is indeed possible to increase the rate of purification using adaptive measurement .for a two - state quantum system ( often referred to as a _ qubit _ ) , an adaptive measurement is presented in that will speed - up the rate of purification in a certain sense .specifically , it will increase the rate of reduction of the _ average entropy _ of the qubit ( where the average is taken over the possible realizations of the measurement - e.g. 
over measurements on many identical qubits ) by a factor of two .this adaptive measurement has two further interesting properties .the first is that there is no analogous adaptive scheme for the equivalent measurement on a classic two - state system ( a single bit ) ; the speed - up in the reduction of the average entropy is a quantum mechanical effect .the second property has to do with the statistics of the entropy reduction . for a fixed measurement , while the entropy decreases with time _ on average _ , on any given realization of the measurement the entropy fluctuates randomly as measurement proceeds . for the adaptive measurement however , in the limit of strong feedback ,the entropy reduces _deterministically_. for a finite feedback force , there will always remain some residual stochasticity in the entropy reduction , but this will be reduced over that for the fixed measurement . thus if one is preparing many qubits in parallel , this adaptive measurement will reduce the spread in the time it takes the qubits to be prepared .the above adaptive algorithm does not speed up purification in every sense of the word , however : wiseman and ralph have recently shown that while the above adaptive measurement reduces the _ average entropy _ of an ensemble of qubits more quickly , quite surprisingly it actually _ increases _ the _ average time _it takes to prepare a given value of the entropy by a factor of two ! in this sense , therefore , the above measurement strategy does not , in fact , speed up preparation .the reason for this is that when one considers an ensemble of spins , the majority purify quickly , whereas the average value of the entropy across the ensemble is increased considerably by a small number of straggling qubits that purify slowly .thus , the adaptive measurement works by decreasing the time taken for the stragglers , but _ increasing _ the time taken for the majority , so that , for strong feedback , all qubits take the same time to reach a given purity .thus the feedback algorithm constructed in does not speed up the average time a given qubit will take to reach a target entropy , and this will be the important quantity if one is preparing qubits in sequence .application of the above results to rapid purification of superconducting qubits is analyzed in .is it possible , therefore , to use adaptive measurement to speed up the average preparation time ? while the answer for a single qubit is almost certainly no , the answer is almost certainly yes in general : in the authors show that , for a quantum system with states it is possible to speed up the rate at which the average entropy is reduced by a factor proportional to . while it is not shown directly that this algorithm also decreases the _ average time _ to reach a given target purity , it is fairly clear that this will be the case , although more work remains to be done before all the answers are in .another application of feedback control involving purification of states has been explored in . in this case , rather than the rate of purification , the authors are concerned with using feedback control during the measurement to obtain a specific final state , and in particular when the control fields are restricted . 
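A minimal Monte Carlo sketch of the purification speed-up discussed above. It uses one common convention for the Ito equations of a qubit whose sigma_z component is continuously monitored at rate k, and it idealises the feedback as an instantaneous rotation of the Bloch vector onto the equator after every step; the numerical constants are convention-dependent assumptions, so this illustrates the factor-of-two effect rather than reproducing the protocols of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
k, dt, steps, ntraj = 1.0, 1e-3, 1000, 5000   # measurement rate, time step, trajectories

def average_impurity(feedback):
    """Average linear impurity (1 - purity) of qubits that start maximally mixed."""
    x = np.zeros(ntraj)          # Bloch components; sigma_z is the monitored observable
    z = np.zeros(ntraj)
    out = np.empty(steps)
    for t in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), ntraj)
        # Ito equations for continuous sigma_z measurement in one common convention:
        #   dz = sqrt(8k) (1 - z^2) dW ,   dx = -4k x dt - sqrt(8k) x z dW
        dz = np.sqrt(8 * k) * (1 - z**2) * dW
        dx = -4 * k * x * dt - np.sqrt(8 * k) * x * z * dW
        x, z = x + dx, z + dz
        r2 = x**2 + z**2
        scale = np.where(r2 > 1.0, 1.0 / np.sqrt(r2), 1.0)   # keep states inside the Bloch ball
        x, z, r2 = x * scale, z * scale, np.minimum(r2, 1.0)
        if feedback:
            # idealised strong feedback: rotate the Bloch vector onto the equator,
            # keeping its length, so the next measurement step is "unbiased"
            x, z = np.sqrt(r2), np.zeros_like(z)
        out[t] = np.mean((1.0 - r2) / 2.0)
    return out

no_fb, fb = average_impurity(False), average_impurity(True)
for idx in (steps // 2 - 1, steps - 1):
    t = (idx + 1) * dt
    print(f"kt = {k * t:.1f}: average impurity without feedback {no_fb[idx]:.2e}, with feedback {fb[idx]:.2e}")
# Consistent with the factor-of-two faster decay of the average entropy discussed
# above; it says nothing, by itself, about the average time to reach a target
# purity, which is the quantity considered by Wiseman and Ralph.
```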
in this sectionwe have been considering the application of feedback in quantum systems to problems which lie outside the traditional applications of noise reduction and stabilization .most of these fall under the category of adaptive measurement , and these we have discussed above .one that does not is quantum error correction .the goal of such a process is noise reduction , but with the twist that the state of the system must be encoded in such a way that the controller does not disturb the information in the system .while we will not discuss this further here , the interested reader is referred to and references therein .we now present examples of feedback control applied to two specific quantum systems .the first is a nano - mechanical resonator , and the second is a single atom trapped in an optical cavity . in both casesthe goal of the feedback algorithm will be to reduce the entropy of the system and prepare it in its ground state .a nano - mechanical resonator is a thin , fairly ridged bridge , perhaps 200 nm wide and a few microns long .such a bridge is formed on a layered wafer by etching out the layer beneath .if one places a conducting strip along the bridge and passes a current through it , the bridge can be made to vibrate like a guitar string by driving it with a magnetic field .so long as the amplitude of the oscillation is relatively small , the dynamics is essentially that of a harmonic oscillator .one of the primary goals of research in this area is to observe quantum behavior in these oscillators .the first step in such a process is to reduce the thermal noise which the oscillator is subject to in order to bring it close to its ground state .typical nanomechanical resonators have frequencies of the order of tens of megahertz .this means that to cool the resonator so that its average energy corresponds to its first excited state requires a temperature of a few millikelvin .dilution refrigerators can obtain temperatures of a few hundred millikelvin , but to reduce the temperature further requires something else . in reference the authors show that , at least in theory , feedback control could be used to obtain the required temperatures . .on the near side of the resonator is a single electron transistor ( set ) , whose central island is formed by the two junctions marked `` j '' . 
on the far side is a t - shaped electrode or `` gate '' .the voltage on this gate is varied and this results in a varying force on the resonator .image courtesy of keith schwab.,width=268 ] to perform feedback control one must have a means of monitoring the position of the resonator and applying a feedback force .the position can be monitored using a single electron transistor , and this has recently been achieved experimentally .a feedback force can be applied by varying the voltage on a gate placed adjacent to the resonator .this configuration is depicted in figure [ fig2 ] .since the oscillator is harmonic , classical lqg theory can be used to obtain an optimal feedback algorithm for minimizing the energy of the resonator , so long as one takes into account the quantum back - action noise caused by the measurement as described in .the details involved in obtaining the optimal feedback algorithm and calculating the optimal measurement strength are given in .it has further been shown in that adaptive measurement and feedback can be used to prepare the resonator in a squeezed state .the second example of feedback control we consider is that of cooling an atom trapped in an optical cavity .an optical cavity consists of two parallel mirrors with a single laser beam bouncing back and forward between them .the laser beam forms a standing wave between the mirrors , and if the laser frequency is chosen appropriately , a single atom inside the cavity will feel a sinusoidal potential due its interaction with the standing wave . it is therefore possible to trap an atom in one of the wells of this potential .it turns out that information regarding the position of the atom can be obtained by monitoring the phase of the light which leaks out one of the mirrors .specifically , the phase of the output light tells the observer how far up the side of a potential well the atom is .in addition , by changing the intensity of the laser beam that is driving the cavity , one changes the height of the standing wave , and thus the height of the potential wells . in this systemwe therefore have a means to monitor the atom and to apply a feedback force . in authors present a feedback algorithm which can be used to cool the atom to its ground state . actually , the algorithm will prepare the atom either in its ground state , or its first excited state , each with a probability of .however , from the measurement record the observer know which one , and can take appropriate action if the resulting state is not the desired one .if the location of the atom was known very accurately , then we could use the following feedback algorithm to reduce its energy : increase the height of the potential when the atom is climbing up the side of a well , and reduce it when the atom is falling down towards the centre . this way the energy of the atom is reduced on each oscillation , and the atom will eventually be stationary at the centre of the well . however , it turns out that this algorithm is not effective , either classically or quantum mechanically , when the variance of atom in phase space is appreciable .the reason is that the cyclic process of raising and lowering the potential , which reduces the energy of the atom s mean position and momentum , actually increases the variance of the phase - space probability density .classically we can eliminate this problem by observing the atom with sufficient accuracy , but quantum mechanically heisenberg s uncertainty relation prevents us from reducing the variance sufficiently . 
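The "stiffen the well while the atom climbs, soften it while it falls" idea described above is easy to demonstrate for an idealised classical point particle in a single well of the optical potential. The sketch below implements only that naive strategy, with arbitrary parameter values; it is exactly the algorithm whose failure for states of appreciable phase-space extent the text goes on to explain, not the quantum algorithm of the cited work.

```python
import numpy as np

A_LO, A_HI = 1.0, 1.5        # shallow / deep values of the well depth A(t) in V = A(t)(1 - cos x)
dt, steps = 1e-3, 40000      # time step and number of steps (arbitrary units)

x, v = 0.6, 0.0              # particle starts displaced from the well centre, at rest
energies = []
for _ in range(steps):
    # bang-bang rule from the text: stiffen the well while the particle moves away
    # from the centre (x*v > 0), soften it while the particle falls back (x*v < 0)
    A = A_HI if x * v > 0 else A_LO
    a = -A * np.sin(x)       # force from the potential A (1 - cos x)
    v += a * dt              # symplectic Euler update
    x += v * dt
    energies.append(0.5 * v**2 + A * (1.0 - np.cos(x)))

print(f"energy: initial {energies[0]:.4f}, final {energies[-1]:.2e}")
# The particle climbs in the stiff well and descends in the soft one, so each
# half-oscillation removes a fixed fraction of its energy: the point-particle
# argument works, and the text explains why its quantum version does not.
```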
as a result , an alternative algorithm is required . if turns out that one can obtain an effective cooling algorithm by calculating the derivative of the total motional energy of the atom with respect to changes in the height of the potential . in doingso one finds that the energy change is maximal and minimal at a certain points in the oscillatory motion of the atom . as a result, one can use a bang - bang algorithm to switch the potential high when the energy reduction is maximal , and switch it low when the resulting energy increase is minimal , in a similar fashion to the classical algorithm described above .the result is that the atom will lose motional energy on each cycle . the curious effect whereby the atom will cool to the ground state only half the time is due to the symmetry of the system , and the fact that the feedback algorithm respects this symmetry .specifically , the feedback process can not change the average parity of the initial probability density .since the ground state has even parity , and the first excited state odd parity , if the initial density of the atom has no particular parity ( a reasonable assumption ) , then to preserve this on average the process must pick even and odd final states equally often . full details regarding the feedback algorithm and the resulting dynamics of the atom is given in .although we do not have the space to describe them here , applications of feedback control have been proposed in a variety of other quantum systems .three of these are cooling the motion of a cavity mirror by modulating the light in the cavity , controlling the motion of quantum - dot qubits , and preparing spin - squeezed states in atomic clouds .in addition , feedback control has now been experimentally demonstrated in a number of quantum systems .namely in optics , cold atom clouds , and trapped ions .to summarize , feedback control has a wide variety of applications in quantum systems , particularly in the areas of noise reduction , stabilization , cooling and precision measurement .such applications can be expected to grow more numerous as quantum systems become important as the basis of new technologies .while exact results from modern control theory can be used to obtain optimal control algorithms for some quantum systems , this is not true for the majority of quantum control problems due to their inherent non - linearity . as a result many techniques developed for the control of nonlinear classical systems are of considerable use in designing algorithms for quantum feedback control , and the efficacy of many of these techniques still remain to be explored in quantum systems .hopefully as further quantum control problems arise in specific systems , and effective control algorithms are developed , rules of thumb will emerge for the control of classes of quantum devices .this work was supported by the hearne institute for theoretical physics , the national security agency , the army research office and the disruptive technologies office .v. p. belavkin , `` non - demolition measurement and control in quantum dynamical systems , '' in _ information , complexity and control in quantum physics _ , a. blaquiere , s. diner , and g. lochak , eds.1em plus 0.5em minus 0.4emspringer - verlag , new york , 1987 .v. b. braginsky , m. l. gorodetsky , f. y. khalili , a. b. matsko , k. s. thorne , and s. p. 
Vyatchanin, "Noise in gravitational-wave detectors and other classical-force measurements is not influenced by test-mass quantization," Phys. Rev. D, vol. 67, no. 8, p. 082001, 2003.
D. T. Pope, H. M. Wiseman, and N. K. Langford, "Adaptive phase estimation is more accurate than nonadaptive phase estimation for continuous beams of light," Physical Review A, vol. 70, p. 043812, 2004.
J. F. Ralph, E. J. Griffith, and C. D. H. A. D. Clark, "Rapid purification of a solid state charge qubit," in E. J. Donkor, A. R. Pirich, and H. E. Brandt, eds., SPIE, vol. 6244, 2006 (in press).
W. P. Smith, J. E. Reiner, L. A. Orozco, S. Kuhr, and H. M. Wiseman, "Capture and release of a conditional state of a cavity QED system by quantum feedback," Phys. Rev. Lett., vol. 89, p. 133601, 2002.
P. Bushev, D. Rotter, A. Wilson, F. Dubin, C. Becher, J. Eschner, R. Blatt, V. Steixner, P. Rabl, and P. Zoller, "Feedback cooling of a single trapped ion," Phys. Rev. Lett., vol. 96, p. 043003, 2006.
We give an introduction to feedback control in quantum systems, as well as an overview of the variety of applications which have been explored to date. This introductory review is aimed primarily at control theorists unfamiliar with quantum mechanics, but should also be useful to quantum physicists interested in applications of feedback control. We explain how feedback in quantum systems differs from that in traditional classical systems, and how in certain cases the results from modern optimal control theory can be applied directly to quantum systems. In addition to noise reduction and stabilization, an important application of feedback in quantum systems is adaptive measurement, and we discuss the various applications of adaptive measurements. We finish by describing specific examples of the application of feedback control to cooling and state preparation in nano-electro-mechanical systems and single trapped atoms.
graphical representations of complex multivariate systems are increasingly prevalent within systems biology . in general a graph or _ network _ is characterised by a set of vertices ( typically associated with molecular species ) and a set of edges , whose interpretation will be context - specific .in many situations the edge set or _ is taken to imply conditional independence relationships between species in . forfixed and known vertex set , the data - driven characterisation of network topology is commonly referred to as _ network inference_. in the last decade many approaches to network inference have been proposed and exploited for several different purposes . in some settingsit is desired to infer single edges with high precision ( e.g. * ? ? ?* ) , whereas in other applications it is desired to infer global connectivity , such as subnetworks and clusters ( e.g. * ? ? ?* ) . in cellular signalling systems, the scientific goal is often to identify a set of upstream regulators for a given target , each of which is a candidate for therapeutic intervention designed to modulate activity of the target .the output of network inference algorithms are increasingly used to inform the design of experiments and may soon enter into the design of clinical trials .it is therefore important to establish which network inference algorithms work best for each of these distinct scientific goals .assessment of algorithm performance can be achieved _ in silico _ by comparing inferred networks to known data - generating networks .it can also be achieved using data obtained _ in vitro _ ; however this requires that the underlying biology is either known by design , well - characterised by interventional experiments , or estimated from larger corpora of data . in either case an estimated network , typically represented as a weighted adjacency matrix , is compared against a known or assumed benchmark network .community - wide blind testing of network inference algorithms is performed at the regular dream challenges ( see http://www.the-dream-project.org/ ; * ? ? ?* ; * ? ? ?there is broad agreement in the network inference literature regarding the selection of suitable performance scores ( described below ) , facilitating the comparison of often disparate methodologies across publications . in this literature ,the quality of an estimated network with respect to a benchmark is assessed using techniques from classifier analysis .that is , each possible edge has an associated class label , where is the indicator function .a network estimator may then be seen as an attempt to estimate for each pair .two of the main performance scores from classifier analysis are area under the receiver operating characteristic curve ( auroc ) and area under the precision - recall curve ( aupr ) , though alternative performance scores for classification also exist ( e.g. ) .these scores , formally defined in sec. [ classification ] , are based on _ confusion matrices _ of true / false positive / negative counts and represent essentially the only way to quantify performance at a local level ( i.e. based on individual edges ) . 
at present , performance assessment in the network inference literature does not typically distinguish between the various scientific purposes for which network inference algorithms are to be used .yet network inference algorithms are now frequently employed to perform diverse tasks , including identifying single edges with high precision , eliciting network motifs such as cliques or learning a coherent global topology such as connected components .whilst performance for local ( i.e. edge - by - edge ) recovery is now standardised , there has been comparatively little attention afforded to performance scores that capture ability to recover higher - order features such as cliques , motifs and connectivity .recent studies , including , proposed to employ spectral distances as a basis for comparing between two networks on multiple length scales . in this articlewe present several additional multi - scale scores ( msss ) for network inference algorithms , each of which reflects ability to solve a particular class of inference problem .much of the popularity of existing scores derives from their objectivity , interpretability and invariance to rank - preserving transformation . unlike previous studies , we restrict attention only to mss that satisfy these desiderata .the remainder of this paper proceeds as follows : in section [ methods ] we formulate the assessment problem , state our desiderata and present novel performance scores that satisfy these requirements whilst capturing aspects of network reconstruction on multiple scales . using a large corpus of estimated and benchmark networks from the dream5 challenge in section [ results ] , we survey estimator performance andconduct an objective , data - driven examination of the statistical power of each mss .the proposed msss provide evidence that the `` wisdom of crowds '' approach , that demonstrated superior ( local ) performance in the dream5 challenge , also offers gains on multiple length scales .sections [ discussion ] and [ conclude ] provide a discussion of our proposals and suggest directions for future work .matlab r2013b code ` net_assess ` is provided in the supplement , to accelerate the dissemination of ideas discussed herein .we proceed as follows : sections [ assumptions ] and [ desiderata ] clarify the context of the assessment problem for network inference algorithms amd list certain desiderata that have contributed to the popularity of local scores . sections[ notation ] and [ classification ] introduce graph - theoretic notation and review standard performance assessment based on recovery of individual edges . in sections[ mss one ] and [ mss two ] we introduce several novel msss for assessment of network inference algorithms .we require each mss to satisfy our list of desiderata ; however these scores differ from existing scores by assessing inferred network structure on several ( in fact all ) scales . for each msswe discuss associated theoretical and computational issues .finally section [ test ] describes computation of -values for the proposed msss .performance assessment for network inference algorithms may be achieved by comparing estimated networks against known benchmark information .the interpretation of the estimated networks themselves has often been confused in existing literature , with no distinction draw between the contrasting notions of significance and effect size . 
in this sectionwe therefore formally state our assumptions on the interpretation of both the benchmark network and the network estimators or estimates . *all networks are directed , unsigned and contain no self - edges .a network is _ signed _ if each edge carries an associated symbol .( a1 ) is widely applicable since an undirected edge may be recast as two directed edges and both signs and self - edges may simply be removed .the challenge of inferring signed networks and more generally the problem of predicting interventional effects requires alternative performance scores that are not dealt with in this contribution , but are surveyed briefly in sec . [ discussion ] .the preclusion of self - edges aids presentation but it not required by our methodology . the form of this benchmark information will influence the choice of performance score and we therefore restrict attention to the most commonly encountered scenario : * the benchmark network is unweighted. a network is _ weighted _ if each edge has an associated weight .note that the case of unweighted benchmark networks is widely applicable , since weights may simply be removed if necessary .we will write for the space of all directed , unweighted networks that do not contain self - edges and write for the corresponding space of directed , weighted networks that do not contain self - edges .* the benchmark network contains at least one edge and at least one non - edge .* network estimators are weighted ( ) , with weights having the interpretation that larger values indicate a larger ( marginal ) probability of the corresponding edge being present in the benchmark network .in particular we do not consider weights that instead correspond to effect size ( see sec .[ discussion ] ) .* in all networks , edges refer to a _direct _ dependence of the child on the parent at the level of the vertex set ; that is , not occurring via any other species in . assumptions ( a1 - 5 ) are typical for comparative assessment challenges such as dream . fix a benchmark network .a _ performance score _ is defined as function that accepts an estimated network and a benchmark network and returns a real value that summarises some aspect of with respect to .examples of performance scores are given below .our approach revolves around certain desiderata that any ( i.e. not just multi - scale ) performance score ought to satisfy : * ( interpretability ) ] and denote the entry by .should a network inference procedure produce edge weights in then by dividing through by the largest weight produces a network with weights in ] with 1 representing perfect performance and representing performance that is no better than chance . for finitely many test samples , curves may be estimated by linear interpolation of points in roc - space .precision - recall ( pr ) curves are an alternative to roc curves that are useful in situations where the underlying class distribution is skewed , for example when the number of negative examples greatly exceeds the number of positive examples . for biological networks that exhibit sparsity , including gene regulatory networks and metabolic networks , the number of positive examples ( i.e. edges ) is frequently smaller than the number of negative examples ( i.e. non - edges ) .in this case performance is summarised by the area under the pr curve where is the _ positive predictive value_. ( in this paper we adopt the convention that whenever . 
) also takes values in ] , and is characterised as the probability that a randomly selected pair from is assigned a higher weight than a randomly selected pair from .* captures : * mss1 captures the ability to identify ancestors and descendants of any given vertex .mss1 scores therefore capture the ability of estimators to recover connected components on all length scales .* desiderata : * ( d1 ; computability ) is satisfied due to the well - known warshall algorithm for finding the transitive closure of a directed , unweighted network , with computational complexity . for the estimated network it is required to know , for each pair the value of , i.e. the largest value of the threshold at which . to compute these quantitieswe generalised the warshall algorithm to the case of weighted networks ; see sec .[ theory results ] .( d2 - 4 ) are automatically satisfied analogously to local scores , by construction of confusion matrices and area under the curve statistics .whilst mss1 scores capture the ability to infer connected components , they do not capture the graph theoretical notion of _ differential connectivity _( i.e. the minimum number of edges that must be removed to eliminate all paths between a given pair of vertices ) .our second proposed mss represents an attempt to explicitly prioritise pairs of vertices which are highly connected over those pairs with are weakly connected : * definition : * for each pair we will compute an _ effect _ that can be thought of as the importance of variable on the regulation of variable according to the network ( in a global sense that includes indirect regulation ) . to achieve thiswe take inspiration from recent work by as well as , who exploit spectral techniques from network theory .since the effect , which is defined below , includes contributions from all possible paths from to in the network , it explicitly captures differential connectivity .we formally define effects for an arbitrary unweighted network ( which may be either or ) .specifically the effect of on is defined as the sum over paths where is the kronecker delta .effects quantify direct and indirect regulation on all length scales . to illustrate this , notice that eqn .[ effects def ] satisfies the recursive property ( see sec . [ theory results ] ) . 
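Returning for a moment to MSS1: the quantity required there, namely for every ordered pair of vertices the largest threshold at which the pair is still joined by a directed path in the thresholded estimate, is a bottleneck-path problem, and a max-min variant of the Floyd-Warshall recursion computes it in O(p^3) time. The sketch below is a generic implementation of that idea under the stated interpretation of edge weights in [0,1]; it is not the authors' ` net_assess ` code, and the handling of the diagonal is an assumption.

```python
import numpy as np

def weighted_transitive_closure(W):
    """For a weighted adjacency matrix W (weights in [0, 1], no self-edges), return B
    with B[i, j] = the largest threshold c such that j is reachable from i in the
    graph containing exactly the edges of weight >= c, i.e. the maximum over
    directed i -> j paths of the minimum edge weight along the path."""
    p = W.shape[0]
    B = W.astype(float).copy()
    for k in range(p):                       # allow paths passing through vertex k
        # the best i -> j route via k is limited by the weaker of its two legs
        via_k = np.minimum(B[:, k][:, None], B[k, :][None, :])
        B = np.maximum(B, via_k)
    np.fill_diagonal(B, 0.0)                 # self-pairs are excluded by assumption
    return B

# toy example: a weak direct edge 0 -> 2 but a strong two-step route 0 -> 1 -> 2
W = np.array([[0.0, 0.9, 0.1],
              [0.0, 0.0, 0.8],
              [0.0, 0.0, 0.0]])
print(weighted_transitive_closure(W))
# B[0, 2] = 0.8: the pair (0, 2) stays connected for every threshold up to 0.8.
```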
intuitively , eqn .[ recurse ] states that the fraction of s behaviour explained by is related to the combined fractions of s parents behaviour that are explained by .rephrasing , in order to explain the behaviour of it is sufficient to explain the behaviour of each of s parents .moreover , if a parent of is an important regulator ( in the sense that has only a small number of parents ) then the effect of on will contribute significantly to the combined effect .[ recurse ] is inspired by and later , but differs from these works in two important respects : ( i ) for biological networks it is more intuitive to consider normalisation over incoming edges rather than outgoing edges , since some molecular species may be more influential than others .mathematically , corresponds to replacing in eqn .[ recurse ] with .( ii ) employed a `` damping factor '' that imposed a multiplicative penalty on longer paths , with the consequence that effects were readily proved to exist and be well - defined .in contrast our proposal does not include damping on longer paths ( see discussion of d2 below ) and the theory of and others does not directly apply in this setting .below we write for the matrix that collects together all effects for the benchmark network ; similarly denote by the matrix of effects for the thresholded estimator .any well - behaved measure of similarity between and may be used to define a mss .we constructed an analogue of a confusion matrix as tp , fp , tn , fn . repeating the construction across varying threshold ,we compute analogues of roc and pr curves .( note that , unlike conventional roc curves , the curves associated with mss2 need not be monotone ; see supp . sec .[ auc con ] . ) finally scores , are computed as the area under these curves respectively .* captures : * mss2 is a spectral method , where larger scores indicate that the inferred network better captures the _ eigenflows _ of the benchmark network . in general neither nor need have a unique maximiser ( see sec .[ theory results ] ) . as such, mss2 scores do not require precise placement of edges , provided that higher - order topology correctly captures differential connectivity .* desiderata : * eqn .[ effects def ] involves an intractable summation over paths : nevertheless ( d1 ; computability ) is ensured by an efficient iterative algorithm related but non - identical to , described in sec .[ theory results ] . in order to ensure ( d2 ; objectivity ) we did not include a damping factor that penalised longer paths , since the amount of damping would necessarily depend on the nature of the data and the scientific context .it is important , therefore , to establish whether effects are mathematically well defined in this objective limit .this paper contributes novel mathematical theory to justify the use of mss2 scores and prove the correctness of the associated algorithm ( see sec . [ theory results ] ) . as with mss1, the remaining desiderata ( d3 - 4 ) are satisfied by construction . in performance assessment of network inference algorithmswe wish to test the null hypothesis : for an appropriate null model .we will construct a one - sided test based on a performance score and we reject the null hypothesis when ] . 
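A minimal sketch of the local scores described above: sweep a threshold over the estimated edge weights, build the confusion matrix at each threshold, and integrate the resulting ROC and PR curves by the trapezium rule (defined formally in the next paragraph), adopting the stated convention that the positive predictive value is taken to be 1 when no edges are called. The data at the end are synthetic, and the function is a generic illustration rather than the MATLAB ` net_assess ` package.

```python
import numpy as np

def local_scores(W_est, G_true, n_thresh=100):
    """AUROC and AUPR for a weighted estimate W_est of an unweighted benchmark
    G_true (both p x p; the diagonal is ignored, since self-edges are excluded)."""
    p = G_true.shape[0]
    off = ~np.eye(p, dtype=bool)
    scores, labels = W_est[off], G_true[off].astype(bool)
    P, N = labels.sum(), (~labels).sum()

    tpr, fpr, ppv = [], [], []
    for c in np.linspace(0.0, 1.0, n_thresh):
        called = scores >= c
        tp = np.sum(called & labels)
        fp = np.sum(called & ~labels)
        tpr.append(tp / P)
        fpr.append(fp / N)
        ppv.append(tp / (tp + fp) if (tp + fp) > 0 else 1.0)   # stated PPV convention

    def trapezium_auc(x, y):
        order = np.argsort(x)                # reorder so the x-coordinates increase
        x, y = np.asarray(x)[order], np.asarray(y)[order]
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

    return trapezium_auc(fpr, tpr), trapezium_auc(tpr, ppv)

rng = np.random.default_rng(1)
G = (rng.random((20, 20)) < 0.1).astype(int)
np.fill_diagonal(G, 0)
W = np.clip(0.4 * G + 0.6 * rng.random((20, 20)), 0.0, 1.0)   # noisy but informative estimate
print("AUROC = %.3f, AUPR = %.3f" % local_scores(W, G))
```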
to construct a consistent performance measure we instead define auc as follows :let denote the coordinates of points on the curve whose auc is to be calculated .reorder the sequence as such that the x - coordinates form an increasing sequence .then auc is defined using the trapezium rule as .note that whilst an ordering may in general be non - unique , the expression for does not depend on which particular ordering is selected .the ` net_assess ` package for matlab r2013b is provided to compute all of the scores and associated -values discussed in this paper . in the case of mss2 we compute roc and pr curves based on 100 uniformly spaced thresholds defined implicitly by .this package can be used as shown in the following example : typical computational times for the package , based on the dream5 data analysis performed in this paper , are displayed in supp .[ times ] .note that local scores require operations to evaluate , whereas mss1 requires and mss2 requires .we briefly summarise the approach of that assigns each network inference algorithm an `` overall score '' . for each network inference algorithma combined auroc score was calculated as the ( negative logarithm of the ) geometric mean over -values corresponding to 3 individual auroc scores ( _ in silico _ , _ e. coli _ , _ s. cerevisiae _ respectively ) .likewise a combined aupr score was assigned to each network inference algorithm .for each algorithm an overall score was computed as the arithmetic mean of the combined auroc and combined aupr scores .note that report the overall score over all 3 datasets , whereas the performance scores which we report in this paper were computed for each dataset individually .note also that -values used by were computed using a common null distribution , rather than the estimator - specific nulls that are used in this paper .supp . figs .[ silico bar ] , [ ecoli bar ] , [ sc bar ] display the scores obtained by each of the 36 dream methodologies on , respectively , the _ in silico _ , _e. coli _ and _ s. cerevisiae _ datasets .[ dream2 ] compares results for the dream5 challenge data based on both local and multi - scale scores and roc analysis .[ meta5 ] displays both inferred benchmark topology for the _ in silico _ dataset using _
Graphical models are widely used to study complex multivariate biological systems. Network inference algorithms aim to reverse-engineer such models from noisy experimental data. It is common to assess such algorithms using techniques from classifier analysis. These metrics, based on the ability to correctly infer individual edges, possess a number of appealing features, including invariance to rank-preserving transformation. However, regulation in biological systems occurs on multiple scales and existing metrics do not take into account the correctness of higher-order network structure. In this paper novel performance scores are presented that share the appealing properties of existing scores, whilst capturing the ability to uncover regulation on multiple scales. Theoretical results confirm that performance of a network inference algorithm depends crucially on the scale at which inferences are to be made; in particular, strong local performance does not guarantee accurate reconstruction of higher-order topology. Applying these scores to a large corpus of data from the DREAM5 challenge, we undertake a data-driven assessment of estimator performance. We find that the "wisdom of crowds" network, which demonstrated superior local performance in the DREAM5 challenge, is also among the best performing methodologies for inference of regulation on multiple length scales. MATLAB R2013b code ` net_assess ` is provided as a supplement. Key words: performance assessment, multi-scale scores, network inference.
fifty years after the pioneering discoveries of fermi , pasta and ulam , the paradigm of coherent structures has proven itself one of the most fruitful ones of nonlinear science .fronts , solitons , solitary waves , breathers , or vortices are instances of such coherent structures of relevance in a plethora of applications in very different fields .one of the chief reasons that gives all these nonlinear excitations their paradigmatic character is their robustness and stability : generally speaking , when systems supporting these structures are perturbed , the structures continue to exist , albeit with modifications in their parameters or small changes in shape ( see for reviews ) .this property that all these objects ( approximately ) retain their identity allows one to rely on them to interpret the effects of perturbations on general solutions of the corresponding models . among the different types of coherent structures one can encounter , topological solitons are particularly robust due to the existence of a conserved quantity named topological charge .objects in this class are , e.g. , kinks or vortices and can be found in systems ranging from josephson superconducting devices to fluid dynamics .a particularly important representative of models supporting topological solitons is the family of nonlinear klein - gordon equations , whose expression is specially important cases of this equation occur when , giving the so - called equation , and when , leading to the sine - gordon ( sg ) equation , which is one of the few examples of fully integrable systems .indeed , for any initial data the solution of the sine - gordon equation can be expressed as a sum of kinks ( and antikinks ) , breathers , and linear waves . herewe focus on kink solitons , which have the form being a free parameter that specifies the kink velocity .the topological character of these solutions arises from the fact that they join two minima of the potential , and therefore they can not be destroyed in an infinite system .our other example , the equation , is not integrable , but supports topological , kink - like solutions as well , given by it is by now well established , already from pioneering works in the seventies that both types of kinks behave , under a wide class of perturbations , like relativistic particles .the relativistic character arises from the lorentz invariance of their dynamics , see eq .( [ kg ] ) , and implies that there is a maximum propagation velocity for kinks ( 1 in our units ) and their characteristic width decreases with velocity .indeed , even behaving as particles , kinks do have a characteristic width ; however , for most perturbations , that is not a relevant parameter and one can consider kinks as point - like particles .this is not the case when the perturbation itself gives rise to certain length scale of its own , a situation that leads to the phenomenon of length - scale competition , first reported in ( see for a review ) .this phenomenon is nothing but an instability that occurs when the length of a coherent structure approximately matches that of the perturbation : then , small values of the perturbation amplitude are enough to cause large modifications or even destroy the structure .thus , in , the perturbation considered was sinusoidal , of the form where and are arbitrary parameters .the structures studied here were breathers , which are exact solutions of the sine - gordon equation with a time dependent , oscillatory mode ( hence the name ` breather ' ) and that can be seen 
as a bound kink - antikink pair .it was found that small values , i.e. , long perturbation wavelengths , induced breathers to move as particles in the sinusoidal potential , whereas large or equivalent short perturbation wavelengths , were unnoticed by the breathers . in the intermediate regime , where length scales were comparable , breathers ( which are non topological ) were destroyed .as breathers are quite complicated objects , the issue of length scale competition was addressed for kinks in . in this case , kinks were not destroyed because of the conservation of the topological charge , but length scale competition was present in a different way : keeping all other parameters of the equation constant , it was observed that kinks could not propagate when the perturbation wavelength was of the order of their width . in all other ( smaller or larger ) perturbations ,propagation was possible and , once again , easily understood in terms of an effective point - like particle .although a explanation of this phenomenon was provided in in terms of a ( numerical ) linear stability analysis and the radiation emitted by the kink , it was not a fully satisfactory argument for two reasons : first , the role of the kink width was not at all transparent , and second , there were no simple analytical results .these are important issues because length scale competition is a rather general phenomenon : it has been observed in different models ( such as the nonlinear schrdinger equation ) or with other perturbations , including random ones .therefore , having a simple , clear explanation of length scale competition will be immediately of use in those other contexts .the aim of the present paper is to show that length scale competition can be understood through a collective coordinate approximation .collective coordinate approaches were introduced in to describe kinks as particles ( see for a very large number of different techniques and applications of this idea ) .although the original approximation was to reduce the equation of motion for the kink to an ordinary differential equation for a time dependent , collective coordinate which was identified with its center , it is being realized lately that other collective coordinates can be used instead of or in addition to the kink center .one of the most natural additional coordinates to consider is the kink width , an approach that has already produced new and unexpected results such as the existence of anomalous resonances or the rectification of ac drivings .there are also cases in which one has to consider three or more collective coordinates ( see , e.g. , ) .it is only natural then to apply these extended collective coordinate approximations to the problem of length scale competition , in search for the analytical explanation needed . as we will see below ,taking into account the kink width dependence on time is indeed enough to reproduce the phenomenology observed in the numerical simulations .our approach is detailed in the next section , whereas in sec .3 we collect our results and discuss our conclusions .we now present our collective coordinate approach to the problem of length scale competition for kinks . 
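for reference, the kink solutions and the sinusoidal perturbation referred to above (whose inline formulas are not reproduced in the text) are assumed here to take their standard textbook forms; normalisations and sign conventions may differ from those of the original references.

```latex
% standard forms assumed; v is the kink velocity, x_0 the initial centre position
\begin{align*}
 &\text{nonlinear klein--gordon family:} && \phi_{tt}-\phi_{xx}+V'(\phi)=0,\\
 &\text{sine--gordon kink } \big(V(\phi)=1-\cos\phi\big): && \phi_K(x,t)=4\arctan\exp\!\left(\pm\frac{x-vt-x_0}{\sqrt{1-v^2}}\right),\\
 &\phi^4\text{ kink } \big(V(\phi)=\tfrac{1}{4}(\phi^2-1)^2\big): && \phi_K(x,t)=\pm\tanh\!\left(\frac{x-vt-x_0}{\sqrt{2(1-v^2)}}\right),\\
 &\text{sinusoidal perturbation (generic form):} && f(x)=\epsilon\sin(kx),\quad \epsilon,\,k \text{ arbitrary parameters.}
\end{align*}
```

in these conventions the characteristic kink width scales as $\sqrt{1-v^2}$ (sine-gordon) or $\sqrt{2(1-v^2)}$ ($\phi^4$), up to constants, and this is the length that competes with the perturbation wavelength $2\pi/k$.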
we will use the lagrangian based approach developed in , which is very simple and direct .equivalent results can be obtained with the so - called generalized travelling wave _ ansatz _ , somewhat more involved in terms of computation but valid even for systems that can not be described in terms of a lagrangian .let us consider the generically perturbed klein - gordon equation : the starting point of our approach is the lagrangian for the above equation , which is given by as stated above , we now focus on the behaviour of kink excitations of the form and .to do so , we will use a two collective coordinates approach by substituting the _ ansatz _ in the lagrangian of the system , and in , where and and are two collective coordinates that represent the position of the center and the width of the kink , respectively . substituting the expresions and in the lagrangian with our perturbation , and , we obtain an expresion of the lagrangian in terms of and , where , for the system , , and , and for the sg system , , and , and which is for and for the sg model .we note that the effect of a spatially periodic perturbation like the one considered here was studied for the model in , although the authors were not aware of the existence of length scale competition in this system and focused on unrelated issues .the equations of motion of and can now be obtained using the lagrange equations , where stands for the collective coordinates and .the ode system for and is , finally , where and .different behaviours of for different values of for , , , and in the a ) model , b ) sg model ., title="fig:",width=226 ] different behaviours of for different values of for , , , and in the a ) model , b ) sg model ., title="fig:",width=226 ] the equations above are our final result for the dynamics of sg and kinks in terms of their center and width . as can be seen , they are quite complicated equations and we have not been able to solve them analytically .therefore , in order to check whether or not they predict the appearance of length scale competition , we have integrated them numerically using a runge - kutta scheme . the simulation results for different values of , are in fig .[ fig : lengthscale ] .we have taken in order to compare with the numerical results for the sg equation in .the plots present already the physical variable , i.e. , the position of the center of the kink .as we may see , the agreement with the results in is excellent for the sg equation ( cf . figs . 7 and 8 in ) : for small and large wavelengths the kink , initially at the top of one of the maxima of the perturbation , travels freely ; however , for intermediate scales , the kink is trapped and even moves backward for a while . as previous works on the same problem for the equation did not touch upon this issue , we have carried out numerical simulations , also using a runge - kutta scheme , of the full partial differential equation .the results , presented in fig . 
[ fig : simphi4 ] , confirm once again the agreement with the prediction of the collective coordinate approach .we see that for , the value at which the collective coordinate prediction is more dramatic , the kink is trapped at the first potential well , indicating that for that value the length scale competition phenomenon is close to its maximum effect .different behaviours of for different values of for , , as obtained from numerical simulations of the full model ., width=264 ] interestingly , a more detailed analysis of fig .[ fig : simphi4 ] shows two different kink trapping processes . in agreement with the collective coordinate prediction , for see that the kink is trapped very early , after having travelled a few potential wells at most .this is the length scale competition regime .however , there is an additional trapping mechanism : as is well known , kinks travelling on a periodic potential emit radiation .this process leads to a gradual slowing down of the kink until it is finally unable to proceed over a further potential well .this takes place at a much slower rate than length scale competition and hence the kink is stopped after travelling a larger distance in the system .the radiation emission as observed in the simulation is smoother than in the length scale competition trapping .as we have seen in the preceding section , a collective coordinate approach in terms of the kink center and width is able to explain in a correct quantitative manner the phenomenon of length scale competition , observed in numerical simulations earlier for the sg equation with a spatially periodic perturbation .the structure of the equations makes it clear the necessity for a second collective coordinate ; imposing constant , we recover the equation for the center already derived in , which shows no sign at all of length scale competition , predicting effective particle - like behavior for all .the validity of this approach has been also shown in the context of the equation , which had not been considered before from this viewpoint . in spite of the fact that the collective coordinate equations can not be solved analytically , they provide us with the physical explanation of the phenomenon in so far as they reveal the key role played bythe width changes with time and their coupling with the translational degree of freedom .it is interesting to reconsider the analysis carried out in of length scale competition through a numerical linear stability analysis . in that previous work, it was argued that the instability arose because , for the relevant perturbation wavelengths , radiation modes moved below the lowest phonon band , inducing the emission of long wavelength radiation which in turn led to the trapping of the kink .it was also argued that those modes became internal modes , i.e. 
, kink shape deformation modes in the process .the approach presented here is a much more simple way to account for these phonon effects : indeed , as was shown by quintero and kevrekidis , ( odd ) phonons do give rise to width oscillations very similar to those induced by an internal mode .we are confident that what our perturbation technique is making clear is precisely the result of the action of those phonons , summarized in our approach in the width variable .the case for the equation is sligthly different : whereas the sg kink does not have an internal mode and is hence a collective description of phonon modes , the kink possesses an intrinsic internal mode that is easily excited by different mechanisms ( such as interaction with inhomogeneities , ) .therefore , one would expect that for the equation the effect of a perturbation of a given length is more dramatic than for the sg model , as indeed is the case : see fig .[ fig : lengthscale ] for a comparison , showing that the kink is trapped for a wider range of values of in the equation .remarkably , the present and previous results using this two collective coordinate approach , and particularly their interpretation in terms of phonons , suggest that this technique could be something like a ` second order collective coordinate perturbation theory ' , the width degree of freedom playing the role of second order term .we believe it is appealing to explore this possibility from a more formal viewpoint ; if this idea is correct , then one could think of a scheme for adding in an standard way as many collective coordinates as needed to achieve the required accuracy .progress in that direction would provide the necessary mathematical grounds for this fruitful approximate technique . finally , some comments are in order regarding the applicability of our results .we believe that the collective coordinate approach may also explain the length scale competition for breathers , that so far lacks any explanation . for this problem , the approach would likely involve the breather center and its frequency , as this magnitude controls the kink width when the kink is at rest .of course , such an _ ansatz _ would only be valid for the breather initially at rest , and the description of the dynamical problem would be more involved , needing perhaps more collective coordinates ( such as an independent width ) .if this approach succeeds , one can then extend it to other breather like excitations , such as nonlinear schrdinger solitons or intrinsic localized modes .work along these lines is in progress .on the other hand , it would be very important to have an experimental setup where all these conclusions could be tested in the real world .a modified josephson junction device has been proposed recently where the role of the kink width is crucial in determining the performance and dynamical characteristics .we believe that a straightforward modification of that design would permit an experimental verification of our results .we hope that this paper stimulates research in that direction .we thank niurka r. quintero and elas zamora - sillero for discussions on the lagrangian formalism introduced in .this work has been supported by the ministerio de ciencia y tecnologa of spain through grant bfm2003 - 07749-c05 - 01 .is supported by a fellowship from the consejera de educacin de la comunidad autnoma de madrid and the fondo social europeo .88 e. fermi , j. r. pasta , and s. m. 
ulam , los alamos scientific laboratory report no .la - ur-1940 ( 1955 ) ; reprinted in _ nonlinear wave motion _ , a. c. newell , ed ., ( american mathematical society , providence , 1974 ) .a. c. scott , _ nonlinear science _( oxford university press , oxford , 1999 ) .s. kivshar and b. a. malomed , rev .phys . * 61 * , 763 ( 1989 ) .a. snchez and l. vzquez , int .j. mod .b * 5 * , 2825 ( 1991 ) .m. j. ablowitz , d. j. kaup , a. c. newell , and h. segur , phys .lett . * 30 * , 1262 ( 1973 ) .d. w. mclaughlin and a. c. scott , phys .a * 18 * , 1652 ( 1978 ) .m. b. fogel , s. e. trullinger , a. r. bishop , and j. a. krumhansl , phys .lett . * 36 * , 1411 ( 1976 ) ; phys .b * 15 * , 1578 ( 1976 ) .a. snchez , r. scharf , a. r. bishop , and l. vzquez , phys .a * 45 * , 6031 ( 1992 ) .a. snchez and a. r. bishop , siam review * 40 * , 579 ( 1998 ) .a. snchez , a. r. bishop , and f. domnguez - adame , phys .e * 49 * , 4603 ( 1994 ) .r. scharf and a. r. bishop , phys .e * 47 * , 1375 ( 1993 ) .j. garnier , phys .b * 68 * , 134302 ( 2003 ) .n. r. quintero , a. snchez , and f. g. mertens .84 * , 871 ( 2000 ) ; phys .e * 62 * , 5695 ( 2000 ) . l. morales - molina , n. r. quintero , f. g. mertens , and a. snchez . phys . rev .91 , 234102 ( 2003 ) .m. meister , f. g. mertens , and a. snchez eur .j. b * 20 * , 405 ( 2001 ) n. r. quintero and e. zamora - sillero , physica d * 197 * , 63 ( 2004 ) . h. j. schnitzer , and a. r. bishop , phys . rev .b * 56 * , 2510 ( 1997 ) .z. fei , v. v. konotop , m. peyrard , and l. vzquez , phys .e * 48 * , 548 ( 1993 ) .w. h. press , s. a. teukolsky , w. t. vetterling , and b. p. flannery , _ numerical recipes in c _ ,2nd edition ( cambridge university press , new york , 1992 ) .n. r. quintero and p. g. kevrekidis , phys .rev.e * 64 * , 056608 ( 2001 ) .n. r. quintero , a. snchez , and f. g. mertens , phys .rev.e * 62 * , r60 ( 2000 ) .d. k. campbell , j. f. schonfeld , and c. a. wingate , physica d * 9 * , 1 ( 1983 ) .z. fei , yu . s. kivshar , and l. vzquez , phys .a * 46 * , 5214 ( 1992 ) .l. morales - molina , f. g. mertens , and a. snchez .. j. b bf 37 , 79 ( 2004 ) .
we study the phenomenon of length scale competition, an instability of solitons and other coherent structures that takes place when their size is of the same order as some characteristic scale of the system in which they propagate. working in the framework of nonlinear klein-gordon models as a paradigmatic example, we show that this instability can be understood by means of a collective coordinate approach in terms of soliton position and width. as a consequence, we provide a quantitative, natural explanation of the phenomenon in much simpler terms than any previous treatment of the problem. our technique allows one to study the existence of length scale competition in most soliton-bearing nonlinear models and can be extended to coherent structures with more degrees of freedom, such as breathers. * lead paragraph * + * solitons, solitary waves, vortices and other coherent structures possess, generally speaking, a characteristic length or size. one important feature of these coherent structures, which usually are exact solutions of certain nonlinear models, is their robustness when the corresponding models are perturbed in different ways. a case relevant in many real applications is that of space-dependent perturbations, which may or may not have their own typical length scale. interestingly, in the latter case, it has been known for a decade that when the perturbation length scale is comparable to the size of the coherent structures, the effects of even very small perturbing terms are dramatically enhanced. although some analytical approaches have shed some light on the mechanisms behind this special instability, a clear-cut, simple explanation was lacking. in this paper, we show how such an explanation arises by means of a reduction of degrees of freedom through the so-called collective coordinate technique. the analytical results have a straightforward physical interpretation in terms of a resonant-like phenomenon. notwithstanding the fact that we work on a specific class of soliton-bearing equations, our approach is readily generalizable to other equations and/or types of coherent structures. *
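to make the numerical procedure described above concrete, the following is a minimal scaffold, assuming scipy is available, for integrating a two-collective-coordinate system (kink centre $X(t)$ and width $l(t)$) with a runge-kutta scheme; the right-hand sides below are placeholders and do not reproduce the reduced equations of motion derived from the lagrangian in the article.

```python
# scaffold for a two-collective-coordinate integration (centre X and width l).
# the dynamics below are placeholders: substitute the reduced equations of
# motion obtained from the lagrangian to reproduce the article's results.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, eps, k):
    X, P, l, m = y                    # centre, its momentum, width, width rate
    dX = P
    dP = -eps * np.sin(k * X)         # placeholder: spatially periodic force on the centre
    dl = m
    dm = -(l - 1.0) - 0.1 * m         # placeholder: damped width oscillation about l = 1
    return [dX, dP, dl, dm]

sol = solve_ivp(rhs, (0.0, 200.0), y0=[0.0, 0.2, 1.0, 0.0],
                args=(0.01, 1.0), method="RK45", max_step=0.05)
print(sol.y[0, -1])                   # final kink-centre position X(T)
```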
nowadays , the application of information technology is a vital part of our daily life .people around the globe use billions of mobile devices daily and spend more times on using these digital devices than ever . sharing our opinions in social networks , searching the web , twitting , purchasing online products , participating in online polling and many other digital aspect of our livesleave behind a tremendous digital footprint .billions of sensors embedded in cars , mobiles and other forms of devices constantly sense , generate and communicate trillions of bytes of information .this gigantic generated data , which is also referred to as _ big data _ , is rich of information about the behavior of individuals in an interconnected network . that is why those who are interested in analyzing human behavior from business analysts to social scientists to academic researchers are highly interested in this data .decision makers tend to extract individual as well as social behavior indicators from this data in order to make better decisions .using traditional data management models to process and manage big data is nearly impossible due to the huge volume of data , the vast velocity of data arrival and variety of data types .therefore , there is a need to develop special techniques which are able to deal with these aspects of big data in order to support the data - driven decision makings .these techniques are also called _big data analytics_. big data management approaches are expected to provide a required level of availability , scalability , security and privacy while working with data . traditionally , automated techniques are used as big data analytics .sometimes ai techniques are used to extract information from big data . in some other casesheuristic approaches are used to extract social or individual behavioral indicators from a large community .these techniques while perform reasonably well in some aspect such as storing or retrieving data in cloud data management systems , they might not perform well when it comes to data collection , curation , annotation and dissemination .for example , ai techniques are not able to provide results with very high precisions when working with unstructured or incomplete data .also , there are cases in which automated techniques are not able to do the job due to the nature of the tasks . for instance, in a database , there might be some missing data items , such as a person s mail address , that there not exist in the datasets at all , hence no automated technique is able to extract such missing piece of information . to overcome this problemmany researches have proposed to enlist the human intelligence and wisdom of crowds in combination with the automated techniques .crowdsourcing is a distributed computing method in which , under specific circumstances , can provide contributions comparable to experts contributions in terms of quality level .crowd involvement in data management tasks , while improves quality of outcomes , raises new challenges . in this paper , we first study related - work in the area of big data analytics as well as crowdsourcing. then we propose a generic framework that simplifies the analysis of existing hybrid human - machine big data analytics .the result of such an analysis is a set of problems that are yet to be answered .we propose such set of challenges and propose some directions for future research in the area . 
in summary , in section [ sec : rels ] , we study related work in the area of big data analytics and crowdsourcing . in section [ sec : frm ] , we propose our analysis framework .the open issues are studied in section [ sec : issues ] , and we conclude in section [ sec : concs ] .we organize this section in three different sub - sections .we first study big data analytics .we then study the crowdsourcing basic concepts and finally we use the wall - mart case study to articulate the problems that need more investigations .many systems such as social networks , sensing systems , etc ., produce very large amounts of information .this data is not called big data only because of its size .four important attributes , also referred to as _4v _ , characterize the big data concept:(i ) data is huge in terms of its _ volume _ ; ( ii ) data is produced with a very high _ velocity _ ; ( iii ) data comes from a great _ variety _ of data types ; and finally ( iv ) data has different levels of _veracity_. such a tremendous volume of data is a rich source of information about the behavior of individuals , social relations between individuals , patterns , e.g. , purchase patterns , in the behavior of individuals and so on .hence , extracting these hidden aspects is of a great importance to the business owners and analysts .the process of extracting these information from big data is called big data analytics and are applied using different techniques and methods . with the rise of recent web technologies and especially emergence of web 3.0 , recent applications which are working with big data aim to be implemented as distributed , scalable and widely accessible service on the web .cloud computing paradigm makes applications available as services from anywhere in the world by shifting the infrastructure to the network .the following properties of cloud computing has made it a good candidate for hosting deployments of data - intensive applications : -it produces virtually unlimited capacity by providing means to consume the amount of it resources that is actually needed .-it reduces costs by only paying for what you use ( pay - as - you - go ) .-it reduces the time that it systems have to spend on managing and supporting infrastructure .for example in 2007 new york times aimed to build a service for users to have access to any new york times issue since 1851 , a service called timesmachine .the big challenge was serving a bulk of 11 millions of articles in the form of pdf files . to process these 4 terabytes of files they decided to use amazon elastic compute cloud ( ec2 ) and simple storage service ( s3 ) .the source data was uploaded to s3 and then a cluster of hadoop ec2 amazon machine images ( amis ) was started . with parallel running of 100 ec2 amis ,the task of reading the source data from s3 , converting it to pdf and storing it back to s3 was completed within 36 hours . at the beginning, big data analytics started to be done using the existing advanced analytics disciplines such as data mining techniques .authors in use supervised learning techniques for link prediction in social networks . as another example , grahne et .propose method for mining sets of items that have been frequently purchased together . since the clique detection techniques in np - hard by nature , grahne and his colleagues propose a heuristic method to partially overcome this problem .authors in use the same itemset mining technique for detecting spam groups in consumer review systems . 
the first author and his colleagues have also used mining techniques in and iterative techniques in to identify collusion groups in the log of online rating systems. traditional data management techniques may work fine as long as data are structured .however , one of the main characteristics of big data is the wide variety of data types . in most of the casesdata are unstructured or incomplete .hence , existing data mining or management techniques can not handle big data . to solve this issue big data management systems have been proposed .current big data management systems in cloud use some well - known application logics can be categorized to mapreduce , sql - like and hybrid : the most well - known big data management system is apache hadoop , a framework for running data intensive applications on clusters of commodity hardware .hadoop , which has been very successful and widely used in industry and academia , is a an open source java implementation of mapreduce .mapreduce is a simple but powerful programming model that is designed to enable programmers to develop scalable data - intensive applications on clusters of commodity pcs .the mapreduce model is inspired by the map and the reduce functions in functional programming languages .a map instruction partitions a computational task into some smaller sub - tasks .these sub - tasks are executed in the system in parallel .a reduce function is used to collect and integrate all results from sub - tasks in order to build up the main task s outcome .more than 80 companies and organizations ( e.g. aol , linkedin , twitter , adobe , visa ) are using hadoop for analytic of their large scale data .some efforts have been done to add sql - like flavor on top of mapreduce as many programmers would prefer working with sql as a high - level declarative language instead of low - level procedural programming using mapreduce .pig latin and sawzal are examples of such tools .finally , some systems have been designed with the main goal of bringing some familiar relational database concepts ( such as .tables and columns ) and a subset of sql to unstructured world of hadoop while enjoying hadoop s the extensibility and flexibility .an example of hybrid solutions is hadoopdb project that tries to combine the scalability of mapreduce with the performance and efficiency of parallel databases . 
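as a toy illustration of the map/reduce programming model sketched above, the following plain-python word-count example mimics the two phases; it is not the hadoop api, and in a real deployment the map and reduce calls would be distributed over a cluster of machines.

```python
# toy word-count in the map/reduce style: map emits (key, value) pairs for each
# sub-task, reduce collects and integrates them into the final outcome.
from collections import defaultdict

def map_phase(document):
    # emit one (word, 1) pair per occurrence; each document is an independent sub-task
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # group intermediate pairs by key and sum the values
    grouped = defaultdict(int)
    for key, value in pairs:
        grouped[key] += value
    return dict(grouped)

documents = ["big data needs big clusters", "crowds curate big data"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]  # parallelisable step
print(reduce_phase(intermediate))   # e.g. {'big': 3, 'data': 2, ...}
```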
in parallel with recent trend that convinces companies to give up building and managing their own data centers byusing computing capacity of cloud providers , many companies are willing to outsource some of their jobs given the low costs of transferring data over the internet and high costs of managing complicated hardware and software building blocks .therefore amazon concluded that cloud computing can allow having access to a workforce that is based around the world and is able to do things that computer algorithms are not really good for .therefore amazon launched machanical turk ( mturk ) system as a crowdsourcing internet marketplace where now has over 200k workers in 100 different countries .crowdsourcing is the process of enlisting a crowd of people to solve a problem .the idea of crowdsourcing was introduced first by jeff .howe in 2006 .since then , an enormous amount of efforts from both academia and industry has been put into this area and so many crowdsourcing platforms and research prototypes ( either general or special purpose)have been introduced .amazon mechanical turk(mturk ) , crowdflower , wikipedia and stackoverflow are examples of well - known crowdsourcing platforms . to crowdsource a problem , the problem owner , also called the _ requester _ , prepares a request for crowd s contributions and submits it to a crowdsourcing platform .this request , also referred to as the _ crowdsourcing task _ or shortly as the _ task _ , consists of a description of the problem that is asked to be solved , a set of requirements necessary for task accomplishment , a possible criteria for evaluating quality of crowd contributions and any other information which can help workers produce contributions of higher quality levels .people who are willing to contribute to the task , also called _ workers _ , select the task , if they are eligible to do so , and provide the requester with their contributions .the contributions are sent to the requester directly or through the crowdsourcing platform .the requester may evaluate the contributions and reward the workers whose contributions have been accepted .several dimensions characterized a crowdsourcing task , each of which impacting various aspects of the task from outcome quality to execution time or the costs .task definition is important in the success of a crowdsourcing process .a poorly designed task can result in receiving low quality contributions , attracting malicious workers or leaving the task unsolved due to unnecessary complications .therefore , it is highly recommended to design robust tasks .a robust task is designed so that it is easier to do it rather than to cheat .moreover , a requester should make sure that she has provided the workers with all information required for doing the task to increase the chance of receiving contributions of higher quality levels .the importance of this dimension is because of its direct impact mainly on the outcome quality , task execution time and number of recruited workers .quality of workers who contribute to a task can directly impact the quality of its outcome .low quality or malicious workers can produce low quality contributions and consequently waste the resources of the requester .research shows that recruiting suitable workers can lead to receiving high quality contributions .a suitable worker is a worker whose profile , history , experiences and expertise highly matches the requirements of a task . 
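to make the notion of a suitable worker concrete, the following is a hypothetical sketch of profile-based matching that scores how well a worker's declared skills cover a task's requirements; the field names and the overlap measure are illustrative choices, not those of any particular crowdsourcing platform.

```python
# hypothetical profile-based worker selection: fraction of task requirements
# covered by a worker's declared skills (a jaccard-style overlap).
def suitability(worker_skills, task_requirements):
    worker_skills, task_requirements = set(worker_skills), set(task_requirements)
    if not task_requirements:
        return 1.0
    return len(worker_skills & task_requirements) / len(task_requirements)

workers = {"w1": {"entity-resolution", "sql"}, "w2": {"image-tagging"}}
task = {"entity-resolution", "retail-products"}
ranked = sorted(workers, key=lambda w: suitability(workers[w], task), reverse=True)
print(ranked)   # workers ordered by how well they match the task requirements
```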
in a crowdsourcing process, workers might be recruited through various methods such as open - call , publish / subscribe , friend - based , profile - based and team - based . during the execution of the task ,the requester may manually or automatically control the workflow of the task and manipulate the workflow or the list of the workers who are involved in the task in order to increase the chance of receiving high quality contributions .moreover , workers may increase their experience while contributing to a task by receiving real - time feedback from other workers or requester .the feedback received in real - time , and before final submission of the worker s contribution , can assist her with pre - assessing her contribution and change it so that satisfies the task requirements .real - time workflow control and giving feedback can directly impact the outcome quality , the execution time and also the cost of the task , so they should be taken into account when studying crowdsourcing processes .assessing the quality of contributions received from the crowd is another important aspect of a crowdsourcing process .quality in crowdsourcing is always under question .the reason is that workers in crowdsourcing systems have different levels of expertise and experiences ; they contribute with different incentives and motivations ; and even they might be included in collaborative unfair activities .several approaches are proposed to assess quality of workers contributions such as expert review , input agreement , output agreement , majority consensus and ground truth .rewarding the workers whose contributions have been accepted or punishing malicious or low quality workers can directly impact their chance , eligibility and motivation to contribute to the future tasks .rewards can be monetary ( extrinsic ) or non - monetary ( intrinsic ) .research shows that the impact of intrinsic rewards , e.g. , altruism or recognition in the community , on the quality of the workers contributions is more than the monetary rewards .choosing an adequate compensation policy can greatly impact the number of contributing workers as well as the quality of their contributions .hence , compensation policy is an important aspect of a crowdsourcing process .a single crowdsourcing task might be assigned to several workers .the final outcome of such a task can be one or few f the individual contributions received from workers or an aggregation of all of them .voting is an example of the tasks that crowd contributions are aggregated to build up the final task outcome .in contrast , in competition tasks only one or few workers contributions are accepted and rewarded .each of the individual contributions has its own characteristics such as quality level , worker s reputation and expertise and so many other attributes .therefore , combining or even comparing these contributions is a challenging tasks and choosing a wrong aggregation method can directly impact the quality of the task outcome .in this section , we first propose an overview of the concept of combining crowd and big data analytics .we then propose a framework in order to simplify understanding and studying hybrid human - machine big data approaches . as mentioned earlier ,one of the main characteristics of big data is the wide variety of data types .data might be from different types ; they might be semi - structured , unstructured or structured but incomplete . 
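as an illustration of the majority-consensus quality-control approach mentioned above, the following is a minimal sketch; the agreement threshold is an illustrative choice.

```python
# majority consensus: the same micro-task is assigned to several workers and
# the most frequent answer is taken as the task outcome.
from collections import Counter

def majority_consensus(answers, min_agreement=0.5):
    """answers: list of worker answers for one task; returns (label, support)."""
    label, votes = Counter(answers).most_common(1)[0]
    support = votes / len(answers)
    # below the agreement threshold the task could instead be escalated to an expert
    return (label if support >= min_agreement else None), support

print(majority_consensus(["match", "match", "no-match"]))   # ('match', 0.666...)
```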
traditional or advanced analytics can not handle such a variable tremendously large data .so , they should be handled using the big data analytics . as we studied in the previous section ,generally , the big data analytics rely on the deterministic or learning techniques which use the computing power of machines to process big data and extract necessary information , patterns , etc . due to huge size and high level of complexity of big data , the techniques used for data analysis usually leverage heuristic or learning techniques , and hence , they are not able to guarantee their performance . in some casesthe requester might need a predefined specific level of precision and accuracy for the results , but the existing techniques can not provide that level of precision . in these cases , to generate results having the specified quality level a crowd of people are employed to complement the performance of the machines .to simplify understanding these concepts we study crowder and wallmart crowdsourcing project as two related work .we study these systems later in section [ sec : dim ] to study how they deal with the inclusion of crowd into cloud .crowder is a hybrid crowd - machine approach for entity resolution .crowder first employs machines to perform an initial analysis on the data and find most likely solutions and then employs people to refine the results generated by machines .wall - mart product classification project is another example of hybrid - crowd - machine approaches proposed for big data analytics . in this project, a huge volume of data is constantly being received from various retailers from all across the country .the sent data is structured but in most of the cases it is incomplete .so , wall - mart can not use only machines for the purpose of entity matching and resolution .the results , as tested , do not have the required level of accuracy . to overcome this problem, wall - mart selects some individuals with adequate expertise to refine the results produced by machine .big data analytics can be characterized using various dimensions .some of these dimensions are related to the nature of big data and how the machine - based approaches deal with it .availability , multi - tenancy , scalability , etc .are examples of these dimensions .these dimensions are well studied in several related work . in this work ,we do not study these dimensions and identify new dimensions that emerge when a crowd of people in added to big data analytics .from this view point , we identify three important dimensions : _ worker selection _ , _ task decomposition _ and _quality control_. in the followings , we describe these dimensions and study crowder and wall - mart project as two existing case studies from point of view of these dimensions . figure [ fig : tax ] depicts a representation of the identified dimensions and important factors that should be taken into account when studying each dimension .when somebody tries to empower big data analytics with a crowd of workers , there are several points that she should have in mind .the first point is how to assess eligibility of workers .doing some big data analyses may need specific skills and expertise .in such cases only workers having suitable profiles containing those expertise should be selected .defining good worker selection policies can greatly impact quality of analytics . 
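the hybrid pattern described above, machines first with the crowd refining only the uncertain cases, can be sketched as a simple routing rule; the similarity scores and thresholds below are illustrative and are not those used by crowder or the wall-mart project.

```python
# simplified hybrid entity-resolution routing: a machine similarity score
# auto-resolves the easy record pairs and only the ambiguous middle band is
# sent to the crowd for verification.
def route_pair(machine_similarity, low=0.2, high=0.9):
    if machine_similarity >= high:
        return "auto-match"
    if machine_similarity <= low:
        return "auto-non-match"
    return "ask-crowd"   # human refinement for the uncertain band

pairs = {("ipad 2 16gb", "apple ipad2 16 gb"): 0.95,
         ("ipad 2 16gb", "dell monitor"): 0.05,
         ("ipad 2 16gb", "ipad mini 16gb"): 0.55}
for pair, score in pairs.items():
    print(pair, "->", route_pair(score))
```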
in wall - mart project ,since workers are selected manually from a private crowd , their profiles are checked against the requirements and only suitable workers are chosen . in crowderno worker selection policies are put in place and everybody can contribute to the data analysis process .the second point in worker selection is that , workers should be selected so that the chance of deception is decreased .human - enhanced systems are constantly subject to misbehavior and deceptive activities .the worker selection should be applied so that both individual and collaborative deceptive activities have a very low chance to happen .selecting untrustworthy workers may result in producing dubious low quality results , so it is crucial to select trustworthy workers .crowder does not apply any trust assessment technique while selecting workers , but the wall - mart project people are selected from a private crowd , so they generally have a minimum level of trustworthiness .the third parameter is automaticity , i.e. , how to approach and select workers . in some systems ,workers are selected manually . in this casethe system or task owner looks for people with suitable profiles and asks them to contribute to her task .wall - mart project follows a manual worker selection method . on the other hand , in automatic methods , system on behalf ofthe task owner and , based on the task requirements and people profile , selects and recruits a group of suitable workers .crowder recruits its required workers automatically from people at the amazon mechanical turk . by task decomposition , we mean the process of breaking down a complex analytical task into some smaller sub - tasks that are easier to accomplish mainly in parallel .these sub - tasks should be divided among the machines and humans involved in the system .the first factor is how to decide on who should do what ? " .more precisely , what is the basis on which the task owner or the system decides to assign a sub - task to a human or a machine to do it .task assignment should be done based on several factors .the first factor is the type of the task .computers are good in performing computational tasks , but there are tasks that require some levels of intelligence to be done .there are several tasks types that fit in this category such as image tagging , photo description , audio transcription and so on .these tasks are better to be assigned to a human .the type of the task is an important factor to decide to assign the task to a machine or a human . in both crowder and wall - mart projects ,tasks are first assigned to machines to do .then , the results obtained from machines are given to people to refine the results so that they meet the precision requirements .after a task owner decides to assign a task to humans , the next challenges emerges : who is suitable to do the task ? "several methods are proposed for eligibility assessment to answer this question .generally speaking , the humans whose profiles have a higher level of match with the task requirements are more suitable workers to participate in the task .wall - mart selects people manually and they check the suitability of workers when they select them to participate , but in crowder no suitability assessment approaches are used and everybody can contribute to the tasks . by quality control , we mean all the activities performed to make sure that the outcome of a hybrid human - machine big data task has an accepted level of quality . 
since , the task is performed by both machines and human and then the individual contributions are aggregated to build up the final task outcome , quality of the final task s outcome depends on two parameters : quality of the individual contributions and the aggregation algorithm .quality of each single contribution is an important factor that directly impacts the quality of the final task outcome . quality of humans contributions depends on many parameters such as quality of the contributor , task requirements , the process of contribution generation ( e.g. , time of the process , cost of the process , etc . ) and so on .quality of machines contributions mainly depends on the algorithm that is used to generate the contributions .many quality control approaches are proposed for assessing quality of contributions in human - involved tasks such as expert review , ground truth , voting , input agreement , output agreement , etc .wall - mart project uses the expert review method for assessing quality of single contributions received from machines .but they do not control the quality of contributions received from human , because of the way they have recruited them . in crowder , the ground truth method is used to assess quality of machines contributions .then , the contributions passing a minimum threshold are sent to crowd for final assessment . to increase the chance of receiving quality results from humans , the crowder ,encourages them to provide a reason why they think the entities match .this is a quality assurance ( design time ) technique called robust task design .the aggregation algorithm is another important parameter impacting quality of the final outcome . for the simplest form in which all contributions are generated by only humans or machines ,the aggregation algorithms take into account many parameters such as the quality of individual contributions , quality of the contributor , statistical distribution of contributions , etc . . in the context of hybrid human - machine systemsthe situation is more complex .that is because , in addition to all those parameters , a priority between machine and human should be considered as well .assume that in a special case , a disagreement occurs between the human response and machine s result .which one should be selected as the correct result ? in almost all existing hybrid systems , the priority is given to human s response whenever a contradiction happens .this is the case in crowder and wall - mart project as well . in both casesthe initial results are generated by machines and refined and curated by humans .the hybrid big data analytics is a young but fast growing research area .billions of people all around the world are equipped with mobile devices all capable of creating gigantic amounts of unstructured data forces the big data analytics area to leverage human computational power towards analyzing big data .the involvement of humans in the big data analytics , while resolves some challenges , creates new challenges that need more attention and investigations . in this sectionwe bring some of these challenges and discuss how they probably can be solved in future . 
task decomposition as described in section [ sec : frm ] is an important dimension in hybrid human - machine big data analytics .task decomposition has several challenging aspects .the first aspect is how to decompose the task .should the task be decomposed automatically or humans should be involved in ?automated task decomposition is faster , but probably not enough accurate . on the other hand ,involving public crowd with different levels of knowledge and experience as well as various and sometimes dubious incentives and motivations is a risky job that may waste time and money of the task owners .employing expert crowd to do the decomposition is another solution which incurs higher costs , may cause more delays , etc . at the moment , the existing hybrid systems employ the manual ( expert - based ) approaches for decomposing the tasks .the next challenge is assigning decomposed sub - tasks to humans and machines . deciding on who should do what " is seriously challenging .the first challenge is to decide which part of the work should be assigned to machine and which parts should be given to crowd to be done .the next challenging aspect here is crowd selection .assigning sub - tasks to crowd can easily create security and privacy problems .there are cases in which assigning whole the problem to one worker can cause serious privacy and security problems .even these problems may occur if a large number of tasks of a requester goes to one worker , or a small group of collaborating workers , the security and privacy of the requester easily breach .finding a privacy aware worker recruitment approach which is also security aware as too and in the same times has a reasonable performance can be a potential direction of research in the future of hybrid human - machine big data analytics .furthermore , considering the size of the big data , the high velocity of data arrival and unstructured nature of big data makes the problem more complicated .let s use an example to explain the challenge .assume that in a big data management system people are employed to extract entities from a given text .as the size of the problem is too big , a very large number of workers are required to do the job .assume that the simple task is extracting entities from a given text .there are several approaches to allocate tasks to users . letting every worker to see all tasks simply can overwhelm workers with stacks of tasks and it makes it impossible for them to find the jobs that suits their expertise .so , it is necessary to have a mechanism for assigning tasks to workers .assigning tasks according to the workers profiles is not also possible , because the types of data may change suddenly and system should change the worker selection criteria on the fly to match the new requirements . also data is unstructured , hence , matching the data with the profile of the workers in real - time is another challenge that needs investigations .moreover , the unstructured nature of big data makes it challenging to use publish / subscribe model for worker selection . 
in publish/ subscribe approach , workers apply to receive updates or tasks from a specific category but because big data is usually unstructured it is hard to fit arrived data items in particular predefined categories .so it is not possible to assign tasks to workers based on this approach .finding a task assignment approach is an interesting and challenging aspect of human - enhanced big data analytics .the final outcome of a hybrid big data computation task is an aggregate of all contributions received from both machines and humans .selecting a inadequate aggregation algorithm can easily lead to producing low quality results .the size of the big data is very large ; the data is rapidly changing and unstructured .all these characteristics emphasize that the traditional aggregation algorithms are unable to handle this problem .simple algorithms even , the ones that rely on automated methods and do not involve humans in their computations such as and , can not show an accepted performance in the area . on the other hand , when it comes to crowd contributions , quality is always a serious question .people have the potential to exploit the system in order to gain unfair benefits and they do cleverly so that they can easily trap any aggregation algorithm . selecting a suitable aggregation algorithm is a serious challenge even in traditional big data algorithms .the combination of human and machine raises even more challenges . in some cases , such as computation tasks ,the credibility of the results generated by machines is far more than human results , while in hits the credibility of crowd s contributions is much higher than machine s .also , there might be cases in which there are contradictions between the machine s outcome and a worker s contribution . which one should have the higher priority in this case ?machine or human ? answering these questions needs deeper investigations and research supported with excessive experimentations .availability of service is one of the main attributes of cloud and big data services . according to the agreements between the service provider and the customer ,the system is responsible for providing on - demand resources for the requester .the existing techniques have solved the availability problem reasonably well . on the other hand , the crowd workers usually are not full - time workers and do not have obligations to be on - time , on - call or even available in a specific period of time .some systems have solved this problem by paying workers to be on - call .considering the size of the problems in big data , paying on - call workers drastically increases the costs and is not feasible to apply to this area .moreover , in addition to the size of the data , big data is unstructured and also may come from disparate data sources .therefore , analyzing such a diverse data in terms of type requires having a large crowd of on - call workers with a very broad range of expertise .providing and keeping on - call such a large crowd having diverse set of expertise is a challenging task which in most of cases is not feasible , or at least imposes high costs to the system. workers may come and get engaged in a task , then suddenly leave it without any notice or clear reason .this crowd characteristic raises another challenge .assume that a worker has started a task , e.g. , tagging data in an online stream , and then suddenly leaves the system due to a communication problem or intentionally . 
what would be the best scenario to solve this issue ?how is the best worker whom should be replaced with the left worker ? should the task be restarted from beginning or it should resume executing from a specific milestone ? how these milestones should be defined ? and many other problems which all emerge because of leaving and joining workers in a human - enhanced big data management system .all these crowd characteristics raise a great concern on the availability of service in hybrid big data systems .therefore , availability is probably another promising and interesting future research direction in the area of hybrid big data .the collaborative and social nature of web 2.0has created an opportunity for emergence of a wide variety of human - centric systems , such as social networks , blogs , online markets , etc . , in which people spend a large portion of their daily life . the fast growth of smart phones has made these applications more available than ever .this has led to the era of big data where users deal with data of a tremendous volume , wide variety of types , a rapid velocity and variable veracity . to understand this gigantic era ,users need to rely on big data analytics to extract useful information from such a huge pool of data .recently , a trend has emerged towards analyzing big data using human intelligence and wisdom of the crowds .this trend , creates great opportunities for big data analytics to create more accurate analytics and results , however they raise many challenges that need more research and investigations . in this paper, we have studied the area of hybrid human - machine big data analytics .we first studied the crowdsourcing and big data areas and then proposed a framework to study the existing hybrid big data analytics using this framework .we then introduced some significant challenges we believe to point to new research directions in the area of hybrid big data analytics .we believe that in the near future the trend towards hybrid systems which contain human as a computing power will dramatically increase and a great amount of efforts from both industry and academia will be put in studying hybrid human - enabled systems .as many researches show the emergences of these human - centric systems has created a great challenge on the security and privacy of future big data services .also , the existence dominance of casual workers in these hybrid systems makes it harder than ever to guarantee availability and quality of services .social networks , global crowdsourcing systems and other types of mass collaborative systems on the web are pools rich of work forces that can easily participate in big data analytics . 
at the moment these systems are almost isolated and it is very hard to define a task on all of these pools of human resources .we believe that in the future it will be a need for a generic people management system on the web which is capable of handling profiles , records and histories of people all around the web and acts as a broker for all systems which request for human involvement .such a people management system can simplifies handling security and privacy issues as well as finding available crowd workers for tasks which need high levels of availability .mohammad allahbakhsh , boualem benatallah , aleksandar ignjatovic , hamid reza motahari - nezhad , elisa bertino , and schahram dustdar .quality control in crowdsourcing systems : issues and directions ., 17(2):7681 , 2013 .mohammad allahbakhsh , aleksandar ignjatovic , boualem benatallah , seyed - mehdi - reza beheshti , elisa bertino , and norman foo .reputation management in crowdsourcing systems . in _ collaborative computing: networking , applications and worksharing ( collaboratecom ) , 2012 8th international conference on _ , pages 664671 .ieee , 2012 .mohammad allahbakhsh , aleksandar ignjatovic , boualem benatallah , seyed - mehdi - reza beheshti , elisa bertino , and norman foo .collusion detection in online rating systems . in _ proceedings of the 15th asia pacific web conference ( apweb 2013 ) _ , pages 196207 , 2013 .mohammad allahbakhsh , aleksandar ignjatovic , boualem benatallah , seyed - mehdi - reza beheshti , norman foo , and elisa bertino .representation and querying of unfair evaluations in social rating systems . , 2013 .h. amintoosi and s.s .a trust - based recruitment framework for multi - hop social participatory sensing . in _ distributed computing in sensor systems ( dcoss ) , 2013 ieee international conference on _ , pages 266273 , may 2013 .lars backstrom and jure leskovec .supervised random walks : predicting and recommending links in social networks . in_ proceedings of the fourth acm international conference on web search and data mining _ , wsdm 11 , pages 635644 , new york , ny , usa , 2011 .michael s. bernstein , joel brandt , robert c. miller , and david r. karger .crowds in two seconds : enabling realtime crowd - powered interfaces . in _ proceedings of the 24th annual acm symposium on user interface software and technology_ , uist 11 , pages 3342 , new york , ny , usa , 2011 .alessandro bozzon , marco brambilla , and stefano ceri . answering search queries with crowdsearcher . in _ proceedings of the 21st international conference on world wide web _ , www 12 , pages 10091018 , new york , ny , usa , 2012 .acm .alfredo cuzzocrea , il - yeol song , and karen c davis .analytics over large - scale multidimensional data : the big data revolution ! in _ proceedings of the acm 14th international workshop on data warehousing and olap _ , pages 101104 .acm , 2011 .murat demirbas , murat ali bayir , cuneyt gurcan akcora , yavuz selim yilmaz , and hakan ferhatosmanoglu .crowd - sourced sensing and collaboration using twitter . in _ world of wireless mobile and multimedia networks ( wowmom ) , 2010 ieee international symposium on a _ , pages 19 .ieee , 2010 .steven dow , anand kulkarni , scott klemmer , and bjrn hartmann . shepherding the crowd yields better work . in _ proceedings of the acm 2012 conference on computer supported cooperative work _ , cscw 12 , pages 10131022 , new york , ny , usa , 2012 .michael j. 
franklin , donald kossmann , tim kraska , sukriti ramesh , and reynold xin .crowddb : answering queries with crowdsourcing . in _ proceedings of the 2011 acm sigmod international conference on management of data_ , sigmod 11 , pages 6172 , new york , ny , usa , 2011 .acm .aleksandar ignjatovic , norman foo , and chung tong lee .an analytic approach to reputation ranking of participants in online transactions . in _ proceedings of the 2008 ieee / wic / acm international conference on web intelligence and intelligent agent technology - volume 01_ , pages 587590 , washington , dc , usa , 2008 .ieee computer society .aniket kittur boris smus jim laredoc maja vukovic jakob rogstadius , vassilis kostakos .an assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing . in _ proceeding of the fifth international aaai conference on weblogs and social media_. aaai , 2011 .natala j. menezes jenny j. chen and adam d. bradley .opportunities for crowdsourcing research on amazon mechanical turk . in _proceeding of the chi 2011 workshop on crowdsourcing and human computation _ ,may 2011 .aniket kittur , susheel khamkar , paul andr , and robert kraut .crowdweaver : visually managing complex crowd work . in _ proceedings of the acm 2012 conference on computersupported cooperative work _ , pages 10331036 .acm , 2012 .anand p kulkarni , matthew can , and bjoern hartmann .turkomatic : automatic recursive task and workflow design for mechanical turk . in _chi11 extended abstracts on human factors in computing systems _ ,pages 20532058 .acm , 2011 .christopher olston , benjamin reed , utkarsh srivastava , ravi kumar , and andrew tomkins .pig latin : a not - so - foreign language for data processing . in _ proceedings of the 2008 acmsigmod international conference on management of data _ , sigmod 08 , pages 10991110 , new york , ny , usa , 2008 .aditya g. parameswaran , hector garcia - molina , hyunjung park , neoklis polyzotis , aditya ramesh , and jennifer widom .crowdscreen : algorithms for filtering data with humans . in _ proceedings of the 2012 acmsigmod international conference on management of data _ , sigmod 12 , pages 361372 , new york , ny , usa , 2012 .alexander j. quinn and benjamin b. bederson .human computation : a survey and taxonomy of a growing field . in _ proceedings of the 2011 annual conference on human factors in computing systems_ , chi 11 , pages 14031412 , new york , ny , usa , 2011 .acm .maja vukovic and claudio bartolini . towards a research agenda for enterprise crowdsourcing .in tiziana margaria and bernhard steffen , editors , _ leveraging applications of formal methods , verification , and validation _ ,volume 6415 of _ lecture notes in computer science _ , pages 425434 .springer berlin / heidelberg , 2010 .maja vukovic , mariana lopez , and jim laredo .peoplecloud for the globally integrated enterprise . in _service - oriented computing .icsoc / servicewave 2009 workshops _ ,volume 6275 of _ lecture notes in computer science _ , pages 109114 .springer berlin / heidelberg , 2010 .gang wang , christo wilson , xiaohan zhao , yibo zhu , manish mohanlal , haitao zheng , and ben y zhao . serf and turf : crowdturfing for fun and profit . in_ proceedings of the 21st international conference on world wide web _ , pages 679688 .acm , 2012 .yafei yang , qinyuan feng , yan lindsay sun , and yafei dai .reptrap : a novel attack on feedback - based reputation systems . 
in _ proceedings of the 4th international conference on security and privacy in communication networks _ , page 8 . acm , 2008 .
the increasing application of social and human - enabled systems in people 's daily life on the one hand , and the fast growth of mobile and smartphone technologies on the other , have resulted in the generation of tremendous amounts of data , also referred to as big data , and a need for analyzing these data , i.e. , big data analytics . recently a trend has emerged to incorporate human computing power into big data analytics in order to overcome some shortcomings of existing approaches , such as dealing with semi - structured or unstructured data . including the crowd in big data analytics creates new challenges such as security , privacy and availability issues . in this paper we study hybrid human - machine big data analytics and propose a framework for examining these systems from the point of view of crowd involvement . we identify some open issues in the area and propose a set of research directions for the future of big data analytics .
one of the key challenges in neuroscience is how the human brain activities can be mapped to the different brain tasks . as a conjunction between neuroscience and computer science , multi - voxel pattern analysis ( mvpa ) addresses this question by applying machine learning methods on task - based functional magnetic resonance imaging ( fmri ) datasets .analyzing the patterns of visual objects is one of the most interesting topics in mvpa , which can enable us to understand how brain stores and processes the visual stimuli .it can be used for finding novel treatments for mental diseases or even creating a new generation of the user interface in the future .technically , there are two challenges in previous studies .the first challenge is decreasing sparsity and noise in preprocessed voxels .since , most of the previous studies directly utilized voxels for predicting the stimuli , the trained features are mostly sparse , high - dimensional and noisy ; and they contain trivial useful information .the second challenge is increasing the performance of prediction .most of the brain decoding problems employed binary classifiers especially by using a one - versus - all strategy . in addition , multi - class predictors are even mostly based on the binary classifiers such as the error - correcting output codes ( ecoc ) methods .since task - based fmri experiments are mostly imbalance , it is so hard to train an effective binary classifier in the brain decoding problems . for instance , consider collected data with 10 same size categories . since this dataset is imbalance for one - versus - all binary classification , most of the classical algorithms can not provide acceptable performance . for facing mentioned problems, this paper proposes anatomical pattern analysis ( apa ) as a general framework for decoding visual stimuli in the human brain .this framework employs a novel feature extraction method , which uses the brain anatomical regions for generating a normalized view . 
in practice, this view can enable us to combine homogeneous datasets .the feature extraction method also can automatically detect the active regions for each category of the visual stimuli .indeed , it can decrease noise and sparsity and increase the performance of the final result .further , this paper develops a modified version of imbalance adaboost algorithm for binary classification .this algorithm uses a supervised random sampling and penalty values , which are calculated by the correlation between different classes , for improving the performance of prediction .this binary classification will be used in a one - versus - all ecoc method as a multi - class approach for classifying the categories of the brain response .the rest of this paper is organized as follows : in section 2 , this study briefly reviews some related works .then , it introduces the proposed method in section 3 .experimental results are reported in section 4 ; and finally , this paper presents conclusion and pointed out some future works in section 5 .there are three different types of studies for decoding visual stimuli in the human brain .pioneer studies just focused on the special regions of the human brain , such as the fusiform face area ( ffa ) or parahippocampal place area ( ppa ) .they only proved that different stimuli can provide different responses in those regions , or found most effective locations based on different stimuli .the next group of studies introduced different correlation techniques for understanding similarity or difference between responses to different visual stimuli .haxby et al. recently showed that different visual stimuli , i.e. human faces , animals , etc ., represent different responses in the brain .further , rice et al . proved that not only the mentioned responses are different based on the categories of the stimuli , but also they are correlated based on different properties of the stimuli .they used gist technique for extracting the properties of stimuli and calculated the correlations between these properties and the brain responses .they separately reported the correlation matrices for different human faces and different objects ( houses , chairs , bottles , shoes ) .the last group of studies proposed the mvpa techniques for predicting the category of visual stimuli .cox et al . utilized linear and non - linear versions of support vector machine ( svm ) algorithm .norman et al . argued for using svm and gaussian naive bayes classifiers .carroll et al .employed the elastic net for prediction and interpretation of distributed neural activity with sparse models .varoquaux et al .proposed a small - sample brain mapping by using sparse recovery on spatially correlated designs with randomization and clustering .their method is applied on small sets of brain patterns for distinguishing different categories based on a one - versus - one strategy .mcmenamin et al .studied subsystems underlie abstract - category ( ac ) recognition and priming of objects ( e.g. , cat , piano ) and specific - exemplar ( se ) recognition and priming of objects ( e.g. , a calico cat , a different calico cat , a grand piano , etc . ) .technically , they applied svm on manually selected rois in the human brain for generating the visual stimuli predictors .mohr et al . compared four different classification methods , i.e. 
l1/2 regularized svm , the elastic net , and the graph net , for predicting different responses in the human brain .they show that l1-regularization can improve classification performance while simultaneously providing highly specific and interpretable discriminative activation patterns .osher et al .proposed a network ( graph ) based approach by using anatomical regions of the human brain for representing and classifying the different visual stimuli responses ( faces , objects , bodies , scenes ) .-0.1 in -0.27 inblood oxygen level dependent ( bold ) signals are used in fmri techniques for representing the neural activates . based on hyperalignment problem in the brain decoding ,quantity values of the bold signals in the same experiment for the two subjects are usually different .therefore , mvpa techniques use the correlation between different voxels as the pattern of the brain response .as depicted in figure 1 , each fmri experiment includes a set of sessions ( time series of 3d images ) , which can be captured by different subjects or just repeating the imaging procedure with a unique subject .technically , each session can be partitioned into a set of visual stimuli categories .indeed , an independent category denotes a set of homogeneous conditions , which are generated by using the same type of photos as the visual stimuli .for instance , if a subject watches 6 photos of cats and 5 photos of houses during a unique session , this 4d image includes 2 different categories and 11 conditions .consider of scans images for each session of the experiment . can be written as a general linear model : , where of scans ( n ) categories ( regressors) denotes the design matrix ; is the noise ( error of estimation ) ; and also of categories 3d images denotes the set of correlations between voxels for the categories of the session .design matrix can be calculated by convolution of onsets ( or time series ) and the hemodynamic response function ( hrf ) .this paper uses generalized least squares ( gls ) approach for estimating optimized solution , where is the covariance matrix of the noise ( ) .now , this paper defines the positive correlation for all categories as the active regions , where denotes the estimated correlation , and are the correlation and _ positive _ correlation for the -th category , respectively .moreover , the data is partitioned based on the conditions of the design matrix as follows : where denotes the set of all conditions in each session , and are respectively the number of categories in each session and the number of conditions in each category .further , = \{number of scans 3d images } denotes the 4d images for the -th category and -th condition in the design matrix .now , this paper defines the sum of all images in a condition as follows : \label{eqrawfeatures}\ ] ] where ] is the ] is the ] denotes the index of -th voxel of -th atlas region ; and is the set of indexes of voxels in the -th region . \in a_l \implies { \gamma}_{q_r}^p(l)=\frac{1}{\mid a_l \mid}\sum_{v = 1}^{\mid a_l \mid}{({\xi}_{q_r}^{p})[a_v]}=\frac{1}{\mida_l \mid}\sum_{v = 1}^{\mid a_l \mid}{({\xi}_{q_r}^{p})[x_v , y_v , z_v ] } \label{featureextraction}\ ] ] this paper randomly partitions the extracted features ,\dots, ] to the train set and the test set . 
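to make the estimation and feature - extraction steps above concrete , the following sketch computes the generalized least squares estimate of the category responses and then averages the registered values inside each anatomical atlas region . this is a minimal illustration under stated assumptions , not the authors ' implementation : the function names , the array shapes and the use of plain numpy are illustrative choices , and the noise covariance matrix is taken as given .

```python
import numpy as np

def gls_beta(Y, X, Sigma):
    """Generalized least squares estimate of the category responses.

    Y     : (n_scans, n_voxels) preprocessed BOLD signal for one session
    X     : (n_scans, n_categories) design matrix (onsets convolved with the HRF)
    Sigma : (n_scans, n_scans) noise covariance matrix

    Returns beta_hat of shape (n_categories, n_voxels), i.e.
    beta_hat = (X' Sigma^-1 X)^-1 X' Sigma^-1 Y.
    """
    Si = np.linalg.inv(Sigma)
    return np.linalg.solve(X.T @ Si @ X, X.T @ Si @ Y)

def positive_correlation(beta_hat):
    """Keep only the positive responses ('active regions') for every category."""
    return np.maximum(beta_hat, 0.0)

def atlas_features(condition_img, atlas_img, n_regions):
    """Average the voxels of a registered condition image inside each atlas region.

    condition_img : 1D array of voxel values, already registered to the atlas space
    atlas_img     : 1D integer array of the same length, region label per voxel
                    (0 = outside any region)
    Returns a feature vector with one mean value per region.
    """
    feats = np.zeros(n_regions)
    for r in range(1, n_regions + 1):
        mask = atlas_img == r
        if mask.any():
            feats[r - 1] = condition_img[mask].mean()
    return feats
```

in this sketch the registration of the condition images to the standard space is assumed to have been performed already , so the atlas averaging step operates directly on aligned voxel values .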
as a new branch of adaboost algorithm , algorithm 1 employs for training binary classification .then , is utilized for estimating the performance of the classifier .as mentioned before , training binary classification for fmri analysis is mostly imbalance , especially by using a one - versus - all strategy . as a result ,the number of samples in one of these binary classes is smaller than the other class . this paper also exploits this concept . indeed , algorithm 1 firstly partitions the train data to small and large classes ( groups ) based on the class labels .then , it calculates the scale of existed elements between two classes ; and employs this scale as the number of the ensemble iteration .here , denotes the floor function . in the next step , the large classis randomly partitioned to parts .now , train data for each iteration is generated by all instances of the small class , one of the partitioned parts of the large class and the instances of the previous iteration , which can not truly be trained . in this algorithm, function denotes the pearson correlation ; and $ ] is the train weight ( penalty values ) , which is considered for the large class .further , denotes any kind of weighted classification algorithm .this paper uses a simple classical decision tree as the individual classification algorithm ( ) . + generally , there are two techniques for applying multi - class classification .the first approach directly creates the classification model such as multi - class support vector machine or neural network .in contrast , ( indirect ) decomposition design uses an array of binary classifiers for solving the multi - class problems . asone of the classical indirect methods , error - correcting output codes ( ecoc ) includes three components , i.e. base algorithm , encoding and decoding procedures . as the based algorithm in the ecoc , this paper employs algorithm 1 for generating the binary classifiers ( ) .further , it uses a one - versus - all encoding strategy for training the ecoc method , where an independent category of the visual stimuli is compared with the rest of categories ( see figure 1.e ) .indeed , the number of classifiers in this strategy is exactly equal to the number of categories .this method also assigns the brain response to the category with closest hamming distance in decoding stage .data set is train set , denotes real class labels of the train sets , + classifier , + 1 .partition , where , are small and large classes .calculate based on number of elements in classes .+ 3 . randomlysample the .+ 4 . by considering ,generating classifiers : + 5 .construct and + 7.train .construct , as the set of instances can not truly trained in .if * : go to line ; * else : * return as final classifier . +this paper employs two datasets , shared by openfmri.org , for running empirical studies . as the first dataset , ` visual object recognition ' ( ds105 ) includes 71 sessions ( 6 subjects ) .it also contains 8 categories of visual stimuli , i.e. gray - scale images of faces , houses , cats , bottles , scissors , shoes , chairs , and scrambled ( nonsense ) photos .this dataset is analyzed in high - level visual stimuli as the binary predictor , by considering all categories except scrambled photos as objects , and low - level visual stimuli in the multi - class prediction .please see for more information . as the second dataset , ` word and object processing ' ( ds107 ) includes 98 sessions ( 49 subjects ) .it contains 4 categories of visual stimuli , i.e. 
words , objects , scrambles , consonants. please see for more information .these datasets are preprocessed by spm 12 ( www.fil.ion.ucl.ac.uk/spm/ ) , i.e. slice timing , realignment , normalization , smoothing .then , the beta values are calculated for each session .this paper employs the _ mni 152 t1 1 mm _( see figure 1.d ) as the reference image ( ) in eq .( 4 ) for registering the extracted conditions ( ) to the standard space ( ) .in addition , this paper uses _ talairach _ atlas ( contains regions ) in eq .( 5 ) for extracting features ( see figure 1.d ) .figures 2.a - c demonstrate examples of brain responses to different stimuli , i.e. ( a ) word , ( b ) object , and ( c ) scramble . here, gray parts show the anatomical atlas , the colored parts ( red , yellow and green ) define the functional activities , and also the red rectangles illustrate the error areas after registration .indeed , these errors can be formulated as the nonzero areas in the brain image which are located in the zero area of the anatomical atlas ( the area without region number ) .the performance of objective function on ds105 , and ds107 data sets is analyzed in figure 2.d by using different distance metrics , i.e. woods function ( w ) , correlation ratio ( cr ) , joint entropy ( je ) , mutual information ( mi ) , and normalized mutual information ( nmi ) .as depicted in this figure , the nmi generated better results in comparison with other metrics .figure 3.a and c illustrate the correlation matrix of the ds105 and ds107 at the voxel level , respectively .similarly , figure 3.b and d show the correlation matrix the ds105 and ds107 in the feature level , respectively .since , brain responses are sparse , high - dimensional and noisy at voxel level , it is so hard to discriminate between different categories in figure 2.a and c. by contrast , figure 2.b and d provide distinctive representation when the proposed method used the correlated patterns in each anatomical regions as the extracted features . ) on the error of registration .+ , title="fig:",scaledwidth=60.0% ] -0.1 in -0.3 in + ( a ) ( b ) + + ( c ) ( d ) -0.3 in the performance of our framework is compared with state - of - the - art methods , i.e. cox & savoy , mcmenamin et al . , mohr et al . , and osher et al . , by using leave - one - out cross validation in the subject level .further , all of algorithms are implemented in the matlab r2016a ( 9.0 ) by authors in order to generate experimental results .tables 1 and 2 respectively illustrate the classification accuracy and area under the roc curve ( auc ) for the binary predictors based on the category of the visual stimuli .all visual stimuli in the dataset ds105 except scrambled photos are considered as the object category for generating these experimental results . as depicted in the tables 1 and 2 ,the proposed algorithm has achieved better performance in comparison with other methods because it provided a better representation of neural activities by exploiting the anatomical structure of the human brain .table 3 illustrates the classification accuracy for multi - class predictors . in this table , ` ds105 ' includes 8 different categories ( p=8 classes ) and ` ds107 ' contains 4 categories of the visual stimuli . as another 4 categories dataset , `all ' is generated by considering all visual stimuli in the dataset ds105 except scrambled photos as object category and combining them with the dataset ds107 . 
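the imbalance ensemble of algorithm 1 can be sketched roughly as follows . this is a simplified , hypothetical rendering rather than the authors ' code : labels are assumed to be 0/1 , scikit - learn 's decision tree stands in for the weighted base classifier , and the penalty weight is derived here from the correlation between the two class means , which is only one possible reading of the correlation - based penalty described in the text .

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def imbalance_ensemble(X, y, random_state=0):
    """Simplified sketch of the imbalance ensemble for a binary problem.

    y is assumed to contain labels {0, 1}; the minority class is detected
    automatically.  Each member classifier sees the whole minority class,
    one random slice of the majority class, and the majority-class samples
    the previous member got wrong.  Majority-class samples receive a weight
    derived from the correlation between the two class means (a stand-in
    for the paper's penalty values).
    """
    rng = np.random.default_rng(random_state)
    classes, counts = np.unique(y, return_counts=True)
    small_lbl = classes[np.argmin(counts)]
    large_lbl = classes[np.argmax(counts)]
    Xs, Xl = X[y == small_lbl], X[y == large_lbl]
    k = max(1, len(Xl) // len(Xs))                    # number of ensemble members
    corr = np.corrcoef(Xs.mean(axis=0), Xl.mean(axis=0))[0, 1]
    w_large = 1.0 - abs(corr)                         # penalty for the large class
    parts = np.array_split(rng.permutation(len(Xl)), k)
    members = []
    carry_X, carry_y = np.empty((0, X.shape[1])), np.empty(0)
    for part in parts:
        Xtr = np.vstack([Xs, Xl[part], carry_X])
        ytr = np.concatenate([np.full(len(Xs), small_lbl),
                              np.full(len(part), large_lbl), carry_y])
        w = np.where(ytr == large_lbl, w_large, 1.0)
        clf = DecisionTreeClassifier(random_state=random_state)
        clf.fit(Xtr, ytr, sample_weight=w)
        members.append(clf)
        wrong = clf.predict(Xl[part]) != large_lbl    # majority samples not truly trained
        carry_X = Xl[part][wrong]
        carry_y = np.full(int(wrong.sum()), large_lbl)
    return members

def ensemble_predict(members, X):
    """Majority vote over the ensemble members (labels assumed to be 0/1)."""
    votes = np.stack([m.predict(X) for m in members])
    return np.round(votes.mean(axis=0))
```

a one - versus - all error - correcting output codes scheme would then train one such ensemble per stimulus category and assign a test response to the category with the closest hamming distance , as described above . as a side note , the normalized mutual information objective compared in figure 2.d can be estimated from a joint histogram ; the short sketch below uses the common normalisation ( h(a)+h(b) ) / h(a , b ) , which is an assumption about the exact variant used .

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """Histogram estimate of NMI = (H(A) + H(B)) / H(A, B) between two
    registered images, passed as arrays of the same length.
    Higher values indicate better alignment."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```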
in this dataset ,the accuracy of the proposed method is improved by combining two datasets , whereas , the performances of other methods are significantly decreased . as mentioned before , it is the standard space registration problem in the 4d images .in addition , our framework employs the extracted features from the brain structural regions instead of using all or a subgroup of voxels , which can increase the performance of the predictive models by decreasing noise and sparsity .-0.15 in [ tbl : binaryaccuracy ] [ cols="<,^,^,^,^,^",options="header " , ] -0.3 inthis paper proposes anatomical pattern analysis ( apa ) framework for decoding visual stimuli in the human brain .this framework uses an anatomical feature extraction method , which provides a normalized representation for combining homogeneous datasets .further , a new binary imbalance adaboost algorithm is introduced .it can increase the performance of prediction by exploiting a supervised random sampling and the correlation between classes .in addition , this algorithm is utilized in an error - correcting output codes ( ecoc ) method for multi - class prediction of the brain responses .empirical studies on 4 visual categories clearly show the superiority of our proposed method in comparison with the voxel - based approaches . in future , we plan to apply the proposed method to different brain tasks such as low - level visual stimuli , emotion and etc . -0.1 inwe thank the anonymous reviewers for comments . this work was supported in part by the national natural science foundation of china ( 61422204 and 61473149 ) ,jiangsu natural science foundation for distinguished young scholar ( bk20130034 ) and nuaa fundamental research funds ( ne2013105 ) . -0.1 in norman , k. a. , polyn , s. m. , detre , g. j. , haxby , j. v. : beyond mind - reading : multi - voxel pattern analysis of fmri data .trends in cognitive sciences , vol . 10 ( 9 ) , pp .424430 , 2006 .haxby , j. v. , connolly , a. c. , guntupalli , j. s. : decoding neural representational spaces using multivariate pattern analysis. annual review of neuroscience , vol .37 , pp . 435456 , 2014 .osher , d. e. , saxe , r. , koldewyn , k. , gabrieli , j. d. e. , kanwisher , n. , saygin , z. m. : structural connectivity fingerprints predict cortical selectivity for multiple visual categories across cortex .cerebral cortex , vol . 26 ( 4 ) , pp . 16681683 , 2016 .friston , k. j. , ashburner , j. o. h. n. , heather , j. : statistical parametric mapping .neuroscience databases : a practical guide , vol . 1 ( 237 ) , pp .174 , 2003 .cox , d. , savoy , r. l. : functional magnetic resonance imaging ( fmri ) ` brain reading ' : detecting and classifying distributed patterns of fmri activity in human visual cortex .neuroimage , vol .19 ( 2 ) , pp . 261270 , 2003 .mcmenamin , b. w. , deason , r. g. , steele , v. r. , koutstaal , w. , marsolek , c. j. : separability of abstract - category and specific - exemplar visual object subsystems : evidence from fmri pattern analysis .brain and cognition , vol .93 , pp . 5464 , 2015 .mohr , h. , wolfensteller , u. , frimmel , s. , ruge , h. : sparse regularization techniques provide novel insights into outcome integration processes .neuroimage , vol .163176 , 2015 .escalera , s. , pujol , o. , petia , r. : error - correcting output codes library .journal of machine learning research , vol .11 , pp . 661664 , 2010 .liu , x. y. , wu , j. , zhou , z. h. 
: exploratory undersampling for class - imbalance learning .ieee transactions on cybernetics , vol .39 ( 2 ) , pp . 539550 , 2009 .jenkinson , m. , bannister , p. , brady , m. , smith , s. : improved optimization for the robust and accurate linear registration and motion correction of brain images .neuroimage , vol .17 ( 2 ) , pp . 825841 , 2002 .duncan , k. j. , pattamadilok , c. , knierim , i. , devlin , j. t. , consistency and variability in functional localisers .neuroimage , vol .46 ( 4 ) , pp .10181026 , 2009 . g. e. rice , d. m. watson , t. hartley , and t. j. andrews : low - level image properties of visual objects predict patterns of neural response across category - selective regions of the ventral visual pathway .the journal of neuroscience , vol .26 , pp . 88378844 , 2014 . m. k. carroll , g. a. cecchi , i. rish , r. garg , and a. r. rao , prediction and interpretation of distributed neural activity with sparse models .neuroimage , vol .1 , pp . 112122 , 2009 . g. varoquaux , a. gramfort , and b. thirion , small- sample brain mapping : sparse recovery on spatially correlated designs with randomization and clustering .international conference on machine learning , 2012 .
a universal unanswered question in neuroscience and machine learning is whether computers can decode the patterns of the human brain . multi - voxel pattern analysis ( mvpa ) is a critical tool for addressing this question . however , previous mvpa methods face two challenges : decreasing sparsity and noise in the extracted features , and increasing the performance of prediction . to overcome these challenges , this paper proposes anatomical pattern analysis ( apa ) for decoding visual stimuli in the human brain . this framework develops a novel anatomical feature extraction method and a new imbalance adaboost algorithm for binary classification . further , it utilizes an error - correcting output codes ( ecoc ) method for multi - class prediction . apa can automatically detect active regions for each category of visual stimuli . moreover , it enables us to combine homogeneous datasets for applying advanced classification . experimental studies on 4 visual categories ( words , consonants , objects and scrambled photos ) demonstrate that the proposed approach achieves superior performance to state - of - the - art methods .
cellular automata with complex behavior exhibit dynamical patterns that can be interpreted as the movement of particles through a physical medium .these particles are interpretable as loci for information storage , and their movement through space is interpretable as information transfer .the collisions of these particles in the cellular automaton s lattice are sites of information processing .cellular automata with complex behavior have immense potential to describe physical systems and their study has had impact in the design of self - assembling structures and the modelling of biological processes like signaling , division , apoptosis , necrosis and differentiation .john conway s game of life is the most renowned complex binary cellular automaton , and the archetype used to guide the search methodology for other complex binary cellular automata that we describe in this work .previously , complex behavior in binary cellular automata has been characterized through measures such as entropy , lyapunov exponents , and kolmogorov - chaitin complexity .we propose the characterization of the behavior of -dimensional cellular automata through heuristic measures derived from the evaluation of their minimal boolean forms .this proposed characterization is derived from heuristic criteria validated in elementary cellular automata with simple boolean forms .table [ table : ca - boolean - behavior ] illustrates the rationale for this characterization showing elementary cellular automata whose boolean forms are minimally simple , and whose behavior can be unequivocally identified .cellular behaviors of growth , decrease , and chaoticity are characterized by the boolean operations _ or _ , _ and _ , and _ xor _ , respectively .the cellular behavior of stability can be characterized by the absence of a boolean operator or the use of the _ not _ operator .we define an evaluation criterion to produce metrics that characterize the behavior of cellular automata whose minimal boolean expressions are more complex ( i.e. have more terms and the combination of various operators ) than those appearing in table [ table : ca - boolean - behavior ] .the produced metrics are used to create static and dynamic measures of behavior .the static measure of behavior is calculated from the truth table of the minimal boolean expression of the cellular automaton , and the dynamic measure of behavior is derived from the averaged appearance of the metrics in _n _ executions of the cellular automaton from _n _ random initial conditions .we use the euclidean distance of these measures in a given cellular automaton to the measures of the game of life to assess its capacity for complex behavior , and use this distance as a cost function to guide the genetic search of -dimensional cellular automata with complex behavior .a cellular automaton is formally represented by a quadruple , where * is the finite or infinite cell lattice , * is a finite set of states or values for the cells , * is the finite cell neighborhood , * is the local transition function , defined by the state transition rule .each cell in the lattice is defined by its discrete position ( an integer number for each dimension ) and by its discrete state value . 
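the boolean - form heuristics and the static and dynamic measures described above can be illustrated with a short sketch for elementary ( radius - 1 , binary ) cellular automata . the per - neighborhood counting rule used here - agreement of the rule 's output with the or , and , xor and identity archetypes - is an illustrative reading of the proposed metrics rather than the exact evaluation criterion of the paper , and the wolfram rule numbering is assumed .

```python
import numpy as np

def rule_truth_table(rule_number):
    """Truth table of an elementary (radius-1, binary) CA rule in Wolfram
    numbering: maps each neighborhood (l, c, r) to the next state of c."""
    bits = [(rule_number >> i) & 1 for i in range(8)]
    return {(l, c, r): bits[(l << 2) | (c << 1) | r]
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def static_behavior_measure(rule_number):
    """Count, over all 8 neighborhoods, how often the rule's output agrees
    with the boolean archetypes used as heuristic criteria:
    growth ~ OR, decrease ~ AND, chaoticity ~ XOR, stability ~ identity.
    Counts are returned as frequencies."""
    table = rule_truth_table(rule_number)
    counts = {"growth": 0, "decrease": 0, "chaoticity": 0, "stability": 0}
    for (l, c, r), out in table.items():
        counts["growth"] += out == (l | c | r)
        counts["decrease"] += out == (l & c & r)
        counts["chaoticity"] += out == (l ^ c ^ r)
        counts["stability"] += out == c
    return {k: v / 8.0 for k, v in counts.items()}

def dynamic_behavior_measure(rule_number, n_runs=20, width=64, steps=64, seed=0):
    """Average the same agreement frequencies over n_runs executions from
    random initial states (the dynamic measure, here for a 1-d elementary CA
    on a cyclic lattice)."""
    rng = np.random.default_rng(seed)
    table = rule_truth_table(rule_number)
    totals = {"growth": 0.0, "decrease": 0.0, "chaoticity": 0.0, "stability": 0.0}
    for _ in range(n_runs):
        state = rng.integers(0, 2, width)
        for _ in range(steps):
            nxt = np.array([table[(state[i - 1], state[i], state[(i + 1) % width])]
                            for i in range(width)])
            left, right = np.roll(state, 1), np.roll(state, -1)
            for key, ref in (("growth", left | state | right),
                             ("decrease", left & state & right),
                             ("chaoticity", left ^ state ^ right),
                             ("stability", state)):
                totals[key] += np.mean(nxt == ref)
            state = nxt
    return {k: v / (n_runs * steps) for k, v in totals.items()}
```

a genetic search could then minimise the euclidean distance between such measure vectors and the ones obtained for a reference automaton such as the game of life , in the spirit of the cost function described above .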
in a binary cellular automaton , .time is also discrete .the state of the cell is determined by the evaluation of the local transition function on the cell s neighborhood at time ; is the next time step after time .the neighborhood is defined as a finite group of cells surrounding and/or including the observed cell .the global state is the configuration of all the cells that comprise the automaton , .the lattice is the infinite cyclic group of integers .the position of each cell in the lattice is described by the index position .configurations are commonly written as sequences of characters , such as the finite global state is a finite configuration , where is a finite lattice , indexed with integers , the set of neighborhood indices of size is defined by the set of relative positions within the configuration , such that is the neighborhood of the observed cell that includes the set of indices , and is defined as this describes the neighborhood as a character string that includes the cells that are considered neighbors of the observed cell . a compact representation of the neighborhood value is a unique integer , defined as an , number [ 2 ] the local transition function yields the value of at from the neighborhood of the cell observed at present time is expressed by where specifies the states of the neighboring cells to the cell at time .the transition table defines the local transition function , listing an output value for each input configuration .table [ table : tran - function - truth - table ] is a sample transition table for an elementary cellular automaton with a neighborhood of radius 1 , wherein adjacent neighboring cells of are and , forming a tuple , ..local transition function of as a truth table .[ cols="^,^",options="header " , ] c + averaged spacetime evolution + + + identified glider +we wish to thank jan baetens , hector zenil , alyssa adams , and nima dehghani for their helpful comments .we appreciate the support of the physics and mathematics in biomedicine consortium .we also wish to thank todd rowland for his encouragement and continued interest in the project h. abelson , d. allen , d. coore , c. hanson , e. rauch , g. j. sussman , g. homsy , j. thomas f. knight and r. w. radhika nagpal , `` amorphous computing , '' _ communications of the acm _ , * 43*(5 ) , 2000 , pp .74 - 82 .m. hirabayashi , s. kinoshita , s. tanaka , h. honda , h. kojima and k. oiwa , `` cellular automata analysis on self - assembly properties in dna tile computing , '' _ lecture notes in computer science _, * 7495 * , 2012 , pp .544 - 553 .m. hwang , m. garbey , s. a. berceli and r. tran - son - tay , `` rule - based simulation of multi - cellular biological systems a review of modeling techniques , '' _ cellular and molecular bioengineering _, * 2*(3 ) , 2009 , pp .285 - 294 .
we propose the characterization of binary cellular automata using a set of behavioral metrics that are applied to the minimal boolean form of a cellular automaton s transition function . these behavioral metrics are formulated to satisfy heuristic criteria derived from elementary cellular automata . behaviors characterized through these metrics are growth , decrease , chaoticity , and stability . from these metrics , two measures of global behavior are calculated : 1 ) a static measure that considers all possible input patterns and counts the occurrence of the proposed metrics in the truth table of the minimal boolean form of the automaton ; 2 ) a dynamic measure , corresponding to the mean of the behavioral metrics in _ n _ executions of the automaton , starting from _ n _ random initial states . we use these measures to characterize a cellular automaton and guide a genetic search algorithm , which selects cellular automata similar to the game of life . using this method , we found an extensive set of complex binary cellular automata with interesting properties , including self - replication .
a growing literature has presented empirical findings of the persistent impact of trade activities on economic growth and poverty reduction ( portugal - perez,,, , ackah , ) . besides discussing on the relation between trade and development , they also report on the growth by destination hypothesis , accordingto which , the destination of exports can play an important role in determining the trade pattern of a country and its development path .simultaneously , there has been a growing interest in applying concepts and tools of network theory to the analysis of international trade ( serrano,,, , , picciolo, ) .trade networks are among the most cited examples of the use of network approaches .the international trade activity is an appealing example of a large - scale system whose underlying structure can be represented by a set of bilateral relations .this paper is a contribution to interweaving two lines of research that have progressed in separate ways : network analyses of international trade and the literature on african trade and development .the most intuitive way of defining a trade network is representing each world country by a vertex and the flow of imports / exports between them by a directed link .such descriptions of bilateral trade relations have been used in the gravity models ( ) where some structural and dynamical aspects of trade have been often accounted for . while some authors have used network approaches to investigate the international trade activity , studies that apply network models to focus on specific issues of african trade are less prominent .although african countries are usually considered in international trade network analyses , the space they occupy in these literature is often very narrow. this must be partly due to the existence of some relevant limitations that empirical data on african countries suffer from , mostly because part of african countries does not report trade data to the united nations .the usual solution in this case is to use partner country data , an approach referred to as * mirror statistics*. however , using mirror statistics is not a suitable source for bilateral trade in africa as an important part of intra - african trade concerns import and exports by non - reporting countries .a possible solution to overcome the limitations on bilateral trade data is to make use of information that , although concerning two specific trading countries , might be provided indirectly by a third and secondary source .that is what happens when we define a bipartite network and its one - mode projection . in so doing, each bilateral relation between two african countries in the network is defined from the relations each of these countries hold with another entity .it can be achieved in such a way that when they are similar enough in their relation with that other entity , a link is defined between them .our approach is applied to a subset of 49 african countries and based on the definition of two independent bipartite networks where trade similarities between each pair of african countries are used to define the existence of a link . 
in the first bipartite graph ,the similarities concern a mutual leading destination of exports by each pair of countries and in the second bipartite graph , countries are linked through the existence of a mutual leading export commodity between them .therefore , bilateral trade discrepancies are avoided and we are able to look simultaneously at network structures that emerge from two fundamental characteristics ( exporting destinations and exporting commodities ) of the international trade . as both networks were defined from empirical data reported for 2014 , we call these networks * destination share networks * * * ( dsn ) and * * * * * commodity share networks * ( csn , respectively .its worth noticing that the choice of a given network representation is only one out of several other ways to look at a given system .there may be many ways in which the elementary units and the links between them are conceived and the choices may depend strongly on the available empirical data and on the questions that a network analysis aims to address ( ) . the main question addressed in this paper is whether some relevant characteristics of african trade would emerge from the bipartite networks above described .we hypothesized that specific characteristics could come out and shape the structures of both the dsn and the csn .we envision that these networks will allow to uncover some ordering emerging from african exports in the broader context of international trade .if it happens , the emerging patterns may help to understand important characteristics of african exports and its relation to other economic , geographic and organizational concerns . to this end, the paper is organized as follows : next section presents the empirical data we work with , section three describes the methodology and some preliminary results from its application . 
in section fourwe present further results and discuss on their interpretation in the international trade setting .section five concludes and outlines future work .trade map - trade statistics for international business development ( itm ) - provides a dataset of import and export data in the form of tables , graphs and maps for a set of reporting and non - reporting countries all over the world .there are also indicators on export performance , international demand , alternative markets and competitive markets .trade map covers 220 countries and territories and 5300 products of the harmonized system ( hs code ) .since the trade map statistics capture nationally reported data of such a large amount of countries , this dataset is an appropriate source to the empirical study of temporal patterns emerging from international trade .nevertheless , some major limitations should be indicated , as for countries that do not report trade data to the united nations , trade map uses partner country data , an issue that motivated our choice for defining bipartite networks .our approach is applied to a subset of 49 african countries ( see table 1 ) and from this data source , trade similarities between each pair of countries are used to define networks of links between countries .table 1 shows the 49 african countries we have been working with .it also shows the regional organization of each country , accordingly to the following classification : 1 - southern african development community ( sadc ) ; 2 - unio do magreb rabe ( uma ) ; 3 - comunidade econmica dos estados da africa central ( ceeac ) ; 4 - common market for eastern and southern africa ( comesa ) and 5 - comunidade econmica dos estados da frica ocidental ( cedeao ) . for each african country in table 1, we consider the set of countries to which at least one of the african countries had exported in the year of 2014 .the specification of the destinations of exports of each country followed the international trade statistics database ( ) from where just * the first and the second main destinations of exports * * of each country * were taken . similarly and also for each african country in table 1 , we took the set of commodities that at least one of the african countries had exported in 2014 .the specification of the destinations of exports of each country followed the same database from where just * the first and the second main export commodities of each country * were taken . for each country ( column label `` * * country * * '' ) in table 1 , besides the regional organization ( column label `` * * o * * '' ) and the first and second destinations ( column labels `` * * destinations * * '' ) and commodities ( column labels `` * * products * * '' ) , we also considered the export value in 2014 ( as reported in ) so that the size of the representation of each country in the networks herein presented is proportional to its corresponding export value in 2014 . 
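the construction of the one - mode projections can be sketched with networkx as follows . the toy dictionary of leading destinations is purely illustrative ( it is not the 2014 data of table 1 ) , and the rule used here - linking two countries whenever they share at least one of their two leading destinations - is one reasonable reading of the similarity criterion described above ; feeding leading commodities instead of destinations yields the csn in the same way .

```python
import networkx as nx
from itertools import combinations

def share_network(top_items):
    """One-mode projection of the country / leading-item bipartite graph.

    top_items maps each country code to its two leading export destinations
    (for the DSN) or its two leading export commodities (for the CSN), e.g.
    {"ago": ["china", "europe"], "dza": ["europe", "other"]}.  Two countries
    are linked in the projection if they share at least one leading item.
    """
    B = nx.Graph()
    countries = list(top_items)
    B.add_nodes_from(countries, bipartite=0)
    for country, items in top_items.items():
        for item in items:
            B.add_edge(country, "item:" + item)   # prefix avoids name clashes
    G = nx.Graph()
    G.add_nodes_from(countries)
    for u, v in combinations(countries, 2):
        shared = set(B[u]) & set(B[v])            # mutual leading items
        if shared:
            G.add_edge(u, v, weight=len(shared))
    return B, G

# toy example (illustrative values only, not the 2014 data of table 1)
top_destinations = {"ago": ["china", "europe"], "zaf": ["china", "europe"],
                    "dza": ["europe", "other"], "ken": ["uganda", "tanzania"]}
bipartite_graph, dsn = share_network(top_destinations)
print(sorted(dsn.edges(data=True)))
```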
[cols="<,<,<,<,<,<,<,<,<,<,<,<,<,<",options="header " , ] table 6 : comparing topological coefficients obtained for dsn and csn .the dsn also has a larger clustering coefficient than the csn , showing that when a country shares a mutual export destination with other two countries , these two other countries also tend to share a mutual export destination between them .the densities of dsn and csn confirm that topological distances in the dsn are shorter than in the csn and that on average going from one country in the dsn to any other country in the same graph takes less intermediate nodes than in the csn .although the networks dsn and csn inform about the degree of the nodes , their densely - connected nature does not help to discover any dominant topological pattern besides the distribution of the node s degree .moving away from a dense to a sparse representation of a network , one shall ensure that the degree of sparseness is determined endogenously , instead of by an a priory specification .it has been often accomplished ( araujo2012, ) through the construction of a minimal spanning tree ( mst ) , in so doing one is able to develop the corresponding representation of the network where sparseness replaces denseness in a suitable way . in the construction of a mst by the _nearest neighbor _ method , one defines the 49 countries ( in table 1 ) as the nodes ( ) of a weighted network where the distance between each pair of countries and corresponds to the inverse of weight of the link ( ) between and . from the distance matrix , a hierarchical clustering is then performed using the _nearest neighbor _ method .initially clusters corresponding to the countries are considered .then , at each step , two clusters and are clumped into a single cluster if with the distance between clusters being defined by with and this process is continued until there is a single cluster .this clustering process is also known as the _ single link method _ , being the method by which one obtains the minimal spanning tree ( mst ) of a graph . in a connected graph ,the mst is a tree of edges that minimizes the sum of the edge distances . in a network with nodes, the hierarchical clustering process takes steps to be completed , and uses , at each step , a particular distance to clump two clusters into a single one .in this section we discuss the results obtained from the mst of each one - mode projected graphs dsn and csn . as earlier mentioned, the mst of a graph may allow for discovering relevant topological patterns that are not easily observed in the dense original networks . as in the last section, we begin with the analysis of the dsn and then proceed to the csn .we look for eventual topological structures coming out from empirical data of african exports , in order to see whether some relevant characteristics of african trade have any bearing on the network structures that emerge from the application of our approach . in the last section , we observed some slight influence of the regional position of each country in its connectivity . 
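the tree construction just described can be sketched as follows , assuming the projected graph from the previous sketch : distances are taken as the inverse of the shared - link weights and the minimum spanning tree is computed with networkx , which for a connected graph coincides with the tree obtained by the single - link procedure described above .

```python
import networkx as nx

def minimal_spanning_tree(projected_graph):
    """Build the MST of a one-mode projected network (DSN or CSN).

    Edge distance is defined as the inverse of the shared-link weight,
    so strongly connected country pairs end up close on the tree.
    If the projection is disconnected, networkx returns a spanning forest
    instead of a single tree.
    """
    D = nx.Graph()
    D.add_nodes_from(projected_graph.nodes)
    for u, v, data in projected_graph.edges(data=True):
        D.add_edge(u, v, distance=1.0 / data.get("weight", 1.0))
    return nx.minimum_spanning_tree(D, weight="distance")

# usage with the dsn from the previous sketch:
# mst = minimal_spanning_tree(dsn)
# print(sorted(mst.edges(data=True)))
```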
with the construction of the minimum spanning treeswe envision that some stronger structural patterns would come to be observed on the trees .figure 7 shows the mst obtained from the dsn and colored according to each country main destination of exports in 2014 .the first evidence coming out from the mst in figure 7 is the central position of ago clustering together the entire set of `` china '' exporters ( red ) in 2014 .another important pattern that emerges in the mst is the branch of uma countries ( yellow ) in the right side of the tree , being `` europe '' their most frequent destination of exports .similarly , part of the countries that export mostly to `` other '' seems to cluster on the left branch ( purple ) .interestingly , the countries that exports to `` african countries '' ( blue ) occupy the less central positions on the tree . this result illustrates the suitability of the mst to separate groups of african countries according to their main export destinations and the show how opposite are the situations of those that export to `` china '' from the countries that have africa itself as their main export destinations . regarding centrality ,ago occupies the most central position of the network since this country exports to the top most african export destinations ( `` china '' and `` europe '' ) being therefore , and by this means , easily connected to a large amount of other countries . indeed , ago is the center of the most central cluster of `` china '' exporters . on the other hand, many leaf positions are occupied by countries that exports to other african countries as they have the smallest centrality in the whole network , they are ken , nam and swa .their weak centrality is due to the fact that their leading export destinations are spread over several countries ( zmb , tza , zaf , bwa and ind ) .figure 8 shows the mst obtained from the csn and colored according to the main export commodity of each country in 2014 .the first observation on the mst presented in figure 8 is that , centrality is concentrated in a fewer number of countries ( when compared to the mst of the dsn ) .the top most central and connected positions are shared by countries belonging to two regional organizations : sadc and cedeao , being mainly represented by zaf and ago and clustering countries whose main export commodities are `` diamonds '' and `` petroleum '' , respectively .unsurprisingly , centrality and connectivity advantages seem to be concentrated in these two leading commodity partitions ( `` diamonds '' and `` petroleum '' ) and organization groups ( sadc and cedeao ) .indeed , half of cedeao countries occupy the upper branch rooted in zaf and having `` diamonds''(yellow ) as their main export commodity .another regional cluster is rooted in ago and tie together several uma countries whose main export commodity is `` petroleum''(blue ) . 
on the other hand ,half of uma countries are far from each other on the tree , they occupy the leaf positions , being weakly connected to the other african countries to which , the few connections they establish rely on having `` manufactured '' as their main export commodity .likewise , there is a branch clustering exporters of `` raw materials''(green ) being also placed at the leaf positions on the tree .such a lack of centrality of `` raw materials '' exporters in the csn seems to be due to the fact that their leading export products are spread over many different commodities ( the `` raw materials '' partition comprises 12 different commodities ) .in the last decade , a debate has taken place in the network literature about the application of network approaches to model international trade . in this context , andeven though recent research suggests that african countries are among those to which exports can be a vehicle for poverty reduction , these countries have been insufficiently analyzed .we have proposed the definition of trade networks where each bilateral relation between two african countries is defined from the relations each of these countries hold with another entity .both networks were defined from empirical data reported for 2014.they are independent bipartite networks : a destination share network ( dsn ) and a commodity share network ( csn ) . in the former , two african countries are linked if they share a mutual leading destination of exports , and in the latter , countries are linked through the existence of a mutual leading export commodity between them .1 . sharing a mutual export destination happens more often : the very first remark coming out from the observation of both the dsn and the csn is that , in 2014 and for the 49 african countries , sharing a mutual exporting product happens less often than sharing a mutual destination of exports .2 . great exporting countries tend to be more linked : there is a positive correlation between strong connected countries in both the dsn and the csn and those with high amounts of export values in 2014 .it is in line with recent research placed in two different branches of the literature on international trade : the world trade web ( wtw ) empirical exploration ( ,,, , benedictis,, ) and the one that specifically focus on african trade ( , , morrissey,,, ) .references ( , ) reports on the role of export performance to economic growth .they also discuss on the relation between trade and development , and on the growth by destination hypothesis , according to which , the destination of exports can play an important role in determining the trade pattern of a country and its development path .destination matters : the idea that destination matters is in line with our finding that in the dsn , the highest connected nodes are those whose main export destination is china .according to baliamoune - lutz ( baliamoune ) export concentration enhances the growth effects of exporting to china , implying that countries which export one major commodity to china benefit more ( in terms of growth ) than do countries that have more diversified exports . 
* the china effect : one of the patterns that came out from our dsn shows that half of ceeac and sadc countries belongs to the bulk of `` china '' destination cluster , having high betweenness centrality .additionally , the `` china '' destination group of countries displays the highest clustering coefficient ( 0.96 ) , meaning that , besides having china as their main exports destination , the second destination of exports of the countries in this group is highly concentrated on a few countries . *the role of intra - african trade : another important pattern coming out from the mst of the dsn shows how opposite are the situations of the countries that export to `` china '' from the countries that have africa itself as their main export destinations . in the mst of the destination share network ,many leaf positions are occupied by intra - african exporters as they have the smallest centrality in the whole network .their weak centrality is due to the fact that their leading export destinations are spread over several countries .this result is in line with the results reported by reference ( ) where the growing importance of intra - african trade is discussed and proven to be a crucial channel for the expansion of african exports .moreover , kamuganga found significant correlation between the participation in intra - african trade and the diversification of exports . * the angola cluster : our results highlighted the remarkable centrality of ago as the center of the most central cluster of `` china '' exporters . indeed , ago is the country that holds the most central position whenboth dsn and csn are considered .this country occupies in both cases the center of the largest central clusters : `` china '' exporters in dsn and exporters of `` petroleum '' in csn .* uma countries anti - diversification : in the opposite situation , we found that uma countries display very low centrality , showing that besides having `` europe '' as their first export destination , the second destination of exports of uma countries is spread over several countries .this result is in line with reference ( gamberoni ) report on european unilateral trade preferences and anti - diversification effects .we showed that uma countries occupy a separate branch in the mst of the dsn , being `` europe '' their most frequent destination of exports .4 . in the csn ,the highest connected nodes are those that cluster as `` petroleum '' exporters , being followed by those that export `` diamonds '' . unsurprisingly, `` raw materials '' exporters display very low connectivity as their second main exporting product is spread over several different commodities .organizations matter : regional and organizational concerns seem to have some impact in the csn . 
* sadc and petroleum : the group of sadc countries , although comprising a large number of elements , is the one with the poorest connectivity and clustering in the csn .it is certainly due to the fact that without ago and moz , this large group of countries does not comprise `` petroleum '' exporters .* uma countries anti - diversification : again in the csn , its mst shows that uma countries are placed on a separate branch .although they are countries with high amounts of export values in 2014 , uma countries display low connectivity and low centrality .the leaf positions in the mst of either dsn or csn - while occupied by countries with very low centrality and connectivity - were shown to characterize countries that export mainly to `` europe '' and whose main exporting product is `` raw materials '' .future work is planned to be twofold .we plan to further improve the definition of networks of african countries , enlarging the set of similarities that define the links between countries in order to include aspects like mother language , currencies , demography and participation in trade agreements .on the other hand , we also plan to apply our approach to different time periods .as soon as we can relate the structural similarities baliamoune - lutz , m. ( 2011 ) growth by destination ( where you export matters ) : trade with china and growth in african countries in african development review , special issue : special issue on the 2010 african economic conference on setting the agenda for africa s economic recovery and long - term growth v.23 , 2 .gamberoni , e. ( 2007 ) do unilateral trade preferences help export diversification ?an investigation of the impact of the european unilateral trade preferences on intensive and extensive margin of trade , iheid working paper 17 .picciolo , f. ; squartini , t. ; ruzzenenti , f. ; basosi , r. and garlaschelli , d. ( 2012 ) the role of distances in the world trade web , _ in _ proceedings of the eighth international conference on signal - image technology & internet - based systems ( sitis 2012 ) .
this paper is a contribution to interweaving two lines of research that have progressed in separate ways : network analyses of international trade and the literature on african trade and development . gathering empirical data on african countries has important limitations and so does the space occupied by african countries in the analyses of trade networks . here , these limitations are dealt with by the definition of two independent bipartite networks : a destination share network and a commodity share network . these networks - together with their corresponding minimal spanning trees - allow us to uncover some ordering emerging from african exports in the broader context of international trade . the emerging patterns help to understand important characteristics of african exports and their binding relations to other economic , geographic and organizational concerns , as the recent literature on african trade , development and growth has shown . financial support from national funds by fct ( fundação para a ciência e a tecnologia ) . this article is part of the strategic project : uid / eco/00436/2013 keywords : trade networks , african exports , spanning trees , bipartite graphs
as the amount of available sequence data from genomes or proteoms rapidly grows , one is confronted with the problem that , for a given set of taxa , tree topologies relating homologous sequences of these taxa can ( and in practice often will ) differ from the each other , depending on which locus in the genome or which protein is used for constructing the tree .possible reasons for discordance among gene trees and the species phylogeny include horizontal gene transfer , gene duplication / loss , or incomplete lineage sorting ( deep coalescences ) .thus one is confronted with the task to determine the phylogeny of the taxa ( the ` species tree ' ) from a set of possibly discordant gene trees ( maddison , and maddison and knowles ) .several methods have been proposed and are used to infer a species tree from a set of possibly discordant gene trees : \(1 ) declaring the most frequently observed gene tree to be the true species tree was shown to be statistically inconsistent under the multispecies coalescent model by degnan and rosenberg , in cases where branch lengths of the underlying species tree are sufficiently small .\(2 ) a popular approach for species tree reconstruction is by concatenating multiple alignments from several loci to one large multiple alignment and construct a single ` gene tree ' from this multiple alignment by standard methods .this approach was pursued e.g. by teichmann and mitchison and cicarelli et al. . however , also this method was shown to be inconsistent under the multispecies coalescent by degnan and kubatko , if branch lengths on the species tree are sufficiently short .\(3 ) similarly , the concept of minimizing the number of ` deep coalescences ' ( maddison ) was recently shown to be statistically inconsistent under the multispecies coalescent model by than and rosenberg .\(4 ) on the other hand , it is well known from coalescent theory that the probability distribution of gene trees , or even the distributions rooted gene triplet trees , on a species tree uniquely determine the species tree if incomplete lineage sorting is assumed to be the only source of gene tree discordance .similarly ( see allman et al . ) , _ the distributions of unrooted quartet gene trees identify the unrooted species tree topology ._ however , as soon as triplet / quartet gene trees are inferred from experimental data , their distributions will differ from the theoretical ones , and this may lead to conflicts among the inferred triplets / quartets on the hypothetical species tree , a problem which is not straight forward to resolve ( ` quartet - puzzeling ' could be an approach to resolve this , see strimmer and von haeseler ) . also direct maximum likelihood calculations using gene tree distributionsare very problematic due to the enormous number of possible gene trees on a given set of taxa .\(5 ) recently liu and yu have proposed to use the ` average internode distance ' on gene trees as a measure of distance between taxa on species trees and apply classical neighbor joining to these distance data .they prove that this reconstructs the underlying species tree in a statistically consistent way . in the present paperwe propose a method to overcome many of the difficulties discussed above , and in particular to make the above mentioned result by allman et al. 
accessible in practice .that is , we describe a polynomial time algorithm ( in the number of taxa ) , which uses empirical distributions of unrooted quartet gene trees as an input , to estimate the unrooted topology of the underlying species tree in a statistically consistent way . due to the conceptual similarity to classical neighbor joining we call this algorithm ` quartet neighbor joining ' , or briefly ` quartet - nj ' .the fact that quartet - nj uses quartet distributions as an input makes it flexible in practice : quartet gene tree distributions can be obtained directly from sequence data ( as is described in the application to a prokaryote data set in section [ sect : appsim ] of this paper ) , or e.g. from gene tree distributions , which is the type of input required by liu and yu s ` neighbor joining for species trees ' . also , quartet - nj is naturally capable of dealing with multiple lineages per locus and taxon .the paper is organized as follows : after a brief review of the multispecies coalescent model and distributions of quartet gene trees in section [ sect : mscm ] , we investigate in section [ sect : cherrypicking ] how to identify cherries on a species tree . to this endwe show how to assign ` weights ' to unrooted quartet trees on the set of taxa , using the distributions of quartet gene trees , and how to define a ` depth ' for each pair of taxa . in analogy to classical cherry picking weprove that under the multispecies coalescent model any pair of taxa with maximal depth is a cherry on the species tree .moreover , we give an interpretation of this theorem by the concept of ` minimal probability for incomplete lineage sorting ' , in analogy to the concept of minimal evolution underlying classical neighbor joining . in section [sect : algorithms ] we translate our cherry picking theorem into a neighbor joining - like procedure ( ` quartet - nj ' ) and prove that it reproduces the true unrooted species tree topology as the input quartet distributions tend to the theoretical quartet distributions .in other words , quartet - nj is statistically consistent .finally , in section [ sect : appsim ] we apply the quartet - nj algorithm to data from coalescence simulations , as well as to a set of multiple alignments from nine prokaryotes , in order to demonstrate the suitability of quartet - nj . in both situationswe consider only one lineage per locus and taxon .we are going to present briefly the multispecies coalescent , which models gene ( or protein ) tree evolution within a fixed species tree .the material in this section is neither new nor very deep .rather this section is to be considered as a reminder for the reader on the relevant definitions , as well a suitable place to fix language and notation for the rest of the paper . for a more detailed introduction to the multispecies coalescent modelwe refer e.g. to allman et al. .let us consider a set of taxa ( e.g. 
species ) and a rooted phylogenetic ( that is : metric with strictly positive edge lengths and each internal node is trivalent ) tree on the taxon set .the tree is assumed to depict the ` true ' evolutionary relationship among the taxa in , and the lengths of its internal edges are measured in coalescence units ( see below ) .this tree is commonly called the _ species tree _ on the taxon set .it is a fundamental problem in phylogenetics to determine the topology of this tree .molecular methodes , however , usually produce evolutionary trees for single loci within the genome of the relevant species , and these _ gene trees _ will usually differ topologically from the species tree ( degnan and rosenberg ) .this has several biological reasons , like horizontal gene transfer , gene duplication / loss or incomplete lineage sorting .the latter describes the phenomenon that to gene lineages diverge long before the population actually splits into two separate species , and in particular gene lineages may separate in a different order than the species do .the multispecies coalescent model describes the probability for each rooted tree topology to occur as the topology of a gene tree , under the assumption that incomplete lineage sorting is the only reason for gene tree discordance .notation : we adopt the common practice to denote taxa ( species ) by lower case letters , while we denote genes ( or more general : loci ) in their genome by capital letters . e.g. if we consider three taxa , the letters will denote a particular locus sampled from the respeceive taxa .( by abuse of notation , we will identify the leaf sets of both species _ and _ gene trees with ) . considering only a single internal branch of the species tree , and two gene lineages within this branch going backwards in time , we find that the probability that the two lineages coalesce to a single lineage within time is given by where is in _ coalescence units _( that is number of generations represented by the internal branch divided by the total number of allele of the locus of interest , present in the population ) . in particular ,if the branch on the species tree has length , the probability that two lineages coalesce within this branch is . from this ,nei derives the following probabilities under the multispecies coalescent model for a three taxon tree .denote the taxa by , the corresponding loci by , and assume that the species tree has the topology with internal edge length . then the probability that a gene tree sampled from these taxa has the topology is equal to while the other two topologies are observed with probability . in degnan and saltershow how to calculate the probabilities of a gene tree in a fixed species tree on an arbitrary taxon set .it turns out that these probabilities are polynomials in the unknonws , where denotes the length of the edge on the species tree .in particular , the multispecies coalescent model is an algebraic statistical model . in the following we consider four taxon species trees on the taxon set , and we use newick notation to specify ( rooted ) species trees .for instance denotes the caterpillar tree with edge lengths and in coalescent units .moreover , unrooted quartets are denoted in the format , meaning the quartet gene tree which has cherries and .up to permutation of leaf labels there is only one additional tree topology on four taxa , namely the balanced tree . 
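To make the coalescent formulas quoted above concrete, here is a minimal Python sketch (assuming branch lengths are given in coalescent units): two lineages entering a branch of length t coalesce within it with probability 1 - e^{-t}, and for a rooted three-taxon species tree ((a,b),c) with internal branch length T the concordant gene-tree topology appears with probability 1 - (2/3)e^{-T}, while each of the two discordant topologies appears with probability (1/3)e^{-T}.

```python
import math

def coalescence_prob(t):
    """Probability that two gene lineages entering a species-tree branch
    of length t (in coalescent units) coalesce within that branch."""
    return 1.0 - math.exp(-t)

def triplet_gene_tree_probs(T):
    """Gene-tree topology probabilities for a rooted 3-taxon species tree
    ((a,b),c) whose internal branch has length T coalescent units.
    Returns (P(concordant ab|c), P(ac|b), P(bc|a))."""
    discordant = math.exp(-T) / 3.0
    return 1.0 - 2.0 * discordant, discordant, discordant
```

For example, `triplet_gene_tree_probs(1.0)` returns approximately (0.755, 0.123, 0.123), so even a branch of one coalescent unit leaves roughly a quarter of sampled gene trees discordant.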
for both of these two species tree topologiesit is not hard to calculate the probabilities of the tree possible unrooted quartet gene trees on , and one obtains for both species tree topologies and the probabilities of unrooted quartet gene trees on have the same form and are given by the formulas [ lemquartetprob ] in particular , the gene quartet distribution determines the _ unrooted _ species tree topology , but not the position of the root ( allman et al . ) .the sum of the internal edge lengths of the rooted species tree ( resp .the length of the internal edge of the unrooted species tree ) is given by the formula returning to the study of the multispecies coalescent on species trees on arbitrary taxon sets , we deduce from the above that the quartets displayed on the species tree on the taxon set are exactly those which appear with a probability bigger than for any sampled locus .hence , the distributions of quartets determine uniquely the quartet subtrees which are displayed by the true species tree , and hence determine the unrooted topology of ( see allman et al. ) . in the following two sectionswe describe a neighbor joining algorithm which makes this theoretical insight applicable in practice , meaning that it yields a method to estimate unrooted species trees from ( empirical ) distributions of quartet gene trees , which is statistically consistent under the multispecies coalescent model .statistical consistency of this algorithm will follow from the ` cherry picking theorem ' below .here we give a precise criterion , using theoretical distributions of quartet gene trees under the multispecies coalescent , to determine which pairs of taxa on the species tree are cherries .this criterion can as well be applied to estimated quartet distributions and thus enables us to recursively construct a species tree estimate from observed gene quartet frequencies in section [ sect : algorithms ] . as always , we consider a fixed species tree on the taxon set , and we denote by the probability that a gene tree on displays the gene quartet tree .recall that by lower case letters we denote the species from which the genes are sampled . regardless of whether contains the ( species- ) quartet , we can attach to it the following numbers : [ defn : weight ] the _ weight _ of the quartet is defined as if the taxa are not pairwise distinct , then we set . of course , if are pairwise distinct we may equivalently write if the quartet is displayed on the species tree , then the weight is precisely the length of the interior branch of the quartet .moreover , the other two weights , and , are less than zero . [ lem : weightdistance ] the proof is immediate using equation : if the quartet tree is displayed on the species tree , then otherwise , if is not displayed on , then one of the other two quartet trees on is displayed , say , and we have since . this littlelemma motivates the following definition and a cherry picking theorem which is formulated and proved in close analogy to the ` classical ' one ( see saito and nei for the original publication , or pachter and sturmfels for a presentation analogous to ours ) .[ defn : depth ] for any pair of taxa we define the _ depth _ of to be the number ( recall that if then . 
) [ thm : cherrypicking ] if a pair of taxa has maximal depth , then is a cherry on the species tree . as mentioned above , the proof of this theorem is parallel to the proof of the classical cherry picking result as presented in . we state it here for the sake of completeness of our exposition . as a first step , we introduce the auxiliary values for every four taxon subset , as well as for every pair of taxa . obviously , if is a cherry on , then for all , and hence also . in general , for any pair of taxa we have by lemma [ lem : weightdistance ] . we will now assume that the pair does _ not _ form a cherry on the species tree and prove that in this case there exists a cherry on such that the following inequalities hold : from this we deduce the claim of the theorem : if is not a cherry on , then it does not have maximal depth . we have to find a cherry on such that indeed holds . as we assume and not to form a cherry on , the unique path connecting and crosses at least two interior nodes ( symbolized as black dots in figure [ tree1 ] ) . we denote the number of internal nodes on this path by . to each a rooted binary tree is attached ( see figure [ tree1 ] ) . [ figure : the path connecting i and j through the internal nodes , with a rooted subtree attached to each internal node . ] [ tree2 ] the third sum might be negative - but not too negative : the situation is illustrated by figure [ tree2 ] ( which arises from the tree in figure [ tree1 ] by deleting the irrelevant subtrees ) , whose inspection shows that for each summand we have thus we see , whence is positive . now we treat the fourth sum : if is a node in one of the subtrees , then , so is non - negative . more precisely we have . on the other hand , the sum is greater than or equal to , whence also is positive . in total we have found that in any case , which proves our claim . as explained above , this suffices to prove the cherry picking theorem . in practical applications one will not know the precise probability of each quartet gene tree . hence one has to use e.g. relative frequencies as estimates . let us denote by the ( experimentally determined ) relative frequency of the gene quartet . in analogy to definitions [ defn : weight ] and [ defn : depth ] we make the following definitions : the _ empirical weight _ of the quartet of taxa is defined as moreover , the _ empirical depth _ of a pair of taxa is defined as under the multispecies coalescent model , the relative frequency of a quartet gene tree is a statistically consistent estimator for its probability . hence the _ empirical _ depth of a pair of taxa is a statistically consistent estimator for . in particular this means : _ inferring a pair of taxa , where is maximal , as a cherry on the species tree is statistically consistent . _ more precisely , we have [ cor : consistent ] let be the number of loci sampled from each taxon in and denote by the set of pairs of taxa at which attains its maximum . if the number of genes tends to infinity , then the probability that contains a pair which is not a cherry on the true species tree approaches zero .
as there are only finitely many taxa in , there is a strictly positive difference between the maximum value of on and its second biggest value .since moreover and are continuous , there exists an such that whenever , is maximal if and only if is maximal , and the claim follows from theorem [ thm : cherrypicking ] .but the probability that approaches 1 as grows , for any choice of . recall that classical neighbor joining is a greedy algorithm which in each ` cherry picking ' step declares a pair of taxa to be neighbors if this minimizes the sum of branch lengths in the refined tree resulting from this step ( saito and nei ) . in other words , classical cherry picking and neighbor joining is guided by the principle of _ minimal evolution_. it turns out that our cherry picking result in theorem [ thm : cherrypicking ] can be interpreted in a similar fashion .let us make the following ad - hoc definition .let be independent random variables with values in , and let be the probability of for each .we call the geometric mean {p_{1}\dotsb p_{n}}\ ] ] the _ average probability _ of the random experiments giving the result .let us now consider what the cherry picking theorem [ thm : cherrypicking ] does with a set of taxa , initially arranged as a star - like tree ( see figure [ fig : cherrypickingtree ] ) .consider two taxa fixed and let . on a species tree on which displays the cherry , this is the probability that a gene quartet tree sampled from the taxa differs from the species quartet tree .let {\prod_{k , l}q(ij , kl)}\ ] ] be the average probability of discordance of a sampled gene quartet tree with the corresponding quartet on the species tree , for a set of four taxa containing and .we will also call this number the ` average probability of incomplete lineage sorting ' for quartets containing the taxa and . with this terminology ,the cherry picking theorem [ thm : cherrypicking ] can be phrased as follows .a pair of taxa is a cherry on the true species tree if it minimizes the average probability for incomplete lineage sorting for quartets containing the taxa and .[ thm : avprobincomplete ] using theorem [ thm : cherrypicking ] we only have to show that is minimal if and only if is maximal . butthis follows from the fact that where is a constant independent of . 
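A minimal Python sketch of the empirical weight, the empirical depth, and the cherry-picking step follows. The exact formula for the weight is stripped from the text above; the expression used below, w(ij|kl) = -log((3/2)(1 - q(ij|kl))), is an assumption reconstructed from the two stated properties (under the quartet probabilities of the previous section it equals the internal branch length of a displayed quartet and is negative for the two non-displayed quartets). Quartet frequencies are assumed to be stored in a dictionary keyed by the unordered pair of cherries.

```python
import itertools
import math

def quartet_weight(freqs, i, j, k, l):
    """Empirical weight of the quartet ij|kl from observed relative
    frequencies; freqs maps frozenset({frozenset({i,j}), frozenset({k,l})})
    to the relative frequency of that unrooted quartet gene tree."""
    if len({i, j, k, l}) < 4:
        return 0.0
    key = frozenset([frozenset([i, j]), frozenset([k, l])])
    q = freqs.get(key, 0.0)
    if q >= 1.0:
        # observed frequency 1 gives an infinite weight; this is the
        # degenerate case addressed by the eps-perturbation introduced later
        return float("inf")
    return -math.log(1.5 * (1.0 - q))

def depth(freqs, taxa, i, j):
    """Empirical depth of the pair {i, j}: sum of weights over all k, l."""
    rest = [t for t in taxa if t not in (i, j)]
    return sum(quartet_weight(freqs, i, j, k, l)
               for k, l in itertools.combinations(rest, 2))

def pick_cherry(freqs, taxa):
    """Return the pair of taxa with maximal empirical depth."""
    return max(itertools.combinations(taxa, 2),
               key=lambda pair: depth(freqs, taxa, *pair))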
[ figure [ fig : cherrypickingtree ] : a star - like tree on the taxon set ( left ) and the tree obtained after the pair i , j has been joined as a cherry ( right ) . ] in this section we discuss how the above cherry picking result can be used to design an algorithm which estimates a species tree from ( observed ) gene quartet tree frequencies . the first version of our neighbor joining algorithm reconstructs the underlying species tree from the ( theoretical ) gene quartet tree distributions . [ alg : naive ] * input * : a set of taxa containing at least four elements , and for each quartet with the probability of the quartet under the multispecies coalescent model on the species tree . * output * : the species tree . * step 0 . * set to be the graph with vertex set and with empty edge set . * step 1 . * if go to step 4 . * step 2 . * for each unordered pair calculate the depth . let be a pair of taxa with maximal depth . add a node to and draw edges from to and from to . replace by . * step 3 . * for each quartet with set and for all quartets which do not contain set . replace by . go to step 1 . * step 4 . * if , then add a vertex to and draw edges from this new vertex to the three elements of . each calculation of requires logarithms and additions . repeating this for every pair of taxa gives a computational complexity of for step 2 . this is also the complexity of one iteration of the algorithm . since each step will identify exactly one cherry , we need iterations to reconstruct , which gives a total complexity of for algorithm [ alg : naive ] . it is rather straightforward to improve step 2 so that the algorithm requires only arithmetic operations ( see subsection [ subsect : complexity ] ) . using the result of theorem [ thm : avprobincomplete ] , we may summarize : _ with algorithm [ alg : naive ] we have described a greedy algorithm which infers in each step the cherry which requires minimal incomplete lineage sorting . _ further , the fact that cherry picking is statistically consistent implies that species tree estimation by algorithm [ alg : naive ] is also statistically consistent : let be the number of loci sampled from each taxon in and denote by the estimate for the species tree produced by algorithm [ alg : naive ] . if the number of genes goes to infinity , then the probability that equals the true species tree approaches 1 . this follows by a repeated application of corollary [ cor : consistent ] .
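The greedy loop of the algorithm can be sketched compactly in Python, reusing the `depth` and `pick_cherry` helpers above. The Step-3 update used here (the new internal node inherits the quartet frequencies of one member of the joined pair, and quartets involving the other member are discarded) is only a placeholder assumption standing in for the paper's exact updating rule.

```python
def quartet_nj_naive(freqs, taxa):
    """Greedy cherry-picking loop; returns the estimated unrooted tree
    as a list of (node, parent) edges. `freqs`, `depth` and `pick_cherry`
    are as in the previous sketch."""
    taxa = list(taxa)
    edges = []
    next_id = 0
    while len(taxa) > 3:                              # Step 1
        i, j = pick_cherry(freqs, taxa)               # Step 2: maximal depth
        v = ("internal", next_id)
        next_id += 1
        edges += [(i, v), (j, v)]

        def relabel(side):                            # rename taxon i to v
            return frozenset(v if t == i else t for t in side)

        # Step 3 (assumed rule): drop quartets containing j, relabel i as v.
        freqs = {frozenset(relabel(side) for side in key): q
                 for key, q in freqs.items()
                 if not any(j in side for side in key)}
        taxa = [t for t in taxa if t not in (i, j)] + [v]
    root = ("internal", next_id)                      # Step 4: join last three
    edges += [(t, root) for t in taxa]
    return edges
```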
in practical applications two problems may occur using algorithm [ alg : naive ] :first , there might be several non - disjoint pairs of taxa with maximal depth .this could be resolved by chosing one of these pairs randomly .a more serious problem occurs when many of the empirical gene quartet distributions are 0 .this will be discussed in the following subsection .algorithm [ alg : naive ] works well if one uses as input the theoretical probabilities for quartet gene trees .it also works well , if one uses relative quartet frequencies which are ` very close ' to the probabilities ( and in particular non - zero ) . in other cases ,the result might be problematic , as we are going to explain now .imagine that for some reason ( e.g. very long branch lenghts on the species tree ) the observed quartet gene trees reflect very precisely the topology of the species tree .this might mean in the extreme case that for every four - taxon subset one observes only this very quartet gene tree which is displayed also by the ( unknown ) species tree . in some sense this situation should be optimal , since the observed gene quartet trees fit together without conflict and thus yield an unambiguous estimate for the species tree .however , let us run algorithm [ alg : naive ] with the empirical depth function in place of and consider any pair of taxa .regardless of the choice of and , there exist such that is a quartet on the species tree , and hence by our assumption . calculating the empirical depth of yields thus _ every _ pair maximizes , andso in each iteration of algorithm [ alg : naive ] the choice of a cherry is completely arbitrary .so this procedure fails in a situation , which has to be considered the ` easiest ' possible in some obvious sense .a possible solution to this problem is to perturb the arguments in the logarithms in equation by a small number . in other words , we fix and calculate , for each pair of taxa , the ` perturbed depths ' obviously , for we have and , for all pairs . from thiswe obtain the following easy assume that the theoretical probabilities are known for each four taxon subset and are used as input for algorithm [ alg : naive ] .then there exists a constant such that algorithm [ alg : naive ] computes the same results with in place of , for each .in other words , if is the species tree inferred by algorithm [ alg : naive ] with in place of , then the limit exists and is equal to . [lem : epsilon ] this follows from the fact that is a binary tree , that there are only finitely many values of and that depends continuously on .this suggests that for practical applications it will be reasonable to fix ` small enough ' , and run algorithm [ alg : naive ] with the perturbed empirical depth function in place of in order to avoid ` infinite depths ' as in the discussion at the beginning of this subsection .hence we obtain the following modification of algorithm [ alg : naive ] .[ alg : improved ] *input * : a finite set of taxa , a ` small ' constant , and for each four taxon subset the relative quartet gene tree frequencies , , . * output * : an unrooted tree on which is our estimate for the topology of the species tree on .* algorithm * : run algorithm [ alg : naive ] with the depth function substituted by the empirical depth .the result of this calculation is . 
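A short sketch of the ε-perturbed empirical depth used by the modified algorithm is given below. The paper perturbs the arguments of the logarithms by a small ε > 0; the exact placement of ε (inside each logarithm, as here) is an assumption, but any placement with the stated ε → 0 behaviour serves for the sketch.

```python
import itertools
import math

def perturbed_weight(freqs, i, j, k, l, eps):
    """Empirical quartet weight with the logarithm's argument shifted by eps,
    so observed frequencies equal to 1 give a large but finite value."""
    if len({i, j, k, l}) < 4:
        return 0.0
    key = frozenset([frozenset([i, j]), frozenset([k, l])])
    return -math.log(1.5 * (1.0 - freqs.get(key, 0.0)) + eps)

def perturbed_depth(freqs, taxa, i, j, eps=1e-8):
    """Epsilon-perturbed empirical depth of the pair {i, j}."""
    rest = [t for t in taxa if t not in (i, j)]
    return sum(perturbed_weight(freqs, i, j, k, l, eps)
               for k, l in itertools.combinations(rest, 2))
```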
indeed ,if we apply this modified algorithm to the problematic situation at the beginning of this subsection , we will obtain ( for any choice of ) a fully resolved correct estimate for the species tree .of course , this algorithm has the same complexity as algorithm [ alg : naive ] .we want to claim that , for sufficiently small values of , quartet - nj produces a asymptotically correct estimate of the true species tree on . to this end , we first prove there exists a constant such that for all the trees are equal . in other words , the limit exists . [lem : epsilon_empirical ] since there are only finitely many cherries to pick , we may restrict our considerations to the picking of the first cherry .we have to distinguish two cases : first assume that there are taxa such that , i.e. the set is nonempty . if is small enough , then the maximum of the values of is attained at one of the pairs in .at which of those pairs the maximum is attained then only depends on the number of summands in which approach infinity as tends to zero .this number is clearly independent of , whence the set of potential cherries to pick is independent of . in the second case , there are no elements in the set .this means that each of the perturbed empirical depths approaches the _ finite _ value as tends to zero .this again means that , for small enough , the pair which maximizes does not depend on . combining this with lemma [ lem : epsilon ] we obtain the desired result . the limit exists and is a statistically consistent estimate for the true species tree . in particular ,quartet - nj ist statistically consistent .[ thm : main ] the existence of the limit in the theorem is established by lemma [ lem : epsilon_empirical ] .it remains to prove that it is a statistically consistent estimate for .recall that the definition of the function depends on the relative frequencies .if we consider the probabilities known and fixed and moreover fix the pair of taxa , then we may consider the function depending on the ` variables ' and .this is a continuous function which vanishes on the line .thus for every there exists an open ball centered at which is mapped to by . in particular , there exists a positive constant such that for every we have if and . if we chose small enough, we thus may conclude that for every and for all relative frequencies satisfying .moreover , by lemma [ lem : epsilon ] we may assume , by reducing if necessary , that for every .taking this together we obtain that , provided for every quartet , .since is a statistically consistent estimate for , this proves that is a statistically consistent estimate for .the question of how to find an appropriate to run the neighbor joining algorithm will of course depend on the special problem instance and is left open here . herewe briefly mention a complexity reduction for the quartet neighbor joining algorithm . since algorithm [ alg : naive ] and [ alg : improved ] are completely equivalent in this respect , we formulate this reduction only with the simpler notation of algorithm [ alg : naive ] .we consider step 2 in algorithm [ alg : naive ] . 
in our present formulation ,this step calculates the depth by adding up -many quartet weights for each pair of taxa .however , most of these quartet weights will not have changed during the last iteration of the algorithm .assume that in the previous iteration the cherry was identified and joined by a new vertex .then for any pair of taxa which are still present in we may calculate the new depth from the old one by the formula this reduces the complexity of step 2 from to , and hence the overall complexity of algorithms [ alg : naive ] and [ alg : improved ] are reduced by this modification to .in order to test quartet neighbor joining on real data , algorithm [ alg : improved ] was applied to a set of protein sequences from nine prokaryotes , among them the two archaea _ archaeoglobus fulgidus _ ( af ) and _ methanococcus jannaschii _ ( mj ) , as well as the seven bacteria _ aquifex aeolicus _ ( aq ) , _ borrelia burgdorferi _ ( bb ) , _ bacillus subtilis _ ( bs ) , _ escherichia coli _ ( ec ) , _ haemophilus influenzae _ ( hi ) , _ mycoplasma genitalium _ ( mg ) , and _ synechocystis sp . _ ( ss ) .the choice of organisms follows teichmann and mitchison , while the multiple sequence alignments where taken out of the dataset used by cicarelli et al . in .for each of the 28 protein families in the list 1 .ribosomal proteins , small subunits : s2 , s3 , s5 , s7 , s8 , s9 , s11 , s12 , s13 , s15 , s17 , l1 , l3 , l5 , l6 , l11 , l13 , l14 , l15 , l16 , l22 ; 2 .trna - synthetase : leucyl- , phenylalanyl- , seryl- , valyl ; 3 .other : gtpase , dna - directed rna polymerase alpha subunit , preprotein translocalse subunit secy , a distance matrix was calculated using belvu with ` scoredist'-distance correction ( sonnhammer and hollich ) . for each distance matrix ,a set of quartet trees was inferred in the classical way by finding , for each four taxon set , the unrooted quartet which maximizes the value where denotes the respective entry in the distance matrix .algorithm [ alg : improved ] was then run with the parameter , and on the quartet distribution obtained from analyzing all the 28 protein families above , and in a second try on the quartet distributions obtained only by the ribosomal proteins .the resulting tree topologies are depicted in figures [ fig : all ] and [ fig : ribosomal ] , respectively ( the root of these trees is of course not predicted by algorithm [ alg : improved ] .rather it was placed a posteriori on the branch which separates archaea from bacteria on the unrooted output of the algorithm . )the different choices of did not affect the result in these calculations .as a first test of performance of algorithm [ alg : improved ] two series of simulations were performed as follows . for a certain choice of a species tree on a set of taxa , a coalescent process was repeatedly simulated using the coalescence package of mesquite , .each simulation yielded a set of gene trees on , and from these the frequency of each unrooted quartet gene tree with leaves in was determined .these frequencies where then submitted to algorithm [ alg : improved ] and the resulting ( unrooted ) tree was compared with the ( unrooted version of the ) species tree .we report here the proportion of correct inferences of the unrooted species tree in the different situations . 
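The simulation pipeline just described tabulates, for every four-taxon subset, the relative frequency of each unrooted quartet topology among the sampled gene trees. A hedged sketch of that tabulation follows; it assumes each gene tree is supplied as its set of splits (each split the frozenset of taxa on one side of an internal edge), since an unrooted binary tree displays the quartet ab|cd exactly when some split separates {a,b} from {c,d}.

```python
import itertools
from collections import Counter

def quartet_key(a, b, c, d):
    """Canonical key for the unrooted quartet topology ab|cd."""
    return frozenset([frozenset([a, b]), frozenset([c, d])])

def induced_quartet(splits, a, b, c, d):
    """Quartet topology induced on {a, b, c, d} by a gene tree given as a
    set of splits; returns None if no split resolves these four taxa."""
    four = {a, b, c, d}
    for side in splits:
        inside = four & side
        if len(inside) == 2:            # this edge separates the four taxa 2-2
            outside = four - inside
            return quartet_key(*inside, *outside)
    return None

def quartet_frequencies(gene_trees, taxa):
    """Relative frequency of every observed unrooted quartet gene tree.
    `gene_trees` is an iterable of split sets, one per sampled locus."""
    counts = Counter()
    resolved = Counter()
    for splits in gene_trees:
        for four in itertools.combinations(taxa, 4):
            key = induced_quartet(splits, *four)
            if key is not None:
                counts[key] += 1
                resolved[frozenset(four)] += 1
    return {key: counts[key] / resolved[frozenset().union(*key)]
            for key in counts}
```

The resulting dictionary is exactly the kind of input the quartet-NJ sketches above consume.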
in the first series of simulations the underlying species tree was the 5-taxon caterpillar tree , with all internal branch lengths set equal ( of course , the length of the pending edges does not have an impact , as we consider only one lineage per taxon ) .algorithm [ alg : improved ] was run 1000 times using 5 , 10 , 20 and 50 sampled gene trees per trial , 500 times using 100 gene trees , 250 times using 200 gene trees , and 100 times using 500 simulated gene trees per trial .the proportion of trials which yielded the correct unrooted species tree topology is reported in table [ tab : caterpillar ] .note that simulations under the 5-taxon caterpillar tree are also performed by liu and yu in order to assess the performance of their ` neighbor joining algorithm for species trees ' . for our choices of branch lengths and sample sizes, the performance of algorithm [ alg : improved ] seems to be roughly equal to the performance of ` neighbor joining for species trees ' ( liu and yu , figure 2 ) . in a second series of simulations ,the underlying species tree was the tree inferred by algorithm [ alg : improved ] for the nine prokaryotes in section [ subsect : prokaryotes ] , see figure [ fig : all ] .again , for different internal branch lengths and different numbers of gene trees per trial , 1000 trials ( in the case of 5 , 10 , 20 and 50 gene trees per trial ) , and 500 resp .100 trials ( in the case of 100 resp .500 gene trees per trial ) were run , and the proportions of correctly inferred unrooted species tree topologies are reported in table [ tab : prokaryotes ] .two comments are in order : ( 1 ) in fact , for each choice of the parameter ( and fixed number of gene trees per trial ) two simulations were performed .for the first , the length of the branch leading to the cherry formed by mj and af was set to , while in a second simulation this branch length was set to .the differences in the proportions of successful trials are small in most cases ( between one and two percent ) , and , as expected , in most cases the proportion of successful trials was bigger in the first situation .\(2 ) the number of gene trees ( or rather , the number of unrooted quartet gene trees for each 4-taxon subset ) used for the reconstruction of the prokaryote species tree in section [ subsect : prokaryotes ] where 21 and 28 , respectively . from table [tab : prokaryotes ] we see that such a species tree is likely to be inferred correctly by algorithm [ alg : improved ] if its internal branch lengths are around ( with a probability of more than 90 percent ) . for branch lengths around , however , the probability for a correct inference of decreases to about 40 to 50 percent .( however , there might still be certain clades on which can be detected with high accuracy also for smaller branch lengths . ) clearly , in these considerations we ignore effects such as horizontal gene transfer , for whose existence there is evidence in the case of the nine prokaryotes considered in section [ subsect : prokaryotes ] for some non - ribosomal proteins ( see teichmann and mitchison ) ..simulation results for the 5-taxon caterpillar tree with all internal branch lengths set to ( in coalescent units ) .table entries are proportions of trials which yielded the correct unrooted species tree topology .columns are labelled by the number of simulated gene trees used in each trial . [ cols="^,^,^,^,^,^,^,^ " , ] elizabeth s. allman , james h. degnan , and john a. rhodes . 
_ identifying the rooted species tree from the distribution of unrooted gene trees under the coalescent . _ j. math . biol . ( 2011 ) , in press , doi:10.1007/s00285-010-0355-7 . y. yu , t. warnow , and l. nakhleh , _ algorithms for mdc - based multi - locus phylogeny inference . _ proceedings of the 15th annual international conference on research in computational molecular biology ( recomb ) , lnbi 6577 , 531 - 545 , 2011 .
in this article we propose a new method , which we name ` quartet neighbor joining ' , or ` quartet - nj ' , to infer an unrooted species tree on a given set of taxa from empirical distributions of unrooted quartet gene trees on all four - taxon subsets of . in particular , quartet - nj can be used to estimate a species tree on from distributions of gene trees on . the quartet - nj algorithm is conceptually very similar to classical neighbor joining , and its statistical consistency under the multispecies coalescent model is proven by a variant of the classical ` cherry picking ' theorem . in order to demonstrate the suitability of quartet - nj , coalescent processes on two different species trees ( on five and nine taxa , respectively ) were simulated , and quartet - nj was applied to the simulated gene tree distributions . further , quartet - nj was applied to quartet distributions obtained from multiple sequence alignments of 28 proteins of nine prokaryotes .
the convection - diffusion model can be expressed mathematically , which is a semi linear parabolic partial differential equation .specially , we consider an initial value system of _ convection - diffusion _ equation in dimension as : ,\ ] ] together with the dirichlet boundary conditions : ,\ ] ] or neumann boundary conditions : .\ ] ] where is the boundary of computational domain \times [ c , d]\subset { \mathbb{r}}^2 ] is time interval , and and are known smooth functions , and denote heat or vorticity .the parameters : and are constant convective velocities while the constants are diffusion coefficients in the direction of and , respectively .the convection - diffusion models have remarkable applications in various branches of science and engineering , for instance , fluid motion , heat transfer , astrophysics , oceanography , meteorology , semiconductors , hydraulics , pollutant and sediment transport , and chemical engineering . specially , in computational hydraulics and fluid dynamics to model convection - diffusion of quantities such as mass , heat , energy , vorticity .many researchers have paid their attention to develop some schemes which could produce accurate , stable and efficient solutions behavior of convection - diffusion problems , see and the references therein . in the last years, the convection - diffusion equation has been solved numerically using various techniques : namely- finite element method , lattice boltzmann method , finite - difference scheme and higher - order compact finite difference schemes . a nine - point high - order compact implicit scheme proposed by noye and tan is third - order accurate in space and second - order accurate in time , and has a large zone of stability .an extension of higher order compact difference techniques for steady - state to the time - dependent problems have been presented by spotz and carey , are fourth - order accurate in space and second or lower order accurate in time but conditionally stable .the fourth - order compact finite difference unconditionally stable scheme due to dehghan and mohebbi have the accuracy of order .a family of unconditionally stable finite difference schemes presented in have the accuracy of order .the schemes presented in are based on high - order compact scheme and weighted time discretization , are second or lower order accurate in time and fourth - order accurate in space .the high - order alternating direction implicit ( adi ) scheme with accuracy of order proposed by karaa and zhang , is unconditionally stable . a high - order unconditionally stable exponential scheme for unsteady convection - diffusion equation by tian and yua have the accuracy of order .a rational high - order compact alternating direction implicit ( adi ) method have been developed for solving unsteady convection - diffusion problems is unconditionally stable and have the accuracy of order . a unconditionally stable fourth - order compact finite difference approximation for discretizing spatial derivatives and the cubic - spline collocation method in time , proposed by mohebbi and dehghan ,have the accuracy of order .an unconditionally stable , semi - discrete based on pade approximation , by ding and zhang , is fourth - order accurate in space and in time both .the most of schemes are based on the two - level finite difference approximations with dirichlet conditions , and very few schemes have been developed to solve the convection - diffusion equation with neumann s boundary conditions , see and references therein . 
the fourth - order compact finite difference scheme by cao et al . is of - order accurate in time and 4th - order in the space .a high - order alternating direction implicit scheme based on fourth - order pade approximation developed by you is unconditionally stable with the accuracy of order .the differential quadrature method ( dqm ) dates back to bellman et al . . after the seminal paper of bellman , various test functions have been proposed , among others , spline functions , sinc function , lagrange interpolation polynomials , radial base functions , modified cubic b - splines , see , etc .shu and richards have generalized approach of dqm for numerical simulation of incompressible navier - stokes equation .the main goal of this paper is to find numerical solution of initial value system of _ convection - diffusion _ equation with both kinds of boundary conditions ( dirichlet boundary conditions and neumann boundary conditions ) , approximated by dqm with new sets of modified cubic b - splines ( modified extended cubic b - splines , modified exponential cubic b - splines , modified trigonometric cubic b - splines ) as base functions , and so called modified trigonometric cubic - b - spline differential quadrature method ( mtb - dqm ) , modified exponential cubic - b - spline differential quadrature method ( mexp - dqm ) and third modified extended cubic - b - spline differential quadrature method ( mecdq ) .these methods are used to transform the convection diffusion problem into a system of first order odes , in time .the resulting system of odes can be solved by using various time integration algorithm , among them , we prefer ssp - rk54 scheme due to its reduce storage space , which results in less accumulation errors .the accuracy and adaptability of the method is illustrated by three test problems of two dimensional convection diffusion equations .the rest of the paper is organized into five more sections , which follow this introduction . specifically , section [ sec - metho - decr - tem ] deals with the description of the methods : namely- mtb - dqm , mexp - dqm and mecdq .section [ sec - impli ] is devoted to the procedure for the implementation of describe above these methods for the system together with the boundary conditions as in and .section [ sec - stab ] deals with the stability analysis of the methods .section [ sec - num ] deals with the main goal of the paper is the numerical computation of three test problems .finally , section [ sec - conclu ] concludes the results .the differential quadrature method is an approximation to derivatives of a function is the weighted sum of the functional values at certain nodes .the weighting coefficients of the derivatives is depend only on grids . this is the reason for taking the partitions ] . setting the values of and its first and second derivatives in the grid point , denoted by , and , respectively , read : the modified trigonometriccubic b - splines base functions are defined as follows : now , the set is the base over ] , ] . eq . 
reduced to compact matrix form : the coefficient matrix of order can be read from and as : \ ] ] and the columns of the matrix read as : = \left [ \begin{array}{c } 2 a_4\\ a_3-a_4\\ 0 \\ \vdots \\ \\ 0 \\ 0\\ \end{array } \right ] , \im'[2 ] = \left[\begin{array}{c } a_4 \\ \\ a_3 \\ \\ \vdots \\ \\ \\ \end{array}\right ] , \ldots , \im'[n_x-1 ] = \left [ \begin{array}{c } \\\vdots \\ \\ \\ a_4 \\ \\ a_3 \\ \end{array } \right ] , \mbox { and } \im'[n_x ] = \left [ \begin{array}{c } \\ \\\vdots \\ \\ \\ 2a_4\\ a_3- a_4 \\ \end{array } \right].\ ] ] the exponential cubic b - splines function at node in direction , reads : where the set forms a base over ] . the procedure to define modified trigonometric cubic b - splines in direction ,is followed analogously . setting and for all .using mexp - dqm , the approximate values of the first - order derivative is given by setting ] , and 0 0 0 0 0 0 0 0 ] .let and , then the values of and its first and second derivatives in the grid point , denoted by , and , respectively , read : the modified extended cubic b - splines base functions are defined as follows : the set is a base over ] , ] .eq . can be re - written in compact matrix form as : using eqns . and, the matrix of order read as : \ ] ] and the columns of the matrix read : = \left [ \begin{array}{c } -1/h_x\\ 1/h_x\\ 0 \\\vdots \\ \\ 0\\ 0 \\ \end{array } \right ] , \phi'[2 ] = \left[\begin{array}{c } -1/2h_x \\ \\ 1/2h_x \\ \\ \vdots \\ \\ \\ \end{array}\right ] , \ldots , \phi'[n_x-1 ] = \left [ \begin{array}{c } \\ \vdots \\ \\ \\ -1/2h_x \\ \\1/2h_x \\ \end{array } \right ] , \mbox { and } \phi'[n_x ] = \left [ \begin{array}{c } \\\\ \vdots \\ \\ \\ -1/h_x \\ 1/h_x \\\end{array } \right].\ ] ] using thomas algorithm " the system , and have been solved for the weighting coefficients , for all . similarly , the weighting coefficients , in either case , can be computed by employing these modified cubic b - splines in the direction . using and , the weighting coefficients , and ( for )can be computed using the shu s recursive formulae : where denote -th order spatial derivative . in particular, the weighting coefficients of order can be obtained by taking in .after computing the approximate values of first and second order spatial partial derivatives from one of the above three methods , one can re - write eq as follows : in case of the dirichlet conditions , the solutions on boundaries can directly read from as : .\ ] ] on the other hand , if the boundary conditions are neumann or mixed type , then the solutions at the boundary are obtained by using any above methods ( mtb - dqm , mexp - dqm or mecdq method ) on the boundary , which gives a system of two equations . on solving it we get the desired solution on the boundary as follows : from eq . with andthe neumann boundary conditions at and , we get in terms of matrix system for , the above equation can be rewritten as \left [ \begin{array}{c } u_{1j } \\u_{n_x j } \\\end{array } \right ] = \left [ \begin{array}{c } s_j^a \\ s_j^b \\ \end{array } \right],\ ] ] where and .on solving , for the boundary values and , we get analogously , for the neumann boundary conditions at and , the solutions for the boundary values and can be obtained as : where and . after implementing the boundary values , eq can be written in compact matrix form as follows : where 1 . ] is the vector of order containing the boundary values , i.e. , + 3 . 
be a square matrix of order given as where and are the block diagonal matrices of the weighting coefficients and , respectively as given below , \mbox { and } & ~ b_r = \left [ \begin{array}{cccc } m_r & o & \ldots & o \\ o & m_r & \ldots & o \\ \vdots & \vdots & \ddots & \vdots \\ o & o & \ldots & m_r \\\end{array } \right ] , \end{array}\ ] ] where and are the matrices of order , and the sub - matrix of the block diagonal matrix is given by .\end{array}\ ] ] finally , we adopted ssp - rk54 scheme to solve the initial value system as : where stability of the method mtb - dqm for convection - diffusion equation depends on the stability of the initial value system of odes as defined in .noticed that whenever the system of odes is unstable , the proposed method for temporal discretization may not converge to the exact solution . moreover , being the exact solution can directly obtained by means of the eigenvalues method , the stability of depends on the eigenvalues of the coefficient matrix .in fact , the stability region is the set , where is the stability function and is the eigenvalue of the coefficient matrix . the stability region for ssp - rk54 scheme is depicted in ( * ? ? ?* fig.1 ) , from which one can clam that for the stability of the system it is sufficient that for each eigenvalue of the coefficient matrix b. hence , the real part of each eigenvalue is necessarily either zero or negative .it is seen that the eigenvalues of the matrices and have identical nature .therefore , it is sufficient to compute the eigenvalues , and , of the matrices and for different values of grid sizes . the eigenvalues and for has been depicted in figure [ eq - eignv ] .analogously , one can compute the eigenvalues and using mecdq method or mexp - dqm .it is seen that in either case , and have same nature as in figure [ eq - eignv ] .further , we can get from figure [ eq - eignv ] that each eigenvalue of the matrix as defined in eq .is real and negative .this confirms that the proposed methods produces stable solutions for two dimensional convection - diffusion equations .this section deals with the main goal , the numerical study of three test problems of the initial value system of convection - diffusion equations with both kinds of the boundary conditions has been done by adopting the methods mtb - dqm , mexp - dqm and mecdq method along with the integration ssp - rk54 scheme .the accuracy and the efficiency of the methods have been measured in terms of the discrete error norms : namely- average norm ( -error norm ) and the maximum error ( error norm ) .[ ex1 ] consider the initial value system of convection - diffusion equation with , while values of for can be extracted from the exact solution where initial condition is a gaussian pulse with unit hight centered at .we have computed the numerical solution of problem [ ex1 ] for and different values of . for ^ 2 ] , time , and grid space step size : the and error norms in the proposed solutionshave been compared with the errors in the solutions by various schemes in in table [ tab1.2 ] for and in table [ tab1.3 ] for .the initial solution and mtb - dqm solutions in ^ 2 ] with , where and the neumann boundary condition ,\]]or the dirichlet s conditions can be extracted from the exact solution : the computation by the proposed methods has been done for different values of and . 
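The tables that follow report these two discrete error norms together with the observed rate of convergence; a small sketch of how such entries are computed from a numerical and an exact solution on the same grid is given below. The paper's "average" norm may differ from the root-mean-square form used here by a normalization constant, so treat that detail as an assumption.

```python
import numpy as np

def error_norms(u_num, u_exact):
    """Discrete maximum (L_inf) and average (root-mean-square L_2) error
    norms of a computed solution against the exact one on the same grid."""
    err = np.abs(np.asarray(u_num) - np.asarray(u_exact))
    return err.max(), np.sqrt(np.mean(err ** 2))

def rate_of_convergence(err_h1, err_h2, h1, h2):
    """Observed order of accuracy from the errors on two grids:
    roc = log(E(h1) / E(h2)) / log(h1 / h2)."""
    return np.log(err_h1 / err_h2) / np.log(h1 / h2)
```

For instance, halving the grid step from 0.05 to 0.025 while the error drops from 1.0e-3 to 1.25e-4 gives roc = log(8)/log(2) = 3, i.e. third-order behaviour.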
for , we take for the solutions at .the rate of convergence and error norms in the proposed solutions has been compared with that of due to fourth - order compact finite difference scheme for , in table [ tab2.1 ] .it is found that the proposed solutions from either method are more accurate in comparison to , and are in good agreement with the exact solutions . for , we take for the solution at . in table[ tab2.2 ] , the and error norms are compared with that obtained by fourth - order compact finite difference scheme for , and .rate of convergence have been reported in table [ tab2.3 ] . from tables[ tab2.2 ] and [ tab2.3 ] , we found that the proposed solutions are more accurate as compared to the results in , while the rate of convergence is linear .the behavior of solutions is depicted in figure [ ex2-fig2.2 ] and [ ex2-fig2.3 ] with for and , respectively .for the same parameter , mentioned above , the numerical solution is obtained by using neumann conditions and reported in table [ tab2.4 ] .the obtained results are in good agreement with the exact solutions , the rate of convergence for is quadratic for each method . * 2lclcllclcllclcllc + & & & & & & & + & & roc & & roc & & & roc & & roc & & & roc & & roc & & & roc + 0.2 & 4.231e-06 & & 4.969e-10 & & & 1.714e-05 & & 3.584e-09 & & & 4.231e-06 & & 4.999e-10 & & & & + 0.1 & 5.155e-07 & 3.0 & 2.472e-11 & 4.3 & & 5.168e-07 & 5.1 & 2.473e-11 & 7.2 & & 5.153e-07 & 3.0 & 2.478e-11 & 4.3 & & 2.289e-10 & + 0.05 & 5.108e-08 & 3.3 & 9.041e-13 & 4.8 & & 5.105e-08 & 3.3 & 9.044e-13 & 4.8 & & 5.107e-08 & 3.3 & 9.049e-13 & 4.8&&1.621e-11 & 3.82 + 0.025 & 1.147e-08 & 2.2 & 1.922e-14 & 5.6 & & 1.147e-08 & 2.2 & 1.922e-14 & 5.6 & & 1.147e08 & 2.4 & 1.923e-14 & 5.6&&8.652e-13 & 4.23 + + 0.2 & 7.4031e-06 & & 1.4907e-09 & & & 3.5634e-05 & & 1.5314e-08 & & & 2.0585e-06 & & 6.7290e-11 & & & & + 0.1 & 1.1460e-06 & 2.7 & 1.3103e-10 & 3.5 & & 1.1421e-06 & 5.0 & 1.3148e-10 & 6.9 & & 1.5919e-07 & 3.7 & 1.1624e-12 & 5.9 & & 2.749e-09 & + 0.05 & 1.4578e-07 & 3.0 & 7.5377e-12 & 4.1 & & 1.4582e-07 & 3.0 & 7.5396e-12 & 4.1 & & 1.9012e-08 & 3.1 & 5.4939e-14 & 4.4 & & 2.394e-10 & 3.52 + 0.025 & 1.3902e-08 & 3.4 & 2.7129e-13 & 4.8 & & 1.3904e-08 & 3.4 & 2.7132e-13 & 4.8 & & 6.2220e-09 & 1.6 & 9.0456e-15 & 2.6 & & 1.658e-11 & 3.85 + * 6l*6l*6l & & & & & & & + & & & & & & & & & & & & & & & + 0.04 & 4.1718e-03 & & 1.7833e-03 & & 1.7475e-03 & & 7.9168e-04 & & 1.2939e-03 & & 3.6957e-04 & & 1.1826e-01 & & 4.9331e-03 + 0.02 & 5.2095e-04 & & 6.8410e-05 & & 5.2097e-04 & & 6.8445e-05 & & 6.3735e-04 & & 6.8622e-05 & & 1.5310e-02 & & 4.1351e-04 + 0.01 & 3.2496e-04 & & 2.1508e-05 & & 3.2484e-04 & & 2.1493e-05 & & 2.5394e-04 & & 1.8260e-05 & & 9.4696e-04 & & 2.8405e-05 + * 5l*5l*5l & + & & & & & + & & roc & & roc & & & roc & & roc & & & roc & & roc + 0.1 & 2.6263e-02 & & 3.6277e-02 & & & 2.6280e-02 & & 3.6670e-02 & & & 2.4464e-02 & & 4.0175e-02 & + 0.05 & 6.8144e-03 & 1.9 & 4.1447e-03 & 3.1 & & 6.8144e-03 & 1.9 & 4.1339e-03 & 3.1 & & 3.3580e-03 & 2.9 & 1.6264e-03 & 4.6 + 0.025 & 1.1634e-03 & 2.6 & 2.7780e-04 & 3.9 & & 1.0719e-03 & 2.7 & 1.9697e-04 & 4.4 & & 8.1776e-04 & 2.0 & 1.0090e-04 & 4.0 + & + & & & & & + 0.01 & 5.0998e-04 & & 3.5191e-03 & & & 5.0738e-04 & & 1.0116e-05 & & & 7.4180e-04 & & 2.5558e-05 & + 0.05 & 1.8776e-04 & 1.4 & 1.8193e-03 & 1.0 & & 1.8776e-04 & 1.4 & 2.9979e-06 & 1.8 & & 2.0972e-04 & 1.8 & 5.1381e-06 & 2.3 + 0.025 & 9.9342e-05 & 0.9 & 9.1372e-04 & 1.0 & & 9.9342e-05 & 0.9 & 7.9462e-07 & 1.9 & & 9.5677e-05 & 1.1 & 8.1188e-07 & 2.7 + l*4c*5c*5c & + & & & & & 
+ & & roc & & roc & & & roc & & roc & & & roc & & roc + 0.05 & 1.0176e-02 & & 6.7504e-03 & & & 1.0177e-02 & & 6.7513e-03 & & & 9.1549e-03 & & 5.7412e-03 & + 0.025 & 2.2164e-03 & 2.2 & 1.0428e-03 & 2.7 & & 2.2165e-03 & 2.2 & 1.0429e-03 & 2.7 & & 1.9834e-03 & 2.2 & 8.7214e-04 & 2.7 + 0.0125 & 2.9460e-04 & 2.9 & 7.2888e-05 & 3.8 & & 2.9460e-04 & 2.9 & 7.2889e-05 & 3.8 & & 2.5565e-04&3.0 & 5.6518e-05 & 3.9 + & + & & & & & + 0.05 & 1.2070e-01 & & 6.3285e-01 & & & 1.2070e-01 & & 6.3293e-01 & & & 1.0921e-01 & & 6.0454e-01 & + 0.025 & 2.7808e-02 & 2.1 & 6.2302e-02 & 3.3 & & 2.7809e-02 & 2.1 & 6.2304e-02 & 3.3 & & 2.6004e-02 & 2.1 & 5.5004e-02 & 3.5 + 0.0125&6.0348e-03 & 2.2 & 6.4433e-03 & 3.3 & & 6.0348e-03 & 2.2 & 6.4433e-03 & 3.3 & & 5.4733e-03 & 2.2 & 5.5668e-03 & 3.3 + & + & & & & & + 0.05 & 1.4931e-05 & & 5.8044e-08 & & & 1.4932e-05 & & 5.8056e-08 & & & 1.3598e-05 & & 4.8332e-08 & + 0.025 & 3.3009e-06 & 2.2 & 1.0664e-08 & 2.4 & & 3.3009e-06 & 2.2 & 1.0665e-08 & 2.4 & & 2.9824e-06 & 2.2 & 8.7286e-09 & 2.5 + 0.0125 & 4.5947e-07 & 2.8 & 8.0780e-10 & 3.7 & & 4.5947e-07 & 2.8 & 8.0780e-10 & 3.7 & & 3.9840e-07 & 2.9 & 6.0761e-10 & 3.8 + & + & & & & & + 0.05 & 1.7132e-05 & & 6.9905e-08 & & & 1.7132e-05 & & 6.9910e-08 & & & 1.6511e-05 & & 6.5219e-08 & + 0.025 & 3.9931e-06 & 2.1 & 1.4135e-08 & 2.3 & & 3.9932e-06 & 2.1 & 1.4136e-08 & 2.3 & & 3.7892e-06 & 2.1 & 1.2756e-08 & 2.4 + 0.0125 & 8.8931e-07 & 2.2 & 2.7052e-09 & 2.4 & & 8.8931e-07 & 2.2 & 2.7052e-09 & 2.4 & & 8.2511e-07 & 2.2 & 2.3316e-09 & 2.5 + [ ex3 ] the initial value system of convection - diffusion equation with ^ 2 $ ] and , and where the distribution of the initial solution is depicted in figure [ ex3-fig3.1 ] .the solutions behavior is obtained for the parameter values : and , and is depicted in figure [ ex3-fig3.2 ] due to mtb - dqm , also we noticed the similar characteristics obtained using mexp - dqm and mecdq method .the obtained characteristics agreed well as obtained in .in this paper , the numerical computations of initial value system of two dimensional _ convection - diffusion _ equations with both kinds of boundary conditions has been done by adopting three methods : modified exponential cubic b - splines dqm , modified trigonometric cubic b - splines dqm , and mecdq method , which transforms the _ convection - diffusion _ equation into a system of first order ordinary differential equations ( odes ) , in time , which is solved by using ssp - rk54 scheme .the methods are found stable for two space convection - diffusion equation by employing matrix stability analysis method .section [ sec - num ] shows that the proposed solutions are more accurate in comparison to the solutions by various existing schemes , and are in good agreement with the exact solutions .the order of accuracy of the proposed methods for the convection - diffusion problem with dirichlet s boundary conditions is cubic whenever and otherwise it is super linear , in space . 
on the other hand , the order of accuracy of the proposed methods for the convection - diffusion problem with neumann boundary conditions is quadratic with respect to the error norms , see table [ tab2.4 ] . huai - huo cao , li - bin liu , yong zhang and sheng - mao fu , a fourth - order method of the convection - diffusion equations with neumann boundary conditions , applied mathematics and computation 217 ( 2011 ) 9133 - 9141 . hengfei ding and yuxin zhang , a new difference scheme with high accuracy and absolute stability for solving convection - diffusion equations , journal of computational and applied mathematics 230 ( 2009 ) 600 - 606 . j.c . kalita , d.c . dalal and a.k . dass , a class of higher order compact schemes for the unsteady two - dimensional convection - diffusion equation with variable convection coefficients , int . j. numer . methods fluids 38 ( 2002 ) 1111 - 1131 . m. zerroukat , k. djidjeli and a. charafi , explicit and implicit meshless methods for linear advection - diffusion - type partial differential equations , international journal for numerical methods in engineering 48 ( 2000 ) 19 - 35 . g. arora , v. joshi , a computational approach for solution of one dimensional parabolic partial differential equation with application in biological processes , ain shams eng . j. ( 2016 ) , http://dx.doi.org/10.1016/j.asej.2016.06.013 . singh , p. kumar , an algorithm based on a new dqm with modified extended cubic b - splines for numerical study of two dimensional hyperbolic telegraph equation , alexandria eng . j. ( 2016 ) , http://dx.doi.org/10.1016/j.aej.2016.11.009 . g. arora , r. c. mittal and b. k. singh , numerical solution of bbm - burger equation with quartic b - spline collocation method , journal of engineering science and technology , special issue 1 , 12/2014 , 104 - 116 . v. k. srivastava and b. k. singh , a robust finite difference scheme for the numerical solutions of two dimensional time - dependent coupled nonlinear burgers equations , international journal of applied mathematics and mechanics 10(7 ) ( 2014 ) 28 - 39 . a. korkmaz and h. k. akmaz , extended b - spline differential quadrature method for nonlinear viscous burgers equation , proceedings of international conference on mathematics and mathematics education , pp . 323 - 323 , elazığ , turkey , 12 - 14 may 2016 . muhammad abbas , ahmad abd . majid , ahmad izani md . ismail , abdur rashid , the application of cubic trigonometric b - spline to the numerical solution of the hyperbolic problems , applied mathematics and computation 239 ( 2014 ) 74 - 88 . [ figures : computed solution profiles of the 2d convection - diffusion equation for the test parameters ( left panels ) , and the solution of the 2d convection - diffusion equation in example [ ex3 ] at two output times ( left and right panels ) . ]
this paper deals with the numerical computation of two space dimensional time dependent _ parabolic partial differential equations _ by adopting an optimal five stage fourth - order strong stability preserving runge - kutta ( ssp - rk54 ) scheme for time discretization , and three differential quadrature methods with different sets of modified cubic b - splines as base functions for space discretization : namely- mecdqm ( dqm with modified extended cubic b - splines ) , mexp - dqm ( dqm with modified exponential cubic b - splines ) , and mtb - dqm ( dqm with modified trigonometric cubic b - splines ) . in particular , we apply these methods to the _ convection - diffusion _ equation to convert it into a system of first order ordinary differential equations ( odes ) in time . the resulting system of odes can be solved using any time integration method , while we prefer the ssp - rk54 scheme . all three methods are found stable for the two space convection - diffusion equation by employing the matrix stability analysis method . the accuracy and validity of the methods are confirmed by three test problems of the two dimensional _ convection - diffusion _ equation , which show that the approximate solutions obtained by any of the methods are in good agreement with the exact solutions . convection - diffusion equation , modified trigonometric cubic - b - splines , modified exponential cubic - b - splines , modified extended cubic - b - splines , differential quadrature method , ssp - rk54 scheme , thomas algorithm
robot design deals with complexity in a manner similar to personal computers . robots have input / output devices : actuators that provide output by acting in the environment , and sensors that provide input . like pcs , robot peripherals contain firmware ( device controllers ) to predictably and efficiently manage resources in real - time . data is provided via a well - defined interface ( a set of system calls over a transport ) . however , pcs abstract the differences in internal organization and chipsets by classifying devices in terms of their roles in the system . these roles define an appropriate set of access and control functions that generally apply across the entire classification . subsequent differences in devices are accommodated through the use of custom device drivers . robots also contain a mechanism for providing input and output to the higher - level algorithms , but the placement of the hardware abstraction layer is different than in personal computers . although most devices are classified according to the data type they produce and consume , classification occurs within the framework , not at the firmware level . the disadvantage of this approach is that customized links from each hardware platform to each framework must be created . in the current robotics landscape , this is a huge burden given the rate of innovation on new hardware platforms for many research and education purposes . this ongoing backlog of creating one - to - one connections between platforms and frameworks stifles innovation of control architectures . the small number of developers comfortable with device driver creation , whether due to unfamiliarity with the transports or the complexity of the threaded management of connections , is a source of slow progress . fortunately , we can leverage some commonalities found at the device driver level that link salient concepts both in the device driver domain and the robotics domain . we propose a domain specific language based on these concepts called the robot device interface specification ( rdis ) . the rdis describes the robot interface in terms of connection , primitives and interfaces . an accurate characterization of the device domain enables some important innovations . first , the rdis enables a declarative , rather than a programmed , interface to frameworks . this approach benefits both device manufacturers and framework developers and users by separating the device semantics from the framework semantics . robot designers can describe the interface that they provide via onboard firmware and how it maps to abstract concepts via the rdis . the framework developers are only responsible for providing a mapping from the abstract concepts to the framework . the abstract interface allows a many - to - many mapping between devices and frameworks using only a single map for each device and framework . this is beneficial because manufacturers typically only provide device drivers for a single , often proprietary framework . specific device drivers for many frameworks are left to either framework developers ( in the case of popular robots ) or framework users ( as needed ) . the lack of available drivers for a specific device on a specific framework can be a barrier to leveraging existing software components . second , an abstraction that maps device semantics to domain specific concepts enables a new generation of development and runtime tools that can discover and manage available resources at both development and runtime . expertise in creating efficient threaded drivers for specific
frameworks can be reused . this approach would simplify development by presenting developers with available resources that conform to specific domain concepts . in this paper , we present the rdis work in progress , including the rdis specification and tools , as well as a use of the rdis to generate device specific programs . the rest of this paper is organized as follows : section [ rw ] discusses work related to declarative descriptions of robot hardware . section [ rdis ] presents the preliminary domain model and its applicability to existing platforms . the current implementation is discussed in section [ case ] . the summary and future work directions are detailed in section [ summary ] . although the literature reveals very few attempts at using dsls for hardware device drivers , thibault et al . report the creation of efficient video device drivers using a novel dsl . this language is targeted at the hardware interface layer and creates device driver code rather than interpreting code , for efficiency . urbi ( universal robotic body interface ) focuses on creating a model that controls the low level layer of robots and is independent from the robot and client system due to its client / server architecture . others have attempted to address the lack of standardization in abstraction layers but have not considered moving abstractions to drivers using device descriptions . some frameworks use a declarative description of the robots for simulation . player / stage is both a 2d simulator and a robot control framework . robot description files are broken into two pieces : 1 ) a 2d description of the robot and its sensors and 2 ) a set of interfaces that abstract the data produced by hardware to a standard format . the description , used for simulating the robot , consists of a polygon - based footprint with sensor locations marked . actuation and sensor characteristics , along with parameters for simplified error models , are used to complete the model of the robot . a domain - specific set of classes and message types describes what data can be obtained or how the robot can be manipulated ( i.e. pose2d for position and laser or ir for distance to other objects ) . the classes and message types represent the interface that abstracts the robot hardware to the data that it can produce or consume . writing software to the interfaces that a robot can utilize ( rather than the specific robot ) allows software to be written either for a simulated robot or a real robot , which in turn eases the transition from simulation to physical implementation . ros targets a 3d simulation framework ( gazebo ) and more sophisticated intelligent controllers , which require a more rigorous description . urdf ( unified robot description format ) provides a 3d physical description broken into links and joints to facilitate not only mobile robots but manipulators as well . geometric bounding boxes and meshes allow for collision detection and realistic visualization . like player / stage , ros utilizes a message - based model to decouple data providers from data consumers . ideally , robots that provide and consume similar data types can be controlled similarly .
unlike player / stage , urdf not only serves as a mechanism for simulating robots , but also allows for the visualization of real robots both in real - time and off - line ( through saved messages ) . a select number of robot control frameworks move beyond visualization information and relevant interface declaration in the hardware description . preop , an alice - based programming interface for robots , takes this paradigm further . not only is 3d visualization information supplied , but the programming interface is also completely specified by the selection of the robot object . this is accomplished by linking the real - time control mechanism and the exposed api available to the user within the robot object . frameworks and general reuse within the robotics research community rely upon the relatively invariant nature of mobile robots in several ways . first , in an effort to reduce the complexity of control software , many robots reuse certain kinematic designs . for example , differential drive is a fairly common choice as a configuration . there is a computationally simple , closed form solution for its forward and inverse kinematics , which , when combined with wheel encoders , provides a method for calculating pose relative to a starting position ( which in turn enables closed loop control ) . although manipulators can contain arbitrary linkages , typically robots are constrained to configurations that provide a closed form inverse kinematic formulation and are numerically conducive to path planning ( avoiding singularities ) . therefore , software that takes advantage of the kinematic control inherent in one configuration can be applied to other robots that reuse that configuration with appropriate parameterization . second , many robots , including the popular pioneer class mobile robots , irobot creates , k - team robots , erratic er-1 , white box robotics model 914 , ar.drones , and birdbrain finches , contain an embedded firmware controller that accepts commands via a serial , bluetooth , wifi or usb interface rather than requiring the users to download a program to onboard memory . this approach is popular because it allows the hardware designers to hide the complexity of hardware control within the firmware . there are a few designs that still expect developers to download code to the firmware . the benefits of the low latency of local control are far outweighed by the burden of identifying a local toolchain to build the remote executables and the complexity of testing on a remote platform . to that end , robots that utilize local control often provide modes where the local software program presents an api to an external computer ( i.e.
lego mindstorms via lejos and e - puck ) .rdis , robot device interface specification , is a domain specific language that defines the connection to robot firmware and maps data types to defacto standard messages for use in frameworks .this mechanism provides an abstraction layer between the device and frameworks that negates the need for device drivers as point solutions .the rdis has three purposes : 1 ) provide enough information for simulation and visualization of hardware and controllers , 2 ) declaratively specify the mechanism for requesting data and actuation , and 3 ) inform users of standard message types that can be obtained from the hardware to facilitate connections to existing frameworks .the rdis enables several efficiencies in robotics controller development .although the long term goal is to embed the rdis within the firmware as a response to a request , it could also be requested via the internet from a repository .however manufacturers that provide access to the rdis within their hardware would benefit from being able to take advantage of the rdis connectors available for frameworks without specifically providing device drivers .then the rdis serves as a discovery message to the development architecture regarding the services available and how to manage the services at runtime . making the hardware the system of record for its abilities is in line with other modern technologies ( bluetooth for example ) .the challenge in successfully defining the rdis is in creating a model that captures the generalizable aspects of robots and appropriately identifies the aspects that vary .domain models , when designed properly , can be somewhat invariant to changes and can provide a stable basis for deciding the structure and parameters of the specification .primary concepts include connection , message formats , primitives and exposed interfaces ( figure [ er ] ) .figure [ circles ] shows a diagram of the domain and the scope of the rdis .connections are generally through standard transports and describe how the robot connects to external controllers .message formats either encode parameters in ascii formats or send natively as byte values .primitives describe the device invariant features which are requests that can be made and ingoing and outgoing parameters .exposed interfaces describe a more convenient exposed interface that may map directly to primitives or may add value to a primitive by data conversion or specific parameterization .there are some cross - cutting concerns .messaging paradigms are either request - reply ( service - based and adhoc ) or publish / subscribe ( periodic updates that are published or expected ) .threading models include single ( one loop that services incoming and outgoing data ) , dual threaded ( one thread for servicing incoming requests and one thread for periodic requests ) , or multiple threads ( requests create threads and periodic requests are on different frequencies ) .some drivers maintain state( i.e. 
current position relative to the starting point ) , and the validation routines for incoming data and the read and write routines can vary . a preliminary rdis that meets requirements 2 and 3 has been implemented for the finch robot from birdbrain and the koala from k - team . figure [ generate ] shows how the rdis is used to create robot specific driver code for frameworks . the rdis and the resulting templates contain attributes to describe several functions , including connection , basic primitives , external interfaces and mapping to abstract robotic concepts . the connection statement delineates the physical connection parameters , the overall threading model and functions to call upon the creation of the connection ( excerpt shown in figure [ connect ] ) . depending upon the physical connection , other parameters could include port id , serial connection parameters , or usb id . although we intend to support three threading models , the single threading model is currently used , which processes requests and publishing of data in a single active loop . a callback is used to process any subscriptions ( if supported by the framework and indicated by the abstract mapping ) and a second thread is used to issue a keepalive command if required by the platform . all data protection , including appropriate mutexes , is generated by the rdis handler based on the threading model selected . basic primitives describe the mechanism for sending information to and from the robot . primitive specifications indicate the associated connection ( described in the connection statement ) , frequency and message formatting . frequency indicates whether the method is request / reply or periodic . request / reply methods ( indicated as adhoc ) are only submitted when a request is received . parameters can be provided by the client and data can be returned to the calling client . periodic requests are executed on a schedule and utilize a set of state variables ( defined in the state variables section ) to retrieve and save method data . message formats for communicating with robot firmware are either position based or delimited . the interim specification presented here encodes the messages along with the input and output fields . an example of the setmotor function is given in figure [ setmotorc ] and the underlying abstract syntax tree is shown in figure [ setmotor ] . the external interface exposes the api available to client programs . each interface is composed of one or more primitive methods or can return state variables ( updated asynchronously by periodic primitives ) . the separation between the external interface and the primitives encapsulates the robot firmware and its parameters from developers . for example , actuation commands are sometimes provided in encoder units where an external api would utilize a standard measure such as meters per second . a mapping to abstract concepts in sensing and locomotion provides a link between robots and existing frameworks . rather than specify framework specific information within the rdis , abstract concepts that describe the data available are used instead . for example , since a differential drive robot can be controlled via linear and rotational velocity , we provide a mapping between linear and rotational velocity and robot primitives ( left and right velocity ) . an example is shown in figure [ am ] .
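to illustrate the kind of declarative description and abstract mapping discussed above , the sketch below shows a hypothetical rdis - like description expressed as a python dict , together with the standard differential - drive relation that maps linear and rotational velocity to left and right wheel speeds . the field names ( connection , primitives , interfaces , abstract_map ) and the setmotor primitive are illustrative assumptions only ; the actual rdis grammar and the finch / koala message formats are not reproduced here .

```python
# hypothetical rdis-like description of a differential-drive robot, sketched
# as a python dict; field names and the "setmotor" primitive are illustrative
# assumptions, not the actual rdis grammar.
ROBOT_DESCRIPTION = {
    "connection": {"transport": "serial", "port": "/dev/ttyUSB0",
                   "baud": 115200, "threading": "single"},
    "primitives": {
        # adhoc request/reply primitive: firmware expects left/right wheel speeds
        "setmotor": {"frequency": "adhoc", "format": "ascii-delimited",
                     "in": ["left_mm_s", "right_mm_s"], "out": []},
    },
    "interfaces": {
        # exposed api in standard units, mapped onto the primitive below
        "set_velocity": {"in": ["linear_m_s", "angular_rad_s"],
                         "maps_to": "setmotor"},
    },
    "abstract_map": {"locomotion": "differential_drive", "wheel_base_m": 0.25},
}

def set_velocity(linear_m_s: float, angular_rad_s: float, wheel_base_m: float):
    """map (linear, rotational) velocity to left/right wheel speeds in mm/s.

    this is the standard differential-drive relation referred to in the text:
    v_left = v - w*L/2, v_right = v + w*L/2.
    """
    v_left = linear_m_s - angular_rad_s * wheel_base_m / 2.0
    v_right = linear_m_s + angular_rad_s * wheel_base_m / 2.0
    return v_left * 1000.0, v_right * 1000.0  # convert m/s to mm/s

# example: drive forward at 0.2 m/s while turning at 0.5 rad/s
left, right = set_velocity(0.2, 0.5,
                           ROBOT_DESCRIPTION["abstract_map"]["wheel_base_m"])
```

in this sketch , a framework connector would only need the abstract_map entry to decide that the device can accept standard velocity commands , while the primitive entry tells the generated driver how to talk to the firmware .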
in the current design , the rdis is modeled as a json subset ( excerpt in figure [ rdisjson ] ) . the intermediate product is an abstract syntax tree that represents the robot details in a domain specific model . this intermediate format can be further processed to verify conformance to the specification . end products are generated from the verified syntax tree , either in a single or multiple passes , using templates that format data based on the model . the preliminary result of this approach includes rdis specifications and grammars that generate a command line program , a websocket server and a ros driver . the ros driver looks for specific interface signatures in the abstract mapping section that match ros message structures . it is important to note that the rdis toolset is enabled by antlr and stringtemplate . these are open source libraries that parse and process data according to grammars . such grammars are often used to define domain specific languages that are subsequently processed either by interpreters or translators . the sample rdis and the translation to a c - based command line controller , a websocket server and a c++ ros node were achieved through the use of grammars and the antlr and stringtemplate libraries . although these libraries provide many built - in features , the ability to embed code to customize processing is important to using these tools effectively . this preliminary result supports the idea that general robot devices can be described declaratively in a manner that supports discovery and that links to the backend processes . the ultimate goal is to enable more accessible programming by embedding the robot device descriptions within the device . discovery occurs when the design environment queries the device for its supported services ( or apis ) . the initial approach for platforms that support onboard reconfigurable firmware is to augment the firmware to support a single command that communicates the rdis . the information provided by the rdis can be used by any rdis - enabled development environment . it is expected that manufacturers will choose to rdis - enable their devices once more rdis - enabled environments are available . the rdis must be expanded to be useful in a larger context . these tasks include but are not limited to : 1 ) addition of a complete kinematic , visual and collision description consistent with existing simulators and frameworks , 2 ) error handling at both the communication and primitive levels , 3 ) implementation of additional threading models , 4 ) refinement of the state concept and how it matches to primitives and interfaces , 5 ) management of sensor and actuator error models consistent with existing frameworks , and 6 ) matching internal mechanisms to framework standard interfaces and message types through linking the description and the exposed api instead of relying upon matching external interface signatures . these changes require updates to the specification and the underlying parsers , lexers , tree grammars and string templates . the authors gratefully acknowledge the partial support of nsf via grants cns-1042360 and eec-1005191 . b. p. gerkey , r. t. vaughan , k. stoy , a. howard , g. s. sukhatme and m. j. mataric , most valuable player : a robot device server for distributed control , _ proc . of the ieee / rsj intl . conf . on intelligent robots and systems ( iros ) _ , 2001 . m. quigley , b. gerkey , k. conley , j. faust , t. foote , j. leibs , e. berger , r. wheeler , and
a. ng , `` ros : an open - source robot operating system , '' _ international conference on robotics and automation _ , 2009 . s. cooper , w. dann and r. pausch , `` alice : a 3-d tool for introductory programming concepts , '' _ proceedings of the fifth annual ccsc northeastern conference , journal of computing in small colleges _ , pp . 107 - 116 , 2000 . s.a. thibault , r. marlet and c. consel , domain - specific languages : from design to implementation , application to video device drivers generation , _ ieee transactions on software engineering _ , vol . 25 , no . 3 , may / jun 1999 .
there is no dearth of new robots that provide both generalized and customized platforms for learning and research . unfortunately as we attempt to adapt existing software components , we are faced with an explosion of device drivers that interface each hardware platform with existing frameworks . we certainly gain the efficiencies of reusing algorithms and tools developed across platforms but only once the device driver is created . we propose a domain specific language that describes the development and runtime interface of a robot and defines its link to existing frameworks . the robot device interface specification ( rdis ) takes advantage of the internal firmware present on many existing devices by defining the communication mechanism , syntax and semantics in such a way to enable the generation of automatic interface links and resource discovery . we present the current domain model as it relates to differential drive robots as a mechanism to use the rdis to link described robots to html5 via web sockets and ros ( robot operating system ) .
numerous techniques for forecasting electric energy consumption have been proposed in the last few decades . for operators , energy consumption ( load ) forecasting is useful in effectively managing power systems . consumers can also benefit from the forecasted information in order to yield maximum satisfaction . in addition to these economic reasons , load forecasting has also been used for system security purposes . when deployed to handle system security problems , it provides expedient information for detecting vulnerabilities in advance . forecasting energy consumed within a particular geographical area greatly depends on several factors , such as historical load , mean atmospheric temperature , mean relative humidity , population and gdp per capita . over the years , there has been rapid annual growth of about 10% from year 1999 to 2005 in energy demand in the gaza strip . with about 75% of energy demands coming from the service and household sectors , these demands are barely met . in order to meet these demands and efficiently utilize the limited energy , it is imperative to observe historic trends and make futuristic plans based on past data . in the past , computationally easier approaches like regression and interpolation have been used ; however , these methods may not give sufficiently accurate results . as advances in technology and sophisticated tools are made , complex algorithmic approaches are introduced and more accuracy , at the expense of heavy computational burden , can be observed . several algorithms have been proposed by several researchers to tackle the electric energy consumption forecasting problem . previous works can be grouped into three : _ * time series approach : * _ : : in this approach , the trend for electric energy consumption is handled as a time series signal . future consumption is usually predicted based on various time series analysis techniques . however , the time series approach is characterized by prediction inaccuracies and numerical instability . these inaccurate results are due to the fact that the approach does not utilize weather information . studies have shown that there is a strong correlation between the behavior of energy consumed and weather variables . zhou r. _ et al_. proposed a data driven modeling method using time series analysis to predict energy consumed within a building . the model is applied to two commercial buildings and is limited to energy prediction within a building . basu k. _ et al_. also used the time series approach to predict appliance usage in a building for just an hour . + simmhan y. _ et al_. used an incremental time series clustering approach to predict energy consumption . this method was able to minimize the prediction error , however , a very large number of data points was required . autoregressive integrated moving average ( arima ) is a vastly used time series approach . an arima model was used by chen j. _ et al_. to predict energy consumption in jiangsu province in china based on data collected from year 1985 to 2007 . the model was able to accurately predict the energy consumption , however it was limited to that environment . the previous works on time series usually use computationally complex matrix - oriented adaptive algorithms which , in most scenarios , may become unstable .
_ * functional based approach : * _ : : here , a functional relationship between a load dependent variable ( usually weather ) and the system load is modelled . future load is then predicted by inserting the predicted weather information into the pre - defined functional relationship . most regression methods use functional relationships between weather variables and up - to - date load demands . linear representations are used as forecasting functions in conventional regression methods , and this method finds an appropriate functional relationship between selected weather variables and load demand . liu d. _ et al_. proposed a support vector regression with radial basis function to predict energy consumption in a building . the approach was only able to forecast the energy consumed due to lighting for some few hours . + in , a grey model , a multiple regression model and a hybrid of both were used to forecast energy consumption in the zhejiang province of china . yi w. _ et al_. proposed an ls - svm regression model to also forecast energy consumption . however , these models were limited to a specific geographic area . _ * soft computing based approach : * _ : : this is a more intelligent approach that is extensively being used for demand side management . it includes techniques such as fuzzy logic , genetic algorithms and artificial neural networks ( ann ) . the ann approach is based on examining the relationship that exists between input and output variables . the ann approach was used in to forecast regional load in taiwan . empirical data was used to effectively develop an ann model which was able to predict the regional peak load . catalo j. p. s. _ et al_. used the ann approach to forecast short - term electricity prices . the levenberg - marquardt algorithm was used to train the data and the resulting model was able to accurately forecast electricity prices . however , it was only able to predict electricity prices for about 168 hours . + pinto t. _ et al_. also worked on developing an ann model to forecast electricity market prices with a special feature of dynamism . this model performs well when a small set of data is trained , however , it is likely to perform poorly with a large number of data points due to the computational complexities involved . load data from year 2006 to 2009 were gathered and used to develop an ann model for short - term load forecast in . in , an ann hybrid with invasive weed optimization ( iwo ) was employed to forecast the electricity prices in the australian market . the hybrid model showed good performance , however , the focus was on predicting electricity prices in australia . most of the ann models developed in existing work considered some specific geographic area ; some models were able to forecast energy consumption only for buildings and only for a few hours .
this study used available historical data from year 1994 to 2013 ( but trained on data from year 1994 to 2011 ) in determining a suitable model . the resulting model from training will be used to predict electric energy consumption for future years , while error criteria such as the mean squared error ( mse ) , root mean squared error ( rmse ) , mean absolute error ( mae ) and mean absolute percentage error ( mape ) are used as measures to justify the appropriate model . the model was used to predict the behavior for years 2012 and 2013 . the remainder of this paper is organized as follows : section [ sec:2 ] gives a brief description of the ann concept . section [ sec:3 ] presents the ann approach used to analyse our data . section [ sec:4 ] evaluates the performance of the ann model . section [ sec:5 ] draws conclusions . ann is a system based on the working principles of biological neural networks , and is defined as a mimicry of biological neural systems . anns are at the vanguard of computational systems designed to create or mimic intelligent behavior . unlike classical artificial intelligence ( ai ) systems that are aimed at directly emulating rational and logical reasoning , anns are targeted at reproducing the causal processing mechanisms that give rise to intelligence as an emergent property of complex systems . ann systems have been designed for fields such as capacity planning , business intelligence , pattern recognition and robotics . in computer science and engineering , ann techniques have gained a lot of ground and are vastly deployed in areas such as forecasting , data analytics and even data mining . the science of raw data examination with the aim of deriving useful conclusions can simply be defined as data analytics . on the other hand , data mining describes the process of determining new patterns from large data sets , by applying a vast set of approaches from statistics , artificial intelligence , or database management . forecasting is useful in predicting future trends with reliance on past data . the focus of this paper will be on using the ann approach in forecasting energy consumption . practically , ann provides accurate approximation of both linear and non - linear functions . a mathematical abstraction of the internal structure of a neuron is shown in fig . [ fig:1 ] . despite the presence of noise or incomplete information , it is possible for neurons to learn the behaviour or trends and consequently make useful conclusions . usually , a neural network is trained to perform a specific function by adjusting weight values between elements , as seen in fig . [ fig:2](a ) . the neural network function is mainly determined by the connections between the elements . by observing the data , it is possible for the ann to make accurate predictions . unlike other forecasting approaches , the ann technique has the ability to predict future trends with theoretically poor , but rich data sets . in this section , we present the overall modeling process . the implementation of the ann model can be described using the flow chart in fig . [ fig:2](b ) . historical monthly data ( historical energy consumption , mean atmospheric temperature , mean relative humidity , population , gdp per capita ) has been gathered from years 1994 to 2013 from the gaza region , as shown in fig . [ fig:3 ] . based on the gathered data , this study will develop a forecast model that predicts energy consumption for years 2012 and 2013 .
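as a rough illustration of the modeling process described above , the sketch below fits a small feed - forward network to synthetic monthly records with the predictor variables named in the text ; the data , network size , column values and train / test split are placeholders and not the paper 's actual dataset or architecture .

```python
# minimal sketch of a feed-forward ann forecaster (not the paper's exact
# network): predictors are the factors named in the text; the data below are
# synthetic and stand in for the 1994-2013 monthly records.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_months = 20 * 12  # 1994-2013
temperature = rng.normal(22, 5, n_months)
humidity = rng.normal(65, 8, n_months)
population = np.linspace(0.8, 1.8, n_months)       # illustrative trend
gdp_per_capita = np.linspace(1.0, 1.6, n_months)   # arbitrary units
lagged_load = rng.normal(150, 20, n_months)        # previous month's load
X = np.column_stack([temperature, humidity, population,
                     gdp_per_capita, lagged_load])
y = 0.6 * lagged_load + 2.0 * temperature + 30 * population \
    + rng.normal(0, 5, n_months)

# train on 1994-2011, hold out the last two years (2012-2013) as in the text
split = 18 * 12
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X[:split], y[:split])
forecast = model.predict(X[split:])
```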
using the ann technique , training and learning procedures are fundamental in forecasting future events . the training of feed - forward networks is usually carried out in a supervised manner . with a set of data to be trained ( usually extracted from the historical data ) , it is possible to derive an efficient forecast model . the proper selection of inputs for ann training plays a vital role in the success of the training process . on the other hand , the learning process involves providing both input and output data ; the network processes the input and compares the resulting output with the desired result . the system then adjusts the weights , which act as a control for error minimization . in order to minimize error , the process is repeated until a satisfactory criterion for convergence is attained . the knowledge acquired by the ann via the learning process is tested by applying it to a new data set that has not been used before , called the testing set . it should then be possible for the network to make generalizations and provide accurate results for new data . due to insufficient information , some networks do not converge . it is also noteworthy that over - training the ann can seriously deteriorate forecasts . also , if the ann is fed with redundant or inaccurate information , it may destabilize the system . the training and learning process should be thorough in order to achieve good results . to forecast accurately , it is imperative to consider all possible factors that influence electricity energy consumption , which is not feasible in reality . electric energy consumption is influenced by a number of factors , which include : historical energy consumption , mean atmospheric temperature , mean relative humidity , population , gdp per capita , ppp , etc . in this paper , different criteria were used to evaluate the accuracy of the ann approach in forecasting electric energy consumption in gaza . they include : mean squared error ( mse ) , root mean squared error ( rmse ) , mean absolute error ( mae ) and mean absolute percentage error ( mape ) . a popular and important criterion used for performance analysis is the mse . it is used to convey concepts of bias , precision and accuracy in statistical estimation . here , the difference between the estimated and the actual value is used to get the error , and the average of the square of the error gives the mse . the mse criterion is expressed in equation ( [ eqn:1 ] ) , in terms of the actual and the forecasted data . the rmse is a quadratic scoring rule which measures the average magnitude of the error ; it is expressed in equation ( [ eqn:2 ] ) . the rmse usually gives a relatively high weight to large errors , due to the fact that averaging is carried out after the errors are squared . this makes this criterion an important tool when large errors are specifically undesired . the mae measures the average error between forecasts and actual data with polarity eliminated . equation ( [ eqn:3 ] ) gives the expression for the mae criterion used . the mae is a linear score , which implies that all the individual differences are weighted equally in the average . the mape , on the other hand , measures the size of the error in percentage ( % ) terms . it is calculated as the average of the unsigned percentage error . equation ( [ eqn:4 ] ) gives the expression for the mape criterion used . validation techniques are employed to tackle fundamental problems in pattern recognition ( model selection and performance estimation ) .
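the four error criteria referred to in equations ( [ eqn:1 ] ) - ( [ eqn:4 ] ) follow the standard definitions ; a minimal sketch of their computation from the actual and forecasted series is given below .

```python
# the four error criteria used in the text, computed from the actual and
# forecasted series; mape is reported in percent.
import numpy as np

def error_criteria(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    err = actual - forecast
    mse = np.mean(err ** 2)                       # mean squared error
    rmse = np.sqrt(mse)                           # root mean squared error
    mae = np.mean(np.abs(err))                    # mean absolute error
    mape = 100.0 * np.mean(np.abs(err / actual))  # mean absolute percentage error
    return {"mse": mse, "rmse": rmse, "mae": mae, "mape": mape}

# example with dummy values
print(error_criteria([100, 120, 130], [98, 125, 128]))
```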
in this study , 2-fold and k - fold cross validation techniques will be employed , and the validation set will only be used as part of training and not as part of the test set . the test set will be used to evaluate how well the learning algorithm works as a whole . the forecast model was simulated to obtain results for the energy consumed in years 2012 and 2013 in gaza . table 1 compares the actual and the forecasted energy consumption for year 2012 . 2-fold and k - fold cross validation techniques were used , and the performance of the forecast model based on the different error criteria is shown in table 2 . similarly , tables 3 and 4 show the results for year 2013 . the results obtained have good accuracy and show that the proposed ann model can be used to predict future trends of electric energy consumption in gaza . [ table 1 : 2-fold and k - fold cross validation for year 2012 ; the tabulated values are not reproduced in this excerpt . ] in this paper , an ann model to forecast electric energy consumption in the gaza strip was presented . to the best of our knowledge , this is the first of its kind in the existing literature . based on the performance evaluation , the error criteria were within tolerable bounds . empirical results presented in this paper indicate the relevance of the proposed ann approach in forecasting electric energy consumption . future works will consider other forecasting techniques . park d. c. , el - sharkawi m. a. , marks ii r. j. , atlas l. e. and damborg m. j. , ( 1991 ) electric load forecasting using an artificial neural network , _ ieee transactions on power engineering _ , vol . 6 , pp . 442 - 449 . zhou r. , pan y. , huang z. and wang q. , ( 2013 ) building energy use prediction using time series analysis , _ ieee 6th international conference on service - oriented computing and applications _ , pp . 309 - 313 . basu k. , debusschere v. and bacha s. , ( 2012 ) appliance usage prediction using a time series based classification approach , _ ieee 38th annual conference on ieee industrial electronics society _ , pp . 1217 - 1222 . catalo j. p. s. , mariano s. j. p. s. , mendes v. m. f. and ferreira l. a. f. m. , ( 2007 ) an artificial neural network approach for short - term electricity prices forecasting , _ ieee intelligent systems applications to power systems _ , pp . 1 - 7 . pinto t. , sousa t. m. and vale z. , ( 2012 ) dynamic artificial neural network for electricity market prices forecast , _ ieee 16th international conference on intelligent engineering systems _ , pp . 311 - 316 . khamis m. f. i. , baharudin z. , hamid n. h. , abdullah m. f. and solahuddin s. , ( 2011 ) electricity forecasting for small scale power system using artificial neural network , _ ieee 5th international power engineering and optimization conference _ , pp . 54 - 59 . safari m. , dahlan y. n. , razali n. s. and rahman t. k. , ( 2013 ) electricity prices forecasting using ann hybrid with invasive weed optimization ( iwo ) , _ ieee 3rd international conference on system engineering and technology _ , pp . 275 - 280 .
due to imprecision and uncertainties in predicting real world problems , artificial neural network ( ann ) techniques have become increasingly useful for modeling and optimization . this paper presents an artificial neural network approach for forecasting electric energy consumption . for effective planning and operation of power systems , optimal forecasting tools are needed for energy operators to maximize profit and also to provide maximum satisfaction to energy consumers . monthly data for electric energy consumed in the gaza strip was collected from year 1994 to 2013 . data was trained and the proposed model was validated using 2-fold and k - fold cross validation techniques . the model has been tested with actual energy consumption data and yields satisfactory performance .
forecasting of seasonal rainfall , especially the summer monsoon , is important to the indian economy . seasonal forecasts of rainfall are made at the national scale because monsoons are large scale phenomena and there is an association between all - india summer monsoon rainfall and aggregate impacts . however , rainfall is a spatially heterogeneous phenomenon , and the country may be divided into distinct homogeneous rainfall zones based on mean rainfall . there are also many regional differences in inter- and intra - annual variability , rainfall trends and the occurrence of extreme events . apart from the south - west monsoon winds affecting major parts of the country and causing rainfall during the months june - september ( jjas ) , other factors play a role in monsoon rainfall . these include the retreating monsoon rainfall on the eastern coast , particularly during october and november , and the western disturbances affecting north - western parts of the country during summer months . furthermore , orography plays an important role . this paper studies spatial heterogeneity in interannual differences and extremes of rainfall , for both individual grid - points and all - india mean rainfall ( aimr ) , the spatial mean across all grid points . such differences in variability within the aforementioned homogeneous zones have been studied previously . however , the different aspects of temporal changes and variability , when clustered , can not be expected to coincide with the clusters formed on the basis of mean rainfall . regarding prediction of annual rainfall , an important variable is the sign of year - to - year changes in rainfall . while the impacts of rainfall over a season depend on the magnitude and distribution within that season , its change from the previous year is a related variable . forecasting the change in rainfall from the present year to the next is equivalent to forecasting next year 's rainfall , once the present year 's rainfall is known . the sign of this change is a binary variable , and therefore can be expected to exhibit larger spatial coherence than its magnitude . while this sign alone does not describe the full impacts of rainfall , it represents a compromise between impacts and the ability to make forecasts at sub - national scales . furthermore , the interannual change in aimr exhibits large mean reversion , and therefore the sign of this change can be predicted with reasonably high confidence . together , this property of the sign of rainfall change at different spatial scales and their spatial coherence are worth examining . to the best of our knowledge , these properties have not been studied previously . here we find that the sign of year - to - year changes is spatially coherent , but this has different effects from the mean rainfall field . specifically , clusters describing frequent coincidence of the sign of year - to - year changes differ from the aforementioned clusters defining relatively homogeneous mean rainfall . therefore they must be examined directly . similarly , it is also important to be able to make forecasts of annual extreme events at local or sub - national scales , i.e. the occurrence of years with excess and deficient rainfall .
such years are often associated with floods and droughts respectively , which have very widespread impacts on people 's lives and the economy in india . we find that there is spatial coherence in the occurrence of local extremes , and clusters can be identified based on such co - occurrence . the corresponding clusters tend to differ from the aforementioned clusters formed on the basis of mean rainfall . identifying grid - level extremes and locations where these coincide with each other is a fundamentally different problem than characterizing the variability of large scale patterns using , for example , empirical orthogonal functions . furthermore , the former problem is not subsumed within that of characterizing spatial patterns of temporal variability , because grid - level extremes need not be correlated with a few large scale spatial patterns of rainfall . therefore the properties of grid - level extremes and associated clusters must be examined directly . this paper introduces a systematic approach for identifying homogeneities as well as heterogeneities in year - to - year changes in rainfall as well as annual local extremes . homogeneities are manifested in spatial coherence , which is an important property of spatiotemporal fields generated by physical processes , and makes possible the identification of relatively homogeneous clusters . recently , there has been substantial progress in data science and data mining , allowing for comprehensive analysis of spatiotemporal datasets and extraction of prominent patterns with respect to these homogeneities . we objectively quantify spatial coherence , and use the results to study a number of properties of year - to - year change and annual extremes . the results are applied to identify cases where coherence can be exploited to form significant regionalizations . we analyze 110 years of gridded rain gauge data across india , based on concepts of spatiotemporal data mining . heterogeneities are manifested in the property that on larger scales there are substantial differences in statistics that also lead to differences from aimr . the overall message is threefold . first , spatial heterogeneities are substantial , involving both inter - region differences and differences from the all - india spatial mean . these heterogeneities must be taken into account when considering both year - to - year rainfall changes and extreme rainfall . second , both these features involve significant spatial contiguities , and hence for both features it is possible to find homogeneous spatial clusters . third , the sign of the inter - annual difference is reasonably predictable , and predictability at grid - level improves when combined with national - level prediction . in this work we analyze observations from rain gauges maintained by the indian meteorological department ( imd ) , processed to a grid comprising 357 spatial indices , for the period 1901 - 2011 . monthly and annual means are considered . in the case of month - wise analysis , each data - point is indexed by its spatial location , month and year ; with annual means , each data - point is indexed by spatial location and year . by averaging across locations ( the spatial mean ) , we get all - india mean rainfall ( aimr ) . annual rainfall time - series at individual grid - locations , as well as aimr , exhibit considerable variability . one property of this variability is mean reversion .
if a given year experiences more rainfall than the previous year , then the following year is likely to experience less rain . this is partly related to the tropical quasi - biennial oscillation . the change in rainfall from one year to the next is an important variable , and is directly related to forecasting the next year 's rainfall once the present year 's value is known . furthermore , since the changes occur heterogeneously and do not take the same sign uniformly over india , we identify clusters where these changes frequently coincide . for this analysis , we define a variable called the location - wise phase , namely the sign of the change in annual rainfall at that location from the previous year ; the corresponding phase of aimr is defined analogously . for individual locations and for aimr , the time - series of phase mostly alternates in sign , indicating mean reversion . the limitation of phase as defined here is that it does not measure the magnitude of rainfall changes . however , its usefulness lies in its higher spatial coherence than changes in rainfall magnitude , as it is a binary variable , and hence its amenability to forecasting . despite the large scale of monsoon systems , there is coherence in phase over the indian landmass . this is partly a consequence of the discretization involved in defining phase , where only the sign of rainfall change from year to year is considered . here we identify locations with a high probability of having either the same or the opposite phase as that of aimr , i.e. the national phase . for each location , we compute the set of years where the local phase agrees with the national phase , and denote by its cardinality the number of years when the grid - location agrees in phase with the national phase . the mean of this count across grid - locations is 70 , i.e. locations on average agree in 70 years ( out of 110 years of phase data ) with the national phase . this is just one effect of the spatial coherence of phase , which therefore becomes easier to predict than the spatial pattern of rainfall . the histogram of the relative frequency of agreement ( out of 110 years ) is shown in figure 1 . the figure also identifies locations where this count is unusually high , corresponding to frequent agreement with the national phase , and unusually low , corresponding to frequent disagreement . central and western india agree with the national phase with high frequency , whereas locations on the south - eastern coast frequently disagree . locations with frequent agreement or disagreement ( with the national phase ) are where the direction of local change can be predicted with high probability based on the spatial - mean forecasts alone . to make the analysis more robust to fluctuations at the grid scale , we estimate the mean of this count across the 9 grid locations centered at each location , ignoring locations outside the indian landmass . the previous analysis is repeated for these so - called `` 1-hop neighborhoods '' . the results are shown in figure 1 . it shows similar results as the previous analysis ; however , phase in this case is more spatially coherent ( figure 1 ) , with larger contiguous regions frequently having the same phase . [ figure 1 . a : locations coloured by their relative frequency of conforming with the all - india mean phase across the 110 years ( red : over 70% , green : 60 - 70% , blue : 50 - 60% , yellow : under 50% ) . b : the histogram of these relative frequencies , in percentages . c : same as a , but using the mean over 1-hop neighbourhoods ( defined in section 3.3 ) at each location . d : histograms corresponding to c . ]
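a minimal sketch of the phase computation and of the agreement counts described above is given below , assuming the gridded data is held as a ( years x locations ) array ; the array here is synthetic and only stands in for the imd data .

```python
# phase = sign of the year-to-year change in annual rainfall; agreement of
# each grid location with the all-india (spatial mean) phase is counted.
# `rain` is a synthetic stand-in for the (years x locations) gridded data.
import numpy as np

rng = np.random.default_rng(1)
rain = rng.gamma(shape=4.0, scale=300.0, size=(111, 357))  # 111 years -> 110 changes

local_phase = np.sign(np.diff(rain, axis=0))     # shape (110, 357), +1 / -1
aimr = rain.mean(axis=1)                         # all-india mean rainfall
national_phase = np.sign(np.diff(aimr))          # shape (110,)

# number of years each location agrees with the national phase
agreement = (local_phase == national_phase[:, None]).sum(axis=0)
print(agreement.mean())  # the text reports ~70 out of 110 for the real data
```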
the indian meteorological department ( imd ) attempts to predict indian summer monsoon rainfall ( ismr ) each year , based on various meteorological variables . the magnitude of aimr is difficult to predict , but its phase may be easier to predict , since phase is only a binary quantity and also shows strong mean - reverting behavior due to the quasi - biennial oscillation ( qbo ) . in this work , we study its predictability by exploiting only this property , without considering any remote teleconnection effects ( such as sea surface temperatures over the pacific and indian oceans ) . including these will undoubtedly improve the predictability of phase , but that is beyond the scope of this paper . based on the dataset , we estimate the conditional distribution of the current aimr phase given the previous year 's phase , and find that the phase of aimr in any year can be predicted with reasonable confidence by simply considering the phase of aimr in the previous year . next , we look at the predictability of phase at grid - level . we have already studied the corresponding probability of agreement with the national phase , which is around 0.62 . we now evaluate the analogous conditional probabilities for each location , and find that , on average across locations , the mean - reverting property of phase is present at grid - level too , with the same strength as in the case of aimr . however , the forecast of aimr can be improved based on additional variables and remote teleconnection effects , which may not be possible at the grid - level . therefore , we study how the predictability of grid - level phase can be improved by conditioning on the aimr phase . we find that in about 300 ( out of 357 ) locations , incorporation of the national phase of the current year increases this probability , and in 209 locations , incorporation of the national phase of the previous year increases this probability . thus we find that , due to the mean - reverting property , both the aimr phase and the grid - level phase in any year can be predicted from the previous year 's value with reasonable confidence , and the predictability at grid - level increases when combined with national - level information about phase . extreme rainfall can cause floods and droughts , with significant impacts . imd predicts spatial mean rainfall ( aimr ) , and included in this methodology is the forecasting of extreme years with respect to seasonal mean rainfall at the national scale . however , local extremes can be even more consequential . we consider the spatial association between local extremes , in addition to their association with aimr , because we would like to explore the extent to which the incidence of local extremes can be inferred ( and hence probabilistically forecast ) from extremes of aimr . the long - term mean and standard deviation of aimr across years are computed , and similarly the long - term mean and standard deviation of rainfall are computed at each location . we examine positive and negative extremities ( pex , nex ) at different spatial scales . at the national scale , years of spatial positive extremity are defined as years in which aimr exceeds its long - term mean by a threshold based on its standard deviation , and years of spatial negative extremity are defined analogously for deficits . for each location , local positive and negative extremity years are defined in the same way in relation to the location - specific mean and standard deviation .
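a minimal sketch of these extremity definitions is given below ; the one - standard - deviation cutoff is an assumption , since the exact threshold is not reproduced in this excerpt , and the rainfall array is synthetic .

```python
# identification of spatial (aimr-based) and local extremity years, following
# the verbal definitions above; the one-standard-deviation threshold k is an
# assumption. `rain` is a synthetic (years x locations) stand-in for the data.
import numpy as np

rng = np.random.default_rng(1)
rain = rng.gamma(shape=4.0, scale=300.0, size=(110, 357))
k = 1.0  # assumed cutoff in units of standard deviation

aimr = rain.mean(axis=1)                            # all-india mean per year
spatial_pex = aimr > aimr.mean() + k * aimr.std()   # boolean over years
spatial_nex = aimr < aimr.mean() - k * aimr.std()

mu, sd = rain.mean(axis=0), rain.std(axis=0)        # location-specific stats
local_pex = rain > mu + k * sd                      # boolean (years x locations)
local_nex = rain < mu - k * sd
```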
next , we define two features called locational positive and negative extremities . each year , some locations have local pexs and others local nexs . in some years , an unusually high number of locations simultaneously have local pexs or nexs . these years are defined as locational positive extremity years or locational negative extremity years respectively . these need not coincide with spatial extremity years , as they are defined differently and involve widespread occurrence of local extremes , in contrast with large deviations in the spatial mean . we count the number of locations with local pexs and with local nexs in each year , and compute the mean and standard deviation of these counts across years ; locational pex years are then those years in which the count of local pexs is unusually high relative to its mean and standard deviation , and locational nex years are defined analogously . as can be expected , locational pex years turn out to have simultaneously many local pexs , because these years are defined as such , i.e. involving an unusually large number of local pexs . but many locations also experience normal local rain , or even local nexs , during these years . analogous behavior is seen during locational nex years . this is a manifestation of the heterogeneity of rainfall . the average number of locations having local pexs is : 113 during locational pex years , 47 during normal years , and 27 during locational nex years . similar statistics are found for the mean numbers of local nexs . figure 2 illustrates the situation in 4 representative years . locations depart in their extreme behavior from the national scale . some locations have local nexs during several locational pex years , or local pexs during several locational nex years . as stated earlier , this is a manifestation of spatial heterogeneity . for this analysis , we estimate the probability of a local pex / nex event at each location , conditioned on pex / nex events at the national scale . we define a random variable , the `` year type '' at each location , which can take values 1 ( normal year ) , 2 ( local pex ) , 3 ( local nex ) . similarly , we define the `` year type '' for the national scale , which is either 1 ( normal year ) , 2 ( locational pex ) , or 3 ( locational nex ) . we estimate the conditional distributions of the local year type given the national year type for each location . these describe the conditional probability of the local year type , given the year type at the all - india level . the results are illustrated in figure 3 . only about 60 of the 357 locations have the corresponding conditional probabilities above 0.4 . we also observe that there are some locations where these probabilities are significant albeit small . these results suggest that there are only few locations at the grid - scale having a substantial probability of conforming in any given year to such national scale extremes , and even there the probabilities are not high . the locations with reasonable correspondence are found to be concentrated along the western , central and south - western parts of india , while those with low or negative correspondence are mostly on the eastern side . this indicates that making consistent predictions of grid - scale extremities based on national - scale forecasts alone is not possible , because the national scale extremes do not correspond to frequent repeated incidence of grid - level extremes . what national scale extremes entail at the local scale ( i.e. grid - level ) , if not extremes , is considered next .
we now analyze the association between the concepts defined above , namely phase and extremity . we do this because , as discussed previously , the phase is an important variable that exhibits coherence and is therefore amenable to prediction at the regional scale . furthermore , we have already seen through conditional distributions , illustrated in figure 3 , that local extremities and all - india extremities are not highly correlated . the correlation between local ( i.e. grid - level ) phase and all - india phase is somewhat higher ( figure 1 ) . we therefore would like to understand precisely what , if anything , national - scale extremities entail at local scales . we consider correlations between local phase and all - india extremities , by estimating conditional distributions . figure 4 identifies locations where the corresponding conditional probabilities exceed 0.7 , i.e. where rainfall is likely to increase ( compared to the previous year ) in years of all - india pex , and where it is likely to decrease in years of all - india nex . these locations are significant in number ( 137 and 84 respectively ) , and distributed all over the main landmass . thus , there is a significant correlation between all - india extremities and local phase . this property implies that forecasts of strong positive or negative extremities , at the national scale , might be utilized for high probability forecasts of phase in many regions . hence , although local forecasts of extremities are difficult , the corresponding forecast of phase is more feasible ; and furthermore , as described below , the probability of correct forecasts increases during years having spatial - mean extremes , when the conditional probabilities of the corresponding phase at the grid - level are higher . we have already explored the relation between grid - level phase and spatial - mean phase through conditional probabilities . we now examine whether the relation between national and local phase is stronger during years with extremes in the spatial - mean rainfall . it turns out that this holds for all 357 locations : in spatial - mean extreme years , all locations are more likely to follow the spatial - mean phase than in normal years . averaged across all locations , the probability of conforming to the spatial - mean phase is about 0.66 in both spatial - mean pex and nex years , and about 0.62 otherwise . geophysical phenomena are spatially coherent , with generally higher correlations at smaller distances . spatial coherence is not merely another property of the dataset but also helps point the way towards the possibility of forecasting , as well as methods for doing so . specifically , where a variable is more coherent , it indicates the possibility of forecasting using less information , and forecasts can be made for several locations simultaneously . therefore we consider spatial coherence of local extremities and local phase by answering the following questions : 1 . are adjacent locations likely to be in the same phase every year ? 2 . is the agreement or disagreement of the phase at grid - level with the spatial - mean phase spatially coherent ?
if a location has a local extremity (pex/nex) in any year, are its neighbors more likely to have the same extremity in that year? does coherence of local extremities increase during national-level extremities? understanding such properties helps us develop a conceptual basis for the clustering of phase and extremes described in the following section. to answer these questions quantitatively, we define two measures of spatial coherence with respect to any property: 1. for locations having the property, the mean number of 1-hop neighbors (mnn), with the 1-hop neighborhood defined in section 3.3, that also have the property; 2. the mean connected component size (mccs) of a graph in which each vertex represents a grid location and two vertices are joined by an edge if and only if they are 1-hop neighbors on the grid and both have the property. these two measures elicit different but complementary aspects of coherence. the mnn describes only the mean properties of the isotropic 1-hop neighborhoods, whereas the mccs considers coherence over more general anisotropic domains. the conclusions from applying the two measures could differ, but we use them in conjunction to evaluate the following properties: being in positive phase in a year (pp), being in negative phase in a year (np), agreement with the all-india phase in a year (ap), disagreement with the all-india phase in a year (dp), having a local nex in a year (ln), having a local pex in a year (lp), having a local nex in a year of spatial/locational pex/nex (ln-sp, ln-sn, ln-ln, ln-lp), and having a local pex in a year of spatial/locational pex/nex (lp-sp, lp-sn, lp-ln, lp-lp). these properties are evaluated for each location, and both measures of spatial coherence (mnn and mccs) are calculated for each property. results are shown in table 1 ([tab:tab1]: spatial coherence for the different properties introduced in section 5). the main messages emerging from the results in table 1 are: 1. local phase is spatially coherent, because the mean number of neighbors for properties pp and np (being in positive and negative phase, respectively) is about 5, which is about 60% of the neighborhood size of 8. therefore, if a location is in positive or negative phase, about 60% of its neighbors have the same phase, whereas averaging across grid locations, the mean probabilities of being in positive or negative phase are each close to 50%. these results indicate that being in a certain phase is more likely when a location's neighbors are in the same phase. 2. agreement with the all-india phase is more spatially coherent than disagreement, because both mnn and mccs are larger for property ap (agreement with the national phase) than for dp (disagreement with the national phase). therefore, if a location agrees with the national phase, the probability that its neighbors also agree is larger than otherwise. 3. local extremities are spatially coherent, because the mnn for lp (local pex in a year) and ln (local nex in a year) correspond to conditional probabilities of pex and nex that are significantly larger than the unconditional probabilities of locations having pex and nex, respectively. in an average year, only about 14% of locations in india have a local pex (or local nex), but 3.9 out of 8 neighbors (nearly 50%) of a location with a local pex (or nex) also have a local pex (or nex).
4. local extremities of any sign are more spatially coherent in years of all-india extremities of the same sign, and less spatially coherent in years of all-india extremities of the opposite sign. this is inferred from mnn and mccs being smaller for property ln-sp (local nex during spatial pex) than for ln-sn (local nex during spatial nex), and likewise for property lp-sn (local pex during spatial nex) than for lp-sp (local pex during spatial pex). similar relations hold in the case of locational extremities. for example, in a spatial-mean nex year there is larger coherence around locations with local nexs, as manifested by larger mnn and mccs. in summary, for phase, the local phase is coherent and agreement with the aimr phase is more coherent still. this raises the possibility of forecasting phase through clustering, despite the substantial heterogeneities in rainfall. furthermore, despite the generally weaker association between local and national extremes discussed previously, local extremes do exhibit coherence, which raises the possibility of using additional information to forecast extremes over coherent clusters. because this coherence is higher during spatial-mean extremes of the same sign, such cluster-based forecasting would be more plausible for sub-national scale extremes having the same sign as the national-level extreme. motivated by this discovery of spatial coherence, we now try to identify small but spatially contiguous sets of grid locations with homogeneous behaviour with respect to phase and local extremes. earlier, different criteria such as the mean and variability of rainfall have been used to identify such clusters ( ). here we extend such analyses to include phase and extremities as defined above. clustering techniques, well known in data mining, seek to partition data according to predefined measures of similarity. each partition is called a _cluster_. we use spectral clustering ( ) to partition the grid into relatively homogeneous clusters. this method takes as input a square similarity matrix over the data points, each entry of which encodes a measure of similarity between a pair of data points; the similarity measure is application-specific. we are also required to specify the number of clusters. clustering algorithms assign to each data point a cluster index, with the points having the same index comprising a cluster. generally, points assigned to the same cluster are expected to be more "similar" to each other than to other points, with respect to the chosen measure of similarity. the goal of clustering with respect to local phase and extremes is to identify regions at the sub-national scale where these properties are similar. hence we smooth the data by estimating means over 1-hop neighborhoods at each grid location before applying the clustering algorithms below. to generate a clustering according to phase, for every pair of locations we compute the number of years in which the two locations have the same phase. this measure of similarity reflects the tendency of the locations to be in the same phase. by specifying a small number of clusters, we identify coherent regions that coincide in phase much of the time. although the spectral clustering algorithm is not explicitly biased toward spatially contiguous clusters, spatial neighbors often appear in the same cluster. not all clusters formed by the algorithm have high intra-cluster similarity with respect to the measure defined above.
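a minimal sketch of this clustering step is given below. it assumes the grid-level phase history is available as a locations-by-years array, and it uses scikit-learn's spectral clustering with a precomputed affinity matrix; the library choice, the function names, and the toy shapes are illustrative assumptions, and the 1-hop smoothing applied in the paper is omitted for brevity.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def phase_similarity(phase):
    """phase: (n_locations, n_years) array of +1/-1 grid-level phases.
    entry (i, j) counts the years in which locations i and j are in the
    same phase, i.e. the similarity used for the phase clustering."""
    same = phase[:, None, :] == phase[None, :, :]
    return same.sum(axis=2).astype(float)

def co_occurrence_similarity(indicator):
    """indicator: (n_locations, n_years) boolean array, true when a location
    has the property (e.g. a local pex) in that year. entry (i, j) counts
    the years in which both locations have the property."""
    x = indicator.astype(float)
    return x @ x.T

def cluster_locations(similarity, n_clusters=10, seed=0):
    """spectral clustering on a precomputed similarity (affinity) matrix."""
    model = SpectralClustering(n_clusters=n_clusters,
                               affinity="precomputed",
                               random_state=seed)
    return model.fit_predict(similarity)

# toy usage with hypothetical shapes: 357 locations, 111 years (1901-2011)
rng = np.random.default_rng(0)
phase = rng.choice([-1, 1], size=(357, 111))
labels = cluster_locations(phase_similarity(phase), n_clusters=10)
```

the same routine, fed with co_occurrence_similarity of a local pex (or nex) indicator instead of the phase similarity, yields the extreme-based clusterings discussed next.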
in figure 5, we show only the clusters having high intra-cluster similarity, such that locations within any one of the selected clusters coincide in phase in a large number of years. these are the clusters for which comparatively higher-probability forecasts of phase might be possible. to identify locations simultaneously experiencing either local positive or negative extremes, we define the similarity measure for each pair of locations as the number of years in which both have a local pex (for the pex clustering), or the number of years in which both have a local nex (for the nex clustering). using these as the similarity matrices for spectral clustering with 10 clusters, we find, in the case of strong intra-cluster similarity, clusters of locations frequently experiencing local pexs (or nexs) in the same years. locations in any one of the clusters in fig. 6a have a simultaneous local pex in at least 8 of their local pex years. in fig. 6b we show clusters of locations which have a simultaneous local pex in at least 3 (about 20%) of the all-india pex years; for this analysis, the corresponding similarity matrix is used. locations in any of the clusters in fig. 6c have a simultaneous local nex in at least 6 of their local nex years, and locations in the clusters of fig. 6d have a simultaneous local nex in at least 3 (about 20%) of the all-india nex years; for fig. 6d, the corresponding similarity matrix is used. such large and spatially contiguous clusters could not have arisen if the occurrences of local nexs or pexs were independent across the locations within the clusters, and they are one manifestation of the spatial coherence described previously. although we smooth the results using means over 1-hop neighborhoods before applying the spectral clustering algorithm, similar (but noisier) clusters are obtained if the algorithm is applied to the unsmoothed data. however, the threshold probability for including these clusters in the figure is much lower than that used for phase.
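the spatial coherence invoked here is the same property quantified by the mnn and mccs measures introduced earlier; a minimal sketch of the two measures is given below, assuming the property of interest (e.g. having a local pex in a given year) is supplied as a 2-d boolean mask over the rainfall grid and taking the 1-hop neighborhood to be the 8 surrounding cells, consistent with the neighborhood size of 8 quoted above. the function names and the use of scipy are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

EIGHT = np.ones((3, 3))  # 1-hop neighborhood: the 8 surrounding grid cells

def mnn(mask):
    """mean number of 1-hop neighbors sharing the property, averaged over
    the grid cells that have the property. mask is a 2-d boolean array."""
    mask = mask.astype(bool)
    if not mask.any():
        return 0.0
    kernel = EIGHT.copy()
    kernel[1, 1] = 0  # do not count the cell itself
    neighbor_counts = ndimage.convolve(mask.astype(float), kernel,
                                       mode="constant", cval=0.0)
    return float(neighbor_counts[mask].mean())

def mccs(mask):
    """mean connected-component size of the graph whose vertices are cells
    with the property, joined when they are 1-hop (8-connected) neighbors."""
    mask = mask.astype(bool)
    labels, n_components = ndimage.label(mask, structure=EIGHT)
    if n_components == 0:
        return 0.0
    sizes = ndimage.sum(mask, labels, index=range(1, n_components + 1))
    return float(np.mean(sizes))
```

averaging these per-year values over all years, or over the relevant subset of years for conditional properties such as lp-sp, gives summaries of the kind reported in table 1.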
with a higher threshold on the probability of cluster inclusion, very few clusters survive. therefore, while there is coherent behavior in extremes across many locations, the conditional probability of nexs across these clusters given a spatial-mean extreme is small. high-probability forecasts of local extremes, or of extremes across individual clusters, therefore cannot rely on the spatial-mean forecast alone. even the clusters that are not conditioned on spatial-mean extremes (in figs. 6a and 6c) do not have simultaneous local extremes with as high a probability as the clusters involving phase in fig. 5. this indicates that forecasting extremes at the sub-national scale is fundamentally more difficult than forecasting phase. forecasting the rainfall received in any year is important for india's economy, especially for the millions of workers directly dependent on rainfed agriculture for their livelihoods. currently, imd makes forecasts of seasonal rainfall at the national scale, and these seasonal-mean forecasts do not carry direct implications for individual regions. however, impacts are mainly felt through events occurring at local and sub-national scales. making regional or grid-level forecasts is far more difficult due to the heterogeneity of rainfall, but our findings show that some weaker forecasts (related to phase) might be possible at smaller scales, and that these could furthermore be largely based on aimr forecasts. this is mainly because the phase, being a binary quantity, exhibits larger spatial coherence and a stronger association with the all-india phase, and is therefore more amenable to prediction based on the aimr change. we showed here that, despite the heterogeneity, the local phase has a high probability of following the national phase, and this probability is higher during extreme years. furthermore, the identification of coherent clusters in which the phase coincides with high frequency raises the possibility of forecasting phase within these clusters. the treatment of phase has more general consequences for understanding variability; for example, one could think of clusters with mostly the same phase as being those that vary together (after neglecting the magnitude of changes). understanding the phase and its behaviour is important because knowledge of the phase indicates whether rainfall in the following year will be more or less than in the present year, and this type of comparison might be important for adaptation to rainfall variability. the results presented here also showed that, corresponding to the mean-reversion of aimr, its phase is reasonably predictable: years with negative phase have a high probability of being followed by positive-phase years, and vice versa. this also carries over to the grid level, despite the spatial heterogeneity in phase. together, the mean-reverting character of large-scale rainfall and the discretization involved in defining the phase make this variable more predictable than many of its alternatives. another aspect of the paper is in understanding the properties of years experiencing extreme rainfall. extreme rainfall is important for impacts, and local and sub-national extremes play the largest roles. statistical forecasting of the monsoon does not extend to seasonal forecasts at the grid level, but it is important to understand the relation between local and national extremes, and whether local extremes can be forecast.
here we consider grid-level extremes and their relation to spatial-mean extremes. the results show that grid-level as well as regional extremes do not follow the national extremes with high probability. this implies that national-level forecasts alone cannot be deployed to infer or predict grid-level extremes with high confidence. however, it was objectively shown that grid-level extremes exhibit spatial coherence. moreover, the coherence increased in the presence of a national-scale extreme of the same sign. this, together with the demonstration that extremes tend to occur in spatially contiguous clusters, raises the possibility of forecasting extremes at the level of individual clusters where extremes tend to coincide. such forecasts would only be probabilistic, and making them would require exploiting information beyond the mere occurrence of national-level extremes, such as sea surface temperature patterns. beyond such speculation based on the results presented here, the exploration and elaboration of such a methodology is outside the scope of this paper. however, it was also shown that such efforts are likely to face intrinsic difficulties, because grid-level extremes within relatively homogeneous clusters do not coincide with high probability, as shown in the previous section. a number of possible regionalizations can be made, some more useful than others. however, a necessary condition for the existence of contiguous clusters based on some variable is that the variable should exhibit spatial coherence. by introducing objective measures of coherence that cover both isotropic and anisotropic cases, we were able to identify some variables for which significant regionalizations are possible. long-term simulations of indian rainfall are needed to formulate socio-economic and developmental policies, especially in the presence of climate change, and such simulations should be able to capture regional variations accurately. it has been noted that under the influence of climate change, such spatial differences are actually increasing ( ). we have identified additional salient characteristics of spatiotemporal heterogeneities relevant to evaluating simulations by climate models ( ). these findings by previous authors on the non-stationarity of extreme rainfall statistics suggest that our present analysis, which assumes a stationary climate, must be extended to consider how the clusters and associated relationships examined here are evolving in time due to climate change. this research was supported by the divecha centre for climate change, indian institute of science. we are thankful to dr. j. srinivasan, dr. v. venugopal and dr. k. rajendran for their valuable inputs. shashi shekhar, zhe jiang, reem y. ali, emre eftelioglu, xun tang, venkata m. v. gunturi and xun zhou (2015), spatiotemporal data mining: a computational perspective, _international journal of geo-information_, _4_, 2306-2338. b. jayasankar, sajani surendran, and k. rajendran (2015), robust signals of future projections of indian summer monsoon rainfall by ipcc ar5 climate models: role of seasonal cycle and interannual variability, _geophysical research letters_, _42(9)_, 3513-3520. m. rajeevan, jyoti bhate, and a. k. jaswal (2008), analysis of variability and trends of extreme rainfall events over india using 104 years of gridded daily rainfall data, _geophysical research letters_, _
forecasts of monsoon rainfall for india are made at the national scale, but rainfall exhibits spatial coherence and heterogeneity that are relevant to forecasting. this paper considers year-to-year rainfall change and annual extremes at sub-national scales. we apply data mining techniques to gridded rain-gauge data for 1901-2011 to characterize coherence and heterogeneity and to identify spatially homogeneous clusters. we study the direction of change in rainfall between years (phase), and extreme annual rainfall, at both the grid level and the national level. grid-level phase is found to be spatially coherent and significantly correlated with the all-india mean rainfall (aimr) phase. grid-level extreme-rainfall years are not strongly associated with corresponding extremes in aimr, although in extreme aimr years local extremes of the same type occur with higher spatial coherence. years of extremes in aimr entail a widespread phase of the corresponding sign. furthermore, local extremes and phase are found to frequently co-occur in spatially contiguous clusters.
please take a look at the images in the top row of fig. [fig:fig1]. which object stands out the most (i.e., is the most salient one) in each of these scenes? the answer is trivial: there is only one object, thus it is the most salient one. now, look at the images in the third row. these scenes are much more complex and contain several objects, so it is more challenging for a vision system to select the most salient object. this problem, known as _salient object detection (and segmentation)_, has recently attracted a great deal of interest in the computer vision community. the goal is to simulate the astonishing capability of human attention in prioritizing objects for high-level processing. such a capability has several applications in recognition (e.g., ), image and video compression (e.g., ), video summarization (e.g., ), media re-targeting and photo collage (e.g., ), image quality assessment (e.g., ), image segmentation (e.g., ), content-based image retrieval and image collection browsing (e.g., ), image editing and manipulation (e.g., ), visual tracking (e.g., ), object discovery (e.g., ), and human-robot interaction (e.g., ). a large number of saliency detection methods have been proposed in the past 7 years (since ). in general, a salient object detection model involves two steps: 1) _selecting objects to process_ (i.e., determining the saliency order of objects), and 2) _segmenting the object area_ (i.e., isolating the object and its boundary). so far, models have bypassed the first challenge by focusing on scenes with single objects (see fig. [fig:fig1]). they do a decent job on the second step, as witnessed by very high performances on existing biased datasets (e.g., on the asd dataset), which contain low-clutter images with often a single object at the center. however, it is unclear how current models perform on complex cluttered scenes with several objects. despite the volume of past research, this direction has not yet been fully pursued, mainly due to the lack of two ingredients: 1) suitable benchmark datasets for scaling up models and model development, and 2) a widely agreed objective definition of the most salient object. in this paper, we strive to provide solutions for these problems. further, we aim to discover which component might be the weakest link in the possible failure of models when migrating to complex scenes. some topics related, closely or remotely, to visual saliency modeling and salient object detection include: object importance, object proposal generation, memorability, scene clutter, image interestingness, video interestingness, surprise, image quality assessment, scene typicality, aesthetics, and attributes. one of the earliest models, which generated the _first wave_ of interest in image saliency in the computer vision and neuroscience communities, was proposed by itti _et al._. this model was an implementation of earlier general computational frameworks and psychological theories of bottom-up attention based on center-surround mechanisms. in , itti _et al._ showed examples where their model was able to detect spatial discontinuities in scenes. subsequent behavioral (e.g., ) and computational studies (e.g., ) started to predict fixations with saliency models in order to verify models and to understand human visual attention. a _second wave_ of interest appeared with the works of liu _et al._ and achanta _et al.
_ who treated saliency detection as a binary segmentation problem with 1 for a foreground pixel and 0 for a pixel of the background region .since then it has been less clear where this new definition stands as it shares many concepts with other well - established computer vision areas such as general segmentation algorithms ( e.g. , ) , category independent object proposals ( e.g. , ) , fixation prediction saliency models ( e.g. ) , and general object detection methods .this is partly because current datasets have shaped a definition for this problem , which might not totally reflect full potential of models to _ select and segment salient objects in an image with an arbitrary level of complexity_. reviewing all saliency detection models goes beyond the scope of this paper ( see ) .some breakthrough efforts are as follows .et al . _ introduced a conditional random field ( crf ) framework to combine multi - scale contrast and local contrast based on surrounding , context , and color spatial distributions for binary saliency estimation .et al . _ proposed subtracting the average color from the low - pass filtered input for saliency detection .et al . _ used a patch - based approach to incorporate global context , aiming to detect image regions that represent the scene .et al . _ proposed a region contrast - based method to measure global contrast in the lab color space . in ,wang _ et al . _ estimated local saliency , leveraging a dictionary learned from other images , and global saliency using a dictionary learned from other patches of the same image .et al . _ observed that decomposing an image into perceptually uniform regions , which abstracts away unnecessary details , is important for high quality saliency detection . in ,et al . _ utilized the difference between the color histogram of a region and its immediately neighboring regions for measuring saliency .et al . _ defined a measure of saliency as the cost of composing an image window using the remaining parts of the image , and tested it on pascal voc dataset .this method , in its essence , follows the same goal as in .et al . _ proposed a graphical model for fusing generic objectness and visual saliency for salient object detection .shen and wu modeled an image as a low - rank matrix ( background ) plus sparse noises ( salient regions ) in a feature space .more recently , margolin _ et al . _ integrated pattern and color distinctnesses with high - level cues to measure saliency of an image patch .some studies have considered the relationship between fixations and salincy judgments similar to .for example , xu et al . investigated the role of high - level semantic knowledge ( e.g. , object operability , watchability , gaze direction ) and object information ( e.g. , object center - bias ) for fixation prediction in free viewing of natural scenes .they constructed a large dataset called `` object and semantic images and eye - tracking ( osie ) '' .indeed they found an added value for this information for fixation prediction and proposed a regression model ( to find combination weights for different cues ) that improves fixation prediction performance .koehler et al . 
collected a dataset known as the ucsb dataset .this dataset contains 800 images .one hundred observers performed an explicit saliency judgment task , 22 observers performed a free viewing task , 20 observers performed a saliency search task , and 38 observers performed a cued object search task .observers completing the free viewing task were instructed to freely view the images . in the explicit saliency judgment task , observers were instructed to view a picture on a computer monitor and click on the object or area in the image that was most salient to them . _ salient _ was explained to observers as something that stood out or caught their eye ( similar to ) .observers in the saliency search task were instructed to determine whether or not the most salient object or location in an image was on the left or right half of the scene .finally , observers who performed the cued object search task were asked to determine whether or not a target object was present in the image .then , they conducted a benchmark and introduced models that perform the best on each of these tasks .a similar line of work to ours in this paper has been proposed by mishra _et al . _ where they combined monocular cues ( color , intensity , and texture ) with stereo and motion features to segment a region given an initial user - specified seed point , practically ignoring the first stage in saliency detection ( which we address here by automatically generating a seed point ) .ultimately , our attempt in this work is to bridge the interactive segmentation algorithms ( e.g. , ) and saliency detection models and help transcend their applicability .perhaps the most similar work to ours has been published by li __ . in their work , they offer two contributions . _ first _ , they collect an eye movement dataset using annotated images from the pascal dataset and call their dataset pascal - s ._ second _ , they propose a model that outperforms other state - of - the - art salient object detection models on this dataset ( as well as four other benchmark datasets ) .their model decouples the salient object detection problem into two processes : 1 ) _ a segment generation process _ , followed by 2 ) _ a saliency scoring mechanism _ using fixation prediction . here ,similar to li _ et al ._ , we also take advantage of eye movements to measure object saliency but instead of first fully segmenting the scene , we perform a shallow segmentation using superpixels .we then only focus on segmenting the object that is most likely to attract attention .in other words , the two steps are similar to li _ et al . _ but are performed in the reverse order .this can potentially lead to better efficiency as the first expensive segmentation part is now only an approximation .we also offer another dataset which is complimentary to li _ et al ._ s dataset and together both datasets ( and models ) could hopefully lead to a paradigm shift in the salient object detection field to avoid using simple biased datasets .further , we situate this field among other similar fields such as general object detection and segmentation , objectness proposal generation models , and saliency models for fixation prediction .several salient object detection datasets have been created as more models have been introduced in the literature to extend capabilities of models to more complex scenes .table [ tab : db ] lists properties of 19 popular salient object detection datasets . although these datasets suffer from some biases ( e.g. 
, low scene clutter , center - bias , uniform backgrounds , and non - ambiguous objects ) , they have been very influential for the past progress .unfortunately , recent efforts to extend existing datasets have only increased the number of images without really addressing core issues specifically background clutter and number of objects .majority of datasets ( in particular large scale ones such as those derived from the msra dataset ) have scenes with often one object which is usually located at the image center .this has made model evaluation challenging since some high - performing models that emphasize image center fail in detecting and segmenting the most salient off - center object .we believe that now is the time to move on to more versatile datasets and remedy biases in salient object datasets .in this section , we briefly explain how salient object detection models differ from fixation prediction models , what people consider the most salient object when they are explicitly asked to choose one , what are the relationships between these judgments and eye movements , and what salient object detection models actually predict .we investigate properties of salient objects from humans point of view when they are explicitly asked to choose such objects .we then study whether ( and to what extent ) saliency judgments agree with eye movements . while it has been assumed that eye movements are indicators of salient objects , so far few studies ( e.g. , ) have directly and quantitatively confirmed this assumption .moreover , the level of agreement and cases of disagreement between fixations and saliency judgments have not been fully explored .some studies ( e.g. , ) , have shown that human observers choose to annotate salient objects or regions first but they have not asked humans explicitly ( labelme data was analyzed in ) and they have ignored eye movements . knowing which objects humans consider as salient is specially crucial when outputs of a model are going to be interpreted by humans .there are two major differences between models defining saliency as `` where people look '' and models defining saliency as `` which objects stand out '' ._ first _ , the former models aim to predict points that people look in free - viewing of natural scenes usually for 3 to 5 seconds while the latter aim to detect and segment salient objects ( by drawing pixel - accurate silhouettes around them ) . in principle a model that scores well on one problem should not score very well on the other . an optimal model for fixation prediction should only highlight those points that a viewer will look at ( few points inside an object and not the whole object region ) .since salient object detection models aim to segment the whole object region they will generate a lot of false positives ( these points belong to the object but viewers may not fixate at them ) when it comes to fixation prediction . on the contrary, a fixation prediction model will miss a lot of points inside the object ( i.e. , false negatives ) when it comes to segmentation ._ second _ , due to noise in eye tracking or observers saccade landing ( typically around 1 degrees and 30 pixels ) , highly accurate pixel - level prediction maps are less desired .in fact , due to these noises , sometimes blurring prediction maps increases the scores . on the contrary , producing salient object detection maps that can accurately distinguish object boundaries are highly desirable specially in applications . 
due to these differences, separate evaluation procedures and benchmarks have been developed for comparing models in these two categories. in practice, models, whether they address segmentation or fixation prediction, are applicable interchangeably, as both entail generating similar saliency maps. for example, several researchers have thresholded the saliency maps of their models, originally designed to predict fixations, to detect and segment salient proto-objects (e.g., ). in our previous study, we addressed what people consider to be the most outstanding (i.e., salient) object in a scene. while there we studied the explicit saliency problem from a behavioral perspective, here we are mainly interested in constructing computational models for automatic salient object detection in arbitrary visual scenes. a total of 70 undergraduate usc students (13 male, 57 female) with normal or corrected-to-normal vision, in the age range between 18 and 23 (mean = 19.7, std = 1.4), were asked to draw a polygon around the object that stood out the most. participants' annotations were supposed to be neither too loose (general) nor too tight (specific) around the object. they were shown an illustrative example for this purpose. participants were able to relocate their drawn polygon from one object to another or modify its outline. we were concerned with the selection of the single most salient object in an image. stimuli were the images ( 681 pixels) from the dataset by bruce and tsotsos (2005); images in this dataset have been presented at random to 20 observers (in a free-viewing task) for 4 sec each, with 2 sec of delay (a gray mask) in between. see fig. [fig:sampleio-am] for sample images from this dataset. we first measured the degree to which annotations of participants agree with each other using the following quantitative measure: where and are the annotations of the -th and -th participants, respectively (out of participants), over the -th image. the above measure has a well-defined lower bound of 0, when there is no overlap between the segmentations of users, and an upper bound of 1, when the segmentations overlap perfectly. fig. [fig:exp1].left shows the histogram of values. participants had moderate agreement with each other (mean ; std ; significantly above chance). inspection of the images with the lowest values shows that these scenes had several foreground objects, while images with the highest annotation agreement often had one visually distinct salient object (e.g., a sign, a person, or an animal; see fig. [fig:sampleio-am]). we also investigated the relationship between explicit saliency judgments and free-viewing fixations as two indicators of visual attention. here we used the shuffled auc (sauc) score to tackle center-bias in eye movement data. for each of the 120 images, we showed that a map built from the annotations of the 70 participants explains the fixations of free-viewing observers significantly above chance (sauc of , chance , -test; fig. [fig:exp1].right). the prediction power of this map was as good as that of the itti98 model. hence, we concluded that explicit saliency judgments agree with fixations. fig. [fig:sampleio-am] shows high- and low-agreement cases between fixations and annotations.
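the agreement computation can be sketched as follows. an intersection-over-union form is assumed here; it matches the stated bounds of 0 for disjoint annotations and 1 for identical ones, but may differ from the paper's exact expression.

```python
import numpy as np
from itertools import combinations

def pairwise_agreement(masks):
    """masks: binary annotation masks of the same image, one per participant.
    returns the mean pairwise overlap (an assumed intersection-over-union)."""
    scores = []
    for a, b in combinations(masks, 2):
        a, b = a.astype(bool), b.astype(bool)
        union = np.logical_or(a, b).sum()
        inter = np.logical_and(a, b).sum()
        scores.append(inter / union if union > 0 else 1.0)
    return float(np.mean(scores))

# toy usage: three participants annotating a 100x100 image
rng = np.random.default_rng(2)
masks = [rng.random((100, 100)) < 0.2 for _ in range(3)]
print(pairwise_agreement(masks))
```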
here, we merge annotations of all 70 participants on each image , normalize the resultant map to [ 0 1 ] , and threshold it at 0.7 to build our first benchmark saliency detection dataset ( called bruce - a ) .prevalent objects in bruce and tsotsos dataset are man - made home supplies in indoor scenes ( see for more details on this dataset ) .similar results , to link fixations with salient objects , have been reported by koehler __ . as in , they asked observers to click on salient locations in natural scenes .they showed high correlation between clicked locations and observers eye movements ( from a different group of subjects ) in free - viewing . while the most salient , important , or interesting object may tell us a lot about a scene , eventually there is a subset of objects that can minimally describe a scene .this has been addressed in the past somewhat indirectly in the contexts of saliency , language and attention , and phrasal recognition .based on results from our saliency judgment experiment , we then decided to annotate scenes of the dataset by judd _et al . _the reason for choosing this dataset is because it is currently the most popular dataset for benchmarking fixation prediction models .it contains eye movements of 15 observers freely viewing 1003 scenes from variety of topics .thus , using fixations we can easily determine which object , out of several annotated objects , is the most salient one .we only used 900 images from the judd dataset and discarded images without well - defined objects ( e.g. , mosaic tiles , flames ) or images with very cluttered backgrounds ( e.g. , nature scenes ) . figure [ fig : figdiscarded ] shows examples of discarded scenes .we asked 2 observers to manually outline objects using the labelme open annotation tool ( http://new-labelme.csail.mit.edu/ ) .observers were instructed to accurately segment as many objects as possible following three rules : 1 ) discard reflection of objects in mirrors , 2 ) segment objects that are not separable as one ( e.g. , apples in a basket ) , and 3 ) interpolate the boundary of occluded objects only if doing otherwise may create several parts for an occluded object .these cases , however , did happen rarely .observers were also told that their outline should be good enough for somebody to recognize the object just by seeing the drawn polygon .observers were paid for their effort .[ fig:1 ] shows sample images and their annotated objects . to determine which object is the most salient one , we selected the object at the peak of the human fixation map . herewe explore some summary statistics of our data . on average ,36.93% of an image pixels was annotated by the 1st observer with a std of 29.33% ( 44.52% , std=29.36% for the 2nd observer ) .27.33% of images had more than 50% of their pixels segmented by the 1st observer ( 34.18% for the 2nd ) . 
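a short sketch of the two ground-truth constructions described above, merging and thresholding participant masks (bruce-a) and picking the annotated object at the peak of the fixation map (judd-a), is given below; the fallback used when the peak does not land on any annotated object is an assumption, as are the function names.

```python
import numpy as np

def consensus_mask(annotation_masks, threshold=0.7):
    """merge per-participant binary masks, normalize the mean map to [0, 1],
    and threshold it (0.7, as in the bruce-a construction)."""
    mean_map = np.mean([m.astype(float) for m in annotation_masks], axis=0)
    return mean_map >= threshold

def most_salient_object(object_masks, fixation_map):
    """return the index of the annotated object lying at the peak of the
    human fixation map. the fallback when the peak lands on the background
    (the object receiving the largest share of the map) is an assumption."""
    peak = np.unravel_index(np.argmax(fixation_map), fixation_map.shape)
    for i, mask in enumerate(object_masks):
        if mask.astype(bool)[peak]:
            return i
    shares = [float(fixation_map[m.astype(bool)].sum()) for m in object_masks]
    return int(np.argmax(shares))
```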
the number of annotated objects in a scene ranged from 1 to 31 with median of 3 for the 1st observer ( 1 to 24 for the 2nd observer with median of 4 ; fig .[ fig : stat].left ) .the median object size was 10% of the total image area for the 1st observer ( 9% for the 2nd observer ) .[ fig : stat].left ( inset ) shows the average annotation map for each observer over all images .it indicates that either more objects were present at the image center and/or observers tended to annotate central objects more .overall , our data suggests that both observers agree to a good extent with each other .finally , in order to create one ground truth segmentation map per image , we asked 5 other observers to choose the best of two annotations ( criteria based on selection of annotated objects and boundary accuracy ) .the best annotation was the one with max number of votes ( 611 images with 4 to 1 votes ) .next , we quantitatively analyzed the relationship between fixations and annotations ( note that we explicitly define the most salient object as the one with the highest fraction of fixations on it ) .we first looked into the relationship between the object annotation order and the fraction of fixations on objects .[ fig : stat].middle shows fraction of fixations as a function of object annotation order . in alignment with previous findings we observe that observers chose to annotate objects that attract more fixations . buthere , unlike which used saliency models to demonstrate that observers prioritize annotating interesting and salient objects , we used actual eye movement data .we also quantized the fraction of fixations that fall on scene objects over the judd - a annotations , and observed that in about 55% of images , the most salient object attracts more than 50% of fixations ( mean fixation ratio of 0.54 ; image background=0.45 ; fig .[ fig : stat].right ) .the most salient object ranged in size from 0.1% to 90.2% of the image size ( median=10.17% ) .the min and max aspect ratio ( w / h ) of bounding boxes fitted to the most salient object were 0.04 and 13.7 , respectively ( median=0.94 ) .judd dataset is known to be highly center - biased , in terms of eye movements , due to two factors : 1 ) the tendency of observers to start viewing the image from the center ( a.k.a viewing strategy ) , and 2 ) tendency of photographers to frame interesting objects at the image center ( a.k.a photographer bias ) . herewe verify the second factor by showing the average annotation map of the most salient object in fig .[ fig : meps ] . our datasets seem to have relatively less center - bias compared to msra-5k and cssd datasets .note that other datasets mentioned in table i are also highly center - biased . to count the number of images with salient objects at the image center , we defined the following criterion .an image is on - centered if its most salient object overlaps with a normalized ( to [ 0 1 ] ) central gaussian filter with .this gaussian filter is resized to the image size and is then truncated above 0.95 . utilizing this criterion, we selected 667 and 223 on - centered and off - centered scenes , respectively .partitioning data in this manner helps scrutinize performance of models and tackle the problem of center - bias . to further explore the amount of center - bias in bruce - a and judd - a datasets , we first calculated the euclidean distance from center of bounding boxes , fitted to object masks , to the image center .we then normalized this distance to the half of the image diagonal ( i.e. 
, image corner to image center ) .[ fig : arearatio].left shows the distribution of normalized object distances .as opposed to msra-5k and cssd datasets that show an unusual peak around the image center , objects in our datasets are further apart from the image center .[ fig : arearatio].right shows distributions of normalized object sizes .a majority of salient objects in bruce - a and judd - a datasets occupy less than 10% of the image . on average , objects in our datasets are smaller than msra-5k and cssd making salient object detection more challenging .we also analyzed complexity of scenes on four datasets . to this end, we first used the popular graph - based superpixel segmentation algorithm by felzenszwalb and huttenlocher to segment an image into contiguous regions larger than 60 pixels each ( parameter settings : = 1 , segmentation coefficient = 300 ) .the basic idea is that the more superpixels an image contains , the more complex and cluttered it is . by analogy to scenes , an object with several superpixels is less homogeneous , and hence is more complex ( e.g. , a person vs. a ball ) .[ fig : stats ] shows distributions of number of superpixels on the most salient object , the background , and the entire scene .if a superpixel overlapped with the salient object and background , we counted it for both .in general , complexities of backgrounds and whole scenes in our datasets , represented by blue and red curves , are much higher than in the other two datasets .the most salient object in judd - a dataset on average contains more superpixels than salient objects in msra-5k and cssd datasets , even with smaller objects .the reason why number of superpixels is low on the bruce - a dataset is because of its very small salient objects ( see fig . [fig : stats].right ) . further , we inspected types of objects in judd - a images .we found that 45% of images have at least one person in them and 27.2% have more than two people . on averageeach scene has 1.56 persons ( std = 3.2 ) . in about 27% of images , annotators chose a person as the most salient object .we also found that 280 out of 900 images ( 31.1% ) had one or more text in them .other frequent objects were animals , cars , faces , flowers , and signs .in general , it is agreed that for good saliency detection , a model should meet the following three criteria : 1 ) _ high detection rate_. there should be a low probability of failing to detect real salient regions , and low probability of falsely detecting background regions as salient regions , 2 ) _ high resolution_. saliency maps should have high or full resolution to accurately locate salient objects and retain original image information as much as possible , and 3 ) _ high computational efficiency_. saliency models with low processing time are preferred . here , we analyze these factors by proposing a simple baseline salient object detection model . we propose a straightforward model to serve two purposes : 1 ) _ to assess the degree to which our data can be explained by a simple model_. this way our model can be used for measuring bias and complexity of a saliency dataset , and 2 ) _ to gauge progress and performance of the state of the art models_. by comparing performance of best models relative to this baseline model over existing datasets and our datasets , we can judge how powerful and scalable these models are . 
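the superpixel-based complexity statistics described above can be reproduced along the following lines; mapping the quoted settings onto scikit-image's felzenszwalb parameters (scale=300, sigma=1, min_size=60) is an assumption, since the paper likely used the authors' original implementation.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def scene_complexity(image, salient_mask):
    """count superpixels overlapping the most salient object, the background,
    and the whole scene. superpixels straddling both are counted for both,
    as in the text. the parameter mapping is an assumed choice."""
    segments = felzenszwalb(image, scale=300, sigma=1.0, min_size=60)
    salient_mask = salient_mask.astype(bool)
    return {
        "object": np.unique(segments[salient_mask]).size,
        "background": np.unique(segments[~salient_mask]).size,
        "scene": np.unique(segments).size,
    }
```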
note that we deliberately keep the model simple to achieve the above goals. our model involves the following two steps. _*step 1*_: given an input image, we compute a saliency map and an over-segmented region map. for the former, we use a fixation prediction model (a traditional saliency model) to find spatial outliers in the scene that attract human eye movements and visual attention. here, we use two models for this purpose: aws and hounips, which have been shown to perform very well in recent benchmarks and to be computationally efficient. as controls, we also use the generic _objectness_ measure by alexe _et al._, as well as the human fixation map to determine the upper-bound performance. the reason for using fixation saliency models is to obtain a quick initial estimate of locations where people may look, in the hope of finding the most salient object. these regions are then fed to the segmentation component in the next step. it is critical to first limit the subsequent expensive processing to the right region. for the latter, as in the previous section, we use the fast and robust algorithm by felzenszwalb and huttenlocher with the same parameters as in section [statistics]. _*step 2*_: the saliency map is first normalized to [0, 1] and is then thresholded (here at 0.7). all unique image superpixels that spatially overlap with the truncated saliency map are then included. we discard those superpixels that touch the image boundary, because they are highly likely to be part of the background. finally, after this process, the holes inside the selected region are considered part of the salient object (i.e., a filling-in operation). [fig:samples] illustrates the process of segmentation and shows outputs of our model for some images from the msra-5k, bruce-a, and judd-a datasets. the essential feature of our simple model is dissociating saliency detection from segmentation, such that it is now possible to pinpoint what might be the cause of mistakes or low performance of a model, i.e., the saliency detection step or the segmentation step. this is particularly important since almost all models have conflated these two steps and blurred the boundary between them. note that currently there is no training stage in our model and it is manually constructed with fixed parameters. the second stage of our model is where more modeling contribution can be made, for example by devising more elaborate ways to include or discard superpixels in the final segmentation. one strategy is to learn model parameters from data. some features to include in a learning method are the size and position of a superpixel, a measure of elongatedness, a measure of concavity or convexity, the distance between feature distributions of a superpixel and its neighbors, etc. to some extent, some of these features have already been utilized in previous models. another direction will be expanding our model to multiple scales (similar to ). we exhaustively compared our model to 8 state-of-the-art methods which have been shown to perform very well on previous benchmarks. these models come from 3 categories, allowing us to perform cross-category comparison: 1) _salient object detection models_ including cbsal, svo, pca, goferman, and fts, 2) the _generic objectness measure_ by alexe _et al._, and 3) _fixation prediction models_ including aws and hounips. we use two widely adopted metrics: **precision-recall (pr) curve:** for a saliency map normalized to [0, 1], we convert it to a binary mask with a threshold.
precision and recall are then computed as follows given the ground-truth mask: [eqn:precision_recall]. to measure the quality of the saliency maps produced by several algorithms, we vary the threshold from 0 to 255. at each threshold, precision and recall values are computed. finally, we obtain a precision-recall (pr) curve describing the performance of the different algorithms. we also report the f-measure, a weighted combination of precision and recall; here, as in , the weight is set to emphasize precision more than recall. **receiver operating characteristics (roc) curve:** we also report the false positive rate and true positive rate obtained while thresholding a saliency map, where the complements of the binary mask and of the ground truth are used. the roc curve is the plot of the true positive rate versus the false positive rate as the threshold is varied. results are shown in fig. [fig:bruce]. consistent with previous reports over the msra-5k dataset, the cbsal, pca, svo, and alexe models rank at the top (with f-measures above 0.55 and aucs above 0.90). fixation prediction models perform lower, at the level of the map model. the fts model ranked at the bottom, again in alignment with previous results. our models work on par with the best models on this dataset, with all f-measures above 0.70 (the maximum, about 0.73, with the alexe model). moving from this simple dataset (simple because our simple models ranked at the top; see also the analysis in section [statistics]) to more complex datasets (middle column in fig. [fig:bruce]), we observed a dramatic drop in the performance of all models. the best performance is now 0.24, belonging to the pca model. we observed about a 72% drop in performance, averaged over 5 models (cbsal, fts, svo, pca, and alexe), from msra-5k to the bruce-a dataset. note in particular how the map model is severely degraded here (poorest, with an f-measure of 0.1), since objects are now less often at the center. our best model on this dataset is salbase-human (f-measure about 0.31). surprisingly, auc results are still high on this dataset: since objects are small, the true positive rate is high at all levels of false positive rate (see also the performance of the map model). patterns of results over the judd-a dataset are similar to those over bruce-a, with all of our models performing higher than the others. the lowest performance here belongs to fts, followed by the two fixation prediction models. our salbase-human model scores best, with an f-measure of about 0.55. among our models that use a model to pick the most salient location, salbase-aws scores higher over the bruce-a and judd-a datasets, possibly because aws is better able to find the most salient location. the average drop from msra-5k to the judd-a dataset is 41% (for the 5 saliency detection models). [fig:f-measure2] shows that these findings are robust to the f-measure parameterization. tables [tab:db-fmeasure] and [tab:db-auc] summarize the f-measure and auc of the models (table caption: f-measure accuracy of models; the performance of the best model is highlighted in boldface font). to study the dependency of the results on saliency map thresholding (i.e., how many superpixels to include), we varied the saliency threshold and calculated the f-measure for the salbase-human and salbase-aws models (see fig. [fig:salthreshold]). we observed that even higher scores are achievable using different parameters. for example, since objects in the judd-a dataset are larger, a lower threshold yields better accuracy. the opposite holds over the bruce-a dataset.
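a sketch of the salbase pipeline and of the f-measure used in this section is given below. the saliency map is supplied by the caller (aws, hounips, or a human fixation map in the paper), the felzenszwalb parameter mapping follows the earlier sketch, and beta^2 = 0.3 is the conventional choice in this literature and is assumed here; all of these are assumptions of the sketch rather than details confirmed by the source.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.segmentation import felzenszwalb

def salbase(image, saliency_map, t=0.7, scale=300, sigma=1.0, min_size=60):
    """two-step baseline: superpixels overlapping the thresholded saliency
    map are kept, border-touching superpixels are discarded, and interior
    holes are filled."""
    smin, smax = saliency_map.min(), saliency_map.max()
    sal = (saliency_map - smin) / (smax - smin + 1e-12)   # normalize to [0, 1]
    seeds = sal >= t                                      # truncated saliency map
    segments = felzenszwalb(image, scale=scale, sigma=sigma, min_size=min_size)

    border = np.zeros(segments.shape, dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    border_ids = set(np.unique(segments[border]))         # likely background

    keep = set(np.unique(segments[seeds])) - border_ids
    mask = np.isin(segments, list(keep))
    return binary_fill_holes(mask)

def f_measure(pred_mask, gt_mask, beta2=0.3):
    """weighted f-measure; beta^2 = 0.3 weighs precision more than recall."""
    tp = np.logical_and(pred_mask, gt_mask).sum()
    precision = tp / max(int(pred_mask.sum()), 1)
    recall = tp / max(int(gt_mask.sum()), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```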
to investigate the dependency of the results on segmentation parameters, we varied the parameters of the segmentation algorithm from too fine ( = 1, k = 100, min = 20; many segments; over-segmenting) to too coarse ( = 1, k = 1000, min = 800; fewer segments; under-segmenting). both of these settings yielded lower performance than the results in fig. [fig:bruce]. results with another parameter setting, = 1, k = 500, and min = 50, are shown in fig. [fig:res_500_50]; scores and trends are similar to those shown in fig. [fig:bruce], with salbase-human and salbase-aws being the top contenders. (figure caption: stars correspond to the points shown in fig. [fig:bruce]; note that even higher accuracies are possible with different thresholds over our datasets; the corresponding f-measure values over msra-5k for the salbase-aws model are 0.62, 0.67, 0.70, 0.73, and 0.62.) analysis of the cases where our model fails, shown in fig. [fig:failure2], reveals four reasons: _first_, on the bruce-a dataset, when humans look at an object more but annotators chose a different object. _second_, when a segment that touches the image border is part of the salient object. _third_, when the object segment falls outside the thresholded saliency map (or a wrong one is included). _fourth_, when the first stage (i.e., the fixation prediction model) picks the wrong object as the most salient one (see fig. [fig:rrr], first column). regarding the first problem, care must be taken in assuming that what people look at is what they choose as the most salient object. although this assumption is correct in a majority of cases (fig. [fig:exp1]), it does not hold in some cases. with respect to the second and third problems, future modeling effort is needed to decide which superpixels to include or discard to determine the extent of an object. the fourth problem points toward shortcomings of fixation prediction models. indeed, in several scenes where our model failed, people and text were the most salient objects; person and text detectors were not utilized in the saliency models employed here. [fig:rrr] shows a visual comparison of models over 12 scenes from the judd-a dataset. cbsal and svo generate more visually pleasant maps. goferman highlights object boundaries more than object interiors. pca generates center-biased maps. some models (e.g., goferman, fts) generate sparse saliency maps while others generate smoother ones (e.g., svo, cbsal). the aws and hounips models generate pointy maps to better account for fixation locations. in this work, we showed that: 1) explicit human saliency judgments agree with free-viewing fixations (thus extending our previous results in ), 2) our new benchmark datasets challenge existing state-of-the-art salient object detection models (in alignment with li _et al._'s dataset), and 3) a conceptually simple and computationally efficient model (roughly 0.2 s for saliency and segmentation maps on a pc with a 3.2 ghz intel i7 cpu and 6 gb ram, using matlab) wins over the state-of-the-art models and can be used as a baseline in the future. we also highlighted a limitation of models which is the main reason behind their failure on complex scenes: they often segment the wrong object as the most salient one. previous modeling effort has been mainly concentrated on biased datasets with images containing objects at the center.
here , we focused on this shortcoming and described how unbiased salient object detection datasets can be constructed .we also reviewed datasets that can be used for saliency model evaluation ( in addition to datasets in table [ tab : db ] ) and measured their statistics .no dataset exists so far that has all of object annotations , eye movements , and explicit saliency judgments .bruce - a has fixations , and only explicit saliency judgments but not all object labels .judd - a , osie , and pascal - s datasets have annotations and fixations but not explicit saliency judgments . here , we chose the object that falls at the peak of the fixation map as the most salient one .ucsb dataset lacks object annotations but it has fixations and saliency judgments using clicks ( as opposed to object boundaries in bruce - a ) . future research by collecting all information on a large scale dataset will benefit salient object detection research .here we suggested that the most salient object in a scene is the one that attracts the majority of fixations ( similar to ) .one can argue that the most salient object is the one that observers look at first . while in general , these two definitions may choose different objects , given the short presentation times in our datasets ( 3 sec on judd , 4 sec on bruce ) we suspect that both suggestions will yield to similar results .our model separates detection from segmentation .a benefit of this way of modeling is that it can be utilized for other purposes ( e.g. , segmenting interesting or important objects ) by replacing the first component of our model .further , augmented with a top - down fixation selection strategy , our model can be used as an active observer ( e.g. , ) .our analysis suggests two main reasons for model performance drop over the judd - a dataset : the _ first reason _ that the literature has focused so far is to avoid incorrectly segmenting the object region ( i.e. , increasing true positives and reducing false positives ) .therefore , low performance is partially due to inaccurately highlighting ( segmenting ) the salient object .the _ second reason _ that we attempted to highlight in this paper ( we believe is the main problem causing performance drop as models performed poorly on judd - a compared to msra-5k ) is segmenting the wrong object ( i.e. , not the most salient object ) .note that although here we did not consider the latest proposed salient object detection models in our model comparison ( e.g. , ) , we believe that our results are likely to generalize compared to newer models .the rationale is that even recent models have also used the asd dataset ( which is highly center - biased ) for model development and testing .nontheless , we encourage future works to use our model ( as well as li et al.s model ) as a baseline for model benchmarking . two types of cues can be utilized for segmenting an object : appearance ( i.e. , grouping contiguous areas based on surface similarities ) and boundary ( i.e. , cut regions based on observed pixel boundaries ) .here we mainly focused on the appearance features .taking advantage of both region appearance and contour information ( similar to ) for saliency detection ( e.g. , growing the foreground salient region until reaching the object boundary ) is an interesting future direction . in this regard, it will be helpful to design suitable measures for evaluating accuracy of models for detecting boundary ( e.g. 
, ) .our datasets allow more elaborate analysis of the interplay between saliency detection , fixation prediction , and object proposal generation . obviously , these models depend on the other . on one hand ,it is critical to correctly predict where people look to know which object is the most salient one . on the other hand , labeled objects in scenes can help us study how objects guide attention and eye movements .for example , by verifying the hypotheses that some parts of objects ( e.g. , object center ) or semantically similar objects ) attract fixations more , better fixation prediction models become feasible .l. itti , c. koch , and e. niebur . a model of saliency - based visual attention for rapid scene analysis ._ ieee trans . pami _ , 1998 .n. d. b. bruce and j. k. tsotsos .saliency based on information maximization ._ nips _ , 2005 .x. hou and l. zhang .dynamic attention : searching for coding length increments ._ nips _ , 2008 .a. garcia - diaz , x. r. fdez - vidal , x. m. pardo , and r. dosil .decorrelation and distinctiveness provide with human - like saliency ._ acivs _ , 2009 .h. jiang , j. wang , z. yuan , y. wu , n. zheng , and s. li .salient object detection : a discriminative regional feature integration approach ._ ieee conference on computer vision and pattern recognition _ , 2013 .a. c. berg , t. l. berg , h. daume , j. dodge , a. goyal , x. han , a. mensch , m. mitchell , a. sood , k. stratos et al ., `` understanding and predicting importance in images , '' in ieee conference on computer vision and pattern recognition ( cvpr ) , 2012 , pp .3562 - 3569 .x. han , a. mensch , m. mitchell , a. sood , k. stratos et al ., `` understanding and predicting importance in images , '' in ieee conference on computer vision and pattern recognition ( cvpr ) , 2012 , pp .3562 - 3569 .s. dhar , v. ordonez , and t. l. berg , `` high level describable attributes for predicting aesthetics and interestingness , '' in ieee conference on computer vision and pattern recognition ( cvpr ) , 2011 , pp .1657 - 1664 .z. wang , a. c. bovik , h. r. sheikh , and e. p. simoncelli , `` image quality assessment : from error visibility to structural similarity , '' ieee transactions on image processing , vol .600 - 612 , 2004 .g. kulkarni , v. premraj , s. dhar , s. li , y. choi , a. c. berg , and t. l. berg , `` baby talk : understanding and generating simple image descriptions , '' in ieee conference on computer vision and pattern recognition ( cvpr ) , 2011 , pp .1601 - 1608 .u. rutishauser , d. walther , c. koch , and p. perona , `` is bottom - up attention useful for object recognition ? '' in proceedings of the ieee conference on computer vision and pattern recognition , 2004 , vol . 2 , 2004 , pp .ii-37 . c. guo and l. zhang , `` a novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression , '' ieee trans . on image processing ,1 , pp . 185 - 198 , 2010 .ma , x .- s .hua , l. lu , and h .- j .zhang , `` a generic framework of user attention model and its application in video summarization , '' ieee transactions on multimedia , vol .907 - 919 , 2005 .a. ninassi , o. le meur , p. le callet , and d. barbba , `` does where you gaze on an image affect your perception of quality ? applying visual attention to image quality metric , '' in ieee conference on image processing ( icip ) , vol . 2 , 2007 , pp .ii-169 .j. li , m. levine , x. an , x. xu , and h. 
he , `` visual saliency based on scale - space analysis in the frequency domain , '' ieee trans . on pattern analysis and machine intelligence ,996 - 1010 , 2013 .s. frintrop , g. m. garca , and a. b. cremers , `` a cognitive approach for object discovery , '' icpr , 2014 .d. meger , p .- e .forssen , k. lai , s. helmer , s. mccann , t. southey , m. baumann , j. j. little , and d. g. lowe , `` curious george : an attentive semantic robot , '' robotics and autonomous systems , vol .56 , no . 6 , pp .503 - 511 , 2008 .ali borji received his bs and ms degrees in computer engineering from petroleum university of technology , tehran , iran , 2001 and shiraz university , shiraz , iran , 2004 , respectively .he did his ph.d .in cognitive neurosciences at institute for studies in fundamental sciences ( ipm ) in tehran , iran , 2009 and spent four years as a postdoctoral scholar at ilab , university of southern california from 2010 to 2014 .he is currently an assistant professor at university of wisconsin , milwaukee .his research interests include visual attention , active learning , object and scene recognition , and cognitive and computational neurosciences .
salient object detection or salient region detection models , diverging from fixation prediction models , have traditionally dealt with locating and segmenting the most salient object or region in a scene . while the notion of the most salient object is sensible when multiple objects exist in a scene , current datasets for evaluation of saliency detection approaches often have scenes with only a single object . we introduce three main contributions in this paper : first , we take an in - depth look at the problem of salient object detection by studying the relationship between where people look in scenes and what they choose as the most salient object when they are explicitly asked . based on the agreement between fixations and saliency judgments , we then suggest that the most salient object is the one that attracts the highest fraction of fixations . second , we provide two new less biased benchmark datasets containing scenes with multiple objects that challenge existing saliency models . indeed , we observed a severe drop in performance of 8 state - of - the - art models on our datasets ( 40% to 70% ) . third , we propose a very simple yet powerful model based on superpixels to be used as a baseline for model evaluation and comparison . while on par with the best models on the msra-5k dataset , our model wins over other models on our data , highlighting a serious drawback of existing models , which is conflating the processes of locating the most salient object and segmenting it . we also provide a review and statistical analysis of some labeled scene datasets that can be used for evaluating salient object detection models . we believe that our work can greatly help remedy the over - fitting of models to existing biased datasets and opens new avenues for future research in this fast - evolving field . index terms : salient object detection , explicit saliency , bottom - up attention , regions of interest , eye movements
the prediction of microstructural evolution in response to thermo - mechanical loading is important for materials design , processing or thermomechanical fatigue phenomena .computational modeling of evolving texture in response to large plastic deformation and recrystallization has been studied extensively but less so than that produced by thermally - induced stresses i.e .stress - induced texture evolution .we consider a thermo - mechanical setting in which temperature changes cause stresses to develop due to geometrical constraints .the temperature is sufficiently high to generate grain boundary motion and yet low enough such that recrystallization does not occur .the induced stresses may be associated with both elastic and plastic deformation . in a previous work ,a hybrid monte carlo ( hmc ) approach was developed by combining a mc algorithm for grain boundary motion with the material point method ( mpm ) for elastic deformation .purely elastic driving forces , originating from the anisotropic mechanical response of individual grains , are treated as a bulk body force in a potts model for grain boundary evolution .the approach is time accurate through the use of parametric links to sharp - interface ( si ) kinetics .it also takes advantage of the fact that mc grain boundary mobility is independent of the driving force .the present work extends this paradigm to include the influence of inelastic deformation on texture evolution . as in the elastic study , texture evolution is assumed to be dominated by grain boundary kinetics .furthermore , we consider infinitesimal deformation to distinguish the stress - induced texture from deformation texture .the latter is associated with grain and lattice rotation in response to finite deformations .a stochastic , crystal plasticity model , developed from rate - independent crystal plasticity , is applied within the mpm framework as the constitutive model to capture the elasto - plastic response of a polycrystalline media . as opposed to conventional deterministic algorithms ,the stochastic algorithm relies on a mc routine to determine the activated slip system which is therefore referred to as the monte carlo plasticity ( mcp ) .when plastic deformation occurs , dislocations are generated , stored and annihilated within the microstructure .the heterogeneous distribution of these dislocations within the polycrystalline medium constitutes a plastic driving force for grain boundary migration .this is treated as a body force within the mc kinetics using parametric links between mc and si models . 
a red / black ( rb ) updating scheme is used to parallelize the mc algorithm , although other methods might also be useful .this parallelized hmc approach is used to investigate the microstructural evolution of nickel polycrystals under plastic loading .as expected , the grains with smaller schmid factors gradually dominate the polycrystalline system .the data is subsequently used to construct a macroscopic kinetic equation to predict the evolution of microstructure .plastic response of polycrystalline materials is treated through a classical rate - independent small deformation crystal plasticity formulation .the foundations of the constitutive model assume that the elasto - plastic response of single crystals is dominated by slip deformation mechanisms .a successful numerical algorithm must carry out three tasks : the determination of activated slip systems ; the calculation of the plastic slip on each activated slip system ; and , the solution of redundant constraints associated with a hardening law .various numerical methods have been devised and successfully implemented in deterministic formats .as opposed to deterministic algorithms , the current work adopts a probabilistic approach borrowed from concepts in statistical mechanics in which only one slip system is activated during each time step .plastic slip is therefore treated as a series of discrete , probabilistic events that mimic the sequential accumulation of dislocations at the lattice scale .this monte carlo crystal plasticity ( mcp ) is algorithmically simple because plastic slip can be resolved through the solution of one equation with no redundant constraints . on the other hand ,the associated computational steps has to be sufficiently small such that a sequence of single slips mimics multiple slip behavior . a probabilistic algorithm , detailed in what follows ,is used to determine which slip system is chosen at each step .the constitutive framework and stress updating routine are otherwise standard . given a set of potentially activated slip systems , identified through comparison of resolved shear stress with slip resistance , the elastic energy of a crystal , , can be calculated if each slip system of the set is individually activated .this generates possible states for the deformed crystal .the probability , , of a slip system being selected is computed using the partition function , : {l}\\ , \end{tabular } \label{mc_pla}\ ] ] where is the index of a potentially activated slip system , and is the inverse of the fundamental temperature .dislocation energy can be ignored in eqn .( [ mc_pla ] ) due to the fact that an isotropic hardening model is used .the slip system of the set is activated when the following criterion is met : {l}. \end{tabular } \label{random}\ ] ] here is taken as a random number between and . with the activated slip system determined , the deformation can easily be parsed into elastic and plastic components . as a verification of this mcp method ,a uniaxial tension test was carried out on a single crystal of nickel .the material properties required for this model are listed in table [ plapara ] ..nickel properties required in mcp model . 
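the probabilistic slip - system selection described above amounts to drawing one candidate system from a boltzmann distribution built from the single - slip elastic energies and comparing the cumulative probabilities with a uniform random number . the following is a minimal sketch of that step , assuming the candidate energies and the inverse fundamental temperature beta are supplied by the surrounding crystal - plasticity update ; it is not the authors' implementation .

```python
import numpy as np

def select_slip_system(elastic_energies, beta, rng=None):
    """Pick one slip system out of the potentially active set.

    elastic_energies : 1-D array, elastic energy of the crystal if candidate
                       system k alone were activated
    beta             : inverse fundamental temperature of the MC plasticity model
    Returns the index of the activated slip system.
    """
    rng = rng if rng is not None else np.random.default_rng()
    e = np.asarray(elastic_energies, dtype=float)
    w = np.exp(-beta * (e - e.min()))      # common shift cancels in the normalization
    p = w / w.sum()                        # partition-function probabilities
    r = rng.random()                       # uniform random threshold
    return int(np.searchsorted(np.cumsum(p), r))

# example: three candidate systems; the lowest-energy one is picked most often
print(select_slip_system(np.array([1.0, 1.2, 2.0]), beta=5.0,
                         rng=np.random.default_rng(1)))
```

because only one system is activated per step , the strain increment has to be kept small enough that a sequence of such single slips mimics multiple - slip behavior , as noted above .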
[ plapara ] [ cols="<,<,<",options="header " , ] a prescribed strain increment , was enforced at each time step : , \ ] ] with fig .[ stress_mcp_svd ] compares the results obtained using the mcp logic with their counterparts obtained with a deterministic model wherein a singular value decomposition ( svd ) algorithm is applied to solve the ill - conditioned constraint equations .the two approaches clearly agree .sharp - interface , grain boundary kinetics is the basis for our grain boundary kinetics model .it can be developed by considering a bi - crystal subjected to uniaxial loading that results in plastic deformation as shown in fig .[ si_schem ] .the dislocation energy , , and dislocation density , are related using the relation here is the shear modulus , is the burgers vector , and is the flow stress . the thermodynamic driving force for interfacial accretion is given by \!]+\kappa \gamma^ { * } \ , \label{driv_trac}\ ] ] where is the capillary driving force , is the local mean curvature , and \!]$ ] is the jump in a field across the grain boundary . the elastic driving force is temporarily suppressed in order to study the influence of the plastic behavior on texture evolution .the herring relation is used to describe the accretive normal speed of the interface , : where is grain boundary mobility .the mc paradigm is intended to implement si kinetics within a computationally efficient setting .continuum , deterministic fields are linked to discrete , probabilistic counterparts , and the physical domain is discretized into a square lattice .a q - state potts model is used to represent a crystalline system and grain orientation is described by an integer - valued spin field , , which ranges from to .the system hamiltonian is then written as where is the spatially varying bulk energy associated with the lattice , is the interaction energy between the neighboring spin fields and , is the number of neighbors considered on the selected lattice , and is the kronecker delta .kinetics , within the mc paradigm , is translated to a series of probabilistic trials carried out at all lattice sites .the acceptance of a trial event is determined by a probability function based on the change in the hamiltonian associated with a trial flip .plastic effects can therefore be accounted for if the plastic driving force is treated as a bulk energy term in eqn .( [ h_1 ] ) . parametric links between si and mc models are used to convert si parameters into mc format : where and are reasonable grain boundary stiffness and grain boundary mobility obtained from the literature , while is the computed bulk energy density associated with the si model .the counterparts in the mc paradigm , which can be analytically or numerically derived , are , and , respectively .a characteristic energy , , length , , and time , along with the mc lattice size , interaction energy , effective temperature , inclination angle and time step , are also required to bridge the si and mc paradigms .these links endow the mc simulation with physical time and length scales .in a typical mc implementation , an update of variables immediately follows the acceptance of a trial event as in the metropolis algorithm for example . 
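a single potts - model trial with the plastic contribution treated as a site - wise bulk energy can be sketched as follows ; the square - lattice neighborhood , the periodic boundaries and the metropolis acceptance rule are illustrative choices consistent with the description above , not a reproduction of the actual code .

```python
import numpy as np

def metropolis_trial(spins, bulk_energy, i, j, q, J, kT, rng):
    """One trial flip at lattice site (i, j) of a q-state Potts model whose
    Hamiltonian combines a site-wise bulk (plastic) energy with a
    grain-boundary interaction J per unlike-neighbour pair.
    """
    n, m = spins.shape
    old = int(spins[i, j])
    new = int(rng.integers(q))
    if new == old:
        return False
    # four nearest neighbours with periodic boundaries (an illustrative choice)
    nbrs = (spins[(i - 1) % n, j], spins[(i + 1) % n, j],
            spins[i, (j - 1) % m], spins[i, (j + 1) % m])
    d_interface = J * (sum(int(s) != new for s in nbrs) -
                       sum(int(s) != old for s in nbrs))
    d_bulk = bulk_energy(new) - bulk_energy(old)
    dE = d_interface + d_bulk
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        spins[i, j] = new
        return True
    return False
```

the bulk term is where the dislocation energy computed from the plasticity step enters the grain - boundary kinetics via the parametric links discussed above .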
to improve the efficiency of this approach within a parallelized setting, we adopted a red / black ( rb ) updating rule wherein the domain is decomposed into a checkerboard , and each mc step is divided into red / black and black / red half steps .this is an alternative to the n - fold way algorithm .in contrast to the large number of updates associated with a standard mc time step , states are not updated in the rb format until each half step finishes . when applied within a parallel environment , the domain is divided uniformly among processors with synchronous communication between processors at each half - step .the approach is first implemented within a 2d , bi - crystal setting .a circular grain is placed at the center of a square domain and the inner grain shrinks or grows due to the combined effects of capillary and bulk driving forces .the two - state square lattice ising model is then employed to represent the physical system .mc simulations were carried out on a grid , at an effective temperature .the evolution of inner grain was simulated with four combinations of capillary and bulk driving forces .capillary driving force was fixed by giving a constant interaction energy , i.e. , , while bulk driving force , which is half of the bulk energy difference per site between inner and outer grains , varies from to .in particular , the bulk energy term will be used later on to represent mechanical effect ( dislocation energy ) in realistic plastic deformation .corresponding si simulations were also run using the established parametric links between mc and si models . fig .[ 2d_rb ] ( a ) indicates good agreement between the two models in the prediction of inner grain area as a function of time . an additional test case was also performed for a polycrystalline system where the q - state ( ) potts model with isotropic interaction energy ( ) was used .many mc simulations were carried out at effective temperatures and , respectively .[ 2d_rb ] ( b ) shows the measured average grain size with linear fits to the data .the linear grain growth behavior is consistent with that of the _ isotropic grain growth theory _ . in later applications ,a relatively high mc effective temperature is preferred to remove lattice pinning .. solid line and discrete points represent mc and si results respectively .grid size is .( b ) isotropic grain growth at two mc temperatures for 2-d polycrystals .the gray and black dots demonstrate =0.5 and =1.0 , respectively .grid size is .mc results are averaged over samples.,scaledwidth=60.0% ]the aforementioned parallelized mc algorithm was next applied within the previous polycrystalline setting to study the influence of bulk energy distribution on the evolution of texture .the interaction energy was taken to be isotropic and a gaussian distribution for the bulk energy was assigned to each orientation . fig .[ mc_texture ] ( a ) shows the orientation - dependent gaussian shape bulk energy : , \label{flow}\ ] ] where is orientation , and is the bulk energy associated with this material .the bulk energy is adimensioned within a pure mc setting .mc simulations were carried out at an effective temperature .[ mc_texture ] ( b ) presents the orientation distribution at three time slices .as physically expected , materials points with a lower bulk energy are favored .this indicates that grains which experience the least plastic deformation will be favored as opposed to the ones have larger dislocation content . 
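the red / black half - step sweep can be sketched as follows , with the site update ( for example , a metropolis trial of the kind sketched above ) passed in as a callable ; within one colour the trials touch only neighbours of the opposite colour , so they can be distributed over processors with one synchronization per half - step . this is a serial illustration of the idea , not the parallel implementation used here .

```python
def red_black_sweep(spins, site_update, rng):
    """One MC step split into a red and a black half-step on a checkerboard.

    With nearest-neighbour coupling, sites of one colour interact only with
    sites of the other colour, so updating them in place within a half-step
    is equivalent to a synchronous update and parallelizes trivially.

    spins       : 2-D integer lattice of orientations (modified in place)
    site_update : callable (spins, i, j, rng) performing one trial at (i, j)
    """
    n, m = spins.shape
    for colour in (0, 1):                 # red half-step, then black half-step
        for i in range(n):
            for j in range(m):
                if (i + j) % 2 == colour:
                    site_update(spins, i, j, rng)
```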
, and .simulation results were averaged over runs.,scaledwidth=60.0% ] the parallelized hmc approach was then used to consider the microstructural evolution of three - dimensional , polycrystalline nickel in response to plastic deformation .attention was restricted to texture development in response to plastic deformation which is dominated by grain boundary kinetics rather than that of triple junction kinetics . to meet this requirement ,all numerical experiments were performed at an mc effective temperature ( ) , which is a non - physical temperature .note that the corresponding physical temperature ( as opposed to mc temperature ) is still not high enough to initiate recrystallization .in addition , elastic driving forces were suppressed for the sake of clarity .the mcp framework is implemented in mpm as a plastic constitutive model mediating the mechanical response .the required fitting parameters are listed in table [ plapara ] .material properties at room temperature were adopted even though they may be temperature dependent . in order to illustrate the methodology and facilitate the interpretation of the results , grains were distinguished by a single angle of rotation with respect to the z - direction and allowed it to vary from to in angle steps .the other two euler angles were held fixed . a similarsetting has been previously consider with deformation restricted to the elastic regime . as mentioned previously ,plastic slip occurs when the resolved shear stress on a slip system exceeds its threshold resistance . as a consequence ,orientations with bigger schmid factor will be plastically deformed first[ schmid_orn ] shows the orientation dependent schmid factor of a face centered cubic ( fcc ) single crystal .orientations near and have the largest schmid factor .these orientations slip more easily and thus accumulate dislocations the fastest .therefore , subsequent grain boundary motion will tend to remove such grains .in the current study , an mpa uniaxial loading was applied in the x - direction in loading increments . at each step , the strain increment associated with each mpm particle is computed using the mpm algorithm , and the mcp algorithm is then called to partition elastic and plastic deformations .the quasi - static state of the polycrystalline system is reached after a series of iterations when either the relative stress increment or relative strain increment meets the prescribed convergence criterion for all the particles .after each mcp step , dislocation energy is computed and converted into the mc domain using our parametric links . the microstructure is subsequently evolved for one mc step , equal to minutes . figs .[ texture_pla ] ( a ) and ( b ) describe the evolution of orientation distribution at five different time slices .negative rotation angles are used to represent orientation from the to .as expected , texture evolution favors grains which have smaller schmid factors with respect to the loading axis .interestingly , orientations between and are not selected with the same frequency as orientations between and even though the two groups have identical schmid factors .this is because the effective young s moduli of the former group elements are larger than that of their counterparts .therefore less elastic , and thus more plastic strain , results from the same loading . 
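the orientation dependence invoked above can be made concrete by computing the largest schmid factor over the twelve { 111 } < 110 > fcc slip systems for a crystal rotated about the z - axis and loaded uniaxially along x , as in the texture study ; the routine below is an illustrative sketch and the printed angles are arbitrary examples .

```python
import numpy as np

def max_schmid_factor(theta, loading=np.array([1.0, 0.0, 0.0])):
    """Largest Schmid factor |cos(phi) cos(lambda)| over the 12 {111}<110>
    FCC slip systems for a crystal rotated by `theta` about the z-axis
    (the single free rotation used in the texture study) under uniaxial
    loading along `loading`.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # crystal -> sample
    normals = [np.array(v, float) for v in
               ([1, 1, 1], [-1, 1, 1], [1, -1, 1], [1, 1, -1])]
    directions = [np.array(v, float) for v in
                  ([1, 1, 0], [1, -1, 0], [1, 0, 1],
                   [1, 0, -1], [0, 1, 1], [0, 1, -1])]
    load = loading / np.linalg.norm(loading)
    best = 0.0
    for n in normals:
        for d in directions:
            if abs(np.dot(n, d)) > 1e-12:   # keep only in-plane <110> directions
                continue
            nn = R @ (n / np.linalg.norm(n))
            dd = R @ (d / np.linalg.norm(d))
            best = max(best, abs(np.dot(load, nn) * np.dot(load, dd)))
    return best

# example sweep of the rotation angle about z
for deg in (0, 15, 30, 45):
    print(deg, round(max_schmid_factor(np.radians(deg)), 3))
```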
to illustrate this principle more clearly, pure mechanical simulations were performed in single crystals of various orientations .[ orn_sgl ] ( a ) shows that more plastic deformation is obtained on the orientation than the other two orientations . fig .[ orn_sgl ] ( b ) highlights the differences in the mechanical response between and orientations which have the same schmid factor .as expected , more plastic deformation is observed in the orientation .independent runs.,scaledwidth=60.0% ] to quantify the effect of texture evolution with plasticity , the texture histograms were fitted to a time dependent gaussian equation : , \label{text_evl}\ ] ] where is the orientation probability density , is the orientation angle , and is a time - dependent gaussian variance . the fitted results are shown as solid curves in fig .[ fitted_texture ] ( a ) along with the gaussian variance described in fig .[ fitted_texture ] ( b ) .the effects of plastic deformation on texture evolution are quantified with a previously developed hybrid algorithm that blends a discrete , probabilistic algorithm for grain boundary motion with a continuum , deterministic model of elastic deformation .plastic deformation is accounted for using a monte carlo plasticity model in which slip occurs through a sequence of probabilistically determined , single slip events .mechanical loading results in an inhomogeneous distribution of dislocation energy which amounts to a plastic driving force for texture evolution . grains with less damage grow at the expense of those which have been more heavily deformed . in the current approach , the dislocation density associated with the area swept by a moving interfaceis replaced with that of the overtaking grain .this implies that , when a grain boundary moves , the dislocation content adjacent to the grain boundary is extended into the material through which the grain boundary migrates .however , this is not always the case in reality . on the other hand , if one assume that no dislocations were transmitted with the boundary motion , then a new dislocation free grain would be introduced and the material would recrystallize . 
though recrystallization undoubtedly occurs , and is certainly important in the materials studied and at these levels of deformation , it is not the only option .a grain boundary can simply migrate in a heavily deformed polycrystal .the migration of a grain boundary in a deformed polycrystal involves the propagation of some , but not always all , of the dislocation content that are adjacent to it .the authors are not aware of a meaningful quantitative description of how the dislocation structure propagates with a migrating grain boundary , and thus assume that nearly all the dislocations are carried with a boundary when it moves .this hybrid computational methodology enables the temporal evolution of the microstructure under thermomechanical loading .it offers an alternative to sharp - interface and phase - field modeling .the resulting texture evolution maps are expected to be useful in materials and process design , thermomechanical fatigue as well as in part performance assessment throughout service life .the current work was limited to a single angle of misorientation between grains .an extension of this work to include fully arbitrary misorientation is underway , which will allow us to capture the influence of plastic deformation on the texture evolution of more realistic material systems .the research is funded by sandia national laboratories .sandia national laboratories are operated by the sandia corporation , a lockheed martin company , for the united states department of energy under contract de - ac04 - 94al85000 .we also acknowledge the golden energy computing organization at the colorado school of mines for the use of resources acquired with financial assistance from the national science foundation and the national renewable energy laboratories .10 f. roters , p. eisenlohr , l. hantcherli , d.d .tjahjanto , t.r .bieler , and d. raabe .overview of constitutive laws , kinematics , homogenization and multiscale methods in crystal plasticity finite - element modeling : theory , experiments , applications ., 58(4):1152 1211 , 2010 .a. u. telang , t. r. bieler , a. zamiri , and f. pourboghrat .incremental recrystallization / grain growth driven by elastic strain energy release in a thermomechanically fatigued lead - free solder joint ., 55(7):2265 2277 , 2007 .remi dingreville , corbett c. battaile , luke n. brewer , elizabeth a. holm , and brad l. boyce .the effect of microstructural representation on simulations of microplastic ratcheting ., 26(5):617 633 , 2010 .
a hybrid monte carlo ( hmc ) approach is employed to quantify the influence of inelastic deformation on the microstructural evolution of polycrystalline materials . this approach couples a time - explicit material point method ( mpm ) for deformation with a calibrated monte carlo model for grain boundary motion . a rate - independent crystal plasticity model is implemented to account for localized plastic deformations in polycrystals . the dislocation energy difference between grains provides an additional driving force for texture evolution . this plastic driving force is then brought into the mc paradigm via parametric links between mc and sharp - interface ( si ) kinetic models . the mc algorithm is implemented in a parallelized setting using a checkerboard updating scheme . as expected , grains with a larger schmid factor with respect to the loading direction accumulate more dislocation energy under plastic loading , and these are the grains most easily removed by grain boundary motion . a macroscopic equation is developed to predict such texture evolution . keywords : monte carlo , grain boundary , plasticity , anisotropy , texture , driving force
since introduced by howard , the concept of the expected value of information has long been studied in the context of decision analysis and applied to various areas , such as medical decision making , environmental science and petroleum engineering .the expected value of information is defined as the expected increase in monetary value brought from reducing some degree of uncertainty on unknown parameters involved in a decision model by obtaining additional information .there are several definitions of the expected value of information depending on the type of information , which includes perfect information , partial perfect information and sample information .in particular , the expected value of partial perfect information ( evppi ) , or sometimes called the partial expected value of perfect information , denotes the value of eliminating uncertainty on a subset of unknown parameters completely , and has been advocated and used as a decision - theoretic sensitivity index for identifying relatively important unknown parameters . for many problems encountered in practice , calculating the evppi analytically is not possible . the simplest and most often - used methodto approximately evaluate the evppi is the nested monte carlo computation .as pointed out in , however , the standard nested monte carlo computation of the evppi results in biased estimates , which directly follows from jensen s inequality .moreover , it can be inferred from ( * ? ? ?* section 2 ) that the standard nested monte carlo computation can not achieve the square - root convergence rate in the total computational budget .in fact , the author of this paper empirically observed a deteriorated convergence rate for a simple toy problem in .therefore , an unbiased and efficient computation of the evppi might be of particular interest to practitioners . in this line of investigation, there have been some recent attempts to construct such computational algorithms .as far as the author knows , however , every algorithm proposed in the literature has its own restrictions , for instance , on a decision model , and there is no general algorithm with mild assumptions . in this paperwe construct general unbiased monte carlo estimators for the evppi as well as the expected value of perfect information ( evpi ) .our estimators for the evppi on a certain subset of unknown parameters only assume that i.i.d .random sampling from the conditional distribution of the complement of unknown parameters should be possible .if this is not the case , it might be necessary to incorporate markov chain monte carlo sampling into our estimators , although such an investigation is beyond the scope of this paper . for a decision model which satisfies the above assumption ,our estimators are quite simple and straightforward to implement .our approach to construct unbiased estimators is based on the multilevel monte carlo ( mlmc ) method , which was first introduced by heinrich for parametric integration and by giles for path simulation , and was later extended by rhee and glynn .we refer to for a state - of - the - art review on the mlmc method .the idea of the mlmc method can be simply described as follows : for a dimension , let ^s) ] be a sequence of functions which approximates with increasing accuracy ( in the norm ) but also with increasing computational cost .we denote by the true integral of , i.e. , ^s}f(x){\,\mathrm{d}}x . 
\end{aligned}\ ] ] the naive monte carlo computation chooses points independently and randomly from ^s ] , respectively .these estimators are shown to be unbiased .in this setting , the superiority of the mlmc method over the naive monte carlo method depends on the balance between the growth rate of the computational costs for and the decay rate of the variances of .an application of the mlmc method to the nested monte carlo computation in a different context has been done , for instance , in and also mentioned in ( * ? ? ?* section 9 ) .however , the mlmc method has never been applied to computations of the expected value of information . in this paper , we show that the framework of the mlmc method actually fits quite well into constructing unbiased estimators both for the evpi and the evppi .because of their simplicity and efficiency , we believe that our unbiased estimators will be one of the most standard choices particularly for evaluating the evppi .finally , it should be remarked that an unbiased estimator for optimization of expectations has been constructed very recently by blanchet and glynn in a general context , whose main approach is commonly used in this paper .the remainder of this paper is organized as follows . in the next section ,we introduce the definitions of the evpi and the evppi , and then discuss the standard nested monte carlo computations . in section [ sec:3 ] , we construct unbiased estimators for the evpi and the evppi based on the mlmc method , and also briefly discuss some practical issues relating to implementation .we conclude this paper with numerical experiments in section [ sec:4 ] .let be a finite set of decision options .the task of a decision maker is to decide which option is optimal under uncertainty of . here is assumed to be a continuous random variable defined on the -dimensional domain with density , and a monetary value function is assigned for each option . throughout this paper , we assume . 
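as a generic illustration of the randomized multilevel construction , the single term estimator can be sketched as follows ; the level distribution , the coupled differences and the summability conditions are left abstract here , and the coupled sum variant would instead average partial sums weighted by the tail probabilities of the level distribution . this is a sketch of the general recipe , not of the specific evpi / evppi estimators constructed below .

```python
import numpy as np

def single_term_estimate(delta, p_level, n_copies, rng=None):
    """Randomized single-term estimator of Z = sum_{l>=0} E[Delta_l].

    delta(l, rng) : one independent realization of the coupled difference
                    Delta_l (Delta_0 = Z_0, Delta_l = Z_l - Z_{l-1})
    p_level(l)    : probability of selecting level l (must sum to one and
                    put positive mass wherever E[Delta_l] != 0)
    Each copy returns Delta_L / p_level(L) with L drawn from p_level, which
    is unbiased for Z under the usual summability conditions.
    """
    rng = rng if rng is not None else np.random.default_rng()
    total = 0.0
    for _ in range(n_copies):
        l, u, cdf = 0, rng.random(), p_level(0)
        while u > cdf:                      # inverse-CDF draw of the level
            l += 1
            cdf += p_level(l)
        total += delta(l, rng) / p_level(l)
    return total / n_copies
```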
under the risk neutrality assumption ,the optimal option is one which maximizes the expected monetary value :=\int_{\omega_x}f_d(x)p_x(x){\,\mathrm{d}}x .\end{aligned}\ ] ] thus the expected monetary value without additional information is given by ] .the evpi denotes how much the expected monetary value is increased by eliminating uncertainty of .thus the evpi is defined by - \max_{d\in d}{\mathbb{e}}_x\left[f_d\right ] .\end{aligned}\ ] ] note that the evpi is equivalent to how much a decision maker is willing to pay for obtaining perfect information .assume that the random variable is separable into ( possibly correlated ) two random variables as with , and that available information is perfect only for .in this situation , a decision maker can decide an optimal option under uncertainty of after eliminating uncertainty of completely .therefore , the monetary value for a decision maker after is indicated by partial perfect information is given by ] .thus , similarly to the evpi , the evppi on is defined by \right ] - \max_{d\in d}{\mathbb{e}}_x\left[f_d\right ] .\end{aligned}\ ] ] here we recall that the marginal density function of and the conditional density function of given are given by and respectively .since both the evpi and the evppi are often difficult to calculate analytically , the monte carlo computations are used in practice .let us consider the evpi first .for , let and be i.i.d .random samples generated from .the evpi is approximated by we can approximate the evppi in a similar way .let .let be i.i.d .random samples generated from , and i.i.d .random samples generated from . for each , let be i.i.d .random samples generated from .then the evppi is approximated by the following nested form in the case where there is no correlation between and , random samples used in the inner sum can be replaced by i.i.d .random samples generated from for all . herewe would emphasize that both the monte carlo estimators and are biased .that is , \neq { \mathrm{evpi}}\quad \text{and}\quad { \mathbb{e}}\left[\overline{{\mathrm{evppi}}}_{x^{(1)}}\right ] \neq { \mathrm{evppi}}_{x^{(1)}}. \end{aligned}\ ] ] this result follows directly from jensen s inequality . in case of the evpi , we have & = { \mathbb{e}}\left[\frac{1}{n}\sum_{n=1}^{n}\max_{d\in d}f_d(x_n)\right ] - { \mathbb{e}}\left[\max_{d\in d}\frac{1}{l}\sum_{l=1}^{l}f_d(x'_l)\right ] \\ & \leq { \mathbb{e}}\left[\frac{1}{n}\sum_{n=1}^{n}\max_{d\in d}f_d(x_n)\right ] - \max_{d\in d}{\mathbb{e}}\left[\frac{1}{l}\sum_{l=1}^{l}f_d(x'_l)\right ] \\ & = { \mathbb{e}}_x\left[\max_{d\in d}f_d \right ] - \max_{d\in d}{\mathbb{e}}_x\left[f_d\right ] = { \mathrm{evpi } } , \end{aligned}\ ] ] where the inequality stems from jensen s inequality .it is clear that the first term of the evpi can be estimated without any bias , whereas the second term is estimated with a positive bias .we refer to for a possible bounding technique to quantify the bias .therefore , in total , is a downward biased estimator of . in case of the evppi ,both the first and second terms of the evppi are estimated with positive biases .thus it is difficult to conclude whether the estimator is biased either upward or downward . nevertheless , these two biases are not cancelled out , so that we have \neq { \mathrm{evppi}}_{x^{(1)}} ] are not known in advance . 
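for comparison , the standard nested monte carlo estimator of the evppi described above can be sketched as follows for a generic decision model ; the sampling routines and payoff functions are placeholders , and the point of the sketch is only to make the structure explicit , in particular the inner maximum of a sample mean that causes the jensen - type bias .

```python
import numpy as np

def nested_evppi(draw_x1, draw_x2_given_x1, payoffs, n_outer, n_inner, n_plain, rng):
    """Standard nested Monte Carlo estimator of EVPPI on x1 (biased, because
    the maximum of an inner sample mean is biased upward by Jensen's inequality).

    draw_x1(rng)              -> one draw of x1 from its marginal
    draw_x2_given_x1(x1, rng) -> one draw of x2 from the conditional given x1
    payoffs                   -> list of functions f_d(x1, x2), one per option
    """
    n_d = len(payoffs)

    # first term: E_{x1}[ max_d E_{x2|x1}[ f_d(x1, x2) ] ]
    first = 0.0
    for _ in range(n_outer):
        x1 = draw_x1(rng)
        means = np.zeros(n_d)
        for _ in range(n_inner):
            x2 = draw_x2_given_x1(x1, rng)
            means += np.array([f(x1, x2) for f in payoffs])
        first += np.max(means / n_inner)
    first /= n_outer

    # second term: max_d E_x[ f_d ], estimated from independent joint draws
    totals = np.zeros(n_d)
    for _ in range(n_plain):
        x1 = draw_x1(rng)
        x2 = draw_x2_given_x1(x1, rng)
        totals += np.array([f(x1, x2) for f in payoffs])
    return first - np.max(totals / n_plain)
```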
as an alternative approach ,let us specify the form of as for all with .let us assume that the expectations ] is replaced by , and the expected computational cost to be finite , it suffices that holds .moreover , the optimal choice for obtained in the last paragraph gives .the same value of can be obtained , as done in where the special case with and is considered , by minimizing the work - normalized variance furthermore , in practical applications , one may set the total computational cost instead of , i.e. , the number of i.i.d .copies used in the estimators . in this case , we first generate a random sequence independently from and then define by , we conduct numerical experiments for a simple toy problem . in order to evaluate the approximation error quantitatively, we design a toy problem such that the evpi and the evppi can be calculated analytically .let us consider the following setting .let be a set of two possible actions which can be taken by a decision maker under uncertainty of .for we define where and .the prior probability density of is given by for given and .then we have : = \max\left\ { \int_{{\mathbb{r}}^s}f_{d_1}(x)p(x){\,\mathrm{d}}x , 0\right\ } = \max\left\ { w_0+\sum_{j=1}^{s}w_j\mu_j , 0\right\}.\end{aligned}\ ] ] now let be a subset of . for simplicity , let us focus on the case .we write , and .the evppi on can be calculated analytically as follows .since there is no correlation between and , we have \right ] \\ & \quad = \int_{{\mathbb{r}}^{|u|}}\max\left\ { \int_{{\mathbb{r}}^{s-|u|}}f_{d_1}(x_u , x_{-u})\prod_{j\in -u}p_{x_j}(x_j){\,\mathrm{d}}x_{-u } , 0\right\}\prod_{j\in u}p_{x_j}(x_j){\,\mathrm{d}}x_u \\ & \quad = \int_{{\mathbb{r}}^{|u|}}\max\left\ { \sum_{j\in u}w_jx_j+w_0+\sum_{j\in -u}w_j\mu_j , 0\right\}\prod_{j\in u}p_{x_j}(x_j){\,\mathrm{d}}x_u \\ & \quad = \int_{\omega_{\geq 0}}\left ( \sum_{j\in u}w_jx_j+w_0+\sum_{j\in -u}w_j\mu_j\right ) \prod_{j\in u}p_{x_j}(x_j){\,\mathrm{d}}x_u , \end{aligned}\ ] ] where we write . 
by changing the variables according to where \in { \mathbb{r}}^{|u|\times |u| } , \end{aligned}\ ] ] we have i.e., the sum of s is nothing but the last component of .moreover , the probability density of is the normal density with the mean and the variance , the above integral can be written into \right ] & = \int_{\omega_{\geq 0}}\left ( \sum_{j\in u}w_jx_j+w_0+\sum_{j\in -u}w_j\mu_j\right ) \prod_{j\in u}p_{x_j}(x_j){\,\mathrm{d}}x_u \\ & = \int_{a}^{\infty}\left ( y_{|u|}-a\right ) p(y_{|u|}){\,\mathrm{d}}y_{|u| } , \end{aligned}\ ] ] where we write finally , the last integral equals \right ] = \left [ 1-\phi\left ( -\frac{\mu_{{\mathrm{all}}}}{\sigma_u}\right)\right]\mu_{{\mathrm{all } } } + \phi\left ( -\frac{\mu_{{\mathrm{all}}}}{\sigma_u}\right ) \sigma_u,\end{aligned}\ ] ] where we write and further , denotes the standard normal density function and does the cumulative distribution function for .thus , the evppi on is given by \mu_{{\mathrm{all } } } + \phi\left ( -\frac{\mu_{{\mathrm{all}}}}{\sigma_u}\right ) \sigma_u -\max\left\ { \mu_{{\mathrm{all } } } , 0\right\}.\end{aligned}\ ] ] note that the analytical expression for the evpi is given by setting .in what follows , we focus on the case where and , , for all for the sake of simplicity .first , let us consider the evpi computation .the analytical calculation of the evpi is given by .we use the three estimators ( [ eq : evpi_mc ] ) , ( [ eq : est_evpi_single ] ) and ( [ eq : est_evpi_coupled ] ) of the evpi for approximate evaluations .when the total computational budget equals , the naive monte carlo estimator ( [ eq : evpi_mc ] ) is set by , whereas our proposed estimators ( [ eq : est_evpi_single ] ) and ( [ eq : est_evpi_coupled ] ) are set as described in the last paragraph of subsection [ subsec : imple ] with and where .for a given total computational budget , 100 independent computations are conducted for each estimator .figure [ fig : evpi ] compares the boxplots of the evpi computations obtained by three estimators as functions of with .it can be seen that the naive monte carlo estimator gives more accurate results than our estimators . 
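for the toy model above , the closed - form expression can be evaluated directly ; the sketch below uses the identity e [ max ( y , 0 ) ] = m * cdf ( m / s ) + s * pdf ( m / s ) for y ~ n ( m , s^2 ) , distinguishing the normal cumulative distribution function and density in code . the parameter values shown are hypothetical placeholders , since the actual values used in the experiments are not recoverable from this text .

```python
import numpy as np
from scipy.stats import norm

def analytic_evppi(w0, w, mu, sigma, u):
    """Closed-form EVPPI on the index set u for the linear toy model with
    payoffs f_{d1}(x) = w0 + sum_j w_j x_j and f_{d2}(x) = 0, and
    independent priors x_j ~ N(mu_j, sigma_j^2).

    Uses E[max(Y, 0)] = m * Phi(m / s) + s * phi(m / s) for Y ~ N(m, s^2),
    where Phi is the normal cdf and phi the normal pdf.
    """
    w, mu, sigma = (np.asarray(a, dtype=float) for a in (w, mu, sigma))
    idx = list(u)
    m_all = w0 + float(np.dot(w, mu))
    s_u = float(np.sqrt(np.sum((w[idx] * sigma[idx]) ** 2)))
    first = m_all * norm.cdf(m_all / s_u) + s_u * norm.pdf(m_all / s_u)
    return first - max(m_all, 0.0)

# hypothetical parameter values (for illustration only)
w0, w = 0.0, np.ones(5)
mu, sigma = np.zeros(5), np.ones(5)
print(analytic_evppi(w0, w, mu, sigma, u={0}))            # EVPPI on one parameter
print(analytic_evppi(w0, w, mu, sigma, u=set(range(5))))  # u = all parameters gives the EVPI
```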
in case of the evpi ,the naive monte carlo estimator is not of the nested form so that the approximation error decays at a rate of if the bias decays at a faster rate than the canonical monte carlo rate .moreover , it can be expected that the variances of our estimators are much larger than that of the naive monte carlo estimator , which yields wider variations among the independent evpi computations by our estimators as well as the difficulty in confirming the unbiasedness of our estimators when the total computational budget is small .let us move on to the evppi computation , in which case the situation changes significantly .because of the invariance of parameters , we focus on computing the evppi s on .the analytical calculations of the evppi s are given by , , , and , respectively .we use the three estimators ( [ eq : evppi_mc ] ) , ( [ eq : evppi_mlmc ] ) with the single term estimator ( ) , and ( [ eq : evppi_mlmc ] ) with the coupled sum estimator ( ) of the evpi for approximate evaluations .when the total computational budget equals , the nested monte carlo estimator ( [ eq : evppi_mc ] ) is set by , and as heuristically suggested in subsection [ subsec : nested_mc ] with .in fact , although we also conducted the same numerical experiments by setting both and , we obtained similar results to those with , which are thus omitted in this paper .our proposed estimators ( [ eq : evppi_mlmc ] ) with are set as described in the last paragraph of subsection [ subsec : imple ] with and where .for a given total computational budget , 100 independent computations are conducted for each estimator .figure [ fig : evppi ] compares the boxplots of the evppi computations on ( from upper panels to lower panels ) obtained by three estimators as functions of with .it is obvious that the convergence behavior of the nested monte carlo estimator is much worse than that of the monte carlo estimator used for computing the evpi , as can be expected from the heuristic argument in subsection [ subsec : nested_mc ] . on the other hand , the convergence behaviors of our estimatorsdo not differ so much whether they are used for computing either the evpi or the evppi , and the length of the boxes for both of our estimators decays to 0 much faster than that for the nested monte carlo estimator .hence , as can be seen , both of our estimators give more accurate results than that of the nested monte carlo estimator as increases . in practice , we recommend to use the coupled sum estimator since there are several outliers with large evppi values found in case of the single term estimator .99 andradttir , s. , glynn , p. w , : computing bayesian means using simulation , acm trans . model .* 26 * , 2 , article no . 10 ( 2016 ) .bates , m. e. , sparrevik , m. , lichy , n. , linkov , i. : the value of information for managing contaminated sediments , environ .technol . * 48 * , 94789485 ( 2014 ) .bickel , j. e. , gibson , r. l. , mcvay , d. a. , pickering , s. , waggoner , j. : quantifying the reliability and value of 3d land seismic , spe res .eval . & eng .* 11 * , 832841 ( 2008 ) .blanchet , j. h. , glynn , p. w. : unbiased monte carlo for optimization and functions of expectations via multi - level randomization , in : proc .2015 winter simulation conference ( 2015 ) .bratvold , r. b. , bickel , j. e. , lohne , h. p. : value of information in the oil and gas industry : past , present , and future , spe res .eval . & eng .* 12 * , 630638 ( 2009 ) .brennan , a. , kharroubi , s. , ohagan , a. , chilcott , j. 
: calculating partial expected value of perfect information via monte carlo sampling algorithms , med .decis . making * 27* , 448470 ( 2007 ) .bujok , k. , hambly , b. m. , reisinger , c. : multilevel simulation of functionals of bernoulli random variables with application to basket credit derivatives , methodol .* 17 * , 579604 ( 2015 ) .claxton , k. : bayesian approaches to the value of information : implications for the regulation of new health care technologies , health econ .* 8 * , 269274 ( 1999 ) .coyle , d. , oakley , j. e. : estimating the expected value of partial perfect information : a review of methods , eur . j. health econ .* 9 * , 251259 ( 2008 ) .dakin , m. e. , toll , j. e. , small , m. j. , brand , k. p. : risk - based environmental remediation : bayesian monte carlo analysis and the expected value of sample information , risk analysis * 16* , 6779 ( 1996 ) .delqui , p. : the value of information and intensity of preference , decision analysis * 5 * , 129139 ( 2008 ) .felli , j. c. , hazen , g. b. : sensitivity analysis and the expected value of perfect information , med .decis . making * 18 * , 95109 ( 1998 ) .giles , m. b. : multilevel monte carlo path simulation , operations research * 56 * , 607617 ( 2008 ) .giles , m. b. : multilevel monte carlo methods , acta numer . * 24 * , 259328 ( 2015 ) .heinrich , s. : monte carlo complexity of global solution of integral equations , j. complexity * 14 * , 151175 ( 1998 ) .howard , r. a. : information value theory , ieee trans .* 2 * , 2226 ( 1966 ) .madan , j. , ades , a. e. , price , m. , maitland , k. , jemutai , j. , revill , p. , welton , n. j. : strategies for efficient computation of the expected value of partial perfect information , med .decis . making * 34 * , 327342 ( 2014 ) .mak , w .- k . ,morton , d. p. , wood , r. k. : monte carlo bounding techniques for determining solution quality in stochastic programs , operations research letters * 24 * , 4756 ( 1999 ) .nakayasu , m. , goda , t. , tanaka , k. , sato , k. : evaluating the value of single - point data in heterogeneous reservoirs with the expectation - maximization algorithm , spe econ . & mgmt .* 8 * , 110 ( 2016 ) .oakley , j. e. : decision - theoretic sensitivity analysis for complex computer models , technometrics * 51 * , 121129 ( 2009 ) .oakley , j. e. , brennan , a. , tappenden , p. , chilcott , j. : simulation sample sizes for monte carlo partial evpi calculations , j. health econ .* 29 * , 468477 ( 2010 ) .raiffa , h. : decision analysis : introductory lectures on choices under uncertainty .addison - wesley publishing company , massachusetts ( 1968 ) .rhee , c .- h . ,glynn , p. w. : a new approach to unbiased estimation for sdes , in : proc .2012 winter simulation conference ( 2012 ) .rhee , c .- h ., glynn , p. w. : unbiased estimation with square root convergence for sde models , operations research * 63 * , 10261043 ( 2015 ) .sadatsafavi , m. , bansback , n. , zafari , z. , najafzadeh , m. , marra , c. : need for speed : an efficient algorithm for calculation of single - parameter expected value of partial perfect information , value health * 16 * , 438448 ( 2013 ) .samson , d. , wirth , a. , rickard , j. : the value of information from multiple sources of uncertainty in decision analysis , eur .39 * , 254260 ( 1989 ) .sato , k. : value of information analysis for adequate monitoring of carbon dioxide storage in geological reservoirs under uncertainty , int .j. greenh .gas control * 5 * , 12941302 ( 2011 ) .strong , m. , oakley , j. e. 
: an efficient method for computing single - parameter partial expected value of perfect information , med .decis . making * 33 * , 755766 ( 2013 ) .( from upper to lower ) by the naive nested monte carlo estimator ( left ) , the single term estimator ( middle ) , and the coupled sum estimator ( right ) with the total computational budgets .,title="fig:",scaledwidth=32.0% ] ( from upper to lower ) by the naive nested monte carlo estimator ( left ) , the single term estimator ( middle ) , and the coupled sum estimator ( right ) with the total computational budgets .,title="fig:",scaledwidth=32.0% ] ( from upper to lower ) by the naive nested monte carlo estimator ( left ) , the single term estimator ( middle ) , and the coupled sum estimator ( right ) with the total computational budgets .,title="fig:",scaledwidth=32.0% ] + ( from upper to lower ) by the naive nested monte carlo estimator ( left ) , the single term estimator ( middle ) , and the coupled sum estimator ( right ) with the total computational budgets .,title="fig:",scaledwidth=32.0% ] ( from upper to lower ) by the naive nested monte carlo estimator ( left ) , the single term estimator ( middle ) , and the coupled sum estimator ( right ) with the total computational budgets .,title="fig:",scaledwidth=32.0% ] ( from upper to lower ) by the naive nested monte carlo estimator ( left ) , the single term estimator ( middle ) , and the coupled sum estimator ( right ) with the total computational budgets .,title="fig:",scaledwidth=32.0% ] + ( from upper to lower ) by the naive nested monte carlo estimator ( left ) , the single term estimator ( middle ) , and the coupled sum estimator ( right ) with the total computational budgets .,title="fig:",scaledwidth=32.0% ] ( from upper to lower ) by the naive nested monte carlo estimator ( left ) , the single term estimator ( middle ) , and the coupled sum estimator ( right ) with the total computational budgets .,title="fig:",scaledwidth=32.0% ] ( from upper to lower ) by the naive nested monte carlo estimator ( left ) , the single term estimator ( middle ) , and the coupled sum estimator ( right ) with the total computational budgets .,title="fig:",scaledwidth=32.0% ] + ( from upper to lower ) by the naive nested monte carlo estimator ( left ) , the single term estimator ( middle ) , and the coupled sum estimator ( right ) with the total computational budgets .,title="fig:",scaledwidth=32.0% ] ( from upper to lower ) by the naive nested monte carlo estimator ( left ) , the single term estimator ( middle ) , and the coupled sum estimator ( right ) with the total computational budgets .,title="fig:",scaledwidth=32.0% ] ( from upper to lower ) by the naive nested monte carlo estimator ( left ) , the single term estimator ( middle ) , and the coupled sum estimator ( right ) with the total computational budgets .,title="fig:",scaledwidth=32.0% ]
the expected value of partial perfect information ( evppi ) denotes the value of eliminating uncertainty on a subset of unknown parameters involved in a decision model . the evppi can be regarded as a decision - theoretic sensitivity index , and has been widely used for identifying relatively important unknown parameters . it follows from jensen s inequality , however , that the standard nested monte carlo computation of the evppi results in biased estimates . in this paper we introduce two unbiased monte carlo estimators for the evppi based on the multilevel monte carlo ( mlmc ) method , introduced by heinrich ( 1998 ) and giles ( 2008 ) , and its extension by rhee and glynn ( 2012 , 2015 ) . our unbiased estimators are simple and straightforward to implement , and thus are of high practical value . numerical experiments show that even the convergence behaviors of our unbiased estimators are superior to those of the standard nested monte carlo estimator . _ keywords _ : value of information , expected value of partial perfect information , unbiased estimation , multilevel monte carlo
in many applications , the input and output of the controller are quantized signals .this is due to the physical properties of the actuators / sensors and the data - rate limitation of links connected to the controller .quantized control for linear time - invariant systems actively studied from various point of view , as surveyed in .moreover , in the context of systems with discrete jumps such as switched systems and piecewise affine ( pwa ) systems , control problems with limited information have recently received increasing attention . for sampled - data switched systems , a stability analysis under finite - level static quantization has been developed in , and an encoding and control strategy for stabilization has been proposed in the state feedback case , whose related works have been presented for the output feedback case and for the case with bounded disturbances . also , our previous work has studied the stabilization of continuous - time switched systems with quantized output feedback , based on the results in . however ,relatively little work has been conducted on quantized control for pwa systems . in ,a sufficient condition for input - to - state stability has been obtained for time - delay pwa systems with quantization signals , but logarithmic quantizers in have an infinite number of quantization levels . the main objective of this paper is to stabilize discrete - time pwa systems with quantized signals . in order to achieve the local asymptotic stabilization of discrete - time pwa plants with finite data rates , we extend the event - based encoding method in .it is assumed that we are given feedback controllers that stabilize the closed - loop system in the sense that there exists a piecewise quadratic lyapunov function . in the input quantization case, the controller receives the original state . on the other hand , in the state quantization case , the quantized state and the currently active mode of the plant are available to the controller .the information on the active mode prevents a mode mismatch between the plant and the controller , and moreover , allows the controller side to recompute a better quantization value if the quantized state transmitted from the quantizer is near the boundaries of quantization regions .this recomputation is motivated in section 7.2 in .we also investigate the design of quantized feedback controllers . to this end , we consider the stabilization problem of discrete - time pwa systems with bounded disturbances ( under no quantization ) . the lypunov - based stability analysis and stabilization of discrete - time pwa systems has been studied in and in terms of linear matrix inequalities ( lmis ) and bilinear matrix inequalities ( bmis ) . in proofs that lyapunov functions decrease along the trajectories of pwa systems , the one - step reachable set, that is , the set to which the state belong in one step , plays an important role . in stability analysis, the one - step reachable set can be obtained by linear programming . 
by contrast , in the stabilization case , since the next - step state depends on the control input , it is generally difficult to obtain the one - step reachable set .therefore many previous works for the design of stabilizing controllers assume that the one - step reachable set is the total state space .however , if disturbances are bounded , then this assumption leads to conservative results and high computational loads as the number of the plant mode increases .we aim to find the one - step reachable set for pwa systems with bounded disturbances . to this effect, we derive a sufficient condition on feedback controllers for the state to belong to a given polyhedron in one step .this condition can be used to add constraints on the state and the input as well .furthermore , we obtain a set containing the one - step reachable set by using the information of the input matrix and the input bound .this set is conservative because the affine feedback structure for mode is not considered , but it can be used when we design the polyhedra that are assumed to be given in the above sufficient condition . combining the proposed condition with results in for lyapunov functions to be positive and decrease along the trajectories , we can design stabilizing controllers for pwa systems with bounded disturbances .this paper is organized as follows .the next section shows a class of quantizer and a basic assumption on stability . in sections iii and iv , we present an encoding strategy to achieve local stability for pwa systems in the input quantization case and the state quantization case , respectively . in sectionv , we study the one - step reachable set for the stabilization problem of pwa systems with bounded disturbances . finally , concluding remarks are given in section vi . due to space constraints , all proofs and a numerical example have been omitted and can be found in * ? ? ?_ notation _ : for a set , we denote by the closure of . for sets , let denote their minkowski sum .let and denote the smallest and the largest eigenvalue of .let denote the transpose of . for , we denote the -th entry of by .let be a vector all of whose entries are one . for vectors , the inequality means that for every .on the other hand , for a square matrix , the notation ( ) means that is symmetric and semi - positive ( positive ) definite . the euclidean norm of is denoted by .the euclidean induced norm of is defined by .the -norm of ^{\top} ] . since } \subset \mathcal{b}_i ] for every .hence for all unless the quantizer saturates. moreover , if we define } ] . thus holds for every and } ] , where is the number of rows of . if we define by in satisfies ._ it suffices to prove that indeed , if holds , then implies where and .this leads to .let us study the first element of .let , , and be the -th entry of , and the first entry of , respectively .also let and be the -th element of and , respectively .if and , then the first element of satisfies since we have the same result for the other elements of , it follows that holds .let us next investigate the case when and are unknown .the set given in theorem [ lem : set_containing_si ] works for stability analysis in the presence of bounded disturbances , but is dependent on the feedback gain and the affine term .hence we can not use it for their design . 
herewe obtain a set , which does not depend on , .moreover , we derive a sufficient condition on , for the state to belong to a given polyhedron in one step .let be the polyhedron defined by and we make an additional constraint that for all . similarly to , using the information on the input matrices and the input bound , we obtain a set independent of , to which the state belong in one step .[ coro : si_sufficient_with_disturbance ] _ assume that for each , and satisfy and if and . let the closure of be given by . define then we have for all and , and hence in satisfies . _define . to show, it suffices to prove that for all and , there exists such that .suppose , on the contrary , that there exist and such that for every .since , it follows that for some . also , by definition since , , , and , it follows that .hence we have for some .thus we have a contradiction and holds for every and .let us next prove .let . by definition , there exists and such that . also , we see from that there exists such that . hence we have , which implies .thus we have .see remark [ rem : total_ss_cond ] for the assumption that and for all . *( a ) * in theorem [ coro : si_sufficient_with_disturbance ] , we have used the counterpart of given in theorem [ lem : set_containing_si ] , but one can easily modify the theorem based on in corollary [ cor : tilde_sei ] . *( b ) * if is full row rank , then for all , , and , there exists such that . in this case, we have the trivial fact : .theorem [ coro : si_sufficient_with_disturbance ] ignores the affine feedback structure ( ) , which makes this theorem conservative .since the one - step reachable set depends on the unknown parameters and , we can not utilize the feedback structure unless we add some conditions on and . in the next theorem ,we derive linear programming on and for a bounded , which is a sufficient condition for the one - step reachable set under bounded disturbances to be contained in a given polyhedron .[ thm : add_cond_ki ] _ let a polyhedron , and let be a bounded polyhedron .let and be the vertices of and , respectively .a matrix and a vector satisfy for all and if linear programming is feasible for every and for every ._ define . relying on the results ( * ? ? ? * chap .6 ) ( see also ) , we have where means the convex hull of a set .we therefore obtain for all and if and only if , or , holds for every and .thus the desired conclusion is derived . *( a ) * to use theorem [ thm : add_cond_ki ] , we must design a polyhedron in advance .one design guideline is to take such that for some , where is defined in theorem [ coro : si_sufficient_with_disturbance ] . *( b ) * as in theorem [ lem : set_containing_si ] , the conservatism in theorem [ thm : add_cond_ki ] arises only from . *( c ) * theorem [ thm : add_cond_ki ] gives a trade - off on computational complexity : in order to reduce the number of pairs such that holds , we need to solve the linear programming problem . *( d ) * when the state is quantized , then in , and hence depends on linearly .in this case , however , theorem [ thm : add_cond_ki ] can be used for the controller design .[ rem : total_ss_cond ] assumptions [ ass : input_ver ] , [ ass : state_ver ] and theorem [ coro : si_sufficient_with_disturbance ] require conditions on and that and for all and all . if and , then these conditions always hold . 
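as a rough illustration of the vertex-type condition in theorem [ thm : add_cond_ki ] above, the sketch below checks a polyhedral containment F z <= g for the affine closed-loop map at every pair of vertices of the (bounded) cell and of the disturbance set. since the exact linear program stated in the theorem is not recoverable from the text, this direct vertex evaluation is only an assumed stand-in, and all matrix names are generic placeholders for the paper's notation.

```python
# A rough illustration of the vertex-type condition behind theorem
# [thm:add_cond_ki]: because the closed-loop map is affine in (x, w),
# checking a polyhedral containment F z <= g at all pairs of vertices of
# the bounded cell and of the disturbance set suffices.
import numpy as np
from itertools import product

def one_step_containment(A, B, a, K, b, cell_vertices, dist_vertices, F, g, tol=1e-9):
    Acl = A + B @ K                       # closed-loop "A" of this mode
    for v, w in product(cell_vertices, dist_vertices):
        z = Acl @ v + B @ b + a + w       # next state at a vertex pair
        if np.any(F @ z > g + tol):
            return False                  # condition violated at (v, w)
    return True

# hypothetical 1-D example: cell [-1, 1], disturbances [-0.1, 0.1], target |z| <= 1
ok = one_step_containment(np.array([[0.9]]), np.array([[1.0]]), np.array([0.0]),
                          np.array([[-0.2]]), np.array([0.0]),
                          [np.array([-1.0]), np.array([1.0])],
                          [np.array([-0.1]), np.array([0.1])],
                          np.array([[1.0], [-1.0]]), np.array([1.0, 1.0]))
print(ok)
```

by convexity of the affine image, a pass over the vertex pairs is enough, which is what turns the containment requirement into a finite (linear) test.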
if but if is a bounded polyhedron , then theorem [ thm : add_cond_ki ] gives linear programming that is sufficient for to hold .also , theorem [ thm : add_cond_ki ] with , , and can be applied to .if and hold for bounded and , then we can easily set the quantization parameter in to avoid quantizer saturation .similarly , we can use theorem [ thm : add_cond_ki ] for constraints on the state and the input . by theorems [ coro : si_sufficient_with_disturbance ] and [ thm :add_cond_ki ] , we obtain linear programing on and for a set containing the one - step reachable set under bounded disturbances .however , in lmi conditions of for and , is obtained via the variable transformation , where and are auxiliary variables . without variable transformation / elimination ,we obtain only bmi conditions for to hold as in theorem 7.2.2 of .the following theorem also gives bmi conditions on for and to hold , but we can apply the cone complementary linearization ( ccl ) algorithm to these bmi conditions : [ thm : pwa_lmi ] _ consider the pwa system with control affine term . let a matrix satisfy if and and if there exist , , and with all elements non - negative such that and hold for all and , then there exist such that ( ) satisfies and for every , , and . _ furthermore , consider the case and . forgiven with , if there exist , , and with all elements non - negative such that and hold for all and , then there exist such that ( ) satisfies and for every , , and .since the positive definiteness of implies , it is enough to show that and lead to and , respectively . for satisfying the second lmi in and, we have .furthermore , if and only if .define .applying the schur complement formula to the lmi condition in , we have since , there exists such that for every .hence we obtain . as regards, it follows from theorem 3.1 in that holds for some if pre- and post - multiplying and using the schur complement formula , we obtain the first lmi in .since , the conditions in theorem [ thm : pwa_lmi ] are feasible if the problem of minimizing under / has a solution .in addition to lmis and , we can consider linear programming for the constraint on the one - step reachable set .the ccl algorithm solves this constrained minimization problem .the ccl algorithm may not find the global optimal solution , but , in general , we can solve the minimization problem in a more computationally efficient way than the original non - convex feasibility problem .consider a pwa system in with quantized state feedback , where the matrix and the vector in characterizing the region are given by let and let us use a uniform - type quantizer whose parameters in are and . by using theorems [ thm : add_cond_ki ] and [ thm : pwa_lmi ] , we designed feedback gains such that the lyapunov function ( ) satisfies and for every , , and , and the following constraint conditions hold the resulting were given by and we obtained the decease rate in of the `` zoom '' parameter with and .[ fig : state_trajectory_example ] shows the state trajectories with initial states on the boundaries and .we observe that all trajectories converges to the origin and that the constraint conditions and are satisfied in the presence of quantization errors .we have provided an encoding strategy for the stabilization of pwa systems with quantized signals . 
for the stability of the closed - loop system , we have shown that the piecewise quadratic lyapunov function decreases in the presence of quantization errors . for the design of quantized feedback controllers , we have also studied the stabilization problem of pwa systems with bounded disturbances . in order to reduce the conservatism and the computational cost of controller designs , we have investigated the one - step reachable set . here we give the proof of the following proposition for completeness : _ let be a bounded and closed polyhedron , and let be the vertices of . for every , we have _ choose arbitrarily , and let the -th entry of , , and be , , and , respectively . for every , we have hence this completes the proof . the first author would like to thank dr . k. okano of the university of california , santa barbara for helpful discussions on quantized control for pwa systems . the authors are also grateful to the anonymous reviewers whose comments greatly improved this paper . m. xiaowu and g. yang , `` global input - to - state stabilization with quantized feedback for discrete - time piecewise affine systems with time delays , '' _ j. syst . sci . complexity _ , vol . 26 , pp . 925 - 939 , 2013 . j. qiu , g. feng , and h. gao , `` approaches to robust static output feedback control of discrete - time piecewise - affine systems with norm - bounded uncertainties , '' _ int . j. robust and nonlinear control _ , vol . 21 , pp . 790 - 814 , 2011 . l. e. ghaoui , f. oustry , and m. aitrami , `` a cone complementarity linearization algorithm for static output - feedback and related problems , '' _ ieee trans . _ , pp . 1171 - 1176 , 1997 .
this paper studies quantized control for discrete - time piecewise affine systems . for given stabilizing feedback controllers , we propose an encoding strategy for local stability . if the quantized state is near the boundaries of quantization regions , then the controller can recompute a better quantization value . for the design of quantized feedback controllers , we also consider the stabilization of piecewise affine systems with bounded disturbances . in order to derive a less conservative design method with low computational cost , we investigate a region to which the state belongs in the next step .
in classical statistics , it is often assumed that the outcome of an experiment is precise and the uncertainty of observations is solely due to randomness . under this assumption ,numerical data are represented as collections of real numbers . in recent years , however , there has been increased interest in situations when exact outcomes of the experiment are very difficult or impossible to obtain , or to measure .the imprecise nature of the data thus collected is caused by various factors such as measurement errors , computational errors , loss or lack of information . under such circumstances and , in general , any other circumstances such as grouping and censoring ,when observations can not be pinned down to single numbers , data are better represented by intervals .practical examples include interval - valued stock prices , oil prices , temperature data , medical records , mechanical measurements , among many others . in the statistical literature ,random intervals are most often studied in the framework of random sets , for which the probability - based theory has developed since the publication of the seminal book matheron ( 1975 ) .studies on the corresponding statistical methods to analyze set - valued data , while still at the early stage , have shown promising advances .see stoyan ( 1998 ) for a comprehensive review .specifically , to analyze interval - valued data , the earliest attempt probably dates back to 1990 , when diamond published his paper on the least squares fitting of compact set - valued data and considered interval - valued input and output as a special case ( see diamond ( 1990 ) ) . due to the embedding theorems started by brunn and minkowski and later refined by radstrm ( see radstrm ( 1952 ) ) and hrmander ( see hrmander ( 1954 ) ) , , the space of all nonempty compact convex subsets of , is embedded into the banach space of support functions .diamond ( 1990 ) defined an metric in this banach space of support functions , and found the regression coefficients by minimizing the metric of the sum of residuals .this idea was further studied in gil et al .( 2002 ) , where the metric was replaced by a generalized metric on the space of nonempty compact intervals , called `` w - distance '' , proposed earlier by krner ( 1998 ) .separately , billard and diday ( 2003 ) introduced the central tendency and dispersion measures and developed the symbolic interval data analysis based on those .( see also carvalho et al .( 2004 ) . )however , none of the existing literature considered distributions of the random intervals and the corresponding statistical methods .it is well known that normality plays an important role in classical statistics .but the normal distribution for random sets remained undefined for a long time , until the 1980s when the concept of normality was first introduced for compact convex random sets in the euclidean space by lyashenko ( 1983 ) .this concept is especially useful in deriving limit theorems for random sets .see , puri et al .( 1986 ) , norberg ( 1984 ) , among others .since a compact convex set in is a closed bounded interval , by the definition of lyashenko ( 1983 ) , a normal random interval is simply a gaussian displacement of a fixed closed bounded interval . from the point of view of statistics ,this is not enough to fully capture the randomness of a general random interval . in this paper, we extend the definition of normality given by lyashenko ( 1983 ) and propose a normal hierarchical model for random intervals . 
with one more degree of freedom on `` shape '' , our model conveniently captures the entire randomness of random intervals via a few parameters .it is a natural extension from lyashenko ( 1983 ) yet a highly practical model accommodating a large class of random intervals . in particular , when the length of the random interval reduces to zero , it becomes the usual normal random variable .therefore , it can also be viewed as an extension of the classical normal distribution that accounts for the extra uncertainty added to the randomness .in addition , there are two interesting properties regarding our normal hierarchical model : 1 ) conditioning on the first hierarchy , it is exactly the normal random interval defined by lyashenko ( 1983 ) , which could be a very useful property in view of the limit theorems ; 2 ) with certain choices of the distributions , a linear combination of our normal hierarchical random intervals follows the same normal hierarchical distribution .an immediate consequence of the second property is the possibility of a factor model for multi - dimensional random intervals , as the `` factor '' will have the same distribution as the original intervals . for random sets models , it is important , in the stage of parameter estimation , to take into account the geometric characteristics of the observations .for example , tanaka et al . ( 2008 ) proposed an approximate maximum likelihood estimation for parameters in the neyman - scott point processes based on the point pattern of the observation window .for another model , heinrich ( 1993 ) discussed several distance functions ( called `` contrast functions '' ) between the parametric and the empirical contact distribution function that are used towards parameter estimation for boolean models .bearing this in mind , to estimate the parameters of our normal hierarchical model , we propose a minimum contrast estimator ( mce ) based on the hitting function ( capacity functional ) that characterizes the distribution of a random interval by the hit - and - miss events of test sets .see matheron ( 1975 ) .in particular , we construct a contrast function based on the integral of a discrepancy function between the empirical and the parametric distribution measure .theoretically , we show that under certain conditions our mce satisfies a strong consistency and asymptotic normality .the simulation study is consistent with our theorems .we apply our model to analyze a daily temperature range data and , in this context , we have derived interesting and promising results .the use of an integral measure of probability discrepancy here is not new .for example , the integral probability metrics ( ipms ) , widely used as tools for statistical inferences , have been defined as the supremum of the absolute differences between expectations with respect to two probability measures .see , e.g. , zolotarev ( 1983 ) , mller ( 1997 ) , and sriperumbudur et al .( 2012 ) , for references . especially , the empirical estimation of ipms proposed by sriperumbudur et al .( 2012 ) drastically reduces the computational burden , thereby emphasizing the practical use of the ipms .this idea is potentially applicable to our mce and we expect similar reduction in computational intensity as for ipms . 
the rest of the paper is organized as follows .section [ sec : model ] formally defines our normal hierarchical model and discusses its statistical properties .section [ sec : mce ] introduces a minimum contrast estimator for the model parameters , and presents its asymptotic properties .a simulation study is reported in section [ sec : simu ] , and a real data application is demonstrated in section [ sec : real ] .we give concluding remarks in section [ sec : conclu ] .proofs of the theorems are presented in section [ sec : proofs ] .useful lemmas and other proofs are deferred to the appendix .let be a probability space . denote by the collection of all non - empty compact subsets of .a random compact set is a borel measurable function , being equipped with the borel -algebra induced by the hausdorff metric .if is convex for almost all , then is called a random compact convex set .( see molchanov ( 2005 ) , p.21 , p.102 . )denote by the collection of all compact convex subsets of . by theorem 1 of lyashenko ( 1983 ) , a compact convex random set in the euclidean space is gaussian if and only if can be represented as the minkowski sum of a fixed compact convex set and a -dimensional normal random vector , i.e. as pointed out in lyashenko ( 1983 ) , gaussian random sets are especially useful in view of the limit theorems discussed earlier in lyashenko ( 1979 ) .that is , if the conditions in those theorems are satisfied and the limit exists , then it is gaussian in the sense of ( [ def_lsko ] ) .puri et al .( 1986 ) extended these results to separable banach spaces . in the following, we will restrict ourselves to compact convex random sets in , that is , bounded closed random intervals .they will be called random intervals for ease of presentation . according to ( [ def_lsko ] ), a random interval is gaussian if and only if a is representable in the form where is a fixed bounded closed interval and is a normal random variable .obviously , such a random interval is simply a gaussian displacement of a fixed interval , so it is not enough to fully capture the randomness of a general random interval .in order to model the randomness of both the location and the `` shape '' ( length ) , we propose the following normal hierarchical model for random intervals : where is another random variable and ] , for which is precisely the center of the interval , ( [ mod - simple ] ) has an extra parameter .notice that the center of is , so controls the difference between and the center , and therefore is interpreted as modeling the uncertainty that the normal random variable is not necessarily the center .[ rmk:1 ] there are some existing works in the literature to model the randomness of intervals . 
for example , a random interval can be viewed as the crisp " version of the lr - fuzzy random variable , which is often used to model the randomness of imprecise intervals such as [ approximately 2 , approximately 5 ] .see krner ( 1997 ) for detailed descriptions .however , as far as the authors are aware , models with distribution assumptions for interval - valued data have not been studied yet .our normal hierarchical random interval is the first statistical approach that extends the concept of normality while modeling the full randomness of an interval .an interesting property of the normal hierarchical random interval is that its linear combination is still a normal hierarchical random interval .this is seen by simply observing that for arbitrary constants , where `` '' denotes the minkowski addition .this is very useful in developing a factor model for the analysis of multiple random intervals .especially , if we assume , then the `` factor '' has exactly the same distribution as the original random intervals .we will elaborate more on this issue in section [ sec : simu ] . without loss of generality, we can assume in the model ( [ def : a_1])-([def : a_2 ] ) that .we will make this assumption throughout the rest of the paper . according to the choquet theorem ( molchanov ( 2005 ) , p.10 ) ,the distribution of a random closed set ( and random compact convex set as a special case ) a , is completely characterized by the hitting function defined as : writing ] : )\\ & = & p([a , b]\cap a\neq\emptyset)\\ & = & p([a , b]\cap a\neq\emptyset,\eta\geq 0)+p([a , b]\cap a\neq\emptyset,\eta < 0)\\ & = & p(a-\eta b_0\leq\epsilon\leq b-\eta a_0,\eta\geq 0)+p(a-\eta a_0\leq\epsilon\leq b-\eta b_0,\eta < 0).\end{aligned}\ ] ] the expectation of a compact convex random set is defined by the aumann integral ( see aumann ( 1965 ) , artstein and vitale ( 1975 ) ) as in particular , the aumann expectation of a random interval is given by ,\ ] ] where and are the interval ends .therefore , the aumann expectation of the normal hierarchical random interval is {(\eta\geq 0)}+[b_0\eta , a_0\eta]i_{(\eta<0)}\right\}\\ & = & e\left[a_0\eta i_{(\eta\geq 0)}+b_0\eta i_{(\eta<0)},b_0\eta i_{(\eta\geq 0)}+a_0\eta i_{(\eta<0)}\right]\\ & = & \left[a_0e\eta_{+}+b_0e\eta_{-},b_0e\eta_{+}+a_0e\eta_{-}\right],\end{aligned}\ ] ] where notice that can be interpreted as the positive part of , but is not the negative part of , as when .the variance of a compact convex random set in is defined via its support function . in the special casewhen , it is shown by straightforward calculations that or equivalently , where and denote the center and radius of a random interval .see krner ( 1995 ) . again , as we pointed out in remark [ rmk:1 ] , a random interval can be viewed as a special case of the lr - fuzzy random variable .therefore , formulae ( [ var-1 ] ) and ( [ var-2 ] ) coincide with the variance of the lr - fuzzy random variable , when letting the left and right spread both equal to 0 , i.e. , . see krner ( 1997 ) .for the normal hierarchical random interval , ^ 2\\ & = & e\epsilon^2+a_0 ^ 2var(\eta_{+})+b_0 ^ 2var(\eta_{-})\\ & & + 2\left(a_0e\epsilon\eta_{+}+b_0e\epsilon\eta_{-}-a_0b_0e\eta_{+}e\eta_{-}\right),\end{aligned}\ ] ] and , analogously , the variance of is then found to be \\ & & + ( a_0+b_0)e\epsilon\eta-2a_0b_0e\eta_{+}\eta_{-}.\end{aligned}\ ] ] assuming , we have with .this formula certainly includes the special case of the naive " model ] the hitting function of with parameter . 
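before turning to estimation, a minimal simulation sketch of the model and of the empirical hitting function may help fix ideas. the parameter names below (a0, b0, the mean of eta, and the covariance of (eps, eta)) are assumptions standing in for the paper's symbols, and (eps, eta) is taken to be bivariate normal as in the later sections.

```python
# A minimal sketch, under assumed parameter names, of simulating the normal
# hierarchical random interval A = eta*[a0, b0] + eps (endpoints swapped when
# eta < 0) and of the empirical hitting function used later for estimation.
import numpy as np

rng = np.random.default_rng(0)

def sample_intervals(n, a0, b0, mu_eta, cov):
    eps, eta = rng.multivariate_normal([0.0, mu_eta], cov, size=n).T
    lo = np.where(eta >= 0, a0 * eta, b0 * eta) + eps
    hi = np.where(eta >= 0, b0 * eta, a0 * eta) + eps
    return lo, hi

def empirical_hitting(lo, hi, a, b):
    """Fraction of sampled intervals [lo, hi] that hit the test interval [a, b]."""
    return np.mean((lo <= b) & (hi >= a))

lo, hi = sample_intervals(1000, a0=1.0, b0=2.0, mu_eta=5.0,
                          cov=[[1.0, 0.2], [0.2, 0.5]])
print(empirical_hitting(lo, hi, 4.0, 6.0))
```

the hit test lo <= b and hi >= a is exactly the event that the sampled interval meets the test interval [a, b], which is the event appearing in the hitting-function derivation above.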
in order to introduce the mce , we will need some extra notations .let be a basic set and be a -field over it .let denote a family of probability measures on ( * x*, ) and be a mapping from to some topologial space . denotes the parameter value pertaining to , .the classical definition of mce given in pfanzagl ( 1969 ) is quoted below . ] , , is a family of contrast functions for , if there exists a function : such that and [ def : mce ] a -measurable function : , which depends on only , is called a minimum contrast estimator ( mce ) if we make the following assumptions to present the theoretical results in this section .[ aspt:1 ] is compact , and is an interior point of .[ aspt:2 ] the model is identifiable .[ aspt:3 ] ) ] , , exist and are finite on a bounded region .[ aspt:5 ] ) ] , and ) ] .define an empirical estimator ;x(n)) ] as : ;x(n))=\frac{\ # \left\{x_i : [ a , b]\cap x_i\neq\emptyset , i=1,\cdots , n\right\}}{n}.\ ] ] extending the contrast function defined in heinrich ( 1993 ) ( for parameters in the boolean model ) , we construct a family of functions : )-\hat{t}([a , b];x(n))\right]^2w(a , b)\mathrm{d}a\mathrm{d}b,\ ] ] for , where , and is a weight function on ] .we show in the next proposition that , defined in ( [ h_def ] ) is a family of contrast functions for .this , together with theorem [ thm : strong - consist ] , immediately yields the strong consistency of the associated mce .this result is summarized in corollary [ coro : consist ] .[ prop : cf ] suppose that assumption [ aspt:2 ] and assumption [ aspt:3 ] are satisfied .then , , as defined in ( [ h_def ] ) , is a family of contrast functions with limiting function )-t_{\boldsymbol{\zeta}}([a , b])\right]^2w(a , b)\mathrm{d}a\mathrm{d}b.\ ] ] in addition , is equicontinuous w.r.t . .[ coro : consist ] suppose that assumption [ aspt:1 ] , assumption [ aspt:2 ] , and assumption [ aspt:3 ] are satisfied .let be defined as in ( [ h_def ] ) , and then as .next , we show the asymptotic normality for . as a preparation ,we first prove the following proposition .the central limit theorem for is then presented afterwards .[ prop : parh ] assume the conditions of lemma 1 ( in the appendix ) .define ^{t},\nonumber\ ] ] as the gradient vector of w.r.t .then , \stackrel{\mathcal{d}}{\rightarrow}n\left(0,\xi\right),\nonumber\ ] ] where is the symmetric matrix with the component \neq\emptyset , x_1\cap[c , d]\neq\emptyset\right ) -t_{{\boldsymbol}{\theta}_0}\left([a , b]\right)t_{{\boldsymbol}{\theta}_0}\left([c , d]\right)\right\}\nonumber\\ & & \frac{\partial t_{{\boldsymbol}{\theta}_0}}{\partial\theta_i}\left([a , b]\right ) \frac{\partial t_{{\boldsymbol}{\theta}_0}}{\partial\theta_j}\left([c , d]\right ) w(a , b)w(c , d)\mathrm{d}a\mathrm{d}b\mathrm{d}c\mathrm{d}d.\label{def : xi}\end{aligned}\ ] ] [ thm : clt ] let be defined in ( [ h_def ] ) and be defined in ( [ def : theta ] ) .assume the conditions of corollary [ coro : consist ] . if additionally assumption [ aspt:5 ] is satisfied ,then where )w(a , b)\mathrm{d}a\mathrm{d}b ] in this caseis continuous and also infinitely continuously differentiable .therefore , all the assumptions are satisfied and the corresponding mce achieves the strong consistency and asymptotic normality . 
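a minimal numerical sketch of the minimum contrast estimator follows: the integral defining the contrast function is discretized over a grid S with unit weight, the empirical hitting frequency plays the role of the empirical hitting function, and the minimization mimics a derivative-free search. to keep the sketch self-contained and runnable, the parametric hitting function used below is that of the simpler lyashenko-type gaussian interval (a fixed interval [m0 - r, m0 + r] displaced by a normal variable), not the full hierarchical model; any parametric hitting function can be substituted.

```python
# A minimal sketch of the MCE: discretize the contrast-function integral over
# a grid S (weight w = 1), compare a parametric hitting function with its
# empirical counterpart, and minimize with a derivative-free method.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def T_theta(a, b, theta, m0=0.0, r=1.0):
    # hitting function of the Lyashenko-type interval [m0 - r, m0 + r] + N(mu, s^2)
    mu, s = theta[0], abs(theta[1]) + 1e-12   # keep the scale positive during the search
    return norm.cdf(b + r - m0, loc=mu, scale=s) - norm.cdf(a - r - m0, loc=mu, scale=s)

def T_hat(a, b, lo, hi):
    return np.mean((lo <= b) & (hi >= a))     # empirical hitting frequency

def contrast(theta, lo, hi, grid):
    return sum((T_theta(a, b, theta) - T_hat(a, b, lo, hi)) ** 2
               for a, b in grid) / len(grid)

rng = np.random.default_rng(1)
eps = rng.normal(2.0, 1.5, size=500)          # "true" mu = 2, s = 1.5
lo, hi = -1.0 + eps, 1.0 + eps                # observed intervals with m0 = 0, r = 1
pts = np.linspace(-4.0, 8.0, 15)
grid = [(a, b) for a in pts for b in pts if a < b]
mce = minimize(contrast, x0=[0.0, 1.0], args=(lo, hi, grid), method="Nelder-Mead")
print(mce.x)
```

with the hierarchical model one would instead plug in either a monte carlo hitting function, as sketched earlier, or the bivariate-normal approximation used in the next section.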
according to the assigned parameter values given in ( [ eqn : par - val ] ) , .therefore the hitting function is well approximated by )\\ & \approx&p(a-\eta b_0\leq\epsilon\leq b-\eta a_0,\eta\geq 0)\\ & \approx&p(a-\eta b_0\leq\epsilon\leq b-\eta a_0)\\ & = & p\left ( \begin{bmatrix}1 & a_0\\ -1 & -a_0 - 1\end{bmatrix } \begin{bmatrix}\epsilon\\ \eta\end{bmatrix}\leq \begin{bmatrix}b\\-a\end{bmatrix}\right)\\ & = & \phi\left ( \begin{bmatrix}b\\-a\end{bmatrix } ; d\begin{bmatrix}0\\ \mu\end{bmatrix } , d\sigma d^{'}\right),\end{aligned}\ ] ] where we use this approximate hitting function to simplify computation in our simulation study .the model parameters can be estimated by the method of moments . in most casesit is reasonable to assume , and consequently , .so the moment estimates for and are approximately where and denote the sample means of and , respectively . denoting by the center of the random interval , we further notice that . by the same approximation we have .define a random variable then , the moment estimate for is approximately given by the sample variance - covariance matrix of and , i.e. our simulation experiment is designed as follows : we first simulate an i.i.d. random sample of size from model ( [ def : a_1])-([def : a_2 ] ) with the assigned parameter values , then find the initial parameter values by ( [ mm-1])-([mm-3 ] ) based on the simulated sample , and lastly the initial values are updated to the mce using the function_ fminsearch.m _ in matlab 2011a .the process is repeated 10 times independently for each , and we let , successively , to study the consistency and efficiency of the mce s . figure [ fig : sample_simu ] shows one random sample of 100 observations generated from the model .we show the average biases and standard errors of the estimates as functions of the sample size in figure [ fig : results_simu ] . 
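the bivariate-normal approximation of the hitting function used above can be evaluated directly with a bivariate normal cdf. the sketch below assumes that P(eta < 0) is negligible and uses generic parameter names (a0, b0, mu_eta, Sigma), with the transformation matrix D = [[1, a0], [-1, -b0]] as in the derivation.

```python
# A sketch of the bivariate-normal approximation of the hitting function:
# when P(eta < 0) is negligible,
#   T_theta([a, b]) ~= P(a - eta*b0 <= eps <= b - eta*a0)
#                    = P(D [eps, eta]' <= [b, -a]'),  D = [[1, a0], [-1, -b0]].
import numpy as np
from scipy.stats import multivariate_normal

def approx_hitting(a, b, a0, b0, mu_eta, Sigma):
    D = np.array([[1.0, a0], [-1.0, -b0]])
    mean = D @ np.array([0.0, mu_eta])
    cov = D @ np.asarray(Sigma) @ D.T
    return multivariate_normal.cdf([b, -a], mean=mean, cov=cov)

print(approx_hitting(4.0, 6.0, a0=1.0, b0=2.0, mu_eta=5.0,
                     Sigma=[[1.0, 0.2], [0.2, 0.5]]))
```

the value can be compared against the empirical hitting frequency of simulated intervals as a quick sanity check, or used as the parametric hitting function inside the discretized contrast function sketched earlier.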
here , the average bias and standard error of the estimates of are the norms of the average bias and standard error matrices , respectively .as expected from corollary [ coro : consist ] and theorem [ thm : clt ] , both the bias and the standard error reduce to 0 as sample size grows to infinity .the numerical results are summarized in table [ tab : mc_1 ] .finally , we point out that the choice of the region of integration is important .a larger usually leads to more accurate estimates , but could also result in more computational complexity .we do not investigate this issue in this paper .however , based on our simulation experience , an that covers most of the points such that ] , by ignoring the small probability .therefore , we choose , and the estimates are satisfactory .+ + .average biases and standard errors of the mce s of the model parameters in the simulation study .[ cols= " > , > , > , > , > , > , > , > , > " , ]in this section , we apply our normal hierarchical model and minimum contrast estimator to analyze the daily temperature range data .we consider two data sets containing ten years of daily minimum and maximum temperatures in january , in granite falls , minnesota ( latitude 44.81241 , longitude 95.51389 ) from 1901 to 1910 , and from 2001 to 2010 , respectively .each data set , therefore , is constituted of 310 observations of the form : [ minimum temperature , maximum temperature ] .we obtained these data from the national weather service , and all observations are in fahrenheit .the plot of the data is shown in figure [ fig : real ] .the obvious correlations of the data play no roles here .+ + same as in the simulation , we assume a bivariate normal distribution for and ] is the sample mean of i.i.d .random variables defined as : \neq\emptyset , \\ 0 , & \text{otherwise}. \end{cases}.\ ] ] therefore , an application of the strong law of large numbers in the classical case yields : \neq\emptyset\right ) = t_{\boldsymbol{\theta}_0}\left([a , b]\right),\ \text{as}\ n\to\infty,\ ] ] , and assuming is the true parameter value .that is , ;x(n)\right)\stackrel{a.s.}{\rightarrow}t_{\boldsymbol{\theta}_0}\left([a , b]\right ) , \nonumber\ ] ] as .it follows immediately that ;x(n))-t_{\boldsymbol{\theta}_0}\left([a , b]\right)\right]^2w(a , b)\stackrel{a.s.}{\rightarrow}0 .\nonumber\ ] ] notice that , ;x(n))-t_{\boldsymbol{\theta}_0}\left([a , b]\right)\right]^2w(a , b) ] , for , except on a lebesgue set of measure 0 .this together with ( [ eqn : n ] ) gives which proves that , is a family of contrast functions . to see the equicontinuity of , notice that , we have )-\hat{t}([a , b];x(n))\right)^2w(a , b)\mathrm{d}a\mathrm{d}b\\ & & -\iint\limits_{s}\left(t_{\boldsymbol{\theta}_2}([a , b])-\hat{t}([a , b];x(n))\right)^2w(a , b)\mathrm{d}a\mathrm{d}b|\\ & = & |\iint\limits_{s}\left(t_{\boldsymbol{\theta}_1}([a , b])-t_{\boldsymbol{\theta}_2}([a , b])\right ) \left(t_{\boldsymbol{\theta}_1}([a , b])+t_{\boldsymbol{\theta}_2}([a , b ] ) -2\hat{t}([a , b];x(n))\right)w(a , b)\mathrm{d}a\mathrm{d}b|\\ & \leq&4c\iint\limits_{s}\left|t_{\boldsymbol{\theta}_1}([a , b])-t_{\boldsymbol{\theta}_2}([a , b])\right|\mathrm{d}a\mathrm{d}b,\end{aligned}\ ] ] since , by definition ( [ h_def ] ) , is uniformly bounded by , then the equicontinuity of follows from the continuity of ) ] to simplify notations . 
exchanging differentiation and integration by the bounded convergence theorem , we get \right)-\hat{t}\left([a , b];x(n)\right)\right)^2w(a , b)\mathrm{d}a\mathrm{d}b\nonumber\\ & = & \iint\limits_{s}\frac{\partial}{\partial\theta_i } \left(t_{{\boldsymbol}{\theta}_0}\left([a , b]\right)-\hat{t}\left([a , b];x(n)\right)\right)^2w(a , b)\mathrm{d}a\mathrm{d}b\nonumber\\ & = & \iint\limits_{s}2\left(t_{{\boldsymbol}{\theta}_0}\left([a , b]\right)-\hat{t}\left([a , b];x(n)\right)\right ) t_{{\boldsymbol}{\theta}_0}^i\left(a , b\right)w(a , b)\mathrm{d}a\mathrm{d}b.\nonumber\end{aligned}\ ] ] define as in ( [ y_def ] ) . then , \right)-\frac{1}{n}\sum_{k=1}^{n}y_k\left(a , b\right)\right ) t_{{\boldsymbol}{\theta}_0}^i\left(a , b\right)w(a , b)\mathrm{d}a\mathrm{d}b\nonumber\\ & = & \frac{2}{n}\iint\limits_{s}\sum_{k=1}^{n}\left(t_{{\boldsymbol}{\theta}_0}\left([a , b]\right)-y_k\left(a , b\right)\right ) t_{{\boldsymbol}{\theta}_0}^i\left(a , b\right)w(a , b)\mathrm{d}a\mathrm{d}b\nonumber\\ & = & \frac{1}{n}\sum_{k=1}^{n}2\iint\limits_{s}\left(t_{{\boldsymbol}{\theta}_0}\left([a , b]\right)-y_k\left(a , b\right)\right ) t_{{\boldsymbol}{\theta}_0}^i\left(a , b\right)w(a , b)\mathrm{d}a\mathrm{d}b\label{eqn : parh}\\ & : = & \frac{1}{n}\sum_{k=1}^{n}r_k.\nonumber\end{aligned}\ ] ] notice that s are i.i.d .random variables : .+ let be a partition of , and be any point in , .let .denote by the area of . by the definition of the double integral , \right)-y_k\left(a , b\right)\right ) t_{{\boldsymbol}{\theta}_0}^i\left(a , b\right)w(a , b)\mathrm{d}a\mathrm{d}b\nonumber\\ & = & \lim_{\lambda\rightarrow 0}\left\{\sum_{j=1}^{m}\left(t_{{\boldsymbol}{\theta}_0 } \left([a_j , b_j]\right)-y_k\left(a_j , b_j\right)\right ) t_{{\boldsymbol}{\theta}_0}^i\left(a_j , b_j\right)w(a_j , b_j)\delta\sigma_j\right\}.\nonumber\end{aligned}\ ] ] therefore , by the lebesgue dominated convergence theorem , \right)-y_k\left(a_j , b_j\right)\right ) t_{{\boldsymbol}{\theta}_0}^i\left(a_j , b_j\right)w(a_j , b_j)\delta\sigma_j\right\}\\ & = & 2\lim_{\lambda\rightarrow 0}\left\{\sum_{j=1}^{m}\left[e\left(t_{{\boldsymbol}{\theta}_0 } \left([a_j , b_j]\right)-y_k\left(a_j , b_j\right)\right)\right ] t_{{\boldsymbol}{\theta}_0}^i\left(a_j , b_j\right)w(a_j , b_j)\delta\sigma_j\right\}\label{eqn_1}\\ & = & 2\lim_{\lambda\rightarrow 0}\left\{\sum_{j=1}^{m}0\right\}=0.\end{aligned}\ ] ] moreover , \right)-y_k\left(a_j , b_j\right)\right ) t_{{\boldsymbol}{\theta}_0}^i\left(a_j , b_j\right)w(a_j , b_j)\delta\sigma_j\right\}\right\}^2\\ & = & 4e\lim_{\lambda_1\rightarrow 0}\lim_{\lambda_2\rightarrow 0 } \left\{\sum_{j_1=1}^{m_1}\left(t_{{\boldsymbol}{\theta}_0 } \left([a_{j_1},b_{j_1}]\right)-y_k\left(a_{j_1},b_{j_1}\right)\right ) t_{{\boldsymbol}{\theta}_0}^i\left(a_{j_1},b_{j_1}\right)w(a_{j_1},b_{j_1})\delta\sigma_{j_1}\right\}\\ & & \left\{\sum_{j_2=1}^{m_2}\left(t_{{\boldsymbol}{\theta}_0 } \left([a_{j_2},b_{j_2}]\right)-y_k\left(a_{j_2},b_{j_2}\right)\right ) t_{{\boldsymbol}{\theta}_0}^i\left(a_{j_2},b_{j_2}\right)w(a_{j_2},b_{j_2})\delta\sigma_{j_2}\right\}\\ & = & 4e\lim_{\lambda_1\rightarrow 0}\lim_{\lambda_2\rightarrow 0}\sum_{j_1=1}^{m_1}\sum_{j_2=1}^{m_2 } \left(t_{{\boldsymbol}{\theta}_0}\left([a_{j_1},b_{j_1}]\right)-y_k\left(a_{j_1},b_{j_1}\right)\right ) \left(t_{{\boldsymbol}{\theta}_0}\left([a_{j_2},b_{j_2}]\right)-y_k\left(a_{j_2},b_{j_2}\right)\right)\\ & & t_{{\boldsymbol}{\theta}_0}^i\left(a_{j_1},b_{j_1}\right)t_{{\boldsymbol}{\theta}_0}^i\left(a_{j_2},b_{j_2}\right ) 
w(a_{j_1},b_{j_1})w(a_{j_2},b_{j_2})\delta\sigma_{j_1}\delta\sigma_{j_2}\\ & = & 4\lim_{\lambda_1\rightarrow 0}\lim_{\lambda_2\rightarrow 0}\sum_{j_1=1}^{m_1}\sum_{j_2=1}^{m_2 } e\left(t_{{\boldsymbol}{\theta}_0}\left([a_{j_1},b_{j_1}]\right)-y_k\left(a_{j_1},b_{j_1}\right)\right ) \left(t_{{\boldsymbol}{\theta}_0}\left([a_{j_2},b_{j_2}]\right)-y_k\left(a_{j_2},b_{j_2}\right)\right)\\ & & t_{{\boldsymbol}{\theta}_0}^i\left(a_{j_1},b_{j_1}\right)t_{{\boldsymbol}{\theta}_0}^i\left(a_{j_2},b_{j_2}\right ) w(a_{j_1},b_{j_1})w(a_{j_2},b_{j_2})\delta\sigma_{j_1}\delta\sigma_{j_2}\label{eqn_2}\\ & = & 4\lim_{\lambda_1\rightarrow 0}\lim_{\lambda_2\rightarrow 0}\sum_{j_1=1}^{m_1}\sum_{j_2=1}^{m_2 } cov\left(y_k\left(a_{j_1},b_{j_1}\right),y_k\left(a_{j_2},b_{j_2}\right)\right)\\ & & t_{{\boldsymbol}{\theta}_0}^i\left(a_{j_1},b_{j_1}\right)t_{{\boldsymbol}{\theta}_0}^i\left(a_{j_2},b_{j_2}\right ) w(a_{j_1},b_{j_1})w(a_{j_2},b_{j_2})\delta\sigma_{j_1}\delta\sigma_{j_2}\\ & = & 4\iiiint\limits_{s\times s}cov\left(y_k\left(a , b\right),y_k\left(c , d\right)\right ) t_{{\boldsymbol}{\theta}_0}^i\left(a , b\right)t_{{\boldsymbol}{\theta}_0}^i\left(c , d\right ) w(a , b)w(c , d)\mathrm{d}a\mathrm{d}b\mathrm{d}c\mathrm{d}d\\ & = & 4\iiiint\limits_{s\times s}\left\{p\left(x_k\cap[a , b]\neq\emptyset , x_k\cap[c , d]\neq\emptyset\right ) -t_{{\boldsymbol}{\theta}_0}\left([a , b]\right)t_{{\boldsymbol}{\theta}_0}\left([c , d]\right)\right\}\\ & & t_{{\boldsymbol}{\theta}_0}^i\left(a , b\right)t_{{\boldsymbol}{\theta}_0}^i\left(c , d\right ) w(a , b)w(c , d)\mathrm{d}a\mathrm{d}b\mathrm{d}c\mathrm{d}d.\end{aligned}\ ] ] from the central limit theorem for i.i.d .random variables , the desired result follows . by the cramr - wold device, it suffices to prove for arbitrary real numbers .it is easily seen from ( [ eqn : parh ] ) in the proof of lemma 1 that \right)-y_k\left(a , b\right)\right ) \frac{\partial t_{{\boldsymbol}{\theta}_0}}{\partial\theta_i}\left([a , b]\right)w(a , b)\mathrm{d}a\mathrm{d}b\right)\nonumber\\ & : = & \frac{1}{n}\sum\limits_{k=1}^{n}\left(2\sum\limits_{i=1}^p\lambda_iq_k^i\right).\nonumber\end{aligned}\ ] ] by lemma 1 , in view of the central limit theorem for i.i.d . random variables , ( [ prop1:target ] ) is reduced to proving by a similar argument as in lemma 1 , together with some algebraic calculations , we obtain \right)-y_k\left(a , b\right)\right ) \frac{\partial t_{{\boldsymbol}{\theta}_0}}{\partial\theta_i}\left([a , b]\right)w(a , b)\mathrm{d}a\mathrm{d}b\right)\\ & & \left(\iint\limits_{s}\left(t_{{\boldsymbol}{\theta}_0}\left([a , b]\right)-y_k\left(a , b\right)\right ) \frac{\partial t_{{\boldsymbol}{\theta}_0}}{\partial\theta_j}\left([a , b]\right)w(a , b)\mathrm{d}a\mathrm{d}b\right)\\ & = & 4\sum\limits_{1\leq i , j\leq p}\lambda_i\lambda_j\iiiint\limits_{s\times s}\left\{p\left(x_1\cap[a , b]\neq\emptyset , x_1\cap[c , d]\neq\emptyset\right ) -t_{{\boldsymbol}{\theta}_0}\left([a , b]\right)t_{{\boldsymbol}{\theta}_0}\left([c , d]\right)\right\}\\ & & \frac{\partial t_{{\boldsymbol}{\theta}_0}}{\partial\theta_i}\left([a , b]\right ) \frac{\partial t_{{\boldsymbol}{\theta}_0}}{\partial\theta_j}\left([c , d]\right ) w(a , b)w(c , d)\mathrm{d}a\mathrm{d}b\mathrm{d}c\mathrm{d}d.\end{aligned}\ ] ] this validates ( [ prop1:target2 ] ) , and hence finishes the proof .
many statistical data are imprecise due to factors such as measurement errors , computation errors , and lack of information . in such cases , data are better represented by intervals than by single numbers . existing methods for analyzing interval - valued data include regressions in the metric space of intervals and symbolic data analysis , the latter being proposed in a more general setting . however , there has been a lack of literature on parametric modeling and distribution - based inference for interval - valued data . in an attempt to fill this gap , we extend the concept of normality for random sets introduced by lyashenko and propose a normal hierarchical model for random intervals . in addition , we develop a minimum contrast estimator ( mce ) for the model parameters , which we show is both consistent and asymptotically normal . simulation studies support our theoretical findings and show very promising results . finally , we successfully apply our model and mce to a real dataset .
estimation is a crucial application in the energy management system ( ems ) .the well - known static state estimation ( sse ) methods assume that the power system is operating in quasi - steady state , based on which the static states the voltage magnitude and phase angles of the buses are estimated by using scada and/or synchrophasor measurements .sse is critical for power system monitoring as it provides inputs for other ems applications such as automatic generation control and optimal power flow .however , sse may not be sufficient for desirable situational awareness as the system states evolve more rapidly due to an increasing penetration of renewable generation and distributed energy resources . therefore , dynamic state estimation ( dse ) processes estimating the dynamic states ( i.e., the internal states of generators ) by using highly synchronized pmu measurements with high sampling rates will be critical for the wide - area monitoring , protection , and control of power systems . for both sse and dse, two significant challenges make their practical application significantly difficult .first , the system model and parameters used for estimation can be inaccurate , which is often called _ model uncertainty _ , consequently deteriorating estimation in some scenarios .second , the measurements used for estimation are vulnerable to cyber attacks , which in turn leads to compromised measurements that can greatly mislead the estimation . for the first challenge , there are recent efforts on validating the dynamic model of the generator and calibrating its parameters , which dse can be based on. however , model validation itself can be very challenging .hence , it is a more viable solution to improve the estimators by making them more robust to the model uncertainty . for the second challenge ,false data injection attacks targeted against sse are proposed in . in , a probabilistic risk mitigation model is presented for cyber attacks against pmu networks , focusing on topological observability of sse .however , in this paper we discuss cyber attacks against dse and estimators that are robust to cyber attacks . as for the approaches for performing dse, there are mainly two classes of methods that have been proposed : 1 ._ stochastic estimators _ : given a discrete - time representation of a dynamical system , the observed measurements , and the statistical information on process noise and measurement noise , kalman filter and its many derivatives have been proposed that calculate the kalman gain as a function of the relative certainty of the current state estimate and the measurements .2 . _ deterministic observers _ : given a continuous- or discrete - time dynamical system depicted by state - space matrices , a combination of matrix equalities and inequalities are solved , while guaranteeing asymptotic ( or bounded ) estimation error .the solution to these equations is often matrices that are used in an observer to estimate states and other dynamic quantities . for power systems, dse has been implemented by several stochastic estimators , such as extended kalman filter ( ekf ) , unscented kalman filter ( ukf ) , square - root unscented kalman filter ( sr - ukf ) , extended particle filter , and ensemble kalman filter .while these techniques produce good estimation under nominal conditions , most of them lack the ability to deal with significant model uncertainty and malicious cyber attacks .the goal of this paper is to present alternatives that address these major limitations . 
to achieve this , we study dse by utilizing recently developed kalman filters and nonlinear observers .the contributions of this paper include : 1 .introducing cubature kalman filter ( ckf ) that possesses an important virtue of mathematical rigor rooted in the third - degree spherical - radial cubature rule for numerically computing gaussian - weighted integrals ; 2 . presenting observers for dse of nonlinear power systems with model uncertainties and cyber attacks ; 3 . comparing the strengths and limitations of different estimation methods for dse with significant model uncertainty and cyber attacks .the remainder of this paper is organized as follows . in section [ sec :multimachinedynamics ] , we discuss the nonlinear dynamics of the multi - machine power system .the physical and technical depictions of the model uncertainty and attack - threat model are introduced in section [ sec : at ] .the ckf and one dynamic observer are introduced in sections [ sec : ckf1 ] and [ sec : observers ] .then , numerical results are given in section [ sec : numericalresults ] .finally , insightful remarks and conclusions are presented in sections [ sec : remarks ] and [ sec : conc ] .[ sec : multimachinedynamics ] here we briefly discuss the power system model used for dse . each of the generators is described by the fourth - order transient model in local - reference frame : where is the generator serial number , is the rotor angle , is the rotor speed in rad / s , and and are the transient voltage along and axes ; and are stator currents at and axes ; is the mechanical torque , is the electric air - gap torque , and is the internal field voltage ; is the rated value of angular frequency , is the inertia constant , and is the damping factor ; and are the open - circut time constants for and axes ; and are the synchronous reactance and and are the transient reactance respectively at the and axes . the and in ( [ gen model ] ) are considered as inputs .the set of generators where pmus are installed is denoted by . for generator ,the terminal voltage phaosr and current phasor can be measured and are used as the outputs. correspondingly , the state vector , input vector , and output vector are ^\top \\ { \boldsymbol}{u } & = \big[{\boldsymbol}{t_m}^\top \quad { \boldsymbol}{e_{fd}}^\top\big]^\top \\ { \boldsymbol}{y } & = \big[{\boldsymbol}{e_r}^\top \quad { \boldsymbol}{e_i}^\top \quad { \boldsymbol}{i_r}^\top \quad { \boldsymbol}{i_i}^\top\big]^\top.\end{aligned}\ ] ] the , , and can be written as functions of : [ temp ] where is the voltage source , and are column vectors of all generators and , and are the terminal voltage at and axes , is the row of the admittance matrix of the reduced network , and and are the system base mva and the base mva for generator , respectively . in , the outputs and are written as functions of . 
similarly , the outputs and can also be written as function of : [ erei ] the dynamic model ( [ gen model ] ) can then be rewritten in a general state space form as where , \notag\ ] ] , \ ; \notag { \boldsymbol}{\phi}=\left [ \renewcommand{\arraystretch}{1.3 } \begin{array}{c } -\omega_0 { \boldsymbol}{1}_g \\ \frac{\omega_0}{2{h}}(-{\boldsymbol}{t}_e + { \boldsymbol}{k}_d { \boldsymbol}{1}_n ) \\ \frac{1}{{\boldsymbol}{t}'_{d0 } } \big ( -({\boldsymbol}{x}_d - { \boldsymbol}{x}'_d ) { \boldsymbol}{i}_{d } \big ) \\\frac{1}{{\boldsymbol}{t}'_{q0 } } \big ( ( { \boldsymbol}{x}_q - { \boldsymbol}{x}'_q ) { \boldsymbol}{i}_{q } \big ) \end{array}\right ] , \notag\ ] ] and include functions ( [ ir])([ii ] ) and ( [ erei ] ) for all generators .note that the model presented here is used for dse for which the real - time inputs are assumed to be unavailable and and only take steady - state values , mainly because these inputs are difficult to measure .however , when we simulate the power system to mimic the real system dynamics , we model an ieee type dc1 excitation system and a simplified turbine - governor system for each generator and thus and change with time due to the governor and the excitation control .we do not directly use a detailed model including the exciter and governor as in for the dse mainly because 1 ) a good model should be simple enough to facilitate design , 2 ) it is harder to validate a more detailed model and there are also more parameters that need to be calibrated , and 3 ) the computational burden can be higher for a more detailed model , which may not satisfy the requirement of real - time estimation .[ sec : at ] here we discuss two great challenges for an effective dse , which are the model uncertainty and potential cyber attacks .the term _ model uncertainty _ refers to the differences or errors between models and reality . assuming that the dynamical models are perfectly accurate can generate sub - optimal control or estimation laws .various control and estimation theory studies investigated methods that addresses the discrepancy between the actual physics and models .the model uncertainty can be caused by the following reasons . 1 .* unknown inputs * : the unknown inputs against the system dynamics include ( representing the unknown plant disturbances ) , ( denoting the unknown control inputs ) , and ( depicting potential actuator faults ) . for simplicity , we can combine them into one unknown input quantity , , defined as + defining to be the known weight distribution matrix of the distribution of unknown inputs with respect to each state - equation .the term models a general class of unknown inputs such as : nonlinearities , modeling uncertainties , noise , parameter variations , unmeasurable system inputs , model reduction errors , and actuator faults .for example , the equation most likely has no unknown inputs , as there is no modeling uncertainty related to that process .hence , the first row of can be identically zero .the process dynamics under unknown inputs can be written as follows : 2 . * unavailable inputs* : as discussed in section [ sec : multimachinedynamics ] , the real - time inputs in ( [ gen model ] ) can be unavailable , in which case the steady - states inputs are used for estimation .* parameter inaccuracy * : the parameters in the system model can be inaccurate .for example , the reduced admittance matrix can be inaccurate when a fault or the following topology change are not detected . 
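for concreteness, the sketch below writes one common form of the fourth-order two-axis generator model of section [ sec : multimachinedynamics ] as x_dot = f(x, u) + B_w w, so that the unknown-input term just introduced enters additively. exact signs and per-unit conventions vary across references, and all parameter names and numerical values here are assumptions rather than the paper's.

```python
# A hedged sketch of one common form of the fourth-order two-axis machine
# model, with an additive unknown-input term B_w * w as in the discussion of
# model uncertainty above.  Parameter names and values are assumptions.
import numpy as np

def gen4_dynamics(x, u, par, i_dq, w=None, B_w=None):
    delta, omega, eq_p, ed_p = x          # rotor angle, speed, e'_q, e'_d
    T_m, E_fd = u                         # mechanical torque, field voltage
    i_d, i_q = i_dq                       # stator currents (from the network)
    p = par
    # electric air-gap torque of the two-axis model
    T_e = eq_p * i_q + ed_p * i_d + (p["xq_p"] - p["xd_p"]) * i_d * i_q
    f = np.array([
        omega - p["omega0"],
        p["omega0"] / (2 * p["H"]) * (T_m - T_e
                                      - p["KD"] / p["omega0"] * (omega - p["omega0"])),
        (E_fd - eq_p - (p["xd"] - p["xd_p"]) * i_d) / p["Td0_p"],
        (-ed_p + (p["xq"] - p["xq_p"]) * i_q) / p["Tq0_p"],
    ])
    if w is not None:                     # additive unknown input / model error
        f = f + B_w @ w
    return f

par = dict(omega0=2 * np.pi * 60, H=3.2, KD=2.0, xd=1.8, xd_p=0.3,
           xq=1.7, xq_p=0.55, Td0_p=8.0, Tq0_p=0.4)
x0 = np.array([0.5, 2 * np.pi * 60, 1.0, 0.3])
print(gen4_dynamics(x0, u=(0.8, 1.8), par=par, i_dq=(0.6, 0.7)))
```

in a full multi-machine simulation the stator currents i_d and i_q would come from the reduced network equations rather than being passed in as constants.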
the national electric sector cybersecurity organization resource ( nescor ) developed cyber - security failure scenarios with corresponding impact analyses .the wampac failure scenarios motivate the research in this paper include : a ) _ measurement data ( from pmus ) compromised due to pdc authentication compromise _ and b ) _ communications compromised between pmus and control center _ . specifically , we consider the following three types of attacks . 1 .* data integrity attacks * : an adversary attempts to corrupt the content of either the measurement or the control signals .a specific example of data integrity attacks are man - in - the - middle attacks , where the adversary intercepts the measurement signals and modifies them in transit . for dsethe pmu measurements can be modified and corrupted .* denial of service ( dos ) attack * : an attacker attempts to introduce a denial in communication of measurement .the communication of a sensor could be jammed by flooding the network with spurious packets . for dse the consequence can be that the updated measurements can not be sent to the control center .* replay attacks * : a special case of data integrity attacks , where the attacker replays a previous snapshot of a valid communication packet sequence that contains measurements in order to deceive the system . for dse the pmu measurements can be changed to be those in the past .[ sec : ckf1 ] unlike many estimation methods that are either computationally unmanageable or require special assumptions about the form of the process and observation models , kalman filter ( kf ) only utilizes the first two moments of the state ( mean and covariance ) in its update rule .it consists of two steps : in prediction step , the filter propagates the estimate from last time step to current time step ; in update step , the filter updates the estimate using collected measurements .kf was initially developed for linear systems while for power system dse the system equations and outputs have strong nonlinearity .thus variants of kf that can deal with nonlinear systems have been introduced , such as ekf and ukf . here , we briefly introduce ekf and ukf , and give more details for cubature kalman filter ( ckf ) .although ekf maintains the elegant and computationally efficient recursive update form of kf , it works well only in a ` mild ' nonlinear environment , owing it to the first - order taylor series approximation for nonlinear functions .it is sub - optimal and can easily lead to divergence .also , the linearization can be applied only if the jacobian matrix exists and calculating jacobian matrices can be difficult and error - prone .for dse , ekf has been discussed in . the unscented transformation ( ut ) is developed to address the deficiencies of linearization by providing a more direct and explicit mechanism for transforming mean and covariance information . based on ut , julier et al . the ukf as a derivative - free alternative to ekf .the gaussian distribution is represented by a set of deterministically chosen sample points called sigma points . 
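a brief sketch of the unscented transformation's sigma points and weights, in their standard scaled form, helps make the next remark concrete: with the common heuristic choice kappa = 3 - n, the weight of the central point is (3 - n)/3, which becomes increasingly negative as the state dimension n grows. the formulas below are the standard ones and are not taken from the paper.

```python
# Standard scaled unscented transformation: 2n + 1 deterministically chosen
# sigma points and their weights.  Note how the central weight turns negative
# as the state dimension grows (with kappa = 3 - n).
import numpy as np

def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=None):
    n = mean.size
    if kappa is None:
        kappa = 3.0 - n                           # a common heuristic choice
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * np.asarray(cov))
    pts = np.vstack([mean, mean + S.T, mean - S.T])   # 2n + 1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + 1.0 - alpha**2 + beta
    return pts, wm, wc

for n in (3, 30):                                 # small vs. large state dimension
    _, wm, _ = sigma_points(np.zeros(n), np.eye(n))
    print(n, wm[0])                               # central weight: 0.0 and -9.0
```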
the ukf has been applied to power system dse , for which no linearization or calculation of jacobian matrices is needed .however , for the sigma - points , the stem at the center ( the mean ) is highly significant as it carries more weight which is usually negative for high - dimensional systems .therefore , the ukf is supposed to encounter numerical instability troubles when used in high - dimensional problems .several techniques including the square - root unscented kalman filter ( sr - ukf ) have been proposed to solve this problem .[ sec : ckff ] ekf and ukf can suffer from the curse of dimensionality while becoming detrimental in high - dimensional state - space models of size twenty or more especially when there are high degree of nonlinearities in the equations that describe the state - space model . making use of the spherical - radial cubature rule , arasaratnam et al . propose ckf , which possesses an important virtue of mathematical rigor rooted in the third - degree spherical - radial cubature rule for numerically computing gaussian - weighted integrals . a nonlinear system ( without model uncertainty or attack vectors )can be written in discrete - time form as where , , and are states , inputs , and observed measurements at time step ; the estimated mean and estimated covariance of the estimation error are and ; and are vectors consisting of nonlinear state transition functions and measurement functions ; is the gaussian process noise at time step ; is the gaussian measurement noise at time step ; and and are covariance matrices of and .the procedure of ckf consists of a prediction step and an update step and is summarized in algorithms [ algokf1 ] and [ algokf2 ] .similar to other kalman filters , in prediction step ckf propagates the estimate from last time step to current time step and in update step it updates the estimate using collected measurements .similar to ukf , ckf also uses a weighted set of symmetric points to approximate the gaussian distribution .but the cubature - point set obtained in step 1 of algorithm [ algokf1 ] does not have a stem at the center and thus does not have the numerical instability problem of ukf discussed in section [ ukf ] .* draw * cubature points from the intersections of the dimensional unit sphere and the cartesian axes . where is a vector with dimension , whose element is one and the other elements are zero .* propagate * the cubature points .the matrix square root is the lower triangular cholesky factor * evaluate * the cubature points with dynamic model function * estimate * the predicted state mean * estimate * the predicted error covariance * draw * cubature points from the intersections of the dimensional unit sphere and the cartesian axes . *propagate * the cubature points * evaluate * cubature points using measurement function * estimate * the predicted measurement * estimate * the innovation covariance matrix * estimate * the cross - covariance matrix * calculate * the kalman gain * estimate * the updated state * estimate * the updated error covariance [ sec : observers ] dynamic observers have been thoroughly investigated for different classes of systems . 
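before moving to observers, a compact numerical sketch of one ckf prediction-update cycle is given below, following the structure of algorithms [ algokf1 ] and [ algokf2 ]; f and h stand for the process and measurement functions of the state-space model, and the toy system at the end is purely illustrative.

```python
# A compact sketch of one cubature Kalman filter step: 2n equally weighted
# cubature points on the intersections of a sphere with the Cartesian axes,
# propagated through the nonlinear process and measurement functions.
import numpy as np

def ckf_step(x, P, z, f, h, Q, R):
    n = x.size
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])      # 2n cubature points
    # --- prediction ---
    S = np.linalg.cholesky(P)
    X = x[:, None] + S @ xi
    Xp = np.column_stack([f(X[:, i]) for i in range(2 * n)])
    x_pred = Xp.mean(axis=1)
    P_pred = Xp @ Xp.T / (2 * n) - np.outer(x_pred, x_pred) + Q
    # --- update ---
    Sp = np.linalg.cholesky(P_pred)
    X = x_pred[:, None] + Sp @ xi
    Z = np.column_stack([h(X[:, i]) for i in range(2 * n)])
    z_pred = Z.mean(axis=1)
    P_zz = Z @ Z.T / (2 * n) - np.outer(z_pred, z_pred) + R    # innovation covariance
    P_xz = X @ Z.T / (2 * n) - np.outer(x_pred, z_pred)        # cross-covariance
    K = P_xz @ np.linalg.inv(P_zz)                             # Kalman gain
    return x_pred + K @ (z - z_pred), P_pred - K @ P_zz @ K.T

# toy usage on a mildly nonlinear scalar-output system
f = lambda x: np.array([x[0] + 0.1 * x[1], 0.95 * x[1]])
h = lambda x: np.array([np.sin(x[0])])
x, P = np.zeros(2), np.eye(2)
x, P = ckf_step(x, P, z=np.array([0.2]), f=f, h=h,
                Q=0.01 * np.eye(2), R=np.array([[0.04]]))
print(x)
```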
to mention a few ,they have been developed for linear time - invariant ( lti ) systems , nonlinear time - invariant ( nlti ) systems , lti and nlti systems with unknown inputs , sensor and actuator faults , stochastic dynamical systems , and hybrid systems .most observers utilize the plant s outputs and inputs to generate real - time estimates of the plant states , unknown inputs , and sensor faults .the cornerstone is the innovation function sometimes a simple gain matrix designed to nullify the effect of unknown inputs and faults . linear and nonlinear functional observers , sliding - mode observers , unknown input observers , and observers for fault detection and isolation are all examples on developed observers for different classes of systems , under different assumptions . in comparison with kf techniques , observers have not been utilized for power system dse .however , they inherently possess the theoretical , technical , and computational capabilities to perform good estimation of the power system s dynamic states . as for implementation , observers are simpler than kfs . for observers ,matrix gains are computed offline to guarantee the asymptotic stability of the estimation error . here, we present a recently developed observer in that can be applied for dse in power systems .this observer assumes that the nonlinear function in ( [ model ] ) satisfies the one - sided lipschitz condition .specifically , there exists such that in a region including the origin with respect to the state , there is where is the inner product . besides, the nonlinear function is also assumed to be quadratically inner - bounded as where and are real numbers .similar results related to the dynamics of multi - machine power systems established a similar quadratic bound on the nonlinear component ( see ) . to determine the constants ,a simple offline algorithm can be implemented and adding a possible bound on the unknown inputs and disturbances .determining those constants affects the design of the observer , and hence it is advised to choose conservative bound - constants on the nonlinear function .the nonlinearities present in the multi - machine power system are bounded ( e.g. , sines and cosines of angles , multiplications of bounded quantities such as voltages and currents ) . ] . following this assumption ,the dynamics of this observer can be written as where is a matrix gain determined by algorithm [ algoobs2 ] .* compute * constants and via an offline search algorithm * solve * this lmi for and : \left({\boldsymbol}p+\dfrac{\varphi\,\epsilon_2-\epsilon_1}{2}{\boldsymbol}i_n\right)^{\top } & -\epsilon_2 { \boldsymbol}i_n \end{array } \right ]< 0.\ ] ] * obtain * the observer design gain matrix : * simulate * observer design given in first , given the lipschitz constants , and , the linear matrix inequality in is solved for positive constants , and and a symmetric positive semi - definite matrix . 
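once the gain matrix has been obtained offline from the lmi step of algorithm [ algoobs2 ] (not reproduced here), simulating the observer amounts to integrating the estimate driven by the model and by an output-injection term. the sketch below uses forward-euler integration and, as discussed next, applies the nonlinear measurement function directly in the correction term; the gain and the toy system are assumptions for illustration only.

```python
# A minimal simulation sketch of the observer recursion once the gain L has
# been computed offline from the LMI design step (not shown here).
import numpy as np

def simulate_observer(f, h, L, x_hat0, u_seq, y_seq, dt):
    """Forward-Euler integration of x_hat_dot = f(x_hat, u) + L (y - h(x_hat))."""
    x_hat = np.array(x_hat0, dtype=float)
    traj = [x_hat.copy()]
    for u, y in zip(u_seq, y_seq):
        x_hat = x_hat + dt * (f(x_hat, u) + L @ (y - h(x_hat)))
        traj.append(x_hat.copy())
    return np.array(traj)

# toy usage: a stable 2-state system with a scalar nonlinear output
f = lambda x, u: np.array([-x[0] + x[1], -2.0 * x[1] + u])
h = lambda x: np.array([np.sin(x[0])])
L = np.array([[1.5], [0.5]])                 # assumed gain from the offline LMI
u_seq = np.zeros(200)
y_seq = [np.array([np.sin(0.3 * np.exp(-0.01 * k))]) for k in range(200)]
print(simulate_observer(f, h, L, [1.0, -1.0], u_seq, y_seq, dt=0.01)[-1])
```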
note that the observer design utilizes linearized measurement functions , which for power system dse can be obtained by linearizing the nonlinear functions in ( [ model ] ) . however , since the measurement functions are highly nonlinear , when performing the estimation we do not use ( [ equ : observertwo ] ) , as in , but choose to use the nonlinear measurement functions directly as [ sec : numericalresults ] here we test ekf , sr - ukf , ckf , and the nonlinear observer on the 16-machine 68-bus system extracted from the power system toolbox ( pst ) . for the dse we consider both unknown inputs to the system dynamics and cyber attacks against the measurements , including data integrity , dos , and replay attacks ; see section [ sec : at ] . all tests are performed on a 3.2-ghz intel(r ) core(tm ) i7 - 4790s desktop . the simulation data is generated as follows . 1 . the simulation data is generated by the model in section [ sec : multimachinedynamics ] with an additional ieee type dc1 excitation system and a simplified turbine - governor system for each generator . the sampling rate is 60 samples / s . 2 . in order to generate a dynamic response , a three - phase fault is applied at bus of branch and is cleared at the near and remote ends after and . all generators are equipped with pmus . the sampling rate of the measurements is set to 60 frames / s to mimic the pmu sampling rate . gaussian process noise is added , and the corresponding process noise covariance is set as a diagonal matrix whose diagonal entries are the square of 5% of the largest state changes . gaussian noise with variance is added to the pmu measurements . each entry of the unknown input coefficients is a random number that follows a normal distribution with zero mean and variance equal to the square of 50% of the largest state changes . note that this variance is much bigger than that of the process noise . the unknown input vector is set as a function of as where is the frequency of the given signals . the unknown inputs are manually chosen to represent different scenarios of inaccurate models and parameters without a clear , predetermined distribution or waveform . the kalman filters and the observer are set as follows . 1 . dse is performed on the post - contingency system over the time period . the reduced admittance matrix used is that of the pre - contingency state . data integrity , dos , and replay attacks , as discussed in section [ model cyber ] , are added to the pmu measurements ; more details are given in the ensuing sections . since the results for different runs of estimation using different methods are very similar , in the following sections we only give results for one estimation .
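the following small sketch shows one way the process - noise covariance and a sinusoidal unknown - input term of the kind described above might be constructed . the 5% and 50% scalings follow the text ; the sinusoidal waveform , its frequency , and all variable names are illustrative assumptions rather than the exact signals used in the experiments .

```python
import numpy as np

rng = np.random.default_rng(0)

def build_noise_and_unknown_input(delta_x_max, n_steps, dt, freq_hz=1.0):
    """delta_x_max : per-state largest state change over the simulation, shape (n,)
    returns the diagonal process-noise covariance Q and an unknown-input series w.
    the sinusoidal waveform and its frequency are illustrative choices."""
    # diagonal process-noise covariance: entries are (5% of largest change)^2
    Q = np.diag((0.05 * delta_x_max) ** 2)
    # unknown-input coefficients: zero mean, std equal to 50% of largest change
    coeff = rng.normal(0.0, 0.5 * delta_x_max)
    # unknown input as a function of time (here a sinusoid of frequency freq_hz)
    t = np.arange(n_steps) * dt
    w = coeff[:, None] * np.sin(2 * np.pi * freq_hz * t)[None, :]   # (n, n_steps)
    return Q, w
```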
a data integrity attack is added to the first eight measurements , i.e. , for . the compromised measurements are obtained by scaling the real measurements by and , respectively , for the first four and the last four measurements . the 2-norm of the relative error of the states , , for the different estimation methods in scenario 1 is shown in fig . [ sce1_norm ] . it is seen that the error norms for both ckf and the observer converge quickly , with the observer converging faster , while the value that ckf converges to is slightly smaller in magnitude . by contrast , ekf and sr - ukf do not perform well : their error norms do not converge to small values , which means that the estimates are erroneous . we also show the state estimation for generator 1 in fig . [ sce1_states ] . it is seen that the observer and ckf converge rapidly , while the ekf fails to converge after 10 seconds . the estimation for sr - ukf is shown separately in fig . [ sce1_states_ukf ] because its estimated states are far away from the real states . note that the real system dynamics are stable , while the sr - ukf estimation , misled by the data integrity attack , indicates that the system is unstable . the real , compromised , and estimated measurements for the first measurement , , are shown in fig . [ sce1_meas ] . for the observer and ckf , the estimated measurements are very close to the actual ones . the results for ekf show that there are some differences between the estimates and the real values , while the estimates generated by sr - ukf are close to the compromised measurements , indicating that sr - ukf is completely misled by the cyber attacks . in the second scenario , the first eight measurements are kept unchanged for , which are the , , , and measurements , respectively , for ekf , sr - ukf , ckf , and the observer . the observer and ckf clearly outperform the other two methods . in the third scenario , a replay attack is added on the first eight measurements , for which there is for , which are the , , , and measurements , respectively , for ekf , sr - ukf , ckf , and the observer . for the above three scenarios , the average time for estimation by the different methods is listed in table [ time ] . it is seen that ekf and the observer are more efficient than the other methods , while ckf is the least efficient . note that the times reported here are from matlab implementations ; they can be greatly reduced by more efficient , e.g. c - based , implementations . table [ time ] : time for performing estimation for 10 seconds .
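a minimal sketch of how the three attack types and the error metric used in these scenarios could be reproduced is given below . the attacked channels , time windows , and the scaling factor are placeholders , and the relative error is computed as the 2 - norm of the estimation error divided by the 2 - norm of the true state at each time step , which is one reasonable reading of the metric described above .

```python
import numpy as np

def apply_attack(z, kind, channels, start, stop, scale=1.2):
    """Corrupt a pmu measurement series z of shape (n_steps, n_meas).

    kind     : 'integrity' (scale true values), 'dos' (hold last good value),
               or 'replay' (replay an earlier window of the same length)
    channels : list of attacked measurement indices
    start, stop : first and last+1 attacked time steps (start >= stop - start
                  is assumed so a replay window exists)
    scale    : scaling factor for the data integrity attack (illustrative)
    """
    z_att = z.copy()
    for c in channels:
        if kind == 'integrity':
            z_att[start:stop, c] = scale * z[start:stop, c]
        elif kind == 'dos':
            z_att[start:stop, c] = z[start - 1, c]            # frozen value
        elif kind == 'replay':
            width = stop - start
            z_att[start:stop, c] = z[start - width:start, c]  # earlier segment
    return z_att

def relative_error_norm(x_hat, x_true):
    """2-norm of the relative state estimation error at each time step."""
    return np.linalg.norm(x_hat - x_true, axis=1) / np.linalg.norm(x_true, axis=1)
```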
here , various functionalities of the dse methods , and their strengths and weaknesses relative to each functionality , are presented based on ( a ) their technical and theoretical capabilities and ( b ) the experimental results in section [ sec : numericalresults ] . * _ nonlinearities in the dynamics : _ ukf ( sr - ukf ) , ckf , and the observer in section [ sec : observers ] all work on nonlinear systems , while ekf assumes linearized system dynamics . besides , the presented observer uses linearized measurement functions for design but directly uses the nonlinear measurement functions for estimation . * _ solution feasibility : _ the main principle governing the design of most observers is finding a matrix gain that satisfies a certain condition , such as a solution to a matrix inequality . the state estimates are guaranteed to converge to the actual ones if a solution to the lmi exists . in contrast , kf methods do not have such a requirement . * _ unknown initial conditions : _ most observer designs are independent of knowledge of the initial conditions of the system . however , if the estimator's initial condition is chosen to be considerably different from the actual one , the estimates from a kf might not converge to the actual states . * _ robustness to model uncertainty and cyber attacks : _ the observer in section [ sec : observers ] and the ckf outperform ukf ( sr - ukf ) and ekf in state estimation under model uncertainty and attack vectors . * _ tolerance to process and measurement noise : _ the observer in section [ sec : observers ] is tolerant to measurement and process noise similar to that assumed for kfs ; by design , kf techniques are developed to deal with such noise . * _ convergence guarantees : _ observers have theoretical guarantees for convergence , while for kfs there is no strict proof that the estimates converge to the actual states . * _ numerical stability : _ observers do not have numerical stability problems , while the classic ukf can encounter numerical instability because the estimation error covariance matrix is not always guaranteed to be positive semi - definite . * _ tolerance to parametric uncertainty : _ kf - based methods can tolerate inaccurate parameters to some extent . dynamic observers deal with parametric uncertainty in the sense that all uncertainties can be lumped into the unknown input component in the state dynamics ( ) . * _ computational complexity : _ the ckf , ukf ( sr - ukf ) , and ekf all have computational complexity of . since the observers' matrix gains are obtained offline by solving lmis , observers are easier to implement , as only the dynamics are needed in the estimation . [ sec : conc ] in this paper , we discuss different dse methods by presenting an overview of state - of - the - art estimation techniques and developing alternatives , including the cubature kalman filter and dynamic observers , to address major limitations of existing methods such as intolerance to inaccurate system models and malicious cyber attacks . the proposed methods are tested on a 16-machine 68-bus system under significant model uncertainty and cyber attacks against the synchrophasor measurements . it is shown that the ckf and the presented observer are more robust to model uncertainty and cyber attacks . based on both the theoretical , technical capabilities and the experimental results , we summarize the strengths and weaknesses of the different estimation techniques , especially for power system dse .
kalman filters and observers are two main classes of dynamic state estimation ( dse ) routines . power system dse has been implemented by various kalman filters , such as the extended kalman filter ( ekf ) and the unscented kalman filter ( ukf ) . in this paper , we discuss two challenges for an effective power system dse : ( a ) model uncertainty and ( b ) potential cyber attacks . to address this , the cubature kalman filter ( ckf ) and a nonlinear observer are introduced and implemented . various kalman filters and the observer are then tested on the 16-machine , 68-bus system given realistic scenarios under model uncertainty and different types of cyber attacks against synchrophasor measurements . it is shown that ckf and the observer are more robust to model uncertainty and cyber attacks than their counterparts . based on the tests , a thorough qualitative comparison is also performed for kalman filter routines and observers . cubature kalman filter , cyber attack , dynamic state estimation , extended kalman filter , model uncertainty , observer , phasor measurement unit ( pmu ) , unscented kalman filter .
synchronization describes the adjustment of rhythms of self - sustained oscillators due to their interaction .such collective behavior has important ramifications in myriad natural and laboratory systems ranging from conservation and pathogen control in ecology to applications throughout physics , chemistry , and engineering .numerous studies have considered the effects of coupling on synchrony using model systems such as kuramoto oscillators . in a variety of real - world systems , including sets of neurons and ecological populations , it is also possible for synchronization to be induced by noise . in many such applications, one needs to distinguish between extrinsic noise common to all oscillators ( which is the subject of this paper ) and intrinsic noise , which affects each oscillator separately .consequently , studying oscillator synchrony can also give information about the sources of system noise .nakao et al . recently developed a theoretical framework for noise - induced synchronization using phase reduction and averaging methods on an ensemble of uncoupled identical oscillators .they demonstrated that noise alone is sufficient to synchronize a population of identical limit - cycle oscillators subject to independent noises , and similar ideas have now been applied to a variety of applications .papers such as characterized a system s synchrony predominantly by considering the probability distribution function ( pdf ) of phase differences between pairs of oscillators .this can give a good qualitative representation of ensemble dynamics , but it is unclear how to subsequently obtain quantitative measurements of aggregate synchrony .it is therefore desirable to devise new order parameters whose properties can be studied analytically ( at least for model systems ) .investigations of the combined effects of common noise and coupling have typically taken the form of studying a pdf for a pair of coupled oscillators in a specific application .recently , however , nagai and kori considered the effect of a common noise source in a large ensemble of globally coupled , nonidentical oscillators .they derived some analytical results as the number of oscillators by considering a nonlinear partial differential equation ( pde ) describing the density of the oscillators and applying the ott - antonsen ( oa ) ansatz . in the present paper, we consider the interaction between noise and coupling .we first suppose that each oscillator s natural frequency ( ) is drawn from a unimodal distribution function . for concreteness , we choose a generalized cauchy distribution whose width is characterized by the parameter .the case yields the cauchy - lorentz distribution , and is the mean frequency .we investigate the effects on synchrony of varying the distribution width .taking the limit yields the case of identical oscillators ; by setting the coupling strength to , our setup makes it possible to answer the hitherto unsolved question of whether common noise alone is sufficient to synchronize nonidentical oscillators .we then consider noise introduced through a general phase - sensitivity function , . 
which we express in terms of a fourier series . when only the first fourier mode is present , we obtain good agreement between theory and simulations . however , our method breaks down when higher fourier modes dominate , as clustering effects imply that common noise can cause a decrease in our measure of synchrony . nevertheless , we show that such noise can reinforce clustering caused by different forms of coupling . finally , we consider noise - induced synchrony in antiferromagnetically coupled systems , in which pairs of oscillators are negatively coupled to each other when they belong to different families but positively coupled to each other when they belong to the same family . we start by considering globally coupled phase oscillators subject to a common external force : where and are ( respectively ) the phase and natural frequency of the oscillator , is the coupling strength , is a common external force , the parameter indicates the strength of the noise , and the _ phase - sensitivity function _ represents how the phase of each oscillator is changed by noise . as in ref . , we will later assume that is gaussian white noise , but we treat it as a general time - dependent function for now . as mentioned above , indicates how the phase of each oscillator is affected by noise . such a phase - sensitivity function can also be used for deterministic perturbations ( e.g. , periodic forcing ) . in the absence of coupling , one can envision that equation ( [ eq:1 ] ) is a phase - reduced description of an -dimensional dynamical system that exhibits limit - cycle oscillations and which is then perturbed by extrinsic noise : one can reduce ( [ eq:1.5 ] ) to a phase - oscillator system of the form , where is the phase resetting curve ( prc ) . in this case , . we study the distribution of phases in the limit . first , we define the ( complex ) kuramoto order parameter . the magnitude characterizes the degree of synchrony in the system , and the phase gives the mean phase of the oscillators . from equation ( [ eq:1 ] ) , it then follows that the instantaneous velocity of an oscillator with frequency at position is . combined with the normalization condition , the conservation of oscillators of frequency then implies that the phase distribution satisfies the nonlinear fokker - planck equation ( fpe ) \[ \frac{\partial f}{\partial t}+\frac{\partial}{\partial\theta}\left( v f \right)=0\,, \label{eq:2} \] where $f$ is the phase distribution and $v$ is the instantaneous velocity defined above . for more details about the derivation of this evolution equation , see ref . . to obtain an equation for the order parameter , we follow the approach of nagai and kori and use the oa ansatz that the phase distribution is of the form , where is a complex - valued function . this form of makes it possible to perform contour integration and obtain . see ref . for a discussion about multimodal . we express the phase - sensitivity function in terms of its fourier series , \[ c_0+\sum_{m=1}^{\infty}\left[ c_m \exp(im\theta)+c_m^{*}\exp(-im\theta)\right]\,, \] where . we substitute the series expansions ( [ expand1 ] ) and ( [ expand2 ] ) into ( [ eq:2 ] ) to obtain . to study the magnitude of , we let . figure caption : results for and coupling between oscillators of the form . we use the parameter values and . the solid curves are from analytical calculations , and the circles are from direct numerical simulations . the insets show snapshots of oscillators for ; the left panel is for , and the right panel is for .
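the following sketch simulates the ensemble described above with euler - maruyama , using a cauchy - lorentz frequency distribution and a phase - sensitivity function containing only the first fourier mode ( z(theta) = sin theta ) . the parameter names ( coupling k , noise strength , width gamma , center frequency omega0 ) are assumptions , since the corresponding symbols are not shown explicitly in the text .

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ensemble(n_osc=1000, k=0.5, noise=0.3, gamma=0.1, omega0=0.0,
                      dt=0.01, n_steps=20000):
    """Globally coupled phase oscillators driven by common gaussian white noise
    through the phase-sensitivity function sin(theta) (first fourier mode only).

    natural frequencies are drawn from a cauchy-lorentz distribution of
    half-width gamma centred at omega0; returns the time series of the
    kuramoto order parameter magnitude r."""
    omega = omega0 + gamma * np.tan(np.pi * (rng.random(n_osc) - 0.5))
    theta = 2 * np.pi * rng.random(n_osc)
    r_hist = []
    for _ in range(n_steps):
        z = np.exp(1j * theta).mean()                    # kuramoto order parameter
        r_hist.append(abs(z))
        coupling = k * np.imag(z * np.exp(-1j * theta))  # k * r * sin(psi - theta)
        xi = rng.normal(0.0, np.sqrt(dt))                # common noise increment
        theta = theta + (omega + coupling) * dt + noise * np.sin(theta) * xi
    return np.array(r_hist)
```

the long - time average of the returned r can then be compared against the analytical curves described in the figure above .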
we now consider interactions of noise and coupling for oscillator systems with _ antiferromagnetic coupling _ , in which there are two groups of oscillators with positive coupling between oscillators in the same group but negative coupling between oscillators in different groups . we label the two groups as " odd " and " even " oscillators . the temporal evolution of the phase of the oscillator is where if is even and if it is odd . we show that the oscillators form two distinct clusters when in the absence of noise ( i.e. , for ) . we define an _ antiferromagnetic order parameter _ and demonstrate that the dependence of on and is analogous to what occurs in the conventional kuramoto model . by considering the odd oscillators and the even oscillators as separate groups , we define the complex order parameters for the odd and even oscillators , respectively ( see also ref . ) . the antiferromagnetic order parameter can then be expressed as . as with the usual global , equally weighted , sinusoidal coupling in the kuramoto model ( which we call _ ferromagnetic coupling _ ) , we let the number of oscillators and examine continuum oscillator densities . following the analysis for the kuramoto model in ref . , the continuity equations for the densities of the two groups of oscillators take the form of a pair of nonlinear fpes of the same continuity form as ( [ eq:2 ] ) . one can then apply kuramoto's original analysis to this system . alternatively , one can proceed as in the ferromagnetic case and apply the oa ansatz separately to each family of oscillators . one thereby obtains the coupled ordinary differential equations ( odes ) in ( [ nineteen ] ) . taking the sum and difference of the two equations in ( [ nineteen ] ) yields . in the case of ferromagnetic coupling , we let . if one were to proceed analogously for antiferromagnetic coupling and define , one would obtain four coupled sdes for and , and it is then difficult to make analytical progress . however , we seek to quantify the aggregate level of synchrony only in the absence of noise . in this case , after initial transients , steady states satisfy and , where is the phase difference between the two groups . ( we cannot use this method in the presence of noise , as noise breaks the symmetry . ) equations ( [ eq:14 ] ) then simplify to and \[ \frac{da}{dt}\cos\left(\frac{\psi}{2}\right)-a\sin\left(\frac{\psi}{2}\right)\frac{d\psi}{dt}=-2\gamma a\cos\left(\frac{\psi}{2}\right)+\frac{1}{2}ka^2\left[\cos\left(\frac{3\psi}{2}\right)-\cos\left(\frac{\psi}{2}\right)\right]\,. \] this , in turn , yields . by setting , we seek equilibria of the system . when , there is an unstable equilibrium at and a stable equilibrium at . when , this equilibrium point is unstable ; additionally , there is an unstable equilibrium at and a stable equilibrium at . in practice , this implies that , so the threshold for observing synchrony is ( just as in the kuramoto model ) . similarly , the antiferromagnetic order parameter has a stable steady state at , which has the same dependence on as the kuramoto order parameter does in the traditional kuramoto model . we plot the antiferromagnetic order parameter versus the coupling strength in fig . [ fig:7 ] and obtain excellent agreement with direct numerical simulations of the coupled oscillator system . figure caption : versus coupling strength for width parameter in the absence of noise ( i.e. , for ) . the solid curve is the analytical steady state , and circles are from direct numerical simulations .
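a small sketch of the antiferromagnetically coupled ensemble is given below . the definition used here for the antiferromagnetic order parameter ( half the modulus of the difference between the two family order parameters ) is an illustrative assumption consistent with the two - cluster description above , not necessarily the exact definition used in the paper ; the mean - field reduction assumes equally sized families .

```python
import numpy as np

rng = np.random.default_rng(2)

def antiferromagnetic_sim(n_osc=1000, k=1.0, gamma=0.05, dt=0.01, n_steps=50000):
    """Two-family ('even'/'odd') phase oscillators: +k coupling within a family,
    -k coupling between families, natural frequencies drawn from a
    cauchy-lorentz distribution of half-width gamma.  returns the assumed
    antiferromagnetic order parameter after n_steps euler steps."""
    even = np.arange(n_osc) % 2 == 0
    omega = gamma * np.tan(np.pi * (rng.random(n_osc) - 0.5))
    theta = 2 * np.pi * rng.random(n_osc)
    for _ in range(n_steps):
        z_even = np.exp(1j * theta[even]).mean()
        z_odd = np.exp(1j * theta[~even]).mean()
        # mean-field drive: + own-family order parameter, - other-family one
        z_own = np.where(even, z_even, z_odd)
        z_other = np.where(even, z_odd, z_even)
        drive = 0.5 * k * np.imag((z_own - z_other) * np.exp(-1j * theta))
        theta = theta + (omega + drive) * dt
    z_even = np.exp(1j * theta[even]).mean()
    z_odd = np.exp(1j * theta[~even]).mean()
    return 0.5 * abs(z_even - z_odd)
```

sweeping k in such a simulation can be used to reproduce the threshold behaviour shown in fig . [ fig:7 ] .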
we now consider the effect of correlated noise on the system ( [ eq : af1 ] ) . as we have seen previously , the effect of noise when the first fourier mode of dominates is to synchronize the oscillators ( i.e. , to form a single cluster ) . in fig . [ fig:6 ] , we explore this using direct numerical simulations . in agreement with our intuition , the noise and coupling have contrasting effects . accordingly , the antiferromagnetic synchrony decreases with increasing noise strength ( see fig . [ fig:6]a ) . as shown in the inset , the noise causes the system to " jump " between states with low and high values of . by contrast , as shown in fig . [ fig:6]b , decreases with increasing natural frequency distribution width parameter . additionally , the decrease in synchrony , , correlates positively with the increase in the traditional measure of synchrony ( see fig . [ fig:6]c ) . ( the pearson correlation coefficient between and is . ) there is no such relationship in the case in which is increased , as remains small and approximately constant ( see fig . [ fig:6]d ) . figure caption : ( a ) versus noise strength for and ; in the inset , we show a sample realization for between times and . ( b ) antiferromagnetic synchrony versus for and ; in the inset , we show a sample realization for between times and . ( c ) circles give the decrease of antiferromagnetic synchrony ( ) , and crosses give the square of the usual kuramoto measure of synchrony . ( d ) same as panel ( c ) , except that the horizontal axis is the natural frequency distribution width parameter rather than . each data point in the main panels represents the temporal average of ( [ eq : af1 ] ) with oscillators .
we have examined noise - induced synchronization , desynchronization , and clustering in globally coupled , nonidentical oscillators . we demonstrated that noise alone is sufficient to synchronize nonidentical oscillators . however , the extent to which common noise induces synchronization depends on the magnitude of the coefficient of the first fourier mode . in particular , the domination of higher fourier modes can disrupt synchrony by causing clustering . we then considered higher - order coupling and showed that the cluster synchrony generated by such coupling is reinforced by noise if the phase - sensitivity function consists of fourier modes of the same order as the coupling . one obvious avenue for future work is the development of a theoretical framework that would make it possible to consider multiple harmonics of both the coupling function and the phase - sensitivity function . it would also be interesting to consider generalizations of antiferromagnetic coupling , such as the variant studied in ref . . one could also examine the case of uncorrelated noise , which has been studied extensively via an fpe whose right - hand side is the diffusion term $\frac{\partial^{2}f}{\partial\theta^{2}}$ . however , proceeding using fourier expansions like the ones discussed in this paper could perhaps yield a good estimate of the effect of uncorrelated noise on such systems . because of the second derivative in this equation , the oa ansatz no longer applies , and a generalized or alternative theoretical framework needs to be developed . the present work is relevant for applications in many disciplines . for example , examining the synchronization of oscillators of different frequencies might be helpful for studying spike - time reliability in neurons . one could examine the interplay of antiferromagnetic coupling and noise - induced synchrony using electronic circuits such as those studied experimentally in , and our original motivation for studying antiferromagnetic synchrony arose from experiments on nanomechanical oscillators . yml was funded in part by a grant from kaust . we thank sherry chen for early work on antiferromagnetic synchronization in summer 2007 and mike cross for his collaboration on that precursor project . we thank e. m. bollt and j. sun for helpful comments . a. pikovsky and m. rosenblum , scholarpedia , 2(12):1459 , 2007 ; a. pikovsky , m. rosenblum , and j. kurths , synchronization : a universal concept in nonlinear sciences , cambridge university press , uk , 2003 . b. t. grenfell , k. wilson , b. f. finkenstädt , t. n. coulson , s. murray , s. d. albon , j. m. pemberton , t. h. clutton - brock , and m. j. crawley , nature , 394:674 , 1998 ; e. mccauley , r. m. nisbet , w. w. murdoch , a. m. de roos , and w. s. c. gurney , nature , 402:653 , 1999 .
we study ensembles of globally coupled , nonidentical phase oscillators subject to correlated noise , and we identify several important factors that cause noise and coupling to synchronize or desynchronize a system . by introducing noise in various ways , we find a novel estimate for the onset of synchrony of a system in terms of the coupling strength , noise strength , and width of the frequency distribution of its natural oscillations . we also demonstrate that noise alone is sufficient to synchronize nonidentical oscillators . however , this synchrony depends on the first fourier mode of a phase - sensitivity function , through which we introduce common noise into the system . we show that higher fourier modes can cause desynchronization due to clustering effects , and that this can reinforce clustering caused by different forms of coupling . finally , we discuss the effects of noise on an ensemble in which antiferromagnetic coupling causes oscillators to form two clusters in the absence of noise .