Constrained Submodular Maximization via a Non-Symmetric Technique

Niv Buchbinder*        Moran Feldman†

Abstract

The study of combinatorial optimization problems with a submodular objective has attracted much attention in recent years. Such problems are important in both theory and practice because their objective functions are very general. Obtaining further improvements for many submodular maximization problems boils down to finding better algorithms for optimizing a relaxation of them known as the multilinear extension. In this work we present an algorithm for optimizing the multilinear relaxation whose guarantee improves over the guarantee of the best previous algorithm, which was given by Ene and Nguyen. Moreover, our algorithm is based on a new technique which is arguably simpler and more natural for the problem at hand. In a nutshell, previous algorithms for this problem rely on symmetry properties which are natural only in the absence of a constraint. Our technique avoids the need to resort to such properties, and thus seems to be a better fit for constrained problems.

* Department of Statistics and Operations Research, School of Mathematical Sciences, Tel Aviv University, Israel.
† Department of Mathematics and Computer Science, The Open University of Israel. E-mail: moranfe.

Introduction

The study of combinatorial optimization problems with a submodular objective has attracted much attention in recent years. Such problems are important in both theory and practice because their objective functions are very general. Submodular functions generalize, for example, cut functions of graphs and directed graphs, the mutual information function, matroid weighted rank functions and more. Specifically, from a theoretical perspective, many problems in combinatorial optimization are in fact submodular maximization problems, including generalized assignment and facility location. From a practical perspective, submodular maximization problems have found uses in social networks, vision, machine learning and many other areas; the reader is referred, for example, to a comprehensive survey by Bach.

The techniques used by approximation algorithms for submodular maximization problems usually fall into one of two main approaches. The first approach is combinatorial in nature, and is mostly based on local search techniques and greedy rules. This approach has been used as early as the late 1970s for maximizing a monotone submodular function subject to a matroid constraint (some of these works apply only to specific types of matroids). Later works used this approach to handle also problems with non-monotone submodular objective functions and different constraints, yielding in some cases optimal algorithms. However, algorithms based on this approach tend to be highly tailored for the specific structure of the problem at hand, making extensions quite difficult.

The second approach used by approximation algorithms for submodular maximization problems overcomes the above obstacle. This approach resembles a common paradigm for designing approximation algorithms, and involves two steps. In the first step, a fractional solution is found for a relaxation of the problem known as the multilinear relaxation. In the second step, the fractional solution is rounded to obtain an integral one while incurring a bounded loss in the objective. This approach has been used to obtain improved approximations for many problems. Various techniques have been developed for rounding the fractional solution; these techniques tend to be quite flexible and usually extend to many related problems. In particular, the contention resolution schemes framework
of Chekuri, Vondrák and Zenklusen yields a rounding procedure for every constraint which can be presented as the intersection of a few basic constraints, such as knapsack constraints, matroid constraints and matching constraints. Given this wealth of rounding procedures, obtaining further improvements for many important submodular maximization problems, such as maximizing a submodular function subject to a matroid or knapsack constraint, boils down to obtaining improved algorithms for finding a good fractional solution, i.e., optimizing the multilinear relaxation.

Maximizing the Multilinear Relaxation

At this point we would like to present some terms more formally. A submodular function is a set function $f : 2^{\mathcal{N}} \to \mathbb{R}$ obeying
$$f(A) + f(B) \geq f(A \cup B) + f(A \cap B)$$
for any sets $A, B \subseteq \mathcal{N}$. A submodular maximization problem is the problem of finding a set $S \subseteq \mathcal{N}$ maximizing $f$ subject to some constraint. Formally, let $\mathcal{I}$ be the set of subsets of $\mathcal{N}$ obeying the constraint; then we are interested in the following problem:
$$\max \{ f(A) : A \in \mathcal{I} \} .$$
A relaxation of the above problem replaces $\mathcal{I}$ with a polytope $\mathcal{P} \subseteq [0, 1]^{\mathcal{N}}$ containing the characteristic vectors of all the sets of $\mathcal{I}$. In addition, a relaxation must replace the function $f$ with an extension function $F : [0, 1]^{\mathcal{N}} \to \mathbb{R}$. Thus, a relaxation is a fractional problem of the following format:
$$\max \{ F(x) : x \in \mathcal{P} \} .$$
Defining the right extension function $F$ for the relaxation is a challenge, as, unlike the linear case, there is no single natural candidate. The objective that turned out to be useful, and is thus used by the multilinear relaxation, is known as the multilinear extension. The value $F(x)$ of this extension for any vector $x \in [0, 1]^{\mathcal{N}}$ is defined as the expected value of $f$ over a random subset $R(x) \subseteq \mathcal{N}$ containing every element $u \in \mathcal{N}$ independently with probability $x_u$. Formally, for every $x \in [0, 1]^{\mathcal{N}}$,
$$F(x) = \mathbb{E}[f(R(x))] = \sum_{S \subseteq \mathcal{N}} f(S) \prod_{u \in S} x_u \prod_{u \notin S} (1 - x_u) .$$

The first algorithm for optimizing the multilinear relaxation was the continuous greedy algorithm designed by Calinescu et al., which applies when the submodular function $f$ is non-negative and monotone and $\mathcal{P}$ is solvable. This algorithm finds a vector $x \in \mathcal{P}$ such that $\mathbb{E}[F(x)] \geq (1 - 1/e - o(1)) \cdot f(OPT)$, where $OPT$ is the set maximizing $f$ among all sets whose characteristic vectors belong to $\mathcal{P}$. Interestingly, the guarantee of continuous greedy is optimal for monotone functions, even when $\mathcal{P}$ is a simple cardinality constraint.

Optimizing the multilinear relaxation when $f$ is not necessarily monotone proved to be a more challenging task. Initially, several algorithms for specific polytopes were suggested. Later on, improved general algorithms were designed that work whenever $f$ is non-negative and $\mathcal{P}$ is down-closed and solvable. Designing algorithms that work in this general setting is highly important, as many natural constraints fall into this framework. Moreover, the restriction of the algorithms to down-closed polytopes is unavoidable, as it was proved that no algorithm can produce a vector $x \in \mathcal{P}$ obeying $\mathbb{E}[F(x)] \geq c \cdot f(OPT)$ for any constant $c > 0$ when $\mathcal{P}$ is solvable but not down-closed. Up until recently, the best algorithm for this general setting was called measured continuous greedy; it is guaranteed to produce a vector $x \in \mathcal{P}$ obeying $\mathbb{E}[F(x)] \geq (1/e - o(1)) \cdot f(OPT)$. The natural feel of the guarantee of measured continuous greedy, and the fact that it was not improved for a few years, made some people suspect that it is optimal. Recently, evidence against this conjecture was given by an algorithm for the special case of a cardinality constraint with an improved approximation guarantee. Even more recently, Ene and Nguyen shattered the conjecture completely by extending that technique; they showed that one can get an improved approximation guarantee for every down-closed and solvable polytope $\mathcal{P}$. On the inapproximability side, Oveis Gharan and Vondrák
proved that no algorithm can achieve approximation better than even when p is the matroid polytope of a partition matroid closing the gap between the best algorithm and inapproximability result for this fundamental problem remains an important open problem a a a upper set function f r is monotone if f a f b for every a b n polytope is solvable if one can optimize linear functions over it polytope p n is if y p implies that every vector x n which is bounded by y must belong to p as well our contribution our main contribution is an algorithm with an improved guarantee for maximizing the multilinear relaxation theorem there exists a polynomial time algorithm that given a submodular function f and a solvable polytope p n finds a vector x p obeying f x f op t where op t arg max f s p and f is the multilinear extension of f admittedly the improvement in the guarantee obtained by our algorithm compared to the guarantee of is relatively small however the technique underlying our algorithm is very different and arguably much cleaner than the technique underlying the previous results improving over the natural guarantee of moreover we believe our technique is more natural for the problem at hand and thus is likely to yield further improvements in the future in the rest of this section we explain the intuition on which we base this belief the results of are based on the observation that the guarantee of measured continuous greedy improves when the algorithm manages to increase all the coordinates of its solution at a slow rate based on this observation run an instance of measured continuous greedy or a discretized version of it and force it to raise the coordinates slowly if this extra restriction does not affect the behavior of the algorithm significantly then it produces a solution with an improved guarantee otherwise argue that the point in which the extra restriction affect the behavior of measured continuous greedy reveals a vector x p which contains a significant fraction of op t once x is available one can use the technique of unconstrained submodular maximization described by that has higher approximation guarantee of to extract from x a vector y x of large value the of p guarantees that y belongs to p as well unfortunately the use of the unconstrained submodular maximization technique in the above approach is very problematic for two reasons first this technique is based on ideas that are very different from the ideas used by the analysis of measured continuous greedy this makes the combination of the two quite involved second on a more abstract level the unconstrained submodular maximization technique is based on a symmetry which exists in the absence of a constraint since s f n s is and submodular whenever f has these properties however this symmetry breaks when a constraint is introduced and thus the unconstrained submodular maximization technique does not seem to be a good fit for a constrained problem our algorithm replaces the symmetry based unconstrained submodular maximization technique with a local search algorithm more specifically it first executes the local search algorithm if the output of the local search algorithm is good then our algorithm simply returns it otherwise we observe that the poor value of the output of the local search algorithm guarantees that it is also far from op t in some sense our algorithm then uses this far from op t solution to guide an instance of measured continuous greedy and help it avoid bad decisions as it turns out the analysis of measured 
continuous greedy and the local search algorithm use similar ideas and notions thus the two algorithms combine quite cleanly as can be observed from section preliminaries our analysis uses another useful extension of submodular functions given a submodular function f r its extension is a function n r defined by x z f x where x u n xu the extension has many important applications see however in this paper we only use it in the context of the following known result which is an immediate corollary of the work of lemma given the multilinear extension f and the extension of a submodular function f r it holds that f x x for every vector x n we now define some additional notation that we use given a set s n and an element u n we denote by and the characteristic vectors of the sets s and u respectively and by s u and s u the sets s u and s u respectively given two vectors x y n we denote by x y x y and x y the maximum minimum and multiplication respectively of x and finally given a vector x n and an element u n we denote by f x the derivative of f with respect to u at the point x the following observation gives a simple formula for f x this observation holds because f is a multilinear function observation let f x be the multilinear extension of a submodular function f then for every u n and x n xu f x f x f x in the rest of the paper we assume without loss of generality that p for every element u n and that n is larger than any given constant the first assumption is justified by the observation that every element u violating this assumption can be safely removed from n since it can not belong to op t the second assumption is justified by the observation that it is possible to find a set s obeying p and f s f op t in constant time when n is a constant another issue that needs to be kept in mind is the representation of submodular functions we are interested in algorithms whose time complexity is polynomial in however the representation of the submodular function f might be exponential in this size thus we can not assume that the representation of f is given as part of the input for the algorithm the standard way to bypass this difficulty is to assume that the algorithm has access to f through an oracle we assume the standard value oracle that is used in most of the previous works on submodular maximization this oracle returns given any subset s n the value f s main algorithm in this section we present the algorithm used to prove theorem this algorithm uses two components the first component is a close variant of a fractional local search algorithm suggested by chekuri et al which has the following properties more formally for every element u n x y u max xu yu x y u min xu yu and x y u xu yu lemma follows from chekuri et al there exists a polynomial time algorithm which returns vector x p such that with high probability for every vector y p f x f x y f x y o f op t proof let m max f u f n u u n and let a be an arbitrary constant larger than then lemmata and of chekuri et al imply that with high probability the fractional local search algorithm they suggest terminates in polynomial time and outputs a vector x p obeying for every vector y p x f x y f x y moreover the output vector x is in p whenever the fractional local search algorithm terminates our assumption that p for every element u n implies by submodularity that f s n f op t for every set s n since m is the maximum over values of f we get also m n f op t using this observation and plugging a we get that there exists an algorithm which with 
high probability terminates after t n operations for some polynomial function t n t for every vector y p and outputs a vector x p obeying x f x y f x y op n moreover the output vector x belongs to p whenever the algorithm terminates to complete the lemma we consider a procedure that executes the above algorithm for t n operations and return its output if it terminates within this number of operations if the algorithm fails to terminate within this number of operations which happens with a diminishing probability then the procedure simply returns which always belongs to p since p is one can observe that this procedure has all the properties guaranteed by the lemma the second component of our algorithm is a new auxiliary algorithm which we present and analyze in section this auxiliary algorithm is the main technical contribution of this paper and its guarantee is given by the following theorem theorem there exists a polynomial time algorithm that given a vector z n and a value ts outputs a vector x p obeying e f x ets ts o f op t f z t ts f z t our main algorithm executes the algorithms suggested by lemma followed by the algorithm suggested by theorem notice that the second of these algorithms has two parameters in addition to f and p a parameter z which is set to be the output of the first algorithm and a parameter ts which is set to be a constant to be determined later after the two above algorithms terminate our algorithm returns the output of the first algorithm with probability p for a constant p to be determined later and with the remaining probability it returns the output of the second a formal description of our algorithm is given as algorithm observe that lemma and theorem imply together that algorithm is a polynomial time algorithm which always outputs a vector in p to prove theorem it remains to analyze the quality of the solution produced by algorithm clearly it is always better to return the better of the two solution instead of randomizing between them however doing so will require the algorithm to either have an oracle access to f or estimate the values of the solutions using sampling the later can be done using standard for the sake of simplicity we chose here the easier to analyze approach of randomizing between the two solutions algorithm main algorithm f p execute the algorithm suggested by lemma and let p be its output execute the algorithm suggested by theorem with z and let be its output return with probability p the solution and the solution otherwise lemma when its parameters are set to ts and p algorithm produces a solution whose expected value is at least f op t proof let e be the event that the output of the algorithm suggested by lemma satisfies inequality since e is a high probability event it is enough to prove that conditioned on e algorithm produces a solution whose expected value is at least c f op t for some constant c the rest of the proof of the lemma is devoted to proving the last claim throughout it everything is implicitly conditioned on as we are conditioning on e we can plug y t and respectively y t into inequality to get f f t f t o f op t and f f t o f op t where the last inequality follows by noticing that t next let e f denote the expected value of f conditioned on the given value of inequality guarantees that e f ets ts o f op t f t ts f t recall that algorithm returns with probability p and otherwise hence the expected value of its output is e p f p e f where the expectation is over optimizing the constants we would like to derive from 
inequalities and the best lower bound we can get on to this end let and be two numbers such that p and let using the above inequalities and this notation can now be lower bounded by e f t e f t o f op t e f t o f op t ets ts o f op t e f t ts e f t which can be rewritten as ets e f t ets ts e f t ets ts f op t o f op t to get the most out of this lower bound we need to maximize the coefficient of f op t while keeping the coefficients of e f t and e f t so that they can be ignored due to of f this objective is formalized by the following program max ets ts ets ets ts p p p ts solving the program we get that the best solution is approximately and ts and the objective function value corresponding to this solution is at least hence we have managed to lower bound and thus also the expected value of the output of algorithm by f op t for p and ts which completes the proof of the lemma aided measured continuous greedy in this section we present the algorithm used to prove theorem proving the above theorem directly is made more involved by the fact that the vector z might be fractional instead we prove the following simplified version of theorem for integral values and show that the simplified version implies the original one theorem there exists a polynomial time algorithm that given a set z n and a value ts outputs a vector x p obeying e f x ets ts o f op t f z op t ts f z op t next is the promised proof that theorem implies theorem proof of theorem given theorem consider an algorithm alg that given the z and ts arguments specified by theorem executes the algorithm guaranteed by theorem with the same value ts and with a random set z distributed like r z the output of alg is then the output produced by the algorithm guaranteed by theorem let us denote this output by theorem guarantees that for every given z e f x z ets ts o f op t f z op t ts f z op t to complete the proof we take the expectation over z over the two sides of the last inequality and observe that e f z op t e f r z op t e f r z t f z t and e f z op t e f r z op t e f r z t f z t in the rest of this section we give a proof of theorem this proof explains the main ideas necessary for proving the theorem but uses some simplifications such as allowing a direct oracle access to the multilinear extension f and giving the algorithm in the form of a continuous time algorithm which can not be implemented on a discrete computer there are known techniques for getting rid of these simplifications see and a formal proof of theorem based on these techniques is given in appendix a the algorithm we use for the proof of theorem is given as algorithm this algorithm starts with the empty solution y at time and grows this solution over time until it reaches the final solution y at time the way the solution grows varies over time during the time range ts the solution grows like in the measured continuous greedy algorithm of on the other hand during the earlier time range of ts the algorithm pretends that the elements of z do not exist by giving them negative marginal profits and grows the solution in the way measured continuous greedy would have grown it if it was given the ground set n z the value ts is the time in which the algorithm switches between the two ways it uses to grow its solution thus the s in the notation ts stands for switch algorithm aided measured continuous greedy f p z ts let y foreach t do for each u n let wu t f y t f y t p p arg wu t xu t xu t if t ts let x t arg if t ts wu t xu t increase y t at a rate of dy t dt y t x t return y 
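To make the aided measured continuous greedy easier to follow, the continuous-time dynamics described by the (garbled) listing above can be summarized as follows. This is a hedged reconstruction: the exact penalty assigned to the elements of Z during the first phase is one reading consistent with the surrounding text ("giving them negative marginal profits"), not a verbatim restoration of the original listing.

```latex
% Hedged reconstruction of the dynamics of the aided measured continuous greedy.
\begin{align*}
  & y(0) = \mathbf{1}_{\varnothing}, \qquad
    w_u(t) = F\big(y(t) \vee \mathbf{1}_u\big) - F\big(y(t)\big)
    \quad \text{for every } u \in \mathcal{N},\\[2pt]
  & x(t) \in
    \begin{cases}
      \displaystyle \arg\max_{x \in \mathcal{P}}
        \Big\{ \sum_{u \notin Z} w_u(t)\, x_u \;-\; \sum_{u \in Z} x_u \Big\}
        & \text{if } t < t_s \ \ (\text{elements of } Z \text{ get negative profit}),\\[6pt]
      \displaystyle \arg\max_{x \in \mathcal{P}}
        \; \sum_{u \in \mathcal{N}} w_u(t)\, x_u
        & \text{if } t \geq t_s,
    \end{cases}\\[2pt]
  & \frac{dy(t)}{dt} = \big(\mathbf{1}_{\mathcal{N}} - y(t)\big) \circ x(t),
    \qquad \text{and the output is } y(1).
\end{align*}
```

The switch at time t_s is exactly the point where the algorithm stops pretending that the elements of Z do not exist and reverts to the usual measured continuous greedy direction.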
we first note that algorithm outputs a vector in p observation y p proof observe that x t p at eachr time t which implies that y t x t is also in p since p is therefore y y t x t dt is a convex combination of vectors in p and thus belongs to p the following lemma lower bounds the increase in f y t as a function of lemma for every t f y t t f y t df y t dt f y t t f y t if t ts if t ts proof by the chain rule x x dyu t y y df y t yu t xu t dt dt t t x x xu t f y t f y t xu t wu t x t w t consider first the case this time period algorithm chooses x t as the p t ts during p vector in p maximizing wu t xu t xu t since p is x t t is in p and has value t w t and thus we have x t w t t w t plugging this observation into equality yields df y t x t w t t w t dt f y t t f y t x f y t f y t t where the last inequality holds by the submodularity of f similarity when t ts algorithm chooses x t as the vector in p maximizing x t w t since t p we get this time x t w t t w t plugging this observation into equality yields x df y t x t w t t w t f y t f y t dt t f y t t f y t where the last inequality holds again by the submodularity of f lemma for every time t and set a n it holds that f y t max max f a f a z f a proof first we note that for every time t and element u n if u z yu t max s if u z this follows for the following reason since x t is always in p n yu t obeys the differential inequality dy t yu t x t yu t dt using the initial condition yu the solution for this differential inequality is yu t to get the tighter bound for u z we note that at every time t ts algorithm chooses as x t a vector maximizing a linear function in p which assigns a negative weight to elements of z since p is this maximum must have xu t for every element u z this means that yu t whenever u z and t ts moreover plugging the improved initial condition yu ts into the above differential inequality yields the promised tighter bound also for the range ts next let be the extension of f then by lemma z f y t f y t f y t z max z f y t z f y t f y t f a max max f a f a z f a max inequality follows by the of f equality follows since for inequality guarantees that yu t for every u n and thus y t a finally inequality follows since for max inequality guarantees that yu t for every u z and thus y t b a for some b n z by the of f f b a also by the submodularity and of f for every such set b f b a f a f b z a f z a f a f z a plugging the results of lemma into the lower bound given by lemma on the improvement in f y t as a function of t yields immediately the useful lower bound given by the next corollary for every t f op t z f z op t f y t df y t dt ets f op t ets f z op t f y t if t ts if t ts using the last corollary we can complete the proof of theorem proof of theorem we have already seen that y output of algorithm to p it remains to show that f y ets ts f op t f z op t ts f z op t corollary describes a differential inequality for f y t given the boundary condition f y the solution for this differential inequality within the range t ts is f y t f op t z f z op t plugging t ts into the last inequality we get f y ts f op t z ts f z op t let v f op t z ts f z op t be the right hand side of the last inequality next we solve again the differential inequality given by corollary for the range t ts with the boundary condition f y ts the resulting solution is f y t t ts ets f op t ets f z op t vets plugging t and the value of v we get f y ts ets f op t ets f z op t vets ts ts e f op t ets f z op t e ets f op t f op t z ts f z op t ets ts f op t f z 
op t ts f z op t where inequality follows since by the submodularity and of f f op t z f op t f op t z f f op t f op t z note that corollary follows from a weaker version of lemma which only guarantees f y t e f a f a z f a we proved the stronger version of the lemma above because it is useful in the formal proof of theorem given in appendix a max references a ageev and sviridenko an approximation algorithm for the uncapacitated facility location problem discrete appl july noga alon and joel spencer the probabilistic method wiley interscience second edition per austrin siavosh benabbas and konstantinos georgiou better balance by being biased a for max bisection in soda pages francis bach learning with submodular functions a convex optimization perspective foundations and trends in machine learning boykov and jolly interactive graph cuts for optimal boundary region segmentation of objects in images in iccv volume pages niv buchbinder moran feldman joseph seffi naor and roy schwartz a tight linear time for unconstrained submodular maximization in focs pages niv buchbinder moran feldman joseph seffi naor and roy schwartz submodular maximization with cardinality constraints in soda pages gruia chandra chekuri martin and jan maximizing a monotone submodular function subject to a matroid constraint siam j chandra chekuri and alina ene approximation algorithms for submodular multiway partition in focs pages chandra chekuri and sanjeev khanna a polynomial time approximation scheme for the multiple knapsack problem siam j september chandra chekuri jan and rico zenklusen dependent randomized rounding via exchange properties of combinatorial structures in focs pages chandra chekuri jan and rico zenklusen submodular function maximization via the multilinear relaxation and contention resolution schemes in stoc pages chandra chekuri jan and rico zenklusen submodular function maximization via the multilinear relaxation and contention resolution schemes siam j reuven cohen liran katzir and danny raz an efficient approximation for the generalized assignment problem information processing letters conforti and submodular set functions matroids and the greedy algorithm tight worstcase bounds and some generalizations of the radoedmonds theorem disc appl cornuejols fisher and nemhauser location of bank accounts to optimize float an analytic study of exact and approximate algorithms management sciences cornuejols fisher and nemhauser on the uncapacitated location problem annals of discrete mathematics alina ene and huy nguyen constrained submodular maximization beyond in focs uriel feige a threshold of ln n for approximating set cover acm uriel feige and michel goemans aproximating the value of two prover proof systems with applications to max and max dicut in istcs pages uriel feige vahab mirrokni and jan maximizing submodular functions siam journal on computing uriel feige and jan approximation algorithms for allocation problems improving the factor of in focs pages moran feldman maximization problems with submodular objective functions phd thesis technion israel institute of technology june moran feldman joseph naor and roy schwartz a unified continuous greedy algorithm for submodular maximization in focs pages moran feldman joseph seffi naor roy schwartz and justin ward improved approximations for systems in esa pages fisher nemhauser and wolsey an analysis of approximations for maximizing submodular set functions ii in polyhedral combinatorics volume of mathematical programming studies pages springer berlin 
heidelberg lisa fleischer michel goemans vahab mirrokni and maxim sviridenko tight approximation algorithms for maximum general assignment problems in soda pages alan frieze and mark jerrum improved approximation algorithms for max and max bisection in ipco pages shayan oveis gharan and jan submodular maximization by simulated annealing in soda pages michel goemans and david williamson improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming journal of the acm eran halperin and uri zwick combinatorial approximation algorithms for the maximum directed cut problem in soda pages jason hartline vahab mirrokni and mukund sundararajan optimal marketing strategies over social networks in www pages johan some optimal inapproximability results acm july hausmann and korte algorithms for independence systems oper res ser hausmann korte and jenkyns worst case analysis of greedy type algorithms for independence systems math prog study jegelka and bilmes submodularity beyond submodular energies coupling edges in graph cuts ieee conference on computer vision and pattern recognition jenkyns the efficacy of the greedy algorithm cong richard karp reducibility among combinatorial problems in miller and thatcher editors complexity of computer computations pages plenum press david kempe jon kleinberg and tardos maximizing the spread of influence through a social network in sigkdd pages subhash khot guy kindler elchanan mossel and ryan o donnell optimal inapproximability results for and other csps siam j april khuller moss and naor the budgeted maximum coverage problem information processing letters korte and hausmann an analysis of the greedy heuristic for independence systems annals of discrete andreas krause ajitsingh and carlos guestrin sensor placements in gaussian processes theory efficient algorithms and empirical studies mach learn january andreas krause and carlos guestrin nonmyopic value of information in graphical models in uai page andreas krause jure leskovec carlos guestrin jeanne vanbriesen and christos faloutsos efficient sensor placement optimization for securing large water distribution networks journal of water resources planning and management november ariel kulik hadas shachnai and tami tamir approximations for monotone and nonmonotone submodular maximization with knapsack constraints math oper jon lee vahab mirrokni viswanath nagarajan and maxim sviridenko maximizing nonmonotone submodular functions under matroid or knapsack constraints siam journal on discrete mathematics jon lee maxim sviridenko and jan submodular maximization over multiple matroids via generalized exchange properties in approx pages hui lin and jeff bilmes summarization via budgeted maximization of submodular functions in north american chapter of the association for computational language technology conference los angeles ca june hui lin and jeff bilmes a class of submodular functions for document summarization in hlt pages submodular functions and convexity in bachem and korte editors mathematical programming the state of the art pages springer and schrijver the ellipsoid method and its consequences in combinatorial optimization combinatoria nemhauser and wolsey best algorithms for approximating the maximum of a submodular set function mathematics of operations research nemhauser wolsey and fisher an analysis of approximations for maximizing submodular set functionsi mathematical programming maxim sviridenko a note on maximizing a submodular set function subject to 
knapsack constraint operations research letters luca trevisan gregory sorkin madhu sudan and david williamson gadgets approximation and linear programming siam j april jan symmetry and approximability of submodular maximization problems siam j a a formal proof of theorem in this section we give a formal proof of theorem this proof is based on the same ideas used in the proof of this theorem in section but employs also additional known techniques in order to get rid of the issues that make the proof from section the algorithm we use to prove theorem is given as algorithm this algorithm is a discrete variant of algorithm while reading the algorithm it is important to observe that the choice of the values and guarantees that the variable t takes each one of the values ts and at some point and thus the vectors y ts and y are well defined algorithm aided measured continuous greedy f p z ts initialization let ts and ts let t and y t growing y t while t do foreach u n do let wu t be an estimate of e f u r y t obtained by averaging the values of f u r y t for r ln independent samples of r y t p p arg wu t xu t xu t if t ts let x t if t ts arg wu t xu t let be when t ts and when t ts let y t y t y t x t update t t return y we begin the analysis of algorithm by showing that y t remains within the cube n throughout the execution of the algorithm without this observation the algorithm is not welldefined observation for every value of t y t n proof we prove the observation by induction on clearly the observation holds for y assume the observation holds for some time t then for every u n yu t yu t yu t xu t where the inequality holds since the induction hypothesis implies yu t a similar argument also implies yu t yu t yu t xu t yu t yu t using the last observation it is now possible to prove the following counterpart of observation corollary algorithm always outputs a vector in p proof let t be the set of values t takes p p during the execution of algorithm we observe that which implies that t x t is a convex combination of the vectors x t t t p as all these vectors belong to p and p is convex any convex combination of them including x t must be in p next we rewrite the output of algorithm as x x x t y t x t y by the above discussion the rightmost hand side of this inequality is a vector in p which implies that y p since p is the next step towards showing that algorithm proves theorem is analyzing its approximation ratio we start this analysis by showing that with high probability all the estimations made by the algorithm are quite accurate let a be the event that t f u r y t op t for every u n and time lemma the symmetric version of theorem in let xi i k be mutually independent with all e xi and all set s xk then pr a corollary pr a proof consider the calculation of wu t for a given u n and time this calculation is done by averaging the value of f u r y t for r independent samples of r y t let yi denote the value y t then by definition of f u r y t obtained for the sample and let xi yi f op t wu t pr yi r f op t pr xi r e f u r y t since yi is distributed like f u r y t the definition of xi guarantees that e xi for every i additionally for every such i since the absolute values of both yi and e f u r y t are upper bounded by f s n f op t the last inequality follows from our assumption that p for every element u n thus by lemma r x r xi rn pr t e f u r y t f op t pr ln observe that algorithm calculates wu t for every combination of element u n and time t since there are n elements in n and times smaller 
than the union bound implies that the probability that for at least one such value wu t we have t e f u r y t f op t is upper bounded by n n which completes the proof of the corollary our next step is to give a lower bound on the increase in f y t as a function of t given a this lower bound is given by corollary which follows from the next two lemmata the statement and proof of the corollary and the next lemma is easier with the following definition let op denote the set op t z when t ts and op t otherwise p lemma given a for every time t yu t xu t f y t f y t f y t o n f op t proof let us calculate the weight of op according to the weight function w t x x wu t e f u r y t f op t w t x f r y t u f r y t f op t e f r y t op f r y t f op t f y t t f y t f op t where the first inequality follows from the definition of a and the last follows from the submodularity of f recall that x t is the vector in p maximizing some objective function which depends on t for t ts the objective function maximized by x t assigns the value w t t w t to the vector p similarly for t ts the objective function maximized by x t assigns the value w t t w t to the vector p thus the definition of x t guarantees that in both cases we have w t x t w t f y t f y t f op t hence x yu t xu t f y t x xu t f y t f y t x xu t e f u r y t x xu t wu t f op t x t w t f op t f y t f y t f op t where the first inequality holds by the definition of a and the second equality holds since f y t f y t e f r y t u e f r y t e f u r y t lemma a rephrased version of lemma in consider two vectors x n p such that for every u n then f f x xu f x o f u corollary given a for every time t f y t f y t f y t f y t o f op t proof observe that for every u n t yu t yu t xu t hence by lemma x f y t f y t yu t yu t f y t o max f u x yu t xu t f y t o max f u consider the rightmost hand side of the last inequality by lemma the first term on this side can be bounded by x yu t xu t f y t f y t f y t o f op t f y t f y t o f op t on the other hand the second term of can be bounded by o max f u o f op t since by definition and f u f op t by our assumption that p for every u n the lower bound given by the last corollary is in terms of f y t to make this lower bound useful we need to lower bound the term f y t this is done by the following two lemma which corresponds to lemma lemma corresponds to lemma for every time t and set a n it holds that f y t max o max f a f a z o f a the proof of this lemma goes along the same lines as the proof of its corresponding lemma in section except that the bounds on the coordinates of y t used by the proof from section are replaced with the slightly weaker bounds given by the following lemma lemma for every time t and element u n o yu t max o if u z if u z proof let and observe that for every time our first objective is to prove by induction on t that if yu for some time then yu t for every time t for t the claim holds because yu next assume the claim holds for some t and let us prove it for t yu t yu t yu t xu t yu t yu t yu t where the last inequality holds since x is a decreasing function for x we complete the proof for the case u z by choosing clearly yu and observing that for every time t t t o it remains to prove the lemma for the case u z note that at every time t ts algorithm chooses as x t a vector maximizing a linear function in p which assigns a negative weight to elements of z since p is this maximum must have xu t for an element u z this means that yu t for t ts in addition to proving the lemma for this time 
range the last inequality also allows us to choose ts which gives for t ts yu t ets o combining corollary with lemma gives us the following corollary corollary given a for every time t ts f y t f y t f op t z f z op t f y t o f op t and for every time t ts f y t f y t f op t ets max f op t f z op t f y t o f op t proof for every time t ts corollary and lemma imply together f y t f y t o max f op t z f op t z o f op t z o f op t o f op t z f z op t f y t o f op t we observe that this inequality is identical to the inequality promised for this time range by the corollary except that it has an extra term of o f op t z on its right hand side since f op t z is upper bounded by f op t due to the of p the absolute value of this extra term is at most o f op t o f op t which completes the proof for the time range t ts consider now the time range t ts for this time range corollary and lemma imply together f y t f y t ets o max f op t f op t z o f op t o f op t we observe again that this inequality is identical to the inequality promised for this time range by the corollary except that it has extra terms of o f op t and o max f op t f op t z on its right hand side the corollary now follows since the absolute value of both these terms is upper bounded by o f op t corollary bounds the increase in f y t in terms of f y t itself thus it gives a recursive formula which can be used to lower bound f y t our remaining task is to solve this formula and get a lower bound on f y t let g t be defined as follows g and for every time t g g t f op t z f z op t g t g t f op t ets max f op t f z op t g t if t ts if t ts the next lemma shows that a lower bound on g t yields a lower bound on f y t lemma given a for every time t g t f y t o t f op t proof let c be the larger constant among the constants hiding behind the big o notations in corollary we prove by induction on t that g t f y t f op t for t this clearly holds since g f y assume now that the claim holds for some t and let us prove it for t there are two cases to consider if t ts then the induction hypothesis and corollary imply for a large enough n g t g t f op t z f z op t g t g t f op t z f z op t f y t f op t f op t z f z op t f y t f op t z f z op t f y t f op t f y t f op t f op t f y t c t f op t similarly if t ts then we get g t g t f op t ets max f op t f z op t g t g t f op t ets max f op t f z op t f y t f op t f op t ets max f op t f z op t f y t f op t ets max f op t f z op t f y t f op t f y t f op t f op t f y t c t f op t it remains to find a expression that lower bounds g t and thus also f y t let t ts r and t ts r be defined as follows t f op t z f z op t and t t ts f op t ets max f op t f op t z ets ts lemma for every time t ts t g t proof the proof is by induction on for t g f op t z f z op t assume now that the lemma holds for some t ts and let us prove it holds also for t by the induction hypothesis z t t t z f op t z f z op t t t t f op t z f z op t t f op t z f z op t g t f op t z f z op t g t where the first inequality holds since is a decreasing function of and is an increasing function of in the range lemma for every time ts t t g t proof the proof is by induction on for t ts by lemma ts ts g ts assume now that the lemma holds for some ts t and let us prove it holds also for t to avoid repeating complex expressions let us denote a f op t ets max f op t f z op t notice that a is independent of moreover using this notation we can rewrite t as t t ts a ets ts thus for every ts ts a ets ts a ts a ets ts the definition of a and the of f 
imply immediately that a we would like to prove also that ts a ets ts there are two cases to consider first if f op t f z op t then ts a ets ts ts f op t ts ets max f op t f z op t ets f op t z ets ts f z op t ts ets f op t ts ets f z op t ets f op t ets ts f z op t ts ets ets f op t f z op t where the inequality uses the fact that f op t f op t z because of the of p on the other hand if f op t f z op t then ts a ets ts ts f op t ets f op t z ets ts f z op t ts f op t ets f op t ets ts f op t using the above observations and the induction hypothesis we can now get t t z t z ts a ets ts t t t t ts a ets ts t a g t a g t the last two lemmata give us the promised lower bound on g t which can be used to lower bound the approximation ratio of algorithm corollary e f y ets ts o f op t f z op t ts f z op t proof by lemma given a f y g o f op t by lemma g ts f op t ets max f op t f z op t ets f op t z ets ts f z op t ts ets f op t ets f z op t ets f op t f z op t ets ts f z op t ets ts f op t f z op t ts f z op t where the second inequality holds since the submodularity and of f imply f op t z f op t f f z op t f op t f z op t combining the above observations we get that given a f y ets ts o f op t f z op t ts f z op t since f y is always this implies by the law of total expectation e f y pr a ets ts o f op t f z op t ts f z op t ets ts o f op t f z op t ts f z op t ets ts o f op t n ets ts o f op t f z op t ts f z op t where the second inequality holds since pr a by corollary theorem now follows immediately by combining corollaries and
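To make the discretized procedure analyzed in this appendix concrete, the following is a minimal Python sketch of one possible implementation. It is written under explicit assumptions: the helper names f_oracle (a value oracle for f) and lp_solve (a linear-optimization oracle over the solvable polytope P) are hypothetical, the step size and number of samples are illustrative rather than the values used in the analysis, and the unit penalty on the elements of Z in the first phase is an assumed reading of the listing.

```python
import random

def aided_measured_continuous_greedy(f_oracle, lp_solve, ground_set, z, t_s,
                                     eps=0.01, num_samples=50):
    """Hedged sketch of the discretized aided measured continuous greedy.

    f_oracle(S)  -- value oracle for the submodular function f (S is a set).
    lp_solve(w)  -- returns a dict x (a point of P) maximizing sum_u w[u]*x[u];
                    assumed to exist because P is solvable.
    ground_set   -- iterable of elements N.
    z            -- the "avoid" set Z guiding the first phase.
    t_s          -- switch time in [0, 1].
    """
    ground_set = list(ground_set)
    y = {u: 0.0 for u in ground_set}
    t = 0.0
    while t < 1.0 - 1e-9:
        # Estimate w_u(t) = F(y v 1_u) - F(y) = E[f(R(y) + u) - f(R(y))] by sampling.
        w = {}
        for u in ground_set:
            total = 0.0
            for _ in range(num_samples):
                r = {v for v in ground_set if random.random() < y[v]}
                total += f_oracle(r | {u}) - f_oracle(r)
            w[u] = total / num_samples
        # Before the switch time, elements of Z are made unprofitable (assumed -1 weight).
        if t < t_s:
            obj = {u: (-1.0 if u in z else w[u]) for u in ground_set}
        else:
            obj = dict(w)
        x = lp_solve(obj)
        # Measured update: y grows at rate (1 - y) * x.
        for u in ground_set:
            y[u] += eps * (1.0 - y[u]) * x[u]
        t += eps
    return y
```

The sketch mirrors the structure of the appendix algorithm (sampling-based weight estimates, a linear maximization over P at each step, and the measured update rule), but it omits the error-probability bookkeeping that the formal analysis above requires.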
Self-Organizing Maps Whose Topologies Can Be Learned With Adaptive Binary Search Trees Using Conditional Rotations

C. A. Astudillo*        B. John Oommen†

Abstract

Numerous variants of Self-Organizing Maps (SOMs) have been proposed in the literature, including those which also possess an underlying structure, and in some cases this structure itself can be defined by the user. Although the concepts of growing the SOM and updating it have been studied, the whole issue of using a self-organizing Adaptive Data Structure (ADS) to further enhance the properties of the underlying SOM has been unexplored. In an earlier work we imposed an arbitrary topology onto the codebooks, which consequently enforced a neighborhood phenomenon and the Bubble of Activity (BoA). In this paper we consider how the underlying tree itself can be rendered dynamic and adaptively transformed. To do this, we present methods by which a SOM with an underlying Binary Search Tree (BST) structure can be adaptively re-structured using Conditional Rotations (CONROT). These rotations on the nodes of the tree are local, can be done in constant time, and are performed so as to decrease the Weighted Path Length (WPL) of the entire tree. In doing this, we introduce the pioneering concept referred to as Neural Promotion, where neurons gain prominence in the Neural Network (NN) as their significance increases. We are not aware of any research which deals with the issue of Neural Promotion. The advantage of such a scheme is that the user need not be aware of any of the topological peculiarities of the stochastic data distribution. Rather, the algorithm, referred to as the TTOSOM with Conditional Rotations (TTOCONROT), converges in such a manner that the neurons are ultimately placed in the input space so as to represent its stochastic distribution, and additionally, the neighborhood properties of the neurons suit the best BST that represents the data. These properties have been confirmed by our experimental results on a variety of data sets. We submit that all of these concepts are both novel and of a pioneering sort.

Keywords: Adaptive Data Structures, Binary Search Trees, Self-Organizing Maps

* Universidad de Talca, Merced, Chile. E-mail: castudillo. This author is an Assistant Professor at the Department of Computer Science with the Universidad de Talca. This work is partially supported by the FONDECYT grant, Chile. A very preliminary version of this paper was presented at AI, the Australasian Joint Conference on Artificial Intelligence, Melbourne, Australia, in December; that paper won the award of being the best paper of the conference. We are also very grateful for the comments made by the Associate Editor and the anonymous Referees; their input helped in improving the quality of the final version of this paper. Thank you very much.
† School of Computer Science, Carleton University, Ottawa, Canada. Chancellor's Professor, Fellow of the IEEE and Fellow of the IAPR. This author is also an Adjunct Professor with the University of Agder in Grimstad, Norway. The work of this author was partially supported by NSERC, the Natural Sciences and Engineering Research Council of Canada.

Introduction

This paper is a pioneering attempt to merge the areas of Self-Organizing Maps (SOMs) with the theory of Adaptive Data Structures (ADSs). Put in a nutshell, we can describe the goal of this paper as follows. Consider a SOM with N neurons. Rather than having the neurons merely possess information about the feature space, we also attempt to link them together by means of an underlying Data Structure (DS). This DS could be a singly-linked list, a doubly-linked list, or a Binary Search Tree (BST), etc. The intention is that the neurons are governed by the laws of the SOM and the underlying
ds observe now that the concepts of neighborhood and bubble of activity boa are not based on the nearness of the neurons in the feature space but rather on their proximity in the underlying ds having accepted the premise we intent to take this entire concept to a higher level of abstraction and propose to modify this ds itself adaptively using operations specific to it as far as we know the combination of these concepts has been unreported in the literature before we proceed to place our results in the right perspective it is probably wise to see how the concept of neighborhood has been defined in the som literature kohonen in his book mentions that it is possible to distinguish between two basic types of neighborhood functions the first family involves a kernel function which is usually of a gaussian nature the second is the neighborhood set also known as the bubble of activity boa this paper focuses on the second type of neighborhood function even though the traditional som is dependent on the neural distance to estimate the subset of neurons to be incorporated into the boa this is not always the case for the included in the literature indeed the different strategies described in the utilize families of schemes to define the boa we mainly identify three the first type of boa uses the concept of the neural distance as in the case of the traditional som once the best matching unit bmu is identified the neural distance is calculated by traversing the underlying structure that holds the neurons an important property of the neural distance between two neurons is that it is proportional to the number of connections separating them examples of strategies that use the neural distance to determine the boa are the growing cell structures gcs the growing grid gg the incremental grid growing igg the growing som gsom the som tssom the hierarchical feature map hfm the growing hierarchical som ghsom the selforganizing tree algorithm sota the evolving tree et the topology oriented som ttosom among others a second subset of strategies employ a scheme for determining the boa that does not depend on the connections instead such strategies utilize the distance in the feature space in these cases it is possible to distinguish between two types of neural networks nns the simplest situation occurs when the boa only considers the bmu it constitutes an instance of hard competitive learning cl as in the case of the vq tsvq and the tree map sotm a more sophisticated and computationally expensive scheme involves ranking the neurons as per their respective distances to the stimulus in this scenario the boa is determined by selecting a subset of the closest neurons an example of a som variant that uses such a ranking is the neural gas ng according to the authors of the variants included in the literature attempt to tackle two main goals they either try to design a more flexible topology which is usually useful to analyze large datasets or to reduce the the most task required by the som namely the search for the bmu when the input set has a complex nature in this paper we focus on the former of the two mentioned goals in other words our goal is to enhance the capabilities of the original som algorithm so as to represent the underlying data distribution and its structure in a more accurate manner we also intend to do so by constraining the neurons so that they are related to each other not just based on their neural indices and stochastic distribution but also based on a bst relationship furthermore as a long term 
ambition we also anticipate methods which can accelerate the task of locating the nearest neuron during the cl phase this work will present the details of the design and implementation of how an adaptive process applied to the bst can be integrated into the som regardless of the fact that numerous variants of the som has been devised few of them possess the ability of modifying the underlying topology moreover only a small subset use a tree as their underlying ds these strategies attempt to dynamically modify the nodes of the som for example by adding nodes which can be a single neuron or a layer of a however our hypothesis is that it is also possible to attain to a better understanding of the unknown data distribution by performing structural modifications on the tree which although they preserve the general topology attempt to modify the overall configuration by altering the way by which nodes are interconnected and yet continue as a bst we accomplish this by dynamically adapting the edges that connect the neurons by the nodes within the bst that holds the whole structure of neurons as we will explain later this is further achieved by local modifications to the overall structure in a constant number of steps thus we attempt to use rotations neighbors and the feature space to improve the quality of the som motivations acquiring information about a set of stimuli in an unsupervised manner usually demands the deduction of its structure in general the topology employed by any artificial neural network ann possessing this ability has an important impact on the manner by which it will absorb and display the properties of the input set consider for example the following a user may want to devise an algorithm that is capable of learning a distribution as the one depicted in figure the som tries to achieve this by defining an underlying topology and to fit the grid within the overall shape as shown in figure duplicated from however from our perspective a topology does not naturally fit a distribution and thus one experiences a deformation of the original lattice during the modeling phase as opposed to this figure shows the result of applying one of the techniques developed by us namely the ttosom as the reader can observe from figure a tree seems to be a far more superior choice for representing the particular shape in question the operation of rotation is the one associated with bsts as will be presently explained a the grid learned by the som b the tree learned by the ttosom figure how a distribution is learned through unsupervised learning on closer inspection figure depicts how the complete tree fills in the triangle formed by the set of stimuli and further seems to do it uniformly the final position of the nodes of the tree suggests that the underlying structure of the data distribution corresponds to the triangle additionally the root of the tree is placed roughly in the center of mass of the triangle it is also interesting to note that each of the three main branches of the tree cover the areas directed towards a vertex of the triangle respectively and their fill in the surrounding space around them in a recursive manner which we identify as being a behavior of course the triangle of figure serves only as a very simple prima facie example to demonstrate to the reader in an informal manner how both techniques will try to learn the set of stimuli indeed in problems these techniques can be employed to extract the properties of samples one can argue that imposing an initial topological 
configuration is not in accordance with the founding principles of unsupervised learning the phenomenon that is supposed to occur without supervision within the human brain as an initial response we argue that this supervision is required to enhance the training phase while the information we provide relates to the initialization phase indeed this is in line with the principle that very little can be automatically learned about a data distribution if no assumptions are made as the next step of motivating this research endeavor we venture into a world where the neural topology and structure are themselves learned during the training process this is achieved by the method that we propose in this paper namely the ttosom with conditional rotations ttoconrot which in essence dynamically extends the properties of the ttosom again to accomplish this we need key concepts that are completely new to the field of soms namely those related to adaptive data structure ads indeed as demonstrated by our experiments the results that we have already obtained have been applauded by the research and these to the best of our knowledge have remained unreported in the literature another reason why we are interested in such an integration deals with the issue for devising efficient methods that add neurons to the tree even though the schemes that we are currently proposing as mentioned earlier a paper which reported the preliminary results of this study won the best paper award in a international ai conference in this paper focus on tree adaptation by means of rotations we envision another type of dynamism one which involves the expansion of the tree structure through the insertion of newly created nodes the considers different strategies that expand trees by inserting nodes which can be a single neuron or a that essentially are based on a quantization error qe measure in some of these strategies the error measure is based on the hits the number of times a neuron has been selected as the bmu the strategy that we have chosen for adapting the tree namely using conditional rotations conrot already utilizes this bmu counter and distinct to the previous strategies that attempt to search for a node to be expanded which in the case of soms is usually at the level of the leaves we foresee and advocate a different approach our ttoconrot method asymptotically positions frequently accessed nodes close to the root and so according to this property it is the root node which should be split observe that if we follow such a philosophy one would not have to search for a node with a higher qe measure rather the conrot will be hopefully able to migrate the candidates closer to the root of course this works with assumption that a larger number of hits indicates that the degree of granularity of a particular neuron justifies refinement the concept of using the root of the tree for growing a som is in and of itself pioneering as far as we know contributions of the paper the contributions of the paper can be summarized as follows we present an integration of the fields of soms and ads this we respectfully submit as pioneering the neurons of the som are linked together using an underlying ds and they are governed by the laws of the ttosom paradigm and simultaneously the restructuring adaptation provided by conrot the definition of distance between the neurons is based on the tree structure and not in the feature space this is valid also for the boa rendering the migrations distinct from the the adaptive nature of the ttoconrot is 
unique because adaptation is perceived in two forms the migration of the codebook vectors in the feature space is a consequence of the som update rule and the rearrangement of the neurons within the tree as a result of the rotations organization of the paper the rest of the paper is organized as follows the next section surveys the relevant which involves both the field of soms including their instantiations and the respective field of bsts with conditional rotations after that in section we provide an explanation of the ttoconrot philosophy which is our primary contribution the subsequent section shows the capabilities of the approach through a series of experiments and finally section concludes the paper for the sake of space the literature review has been considerably condensed however given that there is no survey paper on the area of soms reported in the literature we are currently preparing a paper that summarizes the field literature review the som one of the most important families of anns used to tackle clustering problems is the well known som typically the som is trained using un supervised learning so as to produce a neural representation in a space whose dimension is usually smaller than that in which the training samples lie further the neurons attempt to preserve the topological properties of the input space the som concentrates all the information contained in a set of n input samples belonging to the ddimensional space say x xn utilizing a much smaller set of neurons c cm each of which is represented as a vector each of the m neurons contains a weight vector w wd t ird associated with it these vectors are synonymously called weights prototypes or codebook vectors the vector wi may be perceived as the position of neuron ci in the feature space during the training phase the values of these weights are adjusted simultaneously so as to represent the data distribution and its structure in each training step a stimulus a representative input sample from the data distribution x is presented to the network and the neurons compete between themselves so as to identify which is the winner also known as the best matching unit bmu after identifying the bmu a subset of the neurons close to it are considered to be within the bubble of activity boa which further depends on a parameter specified to the algorithm namely the radius thereafter this scheme performs a migration of the codebooks within that boa so as to position them closer to the sample being examined the migration factor by which this update is effected depends on a parameter known as the learning rate which is typically expected to be large initially and which decreases as the algorithm proceeds and which ultimately results in no migration at all algorithm describes the details of the som philosophy in algorithm the parameters are scheduled by defining a sequence s ss i where each si corresponds to a tuple ri ti that specifies the learning rate and the radius ri for a fixed number of training steps ti the way in which the parameters decay is not specified in the original algorithm and some alternatives are that the parameters remain fixed decrease linearly exponentially etc algorithm som x s input i x the input sample set ii s the schedule for the parameters method initialize the weights wm by randomly selecting elements from x repeat obtain a sample x from x find the winner neuron the one which is most similar to x determine a subset of neurons close to the winner migrate the closest neuron and its neighbors towards x modify 
the learning factor and radius as per the schedule until no noticeable changes are observed end algorithm although the som has demonstrated an ability to solve problems over a wide spectrum it possesses some fundamental drawbacks one of these drawbacks is that the user must specify the lattice a priori which has the effect that he must run the ann a number of times to obtain a suitable configuration other handicaps involve the size of the maps where a lesser number of neurons often represent the data inaccurately the approaches attempt to render the topology more flexible so as to represent complicated data distributions in a better way to make the process faster by for instance speeding up the task of determining the bmu there are a vast number of domain fields where the som has demonstrated to be useful a compendium with all the articles that take advantage of the properties of the som is surveyed in these survey papers classify the publications related to the som according to their year of release the report includes the bibliography published between the year and while the report includes the analogous papers published between and further additional recent references including the related work up to the year have been collected in a technical report the more recent literature reports a host of application domains including medical image processing human eye detection handwriting recognition image segmentation information retrieval object tracking etc soms although an important number of variants of the original som have been presented through the years we focus our attention on a specific family of enhancements in which the neurons are using a tree topology the vq tsvq algorithm is a som variant whose topology is defined a priori and which is static the training first takes place at highest levels of the tree the tsvq incorporates the concept of a frozen node which implies that after a node is trained for a certain amount of time it becomes static the algorithm then allows subsequent units the direct children to be trained the strategy utilizes a heuristic search algorithm for rapidly identifying a bmu it starts from the root and recursively traverses the path towards the leaves if the unit currently being analyzed is frozen the algorithm identifies the child which is closest to the stimulus and performs a recursive call the algorithm terminates when the node currently being analyzed is not a frozen node it is currently being trained and is returned as the bmu koikkalainen and oja in the same paper refine the idea of the tsvq by defining the tssom which inherits all the properties of the tsvq but redefines the search procedure and boa in the case of the tssom som layers of different dimensions are arranged in a pyramidal shape which can be perceived as a som with different degrees of granularity it differs from the tsvq in the sense that once the bmu is found the direct proximity is examined to check for the bmu on the other hand the boa differs in that instead of considering only the bmu its direct neighbors in the pyramid will also be considered the tree algorithm sota is a dynamically growing som which according to their authors take some analogies from the growing cell structures gcs the sota utilizes a binary tree as the underlying structure and similarly to other strategies the tssom and the evolving tree et explained below it considers the migration of the neurons only if they correspond to leaf nodes within the tree structure its boa depends on the neural tree and is defined 
for two cases the most general case occurs when the parent of the bmu is not the root a situation in which the boa is composed by the bmu its sibling and its parent node otherwise the boa constitutes the bmu only the sota triggers a growing mechanism that utilizes a qe to determine the node to be split into two new descendants in the authors presented a som called the growing hierarchical som ghsom in which each node corresponds to an independent som the expansion of the structure is dual the first type of adaptation is conceived by inserting new rows or columns to the som grid that is currently being trained while the second type is implemented by adding layers to the hierarchical structure both types of dynamism depend on the verification of qe measures the sotm is a som which is also inspired by the adaptive resonance theory art in the sotm when the input is within a threshold distance from the bmu the latter is migrated otherwise a new neuron is added to the tree thus in the sotm the subset of neurons to be migrated depends only on the distance in the feature space and not in the neural distance as most of the som families in the authors have proposed a nn called the evolving tree et which takes advantage of a procedure adapted from the one utilized by the tsvq to identify the bmu in o log time where v is the set of neurons the et adds neurons dynamically and incorporates the concept of a frozen neuron explained above which is a node that does not participate in the training process and which is thus removed from the boa similar to the tsvq the training phase terminates when all the nodes become frozen the topology oriented som ttosom which is central to this paper is a som in which each node can possess an arbitrary number of children furthermore it is assumed that the user has the ability to such a tree whose topological configuration is preserved through the training process the ttosom uses a particular boa that includes nodes leaf and ones that are within a certain neural distance the radius an interesting property displayed by this strategy is its ability to reproduce the results obtained by kohonen when the nodes of the som are arranged linearly in a list in this case the ttosom is able to adapt this grid to a or object in the same way as the som algorithm does this was a phenomenon that was not possessed by prior hierarchical networks reported in the additionally if the original topology of the tree followed the overall shape of the data distribution the results reported in and also depicted in the motivational section showed that is also possible to obtain a symmetric topology for the codebook vectors in a more recent work the authors have enhanced the ttosom to perform classification in a fashion the method presented in first learns the data distribution in an unsupervised manner once labeled instances become available the clusters are labeled using the evidence according to the results presented in the number of neurons required to accurately predict the category the som possesses the ability to learn the data distribution by utilizing a unidimensional topology the neighbors are defined along a grid in each direction further when this is the case one can encounter that the unidimensional topology forms a peano curve the ttosom also possesses this interesting property when the tree topology is linear the details of how this is achieved is presented in detail in including the explanation of how other techniques fail to achieve this task of novel data are only a small portion of 
the cardinality of the input set. Merging ADSs and the TTOSOM. Adaptive Data Structures (ADSs) for BSTs. One of the primary goals of the area of ADSs is to achieve an optimal arrangement of the elements placed at the nodes of the structure as the number of iterations increases. This reorganization can be perceived to be both automatic and adaptive, such that, on convergence, the DS tends towards an optimal configuration with a minimum average access time. In most cases, the most probable element will be positioned at the root (head) of the tree DS, while the rest of the tree is recursively positioned in the same manner. The solution to obtain the optimal BST is well known when the access probabilities of the nodes are known a priori; however, our research concentrates on the case when these access probabilities are not known a priori. In this setting, one effective solution is due to Cheetham et al. and uses the concept of CONROT, which reorganizes the BST so as to asymptotically produce the optimal form. Additionally, unlike most of the algorithms that are otherwise reported in the literature, this move is not done on every data access operation; it is performed if and only if the overall weighted path length (WPL) of the resulting BST decreases. A BST may be used to store records whose keys are members of an ordered set A = {a1, ..., an}; the records are stored in such a way that an in-order traversal of the tree will yield the records in an ascending order. If we are given A and the set of access probabilities Q = {q1, ..., qn}, the problem of constructing efficient BSTs has been extensively studied; the optimal algorithm, due to Knuth, uses dynamic programming and produces the optimal BST using O(n^2) time and space. In this paper we consider the scenario in which Q, the access probability vector, is not known a priori. We seek a scheme which dynamically rearranges itself and asymptotically generates a tree which minimizes the access cost of the keys. The primitive tree restructuring operation used in most BST schemes is the well-known operation of rotation. We describe this operation as follows. Suppose that there exists a node i in a BST, and that it has a parent node j, a left child iL and a right child iR. The function P(i) = j relates node i with its parent j, if it exists; also, let B(i) = k relate node i with its sibling k, the node (if it exists) that shares the same parent as i. Consider the case when i is itself a left child (see the figure). A rotation is performed on node i as follows: j now becomes the right child of i, iR becomes the left child of node j, and all the other nodes remain in their same relative positions (see the figure). The case when node i is a right child is treated in a symmetric manner. This operation has the effect of raising (or promoting) a specified node in the tree structure, while preserving the lexicographic order of the elements (refer again to the figure). Figure: The BST before and after a rotation is performed; (a) the tree before the rotation, where the contents of the nodes are their data values, which in this case are the characters A, B, C, D, E; (b) the tree after a rotation is performed on node i. A few tree reorganizing heuristics which use this operation have been presented in the literature, among which are the move-to-root and the simple exchange rules. (This review is necessarily brief; a more detailed version is found in the cited literature.) In the move-to-root heuristic, each time a record is accessed, rotations are performed on it in an upwards direction until it becomes the root of the tree. On the other hand, the simple exchange rule rotates the accessed element one level towards the root.
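To make the rotation primitive concrete, the sketch below (in Python; the class and field names are our own choices, not taken from the paper) promotes a node i one level above its parent j while keeping the in-order arrangement of the keys intact. It follows the description given above for the case in which i is a left child, together with its mirror image.

    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None
            self.parent = None

    def rotate_up(i):
        """Promote node i one level: i takes the place of its parent j, while the
        lexicographic (in-order) order of the keys is preserved."""
        j = i.parent
        if j is None:
            return                      # i is already the root; nothing to do
        g = j.parent                    # grandparent (may be None)
        if j.left is i:                 # i is a left child: j becomes i's right child
            j.left, i.right = i.right, j
            if j.left is not None:
                j.left.parent = j
        else:                           # symmetric case: i is a right child
            j.right, i.left = i.left, j
            if j.right is not None:
                j.right.parent = j
        j.parent = i
        i.parent = g
        if g is not None:               # re-attach the rotated pair to the grandparent
            if g.left is j:
                g.left = i
            else:
                g.right = i

Note that only a constant number of pointers are touched, which is what allows the conditional schemes discussed in what follows to act locally and in a constant number of steps per access.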
Sleator and Tarjan introduced a technique which also moves the accessed record up to the root of the tree, using a restructuring operation called splaying, which actually is a generalization of the rotation. Their structure, called the splay tree, was shown to have an amortized time complexity of O(log n) for a complete set of tree operations, which included insertion, deletion, access, split and join. The literature also records various schemes which adaptively restructure the tree with the aid of additional memory locations; prominent among them are the monotonic tree (MT) and Mehlhorn's D-tree. The MT is a dynamic version of a tree structuring method originally suggested by Knuth. In spite of all their advantages, all of the schemes mentioned above have drawbacks, some of which are more serious than others. These schemes have one major disadvantage, which is that both the move-to-root and splaying rules always move the accessed record up to the root of the tree. This means that if a near-optimal arrangement is reached, a single access of a record will disarrange the tree along the entire access path, as the element is moved upwards to the root. As opposed to these schemes, the MT rule does not move the accessed element to the root every time; but, as reported in the literature, in practice it does not perform well. The weakness of the MT lies in the fact that it considers only the frequency counts for the records, which leads to the undesirable property that a single rotation may move a subtree with a relatively large probability weight downwards, thus increasing the cost of the tree. This paper uses a particular heuristic, namely the conditional rotations for a BST, which has been shown to reorganize a BST so as to asymptotically arrive at an optimal form. In its optimized version, the scheme (referred to below simply as the conditional rotation algorithm) requires the maintenance of a single memory location per record, which keeps track of the number of accesses to the subtree rooted at that record. The algorithm specifies how an accessed element can be rotated towards the root of the tree so as to minimize the overall cost of the entire tree. Finally, unlike most of the algorithms that are currently in the literature, this move is not done on every data access operation; it is performed if and only if the overall WPL of the resulting BST decreases. In essence, the algorithm attempts to minimize the WPL by incorporating the statistical information about the accesses to the various nodes and to the subtrees rooted at the corresponding nodes. The basic condition for the rotation of a node is that the WPL of the entire tree must decrease as a result of a single rotation; this is achieved by a conditional rotation. To define the concept of a conditional rotation, we define tau(i) as the total number of accesses to the subtree rooted at node i. One of the biggest advantages of the heuristic is that it only requires the maintenance and processing of the values stored at a specific node and at its direct neighbors: its parent and both children, if they exist. The algorithm, formally given below, describes the process of the conditional rotations for a BST. The algorithm receives two parameters, the first of which corresponds to a pointer to the root of the tree, and the second of which corresponds to the key to be searched, which is assumed to be present in the tree. When a node access is requested, the algorithm seeks the node from the root down towards the leaves. Algorithm ConRot(j, ki). Input: (i) j, a pointer to the root of a binary search tree T; (ii) ki, a search key, assumed to be in T. Output: (i) the restructured tree T; (ii) a pointer to the record containing ki. Method: increment the access counter of node j; if ki = kj then: evaluate the rotation criterion for j, using the expression appropriate to whether j is a left or a right child of its parent; if the criterion indicates that the WPL would decrease, rotate j upwards and recompute the counters of j and of its former parent; return the record containing ki; else, if ki < kj, invoke the algorithm recursively on the left child of j; otherwise, invoke it recursively on the right child of j; end if. End Algorithm.
The first task accomplished by the algorithm is the updating of the counter for the present node along the path traversed. After that, the next step consists of determining whether or not the node with the requested key has been found. When this occurs, the quantities defined by the corresponding equations are computed to determine the value of a quantity referred to here as psi(j): one expression is used when j is the left child of its parent p(j), and the symmetric one when j is a right descendant of p(j). When psi(j) is less than zero, an upward rotation is performed; the authors of the scheme have shown that this single rotation leads to a decrease in the overall WPL of the entire tree. This occurs in the line of the algorithm in which the rotate-upwards method is invoked; the parameter to this method is a pointer to the node j. The method does the necessary operations required to rotate the node upwards, which means that if the node j is the left child of its parent, then this is equivalent to performing a right rotation over p(j), the parent of j; analogously, when j is the right child of its parent, a left rotation over the parent of j is performed instead. Once the rotation takes place, it is necessary to update the corresponding counters; fortunately, this task only involves the updating of the counter for the rotated node and the counter of its parent. The last part of the algorithm, namely its final lines, deals with the further search for the key, which in this case is achieved recursively. The reader will observe that all the tasks invoked in the algorithm are performed in constant time, and, in the worst case, the recursive call is done from the root down to the leaves, leading to an O(h) running complexity, where h is the height of the tree.
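To make the decision step concrete, here is a minimal sketch (ours, not the authors' code) of a conditional rotation driven purely by the subtree-access counters. It assumes the Node record sketched earlier, augmented with a count field holding the number of accesses to the subtree rooted at that node (the quantity tau(i) above), and it reuses the rotate_up primitive. The quantity delta below plays the role of the criterion psi(j) referred to above, whose exact expression (given by the cited equations) is not reproduced in this text: when j is, say, a left child of p, rotating j upwards raises j and its left subtree by one level, lowers p and p's right subtree by one level, and leaves j's right subtree at its depth, so the change in counter-weighted path length is tau(p) - 2*tau(j) + tau(inner), and the rotation is carried out only when this change is negative.

    def tau(x):
        # number of accesses to the subtree rooted at x; 0 for an empty subtree
        return x.count if x is not None else 0

    def conditionally_rotate(j):
        """Rotate j one level upwards iff doing so lowers the counter-weighted
        path length; only j, its parent and its children are inspected."""
        p = j.parent
        if p is None:
            return False                                  # j is already the root
        inner = j.right if p.left is j else j.left        # subtree whose depth is unchanged
        tau_p, tau_j, tau_inner = tau(p), tau(j), tau(inner)
        delta = tau_p - 2 * tau_j + tau_inner             # change in weighted path length
        if delta >= 0:
            return False                                  # rotating would not help
        rotate_up(j)                                      # pointer manipulation sketched earlier
        # only two counters need fixing: p now roots a smaller subtree, and j now
        # roots the subtree that p used to root
        p.count = tau_p - tau_j + tau_inner
        j.count = tau_p
        return True

A full realization would also increment the count field of every node visited on the way down to the accessed key, as the prose above prescribes.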
The TTOSOM with conditional rotations (TTOCONROT). This section concentrates on the details of the integration between the fields of ADSs and the SOM, and in particular the TTOSOM. Although merging ADSs and the SOM is relevant to a wide spectrum of DSs, we focus our scope by considering only tree-based structures; more specifically, we shall concentrate on the integration of the CONROT heuristic into a TTOSOM, both of which were explained in the preceding sections. We can conceptually distinguish our method, namely the Topology-Oriented SOM with Conditional Rotations (TTOCONROT), from its components and properties. In terms of components, we detect five elements. First of all, the TTOCONROT has a set of neurons, which, like all such methods, represents the data space in a condensed manner. Secondly, the TTOCONROT possesses a connection between the neurons, where the neighbor of any specific neuron is based on a nearness measure that is defined on the tree. The third and fourth components involve the migration of the neurons: similar to the reported families of SOMs, a subset of neurons closest to the winning neuron are moved towards the sample point using a vector quantization (VQ) rule; however, unlike the reported families of SOMs, the identity of the neurons that are moved is based on the tree-based proximity, and not on the proximity in the feature space. Finally, the TTOCONROT possesses mutating operations, namely the conditional rotations. With respect to the properties of the TTOCONROT, we mention the following. First of all, it is adaptive with regard to the migration of the points. Secondly, it is also adaptive with regard to the identity of the neurons moved. Thirdly, the distribution of the neurons in the feature space mimics the distribution of the sample points. Finally, by virtue of the conditional rotations, the entire tree is optimized with regard to the overall accesses, which is a unique phenomenon when compared to the reported family of SOMs, as far as we know. As mentioned in the introductory section, the general dynamic adaptation of SOM lattices reported in the literature considers essentially adding, and in some cases deleting, neurons; however, the concept of modifying the shape of the underlying structure itself has been unrecorded. Our hypothesis is that this is advantageous, by means of a repositioning of the nodes and the consequent edges, as seen when one performs rotations on a BST. In other words, we place our emphasis on the adaptation which occurs as a result of restructuring the DS representing the SOM. In this case, as alluded to earlier, the restructuring process is done between the connections of the neurons, so as to attain an asymptotically optimal configuration in which nodes that are accessed more frequently will tend to be placed close to the root. We thus obtain a new species of SOMs, which is modified by performing rotations conditionally, locally, and in a constant number of steps. The primary goal of the field of ADSs is to have the structure and its elements attain an optimal configuration as the number of iterations increases. Particularly, among the ADSs that use trees as the underlying topology, the common goal is to minimize the overall access cost, and this roughly means that one places the most frequently accessed nodes close to the root, which is also the configuration that CONROT moves towards. Although such an adaptation can be made on any SOM paradigm, the CONROT is relevant to a tree structure, and thus to the TTOSOM; this further implies that some specific considerations must be applied to achieve the integration between the two paradigms. We start by defining a binary search tree SOM (BSTSOM) as a special instantiation of a SOM which uses a BST as the underlying topology. An adaptive BSTSOM (ABSTSOM) is a further refinement of the BSTSOM which, during the training process, employs a technique that automatically modifies the configuration of the tree. The goal of this adaptation is to facilitate and enhance the search process; this assertion must be viewed from the perspective that, for a SOM, neurons that represent areas with a higher density will be queried more often. Every ABSTSOM is characterized by the following properties: first, it is adaptive, where, by virtue of the BST representation, this adaptation is done by means of rotations rather than by merely deleting or adding nodes; second, the neural network corresponds to a BST, and the goal is that the NN maintains the essential stochastic and topological properties of the SOM. Neural distance. As in the case of the TTOSOM, the neural distance dN between two neurons depends on the number of unweighted connections that separate them in the tree; it is, consequently, the number of edges in the shortest path that connects the two given nodes. More explicitly, the distance between two nodes in the tree is defined as the minimum number of edges required to go from one to the other. In the case of trees, the fact that there is only a single path connecting two nodes implies the uniqueness of the shortest path, and permits the efficient calculation of the distance between them by a node traversal algorithm. Note, however, that in the case of the TTOSOM, since the tree itself was static, the distances could be computed a priori, simplifying the computational process. The situation changes when the tree is dynamically modified, as we shall explain below. The implications of having the tree which describes the SOM be dynamic are the following: first of all, the siblings of any given node may change at every time instant; secondly, the parents and ancestors of the node under consideration could also
change at every instant; but, most importantly, the structure of the tree itself could change, implying that nodes that were neighbors at any time instant may not continue to be neighbors at the next. Indeed, in the extreme case, if a node was migrated to become the root, the fact that it had a parent at a previous time instant is irrelevant at the next. This, of course, changes the entire landscape, rendering the resultant SOM unique and distinct from the original TTOSOM. An example will clarify this. Consider the figure which illustrates the computation of the neural distance for various scenarios. First, we present the scenario when the node accessed is B; observe that the distances are depicted with dotted arrows, with an adjacent numeric index specifying the current distance from node B. In the example, prior to an access, nodes H, C and E are all at the same distance from node B, even though they are at different levels in the tree. The reader should be aware that other nodes may also be involved in the calculation, as in the case of the node shown in the figure. The subsequent panels show the process when node B is queried, which in turn triggers a rotation of node B upwards; observe that the rotation itself only requires local modifications, leaving the rest of the tree untouched. For the sake of simplicity and explicitness, unmodified areas of the tree are represented by dashed lines. Finally, the last panel depicts the configuration of the tree after the rotation is performed. At this time instant, C and E are both at a larger distance from B, which means that they have increased their distance to B by unity; moreover, although node H has changed its position, its distance to B remains unmodified. Clearly, the original distances are not necessarily preserved as a consequence of the rotation. Generally speaking, there are four regions of the tree that remain unchanged; these are, namely, the portion of the tree above the parent of the node being rotated, the portion of the tree rooted at the right child of the node being rotated, the portion of the tree rooted at the left child of the node being rotated, and the portion of the tree rooted at the sibling of the node being rotated. Even though these four regions remain unmodified, the neural distances involving their nodes may be affected, because the rotation could lead to a modification of the distances to nodes lying outside the respective region. Another consequence of this operation that is worth mentioning is the following: the distance between any two given nodes that belong to the same unmodified region of the tree is preserved after a rotation is performed. The proof of this assertion is obvious, inasmuch as every path between nodes within any unmodified region retains the same length. This property is interesting because it has the potential to accelerate the computation of the respective neural distances. Figure: Example of the neural distance before and after a rotation. (a) Nodes H, C and E are equidistant from B, even though they are at different levels in the tree. (b), (c) The process of rotating node B upwards. (d) The state of the tree after the rotation, when only C and E are equidistant from B, and their distance to B has increased by unity; on the other hand, although H has changed its position, its distance to B remains the same. The bubble of activity. A concept closely related to the neural distance is the one referred to as the bubble of activity (BoA), which is the subset of nodes within a distance of r away from the node currently examined. Those nodes are, in essence, those which are to be migrated toward the signal presented to the network.
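Since the neural distance is simply the number of tree edges on the unique path between two neurons, the bubble of activity can be gathered by a short breadth-first walk over the parent and child links. The sketch below is ours; it reuses the Node fields introduced earlier and assumes the binary tree used by the TTOCONROT. The formal definition and the paper's own recursive formulation are given next.

    from collections import deque

    def bubble_of_activity(v, radius):
        """Collect every neuron whose tree distance from v (edges along parent
        and child links) is at most 'radius'."""
        bubble = {v}
        frontier = deque([(v, 0)])
        while frontier:
            node, d = frontier.popleft()
            if d == radius:
                continue                      # nodes beyond this one are too far
            for nb in (node.parent, node.left, node.right):
                if nb is not None and nb not in bubble:
                    bubble.add(nb)
                    frontier.append((nb, d + 1))
        return bubble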
This concept is valid for all NNs, and in particular for the TTOSOM; we shall now consider how this bubble is modified in the context of rotations. The concept of the bubble involves the consideration of a quantity, the radius, which establishes how big the BoA is, and which therefore has a direct impact on the number of nodes to be considered. The BoA can be formally defined as B(vi, T, r) = {v in V : dN(vi, v) <= r}, where vi is the node currently being examined, and v is an arbitrary node in the tree T, whose set of nodes is V. Note that B(vi, T, 0) = {vi}, that B(vi, T, r) is contained in B(vi, T, r+1), and that, for a sufficiently large radius, B(vi, T, r) = V; this generalizes the special case in which the tree is a simple directed path. To clarify how the bubble changes in the context of rotations, we first describe the context when the tree is static, as presented in the original TTOSOM. The function TTOSOM-Calculate-Neighborhood (see the algorithm below) specifies the steps involved in the calculation of the subset of neurons that are part of the neighborhood of the BMU. This computation involves a collection of parameters, including B, the current subset of neurons in the proximity of the neuron being examined, v, the BMU itself, and r, the current radius of the neighborhood. When the function is invoked for the first time, the set B contains only the BMU, marked as the current node. Algorithm TTOSOM-Calculate-Neighborhood(B, v, r). Input: (i) B, the set of the nodes in the bubble of activity identified so far; (ii) v, the node from which the bubble of activity is calculated; (iii) r, the current radius of the bubble of activity. Output: (i) the set of nodes in the bubble of activity. Method: if r = 0, then return; else, for each child of v, if the child is not in B, then add it to B and invoke TTOSOM-Calculate-Neighborhood(B, child, r - 1); then, letting parent be the parent of v, if parent is not null and parent is not in B, add it to B and invoke TTOSOM-Calculate-Neighborhood(B, parent, r - 1); end if. End Algorithm. Through the recursive calls, B will end up storing the entire set of units within a radius r of the BMU: the tree is recursively traversed for all the direct topological neighbors of the current node, in the direction of the direct parent and of the children; every time a new neuron is identified as part of the neighborhood, it is added to B, and a recursive call is made with the radius decremented by one, marking the recently added neuron as the current node. The question of whether or not a neuron should be part of the current bubble depends on the number of connections that separate the nodes, rather than on the distance that separates the neurons in the solution space (for instance, the Euclidean distance). The figure depicts how the BoA differs from the one defined by the TTOSOM as a result of applying a rotation. The first panel shows the BoA around the node B, using the same configuration of the tree as in the earlier figure, before the rotation takes place. Here, when r is unity, the BoA involves the nodes B, A, D and F; for the next value of r, the nodes contained in the bubble are B, A, D, F, C, E and H; subsequently, considering a larger radius, the resulting BoA contains the nodes B, A, D, F, C, E, H, G and I; finally, a sufficiently large r leads to a BoA which includes the whole set of nodes. Now observe the case presented in the second panel, which corresponds to the BoA around B after the rotation upwards has been effected, on the same configuration of the tree used before. In this case, when the radius is unity, the nodes B, A and F are the only nodes within the bubble, which is different from the corresponding bubble before the rotation is invoked. Similarly, for the next value of r we obtain a set different from the analogous case, which in this instance is B, A, F, D and H. Note that, coincidentally, for one particular radius the bubbles are identical before and after the rotation: they involve
the nodes B, A, D, F, G and I. Trivially, again, for a sufficiently large r the BoA involves the entire tree; this fact will ensure that the algorithm reaches the base case, in which r = 0. Figure: The BoA associated with the TTOSOM (a) before and (b) after a rotation is invoked at node B. As explained, the equation above describes the criterion for a BoA calculated on a static tree. It happens that, as a result of the conditional rotations, the tree will be dynamically adapted, and so the entire phenomenon has to be reformulated. Consequently, the BoA around a particular node becomes a function of time, and, to reflect this fact, the equation should be reformulated as B(vi, T(t), r) = {v in V(T(t)) : dN(vi, v) <= r}, where t is the discrete time index. The algorithm to obtain the BoA for a specific node in such a setting is identical to the algorithm given above, except that the input tree itself dynamically changes. Further, even though the formal notation includes the time parameter t, it happens that, in practice, the latter is needed only if the application requires a history of the BoA for any or all of the nodes. Storing the history of BoAs will require the maintenance of a DS that will primarily store the changes made to the tree itself; although storing the history of changes made to the tree can be done optimally, the question of explicitly storing the entire history of the BoAs for all the nodes in the tree remains open. Enforcing the BST property. The CONROT heuristic requires that the tree should possess the BST property: let x be a node in a BST; if y is a node in the left subtree of x, then key(y) <= key(x); further, if y is a node in the right subtree of x, then key(x) <= key(y). To satisfy the BST property, first of all, we see that the tree must be binary. As a general TTOSOM utilizes an arbitrary number of children per node, one possibility is to bound the value of the branching factor to be two; in other words, the tree trained by the TTOSOM is restricted to contain at most two children per node. (Of course, this is a severe constraint, but we are forced to require this because the phenomenon of achieving conditional rotations for arbitrary trees is unsolved; this research, however, is currently being undertaken.) Additionally, the tree must implicitly involve a comparison operator between the two children, so as to discern between the branches and thus perform the search process. This comparison can be achieved by defining a unique key that must be maintained for each node in the tree, and which will, in turn, allow a lexicographical arrangement of the nodes. This leads to a different, but closely related, concept, which concerns the preservation of the topology of the SOM during the training process. The configuration of the tree will change as the tree evolves, positioning nodes that are accessed more often closer to the root; this ordering will hopefully be preserved by the rotations. A particularly interesting case occurs when the imposed tree corresponds to a list of neurons, i.e., a degenerate tree. If the TTOSOM is trained using such a tree, where each node has at most two children, then the adaptive process will alter the original list: the rotations will modify the original configuration, generating a new state in which the nodes might have one or two children each. In this case, the consequence of incorporating these enhancements into the TTOSOM implies that the results obtained will be significantly different from those reported earlier. As shown in the literature, an optimal arrangement of the nodes of the tree can be obtained using the probabilities of accesses; if these probabilities are not known a priori, then the CONROT heuristic offers a solution, which involves a decision of whether or not to perform a single
rotation towards the root it happens that the concept of the just accessed node in the is compatible with the corresponding bmu defined for the cl model in cl a neuron may be accessed more often than others and some techniques take advantage of this phenomenon through the inclusion of strategies that add or delete nodes the implicitly stores the information acquired by the currently accessed node by incrementing a counter for that node this is in a distant sense akin to the concept of a bmu counter which adds or delete nodes in competitive networks during the training phase when a neuron is a frequent winner of the cl it gains prominence in the sense that it can represent more points from the original data set this phenomenon is registered by increasing the bmu counter for that neuron we propose that during the training phase we can verify if it is worth modifying the configuration of the tree by moving this neuron one level up towards the root as per the algorithm and consequently explicitly recording the relevant role of the particular node with respect to its nearby neurons achieves this by performing a local movement of the node where only its direct parent and children are aware of the neuron promotion neural promotion is the process by which a neuron is relocated in a more privileged in the network with respect to the other neurons in the nn thus while all all neurons are born equal their importance in the society of neurons is determined by what they represent this is achieved by an explicit advancement of its rank or position given this premise the nodes in the tree will be adapted in such a way that neurons that have been bmus more frequently will tend to move towards the root if an only if a reduction in the overall wpl is obtained as a consequence of such a promotion the properties of guarantee this once the som and bst are tied together in a symbiotic manner where one enhances the other and vice versa the adaptation can be achieved by affecting the configuration of the bst this task will be performed every time a training step of the som is performed clearly it is our task to achieve an integration of the as far as we know we are not aware of any research which deals with the issue of neural promotion thus we believe that this concept itself is pioneering bst and the som and figure depicts the main architecture used to accomplish this it transforms the structure of the som by modifying the configuration of the bst that in turn holds the structure of the neurons figure architectural view of an adaptive som as this work constitutes the first attempt to constraint a som using a bst our focus is placed on the of the nodes in this sense the unique identifiers of the nodes are employed to maintain the bst structure and to promote nodes that are frequently accessed towards the root we are currently examining ways to enhance this technique so as to improve the time required to identify the bmu as well initialization initialization in the case of the ttosom is accomplished in two main steps which involve defining the initial value of each neuron and the connections among them the initialization of the codebook vectors are performed in the same manner as in the basic ttosom the neurons can assume a starting value arbitrarily for instance by placing them on randomly selected input samples on the other hand a major enhancement with respect to the basic ttosom lays in the way the neurons are linked together the basic definition of the ttosom utilizes connections that remain static through 
time. The beauty of such an arrangement is that it is capable of reflecting the user's perspective at the time of describing the topology, and it is able to preserve this configuration until the algorithm reaches convergence. The inclusion of the rotations renders this configuration dynamic. The required local information. In our proposed approach, the codebooks of the SOM correspond to the nodes of a BST. Apart from the information regarding the codebooks themselves in the feature space, each neuron requires the maintenance of additional fields to achieve the adaptation. Besides this, each node inherits the properties of a BST node, and it thus includes a pointer to the left and right children, as well as, to make the implementation easier, a pointer to its parent. Each node also contains a label which is able to uniquely identify the neuron when it is in the company of other neurons; this identification index constitutes the lexicographical key used to sort the nodes of the tree, and remains static as time proceeds. The figure depicts all the fields included in a neuron of the SOM. Figure: Fields included in a SOM neuron. The neural state. The different states that a neuron may assume during its lifetime are illustrated in the corresponding figure. At first, when the node is created, it is assigned a unique identifier, and the rest of the data fields are populated with their initial values; here, the codebook vector assumes a starting value in the feature space, and the pointers are configured so as to appropriately link the neuron with the rest of the neurons in the tree in a BST configuration. Next, during the most significant portion of the algorithm, the NN enters a main loop where training is effected. This training phase involves adjusting the codebooks, and may also trigger optional modules that affect the neuron. Once the BMU is identified, the neuron might assume the restructured state, which means that a restructuring technique, such as the CONROT algorithm, will be applied. Alternatively, the neuron might be ready to accept queries, i.e., be part of the CL process in the mapping mode. Additionally, an option that we are currently investigating involves the case when a neuron is no longer necessary and may thus be eliminated from the main neural structure; we refer to this state as the deleted state, and it is depicted using dashed lines. Finally, we foresee an alternative state, referred to as the frozen state, in which the neuron does not participate in the CL during the training mode, although it may continue to be part of the overall NN structure. Figure: Possible states that a neuron may assume. The training step of the TTOCONROT. The training module of the TTOCONROT is responsible for determining the BMU, performing the restructuring, calculating the BoA, and migrating the neurons within the BoA. Basically, what has to be done is to integrate the CONROT algorithm into the sequence of steps responsible for the training phase of the TTOSOM. The algorithm below describes the details of how this integration is accomplished. Its first line performs the first task of the algorithm, which involves determining the BMU; after that, the next line invokes the CONROT procedure. The rationale for following this sequence of steps is that the parameters needed to perform the conditional rotation, as specified earlier, include the key of the element queried, which in the present context corresponds to the identity of the BMU. At this stage of the algorithm, the BMU may be rotated or not, depending on the optimizing criterion described above, and the BoA is determined after this restructuring is done; these steps are performed in the subsequent lines of the algorithm, respectively. Finally, the remaining lines are responsible for the neural migration itself, and oversee the movement of the neurons within the BoA towards the input sample. Algorithm Train(x, p). Input: (i) x, a sample signal; (ii) p, the pointer to the tree. Method: v <- TTOSOM-Find-BMU(x, p); ConRot(p, key(v)); B <- {v}; B <- TTOSOM-Calculate-Neighborhood(B, v, radius); for all b in B do Update-Rule(b, x); end for. End Algorithm.
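Putting the pieces together, one training step might look like the following Python sketch. It is our own illustration, not the authors' code: it assumes the Node record from the earlier sketches, augmented with a codebook vector w (a list of floats) and the count field, and it reuses bubble_of_activity and conditionally_rotate as defined above. The bookkeeping that CONROT performs on the access counters along the root-to-key search path is omitted for brevity, and find_bmu here is a plain exhaustive search.

    def find_bmu(root, x):
        # Visit every neuron in the tree and return the one whose codebook vector
        # is closest to the sample x (squared Euclidean distance).
        best, best_d = None, float("inf")
        stack = [root]
        while stack:
            node = stack.pop()
            if node is None:
                continue
            d = sum((wi - xi) ** 2 for wi, xi in zip(node.w, x))
            if d < best_d:
                best, best_d = node, d
            stack.extend((node.left, node.right))
        return best

    def train_step(x, root, radius, learning_rate):
        # One TTOCONROT training step: competition, conditional restructuring,
        # tree-based neighborhood determination, and migration of the codebooks.
        bmu = find_bmu(root, x)
        conditionally_rotate(bmu)                       # restructuring step
        for neuron in bubble_of_activity(bmu, radius):  # tree-based neighborhood
            neuron.w = [wi + learning_rate * (xi - wi)
                        for wi, xi in zip(neuron.w, x)]
        # a rotation may have promoted a new node above the old root
        return root if root.parent is None else root.parent

The pointer returned by train_step should be used as the tree handle for the next step, since a conditional rotation at a child of the root changes which neuron is the root.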
Alternative restructuring techniques. Even though we have explained the advantages of the CONROT algorithm, the architecture that we are proposing allows the inclusion of alternative restructuring modules other than the CONROT; potential candidates which can be used to perform the adaptation are the ones mentioned in the literature-review section, and include the splay and the MT algorithms, among others. Experimental results. To illustrate the capabilities of our method, the experiments reported in the present work are primarily focused on lower-dimensional feature spaces; this will help the reader in geometrically visualizing the results we have obtained. However, it is important to remark that the algorithm is also capable of solving problems in higher dimensions, although a graphical representation of the results will not be as illustrative. We know that, as per the results obtained previously, the TTOSOM is capable of inferring the distribution and structure of the data; however, in the present setting we are interested in investigating the effects of applying the neural rotation as part of the training process. To render the results comparable, the experiments in this section use the same schedule for the learning rate and the radius; no particular refinement of the parameters has been done for any specific data set. Additionally, the parameters follow a rather slow decrement of the decay parameters, allowing us to understand how the prototype vectors are moved as convergence takes place; when solving practical problems, we recommend a further refinement of the parameters so as to increase the speed of the convergence process. TTOCONROT's structure learning capabilities. We shall describe the performance of the TTOCONROT with data sets in one, two and three dimensions, as well as experiments in the multidimensional domain; the specific advantages of the algorithm for various scenarios will also be highlighted. One-dimensional objects. Since our entire learning paradigm assumes that the data has a model, our first attempt was to see how the philosophy is relevant to a unidimensional object, a curve, which really possesses a linear topology. Thus, as a prima facie case, we tested the strength of the TTOCONROT to infer the properties of data sets generated from linear functions in the plane. The figure shows different snapshots of how the TTOCONROT learns the data generated from a curve. Random initialization was used, by uniformly drawing points from the unit square; observe that these initial points do not lie on the curve, as our aim here was to show how our algorithm could learn the structure of the data using arbitrary initial values for the codebook vectors. The middle panels of the figure depict the middle phase of the training process, where the edges connecting the neurons are omitted for simplicity. It is interesting to see how, after a few hundred training steps, the original chaotic placement of the neurons is rearranged so as to fall within the line described by the data points. The final configuration is shown in the last panel. The reader should observe that, after convergence has been achieved, the neurons are placed almost equidistantly along the curve, even though the codebooks are not sorted in an increasing numerical order.
the hidden tree and its root denoted by two concentric squares are configured in such a way that nodes that are queried more frequently will tend to be closer to the root in this sense the algorithm is not only capturing the essence of the topological properties of the data set but at the same time rearranging the internal order of the neurons according to their importance in terms of their probabilities of access two dimensional data points to demonstrate the power of including ads in soms we shall now consider the same data sets studied in first we consider the data generated from a distribution as shown in figures in this case the initial tree topology is unidirectional a list although realistically this is quite inadvisable considering the true unknown topology of the distribution in other words we assume that the user has no a priori information about the data distribution thus for the initialization phase a tree is employed as the tree structure and the respective keys are assigned in an increasing order observe that in this way we are providing minimal information to the algorithm the root of the tree is marked with two concentric squares the neuron labeled with the index in figure also with regards to the feature space the prototype vectors are initially randomly placed in the first iteration the linear topology is lost which is attributable to the randomness of the data points as the prototypes are migrated a after iterations b after iterations c after iterations d after iterations figure a tree a list topology learns a curve for the sake of simplicity the edges are ommitted and reallocated see figures and the tree is modified as a consequence of the rotations such a transformation is completely novel to the field of soms finally figure depicts the case after convergence has taken place here the tree nodes are uniformly distributed over the entire triangular domain the bst property is still preserved and further rotations are still possible if the training process continues this experiment serves as an excellent example to show the differences between our current method and the original ttosom algorithm where the same data set with similar settings was utilized in the case of the ttoconrot the points effectively represent the entire data set however the reader must observe that we do not have to provide the algorithm with any particular a priori information about the structure of the data distribution this is learned during the training process as shown in figure thus the specification of the initial tree topology representing his perspective of the data space required by the ttosom is no longer mandatory and an alternative specification which only requires the number of nodes in the initial tree is sufficient a second experiment involves a gaussian distribution here a gaussian ellipsoid is learned using the ttoconrot algorithm the convergence of the entire training execution phase is displayed in figure this experiment considers a complete bst of depth containing nodes for simplicity the labels of the nodes have been removed in figure the tree structure generated by the neurons suggest an ellipsoidal structure for the data distribution this experiment is a good example to show how the nodes close to the root represent dense areas of the ellipsoid and at the same time those node that are far from the root in tree space occupy regions with low density in the extremes of the ellipse the ttoconrot infers this structure without receiving any a priori information about the distribution 
or its structure the experiment shown in figures considers data generated from an irregular shape with a concave surface again as in the case of the experiments described earlier the original tree includes neurons arranged unidirectionally as in a list as a result of the training the distribution is learned and the a after iterations b after iterations c after iterations d after iterations figure a tree a list topology learns a triangular distribution the ds is so that nodes accessed more frequently are moved closer to the root conditionally the bst property is also preserved figure a tree learns a gaussian distribution the neurons that are accessed more frequently are promoted closer to the root tree is adapted accordingly as illustrated in figure observe that the random initialization is performed by randomly selecting points from the unit square and this points thus do not necessarily fall within the concave boundaries although this initialization scheme is responsible of placing codebook vectors outside of the irregular shape the reader should observe that in a few training steps they are repositioned inside the contour it is important to indicate that even though after the convergence of the algorithm a line connecting two points passes outside the overall unknown shape one must take into account that the ttoconrot tree attempts to mimic the stochastic properties in terms of access probabilities when the user desires the topological mimicry in terms of skeletal structure we recommend the use of the ttosom instead the final distribution of the points is quite amazing a after iterations b after iterations c after iterations d after iterations figure a tree a list topology learns different distributions from a concave object using the ttoconrot algorithm the set of parameters is the same as in the other examples three dimensional data points we will now explain the results obtained when applying the algorithm with and without conrot to do this we opt to consider objects the experiments utilize the data generated from the contour of the unit sphere it also initially involves an chain of neurons additionally in order to show the power of the algorithm both cases initialize the codebooks by randomly drawing points from the unit cube which thus initially places the points outside the sphere itself figure presents the case when the basic tto algorithm without conrot learns the unit sphere without performing conditional rotations the illustration presented in figure show the state of the neurons before the first iteration is completed here as shown the codebooks lie inside the unit cube although some of the neurons are positioned outside the boundary of the respective circumscribed sphere which is the one we want to learn secondly figures and depict intermediate steps of the learning phase as the algorithm processes the information provided by the sample points and the neurons are repositioned the chain of neurons is constantly twisted so as to adequately represent the entire manifold finally figure illustrates the case when the convergence is reached in this case the list of neurons is evenly distributed over the sphere preserving the original properties of the object and also presenting a shape which reminds the viewer of the peano curve a complimentary set of experiments which involved the learning of the same unit sphere where the tto scheme was augmented by conditional rotations conrot was also conducted figure shows the initialization of the codebooks here the starting positions of the 
neurons fall within the unit cube as in the case displayed in figure figures and show snapshots after and iterations respectively in this case the tree configuration obtained in the intermediate phases differ significantly from those obtained by the corresponding configurations shown in figure those that involved no rotations in this case the list rearranges itself as per conrot modifying the original chain structure to yield a a after iterations b after iter c after iter d after iter figure a tree a list topology learns a sphere distribution when the algorithm does not utilize any conditional rotation balanced tree finally from the results obtained after convergence and illustrated in figure it is possible to compare both scenarios in both cases we see that the tree is accurately learned however in the first case the structure of the nodes is maintained as a list throughout the learning phase while in the case when conrot is applied the configuration of the tree is constantly revised promoting those neurons that are queried more frequently additionally the experiments show us how the dimensionality reduction property evidenced in the traditional som is also present in the ttoconrot here an object in the domain is successfully learned by our algorithm and the properties of the original manifold are captured from the perspective of a tree a after iterations b after iter c after iter d after iter figure a tree a list topology learns a sphere distribution multidimensional data points the well known iris dataset was chosen for showing the power of our scheme in a scenario when the dimensionality is increased this data set gives the measurements in centimeters of the variables which are the sepal length sepal width petal length and petal width respectively for flowers from each of species of the iris family the species are the iris setosa versicolor and virginica in this set of experiments the iris data set was learned under three different configurations using a fixed schedule for the learning rate and radius but with a distinct tree configuration the results of the experiments are depicted in figure and involve a complete binary tree of depth and respectively taking into account that the dataset possesses a high dimensionality we present the projection in the space to facilitate the visualization we also removed the labels from the nodes in figure to improve understandability a using nodes b using nodes c using nodes figure three different experiments where the ttoconrot effectively captures the fundamental structure of the iris dataset projection of the data is shown the experiment utilizes a underlying tree topology of a complete binary tree with different levels of depth by this we attempt to show examples of how exactly the same parameters of the ttoconrot can be utilized to learn the structure from data belonging to the and also spaces after executing the each of the main branches of the tree were migrated towards the center of mass of the cloud of points in the belonging to each of the three categories of flowers respectively since the ttoconrot is an unsupervised learning algorithm it performs learning without knowing the true labels of the samples however when these labels are available one can use them to evaluate the quality of the tree to do so each sample is assigned to its closest neuron and tagging the neuron with the class which is most frequent table presents the evaluation for the tree in figure assigned to neuron table cluster to class evaluation for the tree in figure using 
the simple voting scheme explained above it is possible to see from table that only instances are incorrectly classified of the instances are correctly classified additionally observe that node contains all the instances corresponding to the class it is well known that the class is linearly separable from the other two classes and our algorithm was able to discover this without providing it with the labels we find this result quite fascinating the experimental results shown in table not only demonstrate the potential capabilities of the ttoconrot for performing clustering but also suggest the possibilities of using it for pattern classification according to there are several reasons for performing pattern classification using an unsupervised approach we are currently investigating such a classification strategy skeletonization in general the main objective of skeletonization consists of generating a simpler representation of the shape of an object the authors of refer to skeletonization in the plane as the process by which a shape is transformed into a one similar to a stick figure the applications of skeletonization are diverse including the fields of computer vision and pattern recognition as explained in the traditional methods for skeletonization assume the connectivity of the data points and when this is not the case more sophisticated methods are required previous efforts involving som variants to achieve skeletonization has been proposed we remark that the ttosom is the only one which uses a structure the ttosom assumed that the shape of the object is not known a priori rather this is learned by accessing a single point of the entire shape at any time instant our results reported in confirm that this is actually possible and we now focus on how the conditional rotations will affect such a skeletonization figure shows how the ttoconrot learned the skeleton of different objects in the and the domain in all the cases the same schedule of parameters were used and only the number of neurons employed was chosen proportionally to the number of data points contained in the respective data sets it is important to remark that we did not invoke any of the edges minimum spanning tree and that the skeleton observed was exactly what our bstsom learned firstly figures illustrate the shapes of the silhouette of a human a rhinoceros a representation of a head and a representation of a woman the figures also show the trees learned from the respective data sets additionally figures display only the data points which in our opinion are capable of representing the fundamental structure of the four objects in a way effectively as a final comment we stress that all the shapes employed in the experiments involve the learning of the external structure of the objects for the case of solid objects if the internal data points are also provided the ttoconrot is able to give an approximation of the a representation in which the skeleton is built inside the solid object theoretical analysis according to kiviluoto there are three different criteria for evaluating the quality of a map the first criterion indicates how continuous the mapping is implying that input signals that are close in the input space should be mapped to codebooks that are close in the output space as well a second criterion involves the resolution of the mapping maps with high resolution possess the additional property that input signals that are distant in the input space should be represented by distant codebooks in the output space a third 
criterion imposed on the accuracy of the mapping is aimed to reflect the probability distribution of the input set there exist a variety of measures for quantifying the quality of the topology preservation the author of surveys a number of relevant measures for the quality of maps and these include the quantization error the topographic product the topographic error and the trustworthiness and neighborhood preservation although we are currently investigating how the quality of any som not just our scheme can be quantified using these metrics the following arguments are pertinent a b c d e f g h figure ttoconrot effectively captures the fundamental structure of four objects in a way figures show the silhouette of a human a rhinoceros a representation of a head and a representation of a woman as well as the respective trees learned figures show only the respective data points the ordering of the weights with respect to the position of the neurons of the som has been proved for unidimensional topologies extending these results to higher dimensional configurations or topologies leads to numerous unresolved problems first of all the question of what one means by ordering in higher dimensional spaces has to be defined further the issue of the absorbing nature of the ordered state is open budinich in explains intuitively the problems related to the ordering of neurons in higher dimensional configurations huang et al introduce a definition of the ordering and show that even though the position of the codebook vectors of the som have been ordered there is still the possibility that a sequence of stimuli will cause their disarrangement some statistical indexes of correlation between the measures of the weights and distances of the related positions have been introduced in with regard to the topographic product the authors of have shown the power of the metric by applying it on different artificial and data sets and also compared it with different measures to quantify the topology their study concentrates on the traditional som implying that the topologies evaluated were of a linear nature with the consequential extension to and by means of grids only in haykin mention that the topographic product may be employed to compare the quality of different maps even when these maps possess different dimensionality however he also noted that this measurement is only possible when the dimensionality of the topological structure is the same as the dimensionality of the feature space further topologies were not considered in their study to be more precise most of the effort towards determining the concept of topology preservation for dimensions greater than unity are specifically focused on the som and do not define how a treelike topology should be measured nor how to define the order in topologies which are not thus we believe that even the tools to analyze the ttoconrot are currently not available the experimental results obtained in our paper suggest that the ttoconrot is able to train the nn so as to preserve the stimuli however in order to quantify the quality of this topology the matter of defining a concept of ordering on structure has yet to be resolved although this issue is of great interest to us this rather ambitious task lies beyond the scope of our present manuscript conclusions and discussions concluding remarks in this paper we have proposed a novel integration between the areas of adaptive data structures adss and the maps soms in particular we have shown how a som can be adaptively transformed 
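Two of the standard quality measures listed above, the quantization error and the topographic error, can be computed directly from the trained codebooks. The sketch below is a generic NumPy illustration in which the tree topology is supplied as an adjacency matrix; it is not the evaluation code used in this work.

```python
import numpy as np

def quantization_error(codebooks, samples):
    """Average distance between each input and its best matching unit."""
    dists = np.linalg.norm(samples[:, None, :] - codebooks[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def topographic_error(codebooks, samples, adjacency):
    """Fraction of inputs whose first and second best matching units are not
    neighbours in the output topology (here: not adjacent in the tree)."""
    dists = np.linalg.norm(samples[:, None, :] - codebooks[None, :, :], axis=2)
    order = dists.argsort(axis=1)
    return (adjacency[order[:, 0], order[:, 1]] == 0).mean()

# Toy usage with a 3-node tree whose root (node 0) is linked to nodes 1 and 2.
rng = np.random.default_rng(1)
codebooks, samples = rng.random((3, 2)), rng.random((100, 2))
adjacency = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [1, 0, 0]])
print(quantization_error(codebooks, samples),
      topographic_error(codebooks, samples, adjacency))
```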
by the employment of an underlying binary search tree bst structure and subsequently restructured using rotations that are performed conditionally these rotations on the nodes of the tree are local can be done in constant time and performed so as to decrease the weighted path length wpl of the entire tree one of the main advantages of the algorithm is that the user does not need to have a priori knowledge about the topology of the input data set instead our proposed method namely the ttosom with conditional rotations ttoconrot infers the topological properties of the stochastic distribution and at the same time attempt to build the best bst that represents the data set incorporating the data structure s constraints in this ways has not being achieved by any of the related approaches included in the our premise is that regions of the that are accessed more often should be promoted to preferential spots in the tree representation which yields to an improved stochastic representation as our experimental results suggest the ttoconrot tree is indeed able to absorb the stochastic properties of the input manifold it is also possible to obtain a tree configuration that can learn both the stochastic properties in terms of access probabilities and at the same time preserve the topological properties in terms of its skeletal structure discussions and future work as explained in section the work associated with measuring the topology preservation of the som including the proof of its convergence for the unidimensional case has been performed for the traditional som only the questions are unanswered for how a topology should be measured and for defining the order in topologies which are not thus we believe that even the tools for formally analyzing the ttoconrot are currently not available the experimental results obtained in our paper suggest that the ttoconrot is able to train the neural network nn so as to preserve the stimuli for which the concept of ordering on structures has yet to be resolved even though our principal goal was to obtain a more accurate representation of the stochastic distribution our results also suggest that the special configuration of the tree obtained by the ttoconrot can be further exploited so as to improve the time required for identifying the best matching unit bmu the includes different strategies that expand trees by inserting nodes which can be a single neuron or a that essentially are based on a quantization error qe measure in some of these strategies the error measure is based on the hits the number of times a neuron has been selected as the bmu which is in principle the same type of counter utilized by the conditional rotations conrot our strategy ttoconrot which asymptotically positions frequently accessed nodes close to the root might incorporate a module that taking advantage of the optimal tree and the bmu counters already present in the ttoconrot splits the node at the root level thus the splitting operation will occur without the necessity of searching for the node with the largest qe under the assumption that a higher number of hits indicates that the degree of granularity of a particular neuron is lacking refinement the concept of using the root of the tree for growing a som is itself pioneering as far as we know and the design and implementation details of this are currently being investigated references and landis an algorithm for the organization of information sov math dokl akram khalid and khan identification and classification of microaneurysms for 
early detection of diabetic retinopathy pattern recognition alahakoon halgamuge and srinivasan dynamic maps with controlled growth for knowledge discovery ieee transactions on neural networks allen and munro binary search trees acm arsuaga uriarte and topology preservation in som international journal of applied mathematics and computer sciences astudillo self organizing maps constrained by data structures phd thesis carleton university astudillo and oommen on using adaptive binary search trees to enhance self organizing maps in nicholson and li editors australasian joint conference on artificial intelligence ai pages astudillo and oommen imposing topologies onto self organizing maps information sciences astudillo and oommen on achieving pattern recognition by utilizing soms pattern recognition bauer herrmann and villmann neural maps and topographic vector quantization neural networks bauer and pawelzik quantifying the neighborhood preservation of feature maps neural networks july bitner heuristics that dynamically organize data structures siam j blackmore visualizing structure with the incremental grid growing neural network master s thesis university of texas at austin budinich on the ordering conditions for maps neural computation carpenter and grossberg the art of adaptive pattern recognition by a neural network computer cheetham oommen and ng adaptive structuring of binary search trees using conditional rotations ieee trans on knowl and data conti and de giovanni on the mathematical treatment of self organization extension of some classical results in artificial neural networks icann international conference volume pages cormen leiserson rivest and stein introduction to algorithms second edition july datta parui and b chaudhuri skeletal shape extraction from dot patterns by selforganization pattern recognition proceedings of the international conference on aug deng image collection summarization and comparison using maps pattern recognition dittenbach merkl and rauber the growing hierarchical map in neural networks ijcnn proceedings of the international joint conference on volume pages dopazo and carazo phylogenetic reconstruction using an unsupervised growing neural network that adopts the topology of a phylogenetic tree journal of molecular evolution february duda hart and stork pattern classification edition fritzke growing cell structures a network for unsupervised and supervised learning neural networks fritzke growing grid a network with constant neighborhood range and adaptation strength neural processing letters fritzke a growing neural gas network learns topologies in tesauro touretzky and leen editors advances in neural information processing systems pages cambridge ma mit press guan trees and forests a powerful tool in pattern clustering and recognition in image analysis and recognition third international conference iciar de varzim portugal september proceedings part i pages i haykin neural networks and learning machines prentice hall edition edition huang babri and li ordering of maps in cases neural computation kang and kim multiple people tracking using competitive condensation pattern recognition kaplan handbook of data structures and applications chapter persistent data structures pages chapman and kaski kangas and kohonen bibliography of map som papers neural computing surveys khosravi and safabakhsh human eye sclera detection and tracking using a modified timeadaptive map pattern recognition kiviluoto topology preservation in maps in ieee neural networks council 
editor proceedings of international conference on neural networks icnn volume pages new jersey usa knuth the art of computer programming volume ed sorting and searching addison wesley longman publishing redwood city ca usa kohonen maps new york secaucus nj usa koikkalainen and oja hierarchical feature maps ijcnn international joint conference on neural networks june lai efficient maintenance of binary search trees phd thesis university of waterloo waterloo canada liang fairhurst and guest no titlea synthesised word approach to word retrieval in handwritten documents pattern recognition martinetz and schulten a network learns topologies in in proceedings of international conference on articial neural networks volume i pages amsterdam mehlhorn dynamic binary search siam journal on computing merkl dittenbach and rauber adaptive hierarchical incremental grid growing an architecture for data visualization in in proceedings of the workshop on maps advances in maps pages miikkulainen script recognition with hierarchical feature maps connection science ogniewicz and hierarchic voronoi skeletons pattern recognition oja kaski and kohonen bibliography of map som papers addendum neural computing surveys pakkanen iivarinen and oja the evolving tree a novel network for data analysis neural processing letters december peano sur une courbe qui remplit toute une aire plane mathematische annalen honkela and kohonen bibliography of map som papers addendum technical report helsinki university of technology department of information and computer science espoo finland december survey and comparison of quality measures for maps in georg and andreas rauber editors proceedings of the fifth workshop on data analysis wda pages sliezsky dom tatry slovakia june elfa academic press rauber merkl and dittenbach the growing hierarchical map exploratory analysis of data ieee transactions on neural networks rojas neural networks a systematic introduction new york new york ny usa samsonova kok and ijzerman treesom cluster analysis in the map neural networks advances in self organising maps wsom singh cherkassky and papanikolopoulos maps for the skeletonization of sparse shapes neural networks ieee transactions on jan sleator and tarjan binary search trees acm venna and kaski neighborhood preservation in nonlinear projection methods an experimental study in georg dorffner horst bischof and kurt hornik editors icann volume of lecture notes in computer science pages springer yao mignotte collet galerne and burel unsupervised segmentation using a selforganizing map and a noise model estimation in sonar imagery pattern recognition
robust satisfaction of temporal logic specifications via reinforcement learning oct austin derya zhaodan mac and calin we consider the problem of steering a system with unknown stochastic dynamics to satisfy a rich temporallylayered task given as a signal temporal logic formula we represent the system as a markov decision process in which the states are built from a partition of the statespace and the transition probabilities are unknown we present provably convergent reinforcement learning algorithms to maximize the probability of satisfying a given formula and to maximize the average expected robustness a measure of how strongly the formula is satisfied we demonstrate via a pair of robot navigation simulation case studies that reinforcement learning with robustness maximization performs better than probability maximization in terms of both probability of satisfaction and expected robustness i ntroduction we consider the problem of controlling a system with unknown stochastic dynamics a black box to achieve a complex task an example is controlling a noisy aerial vehicle with partially known dynamics to visit a set of regions in some desired order while avoiding hazardous areas we consider tasks given as temporal logic tl formulae an extension of first order boolean logic that can be used to reason about how the state of a system evolves over time when a stochastic dynamical model is known there exist algorithms to find control policies for maximizing the probability of achieving a given tl specification by planning over stochastic abstractions however only a handful of papers have considered the problem of enforcing tl specifications to a system with unknown dynamics passive and active reinforcement learning has been used to find a policy that maximizes the probability of satisfying a given linear temporal logic formula in this paper in contrast to the above works on reinforcement learning which use propositional temporal logic we use signal temporal logic stl a rich predicate logic that can be used to describe tasks involving bounds on physical parameters and time intervals an example of such a this work was partially supported at boston university by onr grant number and by the nsf under grant numbers and author is with mechanical engineering and electrical engineering georgia institute of technology atlanta ga usa austinjones authors are with mechanical engineering boston university boston ma usa cbelta daksaray author is with mechanical and aerospace engineering university of california davis davis ca usa zdkong author is with aeronautics and astronautics stanford university stanford ca usa schwager author is with systems engineering boston university boston ma usa property is within seconds a region in which y is less than is reached and regions in which y is larger than are avoided for stl admits a continuous measure called robustness degree that quantifies how strongly a given sample path exhibits an stl property as a real number rather than just providing a yes or no answer this measure enables the use of continuous optimization methods to solve inference or formal synthesis problems involving stl one of the difficulties in solving problems with tl formulae is the of their satisfaction for instance if the specification requires visiting region a before region b whether or not the system should steer towards region b depends on whether or not it has previously visited region a for linear temporal logic ltl formulae with semantics this can be broken by translating the formula to a 
deterministic rabin automaton dra a model that automatically takes care of the in the case of stl such a construction is difficult due to the timebounded semantics we circumvent this problem by defining a fragment of stl such that the progress towards satisfaction is checked with some finite number of state measurements we thus define an mdp called the whose states correspond to the history of the system the inputs to the are a finite collection of control actions we use a reinforcement learning strategy called in which a policy is constructed by taking actions observing outcomes and reinforcing actions that improve a given reward our algorithms either maximize the probability of satisfying a given stl formula or maximize the expected robustness with respect to the given stl formula these procedures provably converge to the optimal policy for each case furthermore we propose that maximizing expected robustness is typically more effective than maximizing probability of satisfaction we prove that in certain cases the policy that maximizes expected robustness also maximizes the probability of satisfaction however if the given specification is not satisfiable the probability maximization will return an arbitrary policy while the robustness maximization will return a policy that gets as close to satisfying the policy as possible finally we demonstrate through simulation case studies that the policy that maximizes expected robustness in some cases gives better performance in terms of both probability of satisfaction and expected robustness when fewer training episodes are available ii s ignal t emporal l ogic stl stl is defined with respect to continuously valued signals let f a b denote the set of mappings from a to b and define a signal as a member of f n rn for a signal s we denote st as the value of s at time t and as the sequence of values moreover we denote s t as the suffix from time t s t st t in this paper the desired mission specification is described by an stl fragment with the following syntax f t t f s a b where t is a finite time bound and are stl formulae a and b are constants and f s d is a predicate where s is a signal f f rn r is a function and d r is a constant the boolean operators and are negation not and conjunction and respectively the other boolean operators are defined as usual the temporal operators f g and u stand for finally eventually globally always and until respectively note that in this paper we use a version of stl rather than the typical formulation the semantics of stl is recursively defined as s t f s d s t s t s t g a b iff iff iff iff s t f a b iff s t a b iff f st d s t and s t s t or s t s t t a t b t a t b s t t a t b s t t t and s t in plain english f a b means within a and b time units in the future is true g a b means for all times between a and b time units in the future is true and a b means there exists a time c between a and b time units in the future such that is true until c and is true at stl is equipped with a robustness degree also called degree of satisfaction that quantifies how well a given signal s satisfies a given formula the robustness is calculated recursively according to the quantitative semantics r s f s d t r s t r s t r s g a b t d f st min r s t r s t max r s t r s t min r s t t r s t r s a b t supt min r s t inft t t r s t r s f a b t similar to let hrz denote the horizon length of an stl formula the horizon length is the required number of samples to resolve any future or past requirements of the horizon length can be computed 
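The quantitative semantics above can be mechanized for discrete-time signals. In the sketch below each formula is a Python function mapping a signal and a time index to a robustness value; time bounds are interpreted as sample counts, the until operator is omitted for brevity, and the predicate convention f(s) < d with robustness d - f(s_t) is one common choice rather than a detail taken from this paper.

```python
import numpy as np

# A formula is a function mapping (signal, t) -> robustness, where `signal`
# is a (T, n) array of samples and `t` is a discrete time index.

def predicate(f, d):
    """Atomic predicate f(s) < d, with robustness d - f(s_t)."""
    return lambda s, t: d - f(s[t])

def neg(phi):
    return lambda s, t: -phi(s, t)

def conj(phi1, phi2):
    return lambda s, t: min(phi1(s, t), phi2(s, t))

def finally_(a, b, phi):
    """F_[a,b] phi: max robustness of phi over the window [t+a, t+b]."""
    return lambda s, t: max(phi(s, k) for k in range(t + a, min(t + b, len(s) - 1) + 1))

def globally(a, b, phi):
    """G_[a,b] phi: min robustness of phi over the window [t+a, t+b]."""
    return lambda s, t: min(phi(s, k) for k in range(t + a, min(t + b, len(s) - 1) + 1))

# Toy usage: "within 4 steps both coordinates drop below 0.5".
spec = finally_(0, 4, conj(predicate(lambda p: p[0], 0.5),
                           predicate(lambda p: p[1], 0.5)))
signal = np.array([[0.9, 0.9], [0.8, 0.6], [0.2, 0.3], [0.7, 0.4], [0.6, 0.9]])
print(spec(signal, 0))   # ~0.2, achieved at the third sample
```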
recursively as hrz p hrz hrz hrz hrz a b hrz max hrz hrz max hrz hrz max hrz b hrz b where are stl formulae example consider the robot navigation problem illustrated in figure a the specification is visit regions a or b and visit regions c or d every time a mission t horizon of let s t x t y t where x and y are the and components of the signal this task can be formulated in stl as g f x x y y x x y y x x y y x x y y figure a shows two trajectories of the system beginning at the initial location of r and ending in region c that each satisfies the inner specification given in note that barely satisfies as it only slightly penetrates region a while appears to satisfy it strongly as it passes through the center of region a and the center of region the robustness degrees confirm this r while r the horizon length of the inner specification of is hrz max max max iii m odels for r einforcement l earning for a system with unknown and stochastic dynamics a critical problem is how to synthesize control to achieve a desired behavior a typical approach is to discretize the state and action spaces of the system and then use a reinforcement learning strategy by learning how to take actions through trial and error interactions with an unknown environment in this section we present models of systems that are amenable for reinforcement learning to enforce temporal logic specifications we start with a discussion on the widely used ltl before introducing the particular model that we will use for reinforcement learning with stl max t we use r s to denote r s if r s is large and positive then s would have to change by a large deviation in order to violate similarly if r s is large in absolute value and negative then s strongly violates reinforcement learning with ltl one approach to the problem of enforcing ltl satisfaction in a stochastic system is to partition the statespace and design control primitives that can nominally drive the system from one region to another these controllers the stochastic dynamical model of the system and the quotient obtained from the partition are used to construct a markob decision process mdp called a bounded parameter mdp or bmdp whose transition probabilities are these bmdps can then be composed with a dra constructed from a given ltl formula to form a product bmdp dynamic programming dp can then be applied over this product mdp to generate a policy that maximizes the probability of satisfaction other approaches to this problem include aggregating the states of a given quotient until an mdp can be constructed such that the transition probability can be considered constant with bounded error the optimal policy can be computed over the resulting mdp using dp or approximate dp methods thus even when the stochastic dynamics of a system are known and the logic that encodes constraints has timeabstract semantics the problem of constructing an abstraction of the system that is amenable to control policy synthesis is difficult and computationally intensive reinforcement learning methods for enforcing ltl constraints make the assumption that the underlying model under control is an mdp implicitly these procedures compute a frequentist approximation of the transition probabilities that asymptotically approaches the true unknown value as the number of observed sample paths increases since this algorithm doesn t explicitly rely on any a priori knowledge of the transition probability it could be applied to an abstraction of a system that is built from a propositionpreserving partition in 
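The horizon-length recursion given above is equally easy to mechanize. The tuple encoding of formulas in the sketch below is purely illustrative (the paper does not prescribe a representation), and the numeric bounds in the final example are made up for the demonstration.

```python
# Formulas as nested tuples: ('pred',), ('not', phi), ('and', phi1, phi2),
# ('F', a, b, phi), ('G', a, b, phi) and ('U', a, b, phi1, phi2).

def horizon(phi):
    op = phi[0]
    if op == 'pred':
        return 0
    if op == 'not':
        return horizon(phi[1])
    if op == 'and':
        return max(horizon(phi[1]), horizon(phi[2]))
    if op in ('F', 'G'):
        return phi[2] + horizon(phi[3])
    if op == 'U':
        return phi[2] + max(horizon(phi[3]), horizon(phi[4]))
    raise ValueError(op)

# F_[0,10]( p and G_[0,5] q ) needs 10 + max(0, 5) = 15 samples ahead.
print(horizon(('F', 0, 10, ('and', ('pred',), ('G', 0, 5, ('pred',))))))
```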
this case the uncertainty on the motion described by intervals in the bmdp that is reduced via computation would instead be described by complete ignorance that is reduced via learning the resulting policy would map regions of the statespace to discrete actions that will optimally drive the state of the system to satisfy the given ltl specification different partitions will result in different policies in the next section we extend the above observation to derive a discrete model that is amenable for reinforcement learning for stl formulae reinforcement learning with stl in order to reduce the search space of the problem we partition the statespace of the system to form the quotient graph g e where is a set of discrete states corresponding to the regions of the statespace and e corresponds to the set of edges an edge between two states and exists in e if and only if and are neighbors share a boundary in the partition in our case since stl has semantics we can not use an automaton with a acceptance condition a dra to check its satisfaction in general whether or not a given trajectory t satisfies an stl formula would be determined by directly using the qualitative semantics the stl fragment consists of a with horizon length hrz that is modified by either a f t or g t temporal operator this means that in order to update at time t whether or not the given formula has been satisfied or violated we can use the previous state values t for this reason we choose to learn policies over an mdp with finite memory called a whose states correspond to sequences of length of regions in the defined partition example cont d let the robot evolve according to the dubins dynamics xt t cos t yt t sin t where xt and yt are the x and y coordinates of the robot at time t v is its forward speed t is a time interval and the robot s orientation is given by t the control primitives in this case are given by act up down le f t right which correspond to the directions on the grid each noisy control primitive induces a distribution with support where is the orientation where the robot is facing the desired cell when a motion primitive is enacted the robot rotates to an angle t drawn from the distribution and moves along that direction for t time units the partition of the statespace and the induced quotient g are shown in figures b and c respectively a state i j in the quotient figure c represents the region in the partition of the statespace figure b with the point i j in the lower left hand corner definition given a quotient of a system g e and a finite set of actions act a decision process is a tuple hs act pi where s is the set of finite states where is the empty string each state s corresponds to a or shorter path in g shorter paths of length n representing the case in which the system has not yet evolved for time steps have prepended n times p s act s is a probabilistic transition relation p a can be positive only if the first states of are equal to the last states of and there exists an edge in g between the final state of and the final state of we denote the state of the at time t as definition given a trajectory t of the original system we define its induced trace in the as tr t t that is corresponds to the previous regions of the statespace that the state has resided in from time t to time the construction of a from a given quotient and set of actions is straightforward the details are omitted due to length constraints we make the following key assumptions on the quotient and the resulting the defined control 
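A minimal simulation of the noisy motion primitives in the running example is sketched below, under the assumption of a zero-mean Gaussian perturbation of the commanded heading; the speed, time step and noise level are placeholders, not values from the case study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Nominal headings (radians) of the four grid primitives.
HEADINGS = {'right': 0.0, 'up': np.pi / 2, 'left': np.pi, 'down': -np.pi / 2}

def apply_primitive(x, y, action, v=1.0, dt=1.0, noise_std=0.2):
    """One step of the noisy Dubins-style dynamics: turn to the commanded
    heading perturbed by zero-mean noise, then move forward for dt."""
    theta = HEADINGS[action] + rng.normal(0.0, noise_std)
    return x + v * dt * np.cos(theta), y + v * dt * np.sin(theta)

# A short trajectory produced by repeating the 'up' primitive.
state = (0.5, 0.5)
for _ in range(4):
    state = apply_primitive(*state, 'up')
    print(state)
```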
actions act will drive the system either to a point in the current region or to a point in a neighboring region of the partition no regions are skipped the transition relation p is markovian for every state there exists a continuous set of sample paths t whose traces could be that state the dynamics of the underlying system produces an unknown distribution p t t since the robustness degree is a function of sample paths of length and an stl formula we can define a distribution p r t t c b y m y m c b a d a d r r x m x m a b c fig a example of robot navigation problem b partitioned space c subsection of the quotient example cont d figure shows a portion of the constructed from figure the states in are labeled with the corresponding sample paths of length in g the green and blue s in the states in correspond to green and blue regions from figure iv p roblem f ormulation in this paper we address the following two problems problem maximizing probability of satisfaction let be a as described in the previous section given an stl formula with syntax find a policy f s n act such that arg max t t problem maximizing average robustness let be as defined in problem given an stl formula with syntax f s act such that find a policy s act arg max t r t s act furthermore if same probability of satisfaction as is not satisfiable any arbitrary policy could be a solution to problem as all policies will result in a satisfaction probability of if is unsatisfiable problem yields a solution that attempts to get as close as possible to satisfying the formula as the optimal solution will have an average robustness value that is least negative the forms of the objective functions differ for the two different types of formula f t and g t case consider an stl formula f t in this case the objective function in can be rewritten as t t t and the objective function in can be rewritten as t max r t t case now consider an stl formula g t the objective function in can be rewritten as t t t and the objective function in can be rewritten as t min t fig part of the constructed from the robot navigation mdp shown in figure problems and are two alternate solutions to enforce a given stl specification the policy found by problem maximizes the chance that will be satisfied while drives the system the policy found by problem to satisfy as strongly as possible on average problems similar to have already been considered in the literature however problem is a novel formulation that provides some advantages over problem as we show achieves the in section v for some special systems r t m aximizing e xpected robustness vs m aximizing p robability of s atisfaction here we demonstrate that the solution to subsumes the solution to for a certain class of systems due to space limitations we only consider formulae of the type f t let act be a for simplicity we make the following assumption on assumption for every state either every trajectory t whose trace is satisfies denoted or every trajectory that passes through the sequence of regions associated with does not satisfy denoted assumption can be enforced in practice during partitioning we define the set a definition the signed graph distance of a s to a set x s is l x min j d x j i i j min l x where l is the length of the shortest path from to we also make the following two assumptions assumption for any signal t such that tr t let r t be bounded from below by rmin and from above by rmax assumption let pr r st t s for any two states d a d a j rmin rmax and over m as now we define the policies 
mr h i arg max t t s act arg max t h s act i max r t proposition if assumptions and hold then the maximizes the expected probability of satisfacpolicy tion proof given any policy its associated reachability probability can be defined as arg min d a let i be the indicator function such that i b is if b is true and if b is false by definition the expected probability of satisfaction for a given policy is eps e i k t i a also the expected robustness of policy becomes er e max r r t pr max r x t max r x dx rmin pr t r pr max r x t k max r x dx rmin rmin pr t r rmax rmin x x dx rmin r rmax x rmin x dx rmin since rmin is constant maximizing is equivalent to r max x dx r x dx let p be the satisfaction probability such that p then we can rewrite the objective in as p arg min d a a j r rmax x dx p arg min d a r x dx a now j arg min d a a r rmax x dx arg min d a rmin x dx a thus any policy increasing j also leads to an increase in since increasing j is equivalent to increasing er then we can conclude that the policy that maximizes the robustness also achieves the maximum satisfaction probability vi c ontrol s ynthesis to m aximize robustness a policy generation through since we do not know the dynamics of the system under control we can not a priori predict how a given control action will affect the evolution of the system and hence its progress towards a given specification thus we use the paradigm of reinforcement learning to learn policies to solve problems and in reinforcement learning the system takes actions and records the rewards associated with the pair these rewards are then used to update a feedback policy that maximizes the expected gathered reward in our cases the rewards that we collect over are related to whether or not is satisfied problem or how robustly is problem our solutions to these problems rely on a formulation let r a be the reward collected when action a act was taken in state s define the function q s act n as q a t r a max e r r a max q t for an optimization problem with a cumulative objective function of the form r al t the optimal policy f s act can be found by t t arg maxq a t t applying the update rule convergence of batch at t t given a formula of the form f t and an objective of maximizing the expected robustness problem we will show that applying algorithm converges to the optimal solution the other three cases discussed in section iv can be proven similarly the following analysis is based on the optimal q function derived from is qt at t t r at max qt where will cause qt converges to q as t goes to infinity batch we can not reformulate problems and into the form see section iv thus we propose an alternate formulation called batch to solve these problems instead of updating the after each action is taken we wait until an entire episode s t is completed before updating the the batch procedure is summarized in algorithm algorithm the batch q learning algorithm function batchqlearn sys probtype nep q randominitialization initializepolicy q for n to nep do s t sys q updateqfunction q t probtype updatepolicy q return q algorithm function used to update q function used in algorithm function updateqfunction q t probtype for n t to do if probtype is maximumprobability then qtmp t n max i n t n else qtmp t n max r n t n qnew t n qtmp t n t n return qnew the q function is initialized to random values and is computed from the initial q values then for nep episodes the system is simulated using randomization is used to encourage exploration of the policy space the observed 
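A compact reading of the batch update for the F_[0,T] case with the expected-robustness objective is sketched below. The episode format, the epsilon-greedy helper and the robustness callback are stand-ins for the corresponding pieces of the learning loop and are not the authors' code.

```python
import random
from collections import defaultdict

ACTIONS = ['up', 'down', 'left', 'right']

def update_q_function(Q, episode, alpha, robustness):
    """One batch update for an F_[0,T] specification: after a whole episode
    (a list of (state, action) pairs) has been observed, every visited pair is
    pushed towards the larger of (i) the robustness achievable from that point
    and (ii) the best Q-value of the successor state."""
    states, actions = zip(*episode)
    for n in reversed(range(len(episode) - 1)):
        best_next = max(Q[(states[n + 1], a)] for a in ACTIONS)
        target = max(robustness(states, n), best_next)
        Q[(states[n], actions[n])] += alpha * (target - Q[(states[n], actions[n])])
    return Q

def epsilon_greedy(Q, state, eps):
    """Randomized action selection used while generating training episodes."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

# Minimal exercise with a dummy three-step episode and a constant robustness.
Q = defaultdict(float)
demo = [('s0', 'up'), ('s1', 'right'), ('s2', 'up')]
Q = update_q_function(Q, demo, alpha=0.1, robustness=lambda trace, n: 0.5)
print(Q[('s0', 'up')], Q[('s1', 'right')])
```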
trajectory is then used to update the q function according to algorithm the new value of the q function is used to update the policy for compactness algorithm as written only covers the case f t the case in which g t can be addressed similarly a t k p a max r max b t t this gives the following convergence result proposition the rule given by at t t qk at t t max r max b t t converges to the optimal q function if the sequence is such that and proof sketch the proof of proposition relies primarily on proposition once this is established the rest of the proof varies only slightly from the presentation in note that in this case k ranges over the number of episodes and t ranges over the time coordinate of the signal proposition the optimal given by is a fixed point of the contraction mapping h where hq a t t p a max r max q b t t proof by if h is a contraction mapping then is a fixed point of consider max p a max r a max b t t max r b t t define t max b t wolog let t t t t define r max r t t max r t t there exist possibilities for the value of r r t t t t r t t r t t r t t t t t t r r trained performance trained performance trained performance trained performance robustness count count count a robustness robustness b b y y y x c x d thus this means that r hence a p a r a p a therefore h is a contraction mapping vii c ase s tudy we implemented the learning algorithm algorithm and applied it to two case studies that adapt the robot navigation model from example for each case study we solved problems and and compared the performance of the resulting policies all simulations were implemented in matlab and performed on a pc with a ghz processor and gb ram a case study reachability first we consider a simple reachability problem the given stl specification is f f g x c fig comparison of policies for case study histogram of robustness values for trained policies for solution to a problem and b problem trajectory generated from policies for solution to c problem and d problem example trajectory example trajectory robustness a example trajectory example trajectory y count x d fig comparison of policies for case study the subplots have the same meaning as in figure generated from the system simulated using each of the trained policies after learning has completed without the randomization that is used during the learning phase note that both trained policies satisfied the specification with probability the performance of the two algorithms are very similar as the mean robustness is with standard deviation for probability maximization and and for robustness maximization in the second row we see trajectories simulated by each of the trained policies the similarity of the solutions in this case study is not surprising if the state of the system is deep within a or b then the probability that it will remain inside that region in the next time steps satisfy is higher than if it is at the edge of the region trajectories that remain deeper in the interior of region a or b also have a high robustness value thus for this particular problem there is an inherent coupling between the policies that satisfy the formula with high probability and those that satisfy the formula as robustly as possible on average b case study repeated satisfaction where is the stl subformula corresponding to being in a blue region in plain english can be stated as within time units reach a blue region and then don t revisit a blue region for time the results from applying algorithm are summarized in figure we used the parameters nep and t 
where t is the probability at iteration t of selecting an action at random constructing the took algorithm took to solve problem and to solve problem the two approaches perform very similarly in the first row we show a histogram of the robustness of trials although the conditions and are technically required k to prove convergence in practice these conditions can be relaxed without having adverse effects on learning performance in this second case study we look at a problem involving repeatedly satisfying a condition finitely many times the specification of interest is g f f in plain english is ensure that every time units over a unit interval a green region and a blue region is results from this case study are shown in figure we used the same parameters as listed in section except ne p and t constructing the took applying algorithm took for problem and for problem in the first row we see that the solution to problem satisfies the formula with probability while the solution to problem satisfies the formula with probability at first this seems counterintuitive as proposition indicates that a policy that maximizes probability would achieve a probability of satisfaction at least as high as the policy that maximizes the expected robustness however this is only guaranteed with an infinite number of learning trials the performance in terms of robustness is obviously better for the robustness maximization mean standard deviation than for the probability maximization mean standard deviation in the second row we see that the maximum robustness policy enforces convergence to a cycle between two regions while the maximum probability policy deviates from this cycle the discrepancy between the two solutions can be explained by what happens when trajectories that almost satisfy occur if a trajectory that almost oscillates between a blue and green region every four seconds is encountered when solving problem it collects reward on the other hand when solving problem the policy that produces the almost oscillatory trajectory will be reinforced much more strongly as the resulting robustness is less negative however since the robustness degree gives partial credit for trajectories that are close to satisfying the policy the reinforcement learning algorithm performs a directed search to find policies that satisfy the formula since probability maximization gives no partial credit the reinforcement learning algorithm is essentially performing a random search until it encounters a trajectory that satisfies the given formula therefore if the family of policies that satisfy the formula with positive probability is small it will on average take the algorithm solving problem a longer time to converge to a solution that enforces formula satisfaction viii c onclusions and f uture w ork in this paper we presented a new reinforcement learning paradigm to enforce temporal logic specifications when the dynamics of the system are a priori unknown in contrast to existing works on this topic we use a logic signal temporal logic whose formulation is directly related to a system s statespace we present a novel convergent algorithm that uses the robustness degree a continuous measure of how well a trajectory satisfies a formula to enforce the given specification in certain cases robustness maximization subsumes the established paradigm of probability maximization and in certain cases robustness maximization performs better in terms of both probability and robustness under partial training future research includes formally 
connecting our approach to abstractions of linear stochastic systems r eferences abate d innocenzo and di benedetto approximate abstractions of stochastic hybrid systems automatic control ieee transactions on nov baier and katoen principles of model checking volume mit press cambridge brazdil chatterjee chmelik forejt kretinsky kwiatkowska parker and ujma verification of markov decision processes using learning algorithms in cassez and raskin editors automated technology for verification and analysis volume of lecture notes in computer science pages springer international publishing ding smith belta and rus optimal control of markov decision processes with linear temporal logic constraints ieee transactions on automatic control ding wang lahijanian paschalidis and belta temporal logic motion control using methods in robotics and automation icra ieee international conference on pages may dokhanchi hoxha and fainekos monitoring for temporal logic robustness in runtime verification pages springer and maler robust satisfaction of temporal logic over signals formal modeling and analysis of timed systems pages fainekos and pappas robustness of temporal logic specifications for signals theoretical computer science fu and topcu probably approximately correct mdp learning and control with temporal logic constraints corr jin donze deshmukh and seshia mining requirements from control models in proceedings of the international conference on hybrid systems computation and contro pages jones kong and belta anomaly detection in systems a formal methods approach in ieee conference on decision and control cdc pages julius and pappas approximations of stochastic hybrid systems automatic control ieee transactions on june kamgarpour ding summers abate lygeros and tomlin discrete time stochastic hybrid dynamic games verification and controller synthesis in proceedings of the ieee conference on decision and control and european control conference pages kong jones medina ayala aydin gol and belta temporal logic inference for classification and prediction from data in proceedings of the international conference on hybrid systems computation and control pages acm lahijanian andersson and belta temporal logic motion planning and control with probabilistic satisfaction guarantees robotics ieee transactions on april lahijanian andersson and belta approximate markovian abstractions for linear stochastic systems in proc of the ieee conference on decision and control pages maui hi usa lahijanian andersson and belta formal verification and synthesis for stochastic systems ieee transactions on automatic control luna lahijanian moll and kavraki asymptotically optimal stochastic motion planning with temporal goals in workshop on the algorithmic foundations of robotics istanbul turkey melo convergence of a simple proof http raman donze maasoumy murray sangiovannivincentelli and seshia model predictive control with signal temporal logic specifications in proceedings of ieee conference on decision and control cdc pages sadigh kim coogan sastry and seshia a learning based approach to control synthesis of markov decision processes for linear temporal logic specifications corr sutton and barto reinforcement learning an introduction volume mit press cambridge svorenova chmelik chatterjee and belta temporal logic control for stochastic linear systems using abstraction refinement of probabilistic games in hybrid systems computation and control hscc volume to appear tsitsiklis asynchronous stochastic approximation and qlearning 
machine learning
batched qr and svd algorithms on gpus with applications in hierarchical matrix compression jul wajih halim george hatem and david abstract we present high performance implementations of the qr and the singular value decomposition of a batch of small matrices hosted on the gpu with applications in the compression of hierarchical matrices the jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the svd of low rank blocks using randomized methods we implement multiple kernels based on the level of the gpu memory hierarchy in which the matrices can reside and show substantial speedups against streamed cusolver svds the resulting batched routine is a key component of hierarchical matrix compression opening up opportunities to perform arithmetic efficiently on gpus introduction the singular value decomposition svd is a factorization of a general m n matrix a of the form a u u is an m m orthonormal matrix whose columns ui are called the left singular vectors is an m n diagonal matrix whose diagonal entries are called the singular values and are sorted in decreasing order v is an n n orthonormal matrix whose columns vi are called the right singular vectors when m n we can compute a reduced form a where is an m n matrix and is an n n diagonal matrix one can easily obtain the full form from the reduced one by extending with m n orthogonal vectors and with an m n zero block row without any loss of generality we will focus on the reduced svd of real matrices in our discussions the svd of a matrix is a crucial component in many applications in signal processing and statistics as well as matrix compression where truncating the n k singular values that are smaller than some threshold gives us a approximation of the matrix a this matrix is the unique minimizer of the function fk b in the context of hierarchical matrix operations effective compression relies on the ability to perform the computation of large batches of independent svds of small matrices of low numerical rank randomized methods are well suited for computing a truncated svd of these types of matrices and are built on three computational kernels the qr factorization multiplications and svds of smaller k k matrices motivated by this task we discuss the implementation of high performance batched qr and svd kernels on the gpu focusing on the more challenging svd tasks the remainder of this paper is organized as follows section presents different algorithms used to compute the qr factorization and the svd as well as some considerations when optimizing extreme computing research center ecrc king abdullah university of science and technology kaust thuwal saudi arabia department of computer science american university of beirut aub beirut lebanon addresses batched qr and svd algorithms algorithm householder qr procedure qr a q r q r i a for i do v house r i r i t r q q i t for gpus section discusses the batched qr factorization and compares its performance with existing libraries sections and discuss the various implementations of the svd based on the level of the memory hierarchy in which the matrices can reside specifically section describes the implementation for very small matrix sizes that can fit in registers section describes the implementation for matrices that can reside in shared memory and section describes the block jacobi implementation for larger matrix sizes that must reside in global memory section details the implementation of the batched randomized svd routine we then discuss some details of 
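The randomized truncated SVD referred to above can be sketched with dense building blocks: sample the range of the matrix, orthonormalize the sample with a QR factorization, and take the SVD of the small projected matrix. The Gaussian test matrix and the oversampling parameter follow the standard randomized-SVD recipe and are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def randomized_svd(A, k, oversample=8, rng=None):
    """Rank-k truncated SVD of A via random projection: (1) sample the range
    of A, (2) orthonormalize the sample with a QR factorization, (3) take the
    SVD of the small projected matrix and map it back."""
    rng = np.random.default_rng(0) if rng is None else rng
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)            # m x (k+p) orthonormal basis
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k, :]

# A rank-6 test matrix should be recovered essentially exactly.
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 6)) @ rng.standard_normal((6, 100))
U, s, Vt = randomized_svd(A, k=6)
print(np.allclose(A, (U * s) @ Vt))
```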
the application to hierarchical matrix compression in section we conclude and discuss future work in section background in this section we give a review of the most common algorithms used to compute the qr factorization and the svd of a matrix as well as discuss some considerations when optimizing on the gpu qr factorization the qr factorization decomposes an m n matrix a into the product of an orthogonal m m matrix q and an upper triangular m n matrix r we can also compute a reduced form of the decomposition where q is an m n matrix and r is n n upper triangular the most common qr algorithm is based on transforming a into an upper triangular matrix using a series of orthogonal transformations generated using householder reflectors other algorithms such as the or modified can produce the qr factorization by orthogonalizing a column with all previous columns however these methods are less stable than the householder orthogonalization and the orthogonality of the resulting q factor suffers with the condition number of the matrix another method is based on givens rotations where entries in the subdiagonal part of the matrix are zeroed out to form the triangular factor and the rotations are accumulated to form the orthogonal factor this method is very stable and has more parallelism than the householder method however it is more expensive doing about more work and it is more challenging to extract the parallelism efficiently on the gpu for our implementation we rely on the householder method due to its numerical stability and simplicity the method is described in in algorithm svd algorithms most implementations of the svd are based on the approach popularized by trefethen et al where the matrix a first undergoes bidiagonalization of the form a qu bqtv where qu and qv are orthonormal matrices and b is a bidiagonal matrix the matrix b is then diagonalized using some variant of the qr algorithm the divide and conquer method or a combination of both to produce a decomposition b ub the complete svd is then determined batched qr and svd algorithms as a qu ub qv vb t during the backward transformation these methods require significant algorithmic and programming effort to become robust and efficient while still suffering from a loss of relative accuracy an alternative is the jacobi method where all n pairs of columns are repeatedly orthogonalized in sweeps using plane rotations until all columns are mutually orthogonal when the process converges all columns are mutually orthogonal up to machine precision the left singular vectors are the normalized columns of the modified matrix with the singular values as the norms of those columns the right singular vectors can be computed either by accumulating the rotations or by solving a system of equations our application does not need the right vectors so we omit the details of computing them algorithm describes the jacobi method since each pair of columns can be orthogonalized independently the method is also easily parallelized the simplicity and inherent parallelism of the method make it an attractive first choice for an implementation on the gpu gpu optimization considerations gpu kernels are launched by specifying a grid configuration which lets us organize threads into blocks and blocks into a grid launching a gpu kernel causes a short stall as much as microseconds as the kernel is prepared for execution this kernel launch overhead prevents kernels that complete their work faster than the overhead from executing in parallel essentially serializing them 
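For reference, the Householder procedure reviewed above can be transcribed directly into a few lines of NumPy. This unblocked, CPU-side sketch only serves to make the algorithm concrete; it is unrelated to the GPU kernels developed in the following sections.

```python
import numpy as np

def house(x):
    """Householder vector v and scalar beta such that (I - beta v v^T) x is a
    multiple of the first unit vector."""
    v = x.astype(float).copy()
    alpha = -np.sign(v[0]) * np.linalg.norm(v) if v[0] != 0 else -np.linalg.norm(v)
    v[0] -= alpha
    beta = 0.0 if np.allclose(v, 0) else 2.0 / np.dot(v, v)
    return v, beta

def householder_qr(A):
    """Unblocked Householder QR returning the reduced factors A = Q R."""
    m, n = A.shape
    R, Q = A.astype(float).copy(), np.eye(m)
    for i in range(n):
        v, beta = house(R[i:, i])
        R[i:, i:] -= beta * np.outer(v, v @ R[i:, i:])   # apply reflector to R
        Q[:, i:] -= beta * np.outer(Q[:, i:] @ v, v)     # accumulate Q
    return Q[:, :n], np.triu(R[:n, :])

A = np.random.default_rng(4).standard_normal((8, 4))
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(4)))
```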
to overcome this limitation when processing small workloads the work is batched into a single kernel call when possible all operations can then be executed in parallel without incurring the kernel launch overhead with the grid configuration used to determine thread work assignment a warp is a group of threads threads in current generation gpus such as the nvidia within a block that executes a single instruction in lockstep without requiring any explicit synchronization the occupancy of a kernel tells us the ratio of active warps to the maximum number of warps that a multiprocessor can host this metric is dependent on the amount of resources that a kernel uses such as register and shared memory usage and kernel launch configuration as well as the compute capability of the card for more details while not a requirement for good performance it is generally a good idea to aim for high occupancy memory on the gpu is organized into a hierarchy of memory spaces as shown in figure at the bottom we have global memory which is accessible by all threads and is the most plentiful but the slowest memory the next space of interest is the shared memory which is accessible only by threads within the same block and is configurable with the cache to be at most per thread block on current generation gpus shared memory is very fast and acts as a programmer controllable cache finally we have the registers which are local to the threads registers are the fastest of all memory but the total number of registers usable by a thread without performance implications is limited if a kernel needs more registers than the limit then registers are spilled to local memory which is in the slow but cached global memory making good use of the faster memories and avoiding excessive algorithm jacobi svd while not converged do for each pair of columns aij ai aj do g atij aij r rot g aij aij r batched qr and svd algorithms registers shared memory cache cache cache global memory figure the memory hierarchy of a modern gpu accesses to the slower ones is key to good performance on the gpu as such it is common to use blocking techniques in many algorithms where a block of data is brought in from global memory and processed in one of the faster memories related work batched gpu routines for lu cholesky and qr factorizations have been developed in using a block recursive approach which increases data reuse and leads to very good performance for relatively large matrix sizes gpu routines optimized for computing the qr decomposition of very tall and skinny matrices are presented in where they develop an efficient transpose computation that is employed with some minor changes in this work hybrid algorithms for batched svd using jacobi and bidiagonalization methods are introduced in where pair generation for the jacobi method and the solver phase of the bidiagonalization are handled on the cpu the work in employs the power method to construct a rank approximation for filters in convolutional neural networks routines to handle the svd of many matrices on gpus is presented in where each thread within a warp computes the svd of a single matrix batched qr decomposition in this section we discuss implementation details of our batched qr kernel and compare it with other implementations from the magma and cublas libraries implementation one benefit of the householder algorithm is that the application of reflectors to the trailing matrix line of the algorithm can be blocked together and expressed as a multiplication level blas instead of multiple 
multiplications level blas the increased arithmetic intensity typically allows performance to improve when the trailing matrix is large however for small matrix blocks the overhead of generating the blocked reflectors from their vector form as well as the lower performance of the multiplication for small matrices hinder performance we can obtain better performance by applying multiple reflectors in their vector form and performing the transpose multiplication efficiently within a thread block first we perform the regular factorization on a column block p called a panel the entire panel is stored in registers with each thread storing one row of the panel and the transpose product is computed using a series of reductions using shared memory and warp shuffles which batched qr and svd algorithms registers shu exor lane lane lane lane lane lane lane lane lane lane lane lane lane lane lane lane warp figure left matrix rows allocated to thread registers in a warp right parallel warp reduction using shuffles within registers allow threads within a warp to read each other s registers figure shows the data layout for a theoretical warp of size with columns in registers and a warp reduction using shuffles once we factor the panel we can apply the reflectors to the trailing in a separate kernel that is optimized for performing the core product in the update in this second kernel we load both the factored panel p and a panel mi of the trailing m to registers and apply the reflectors one at a time updating the trailing panel in registers let us take an example of a trailing panel mi for each reflector we compute the product mit v by flattening the product into a reduction of a vector in shared memory that has been padded to avoid bank conflicts the reduction can then be serialized until it reaches a size of where a partial reduction to a vector of size can take place in steps this final vector is the product mit v which can then be quickly applied to the registers storing mi this process is repeated for each trailing panel within the same kernel to maximize the use of the reflectors which have been stored in registers figure shows one step of a panel factorization and the application of its reflectors to the trailing submatrix since threads are limited to per block on current architectures we use the approach developed in to factorize larger matrices we first factorize panels up to the thread block limit in a single kernel call the panels below the first are then factorized by first loading the triangular factor into shared memory and then proceeding with the panel factorization as before taking the triangular portion into consideration when computing reflectors and updates to keep occupancy up for the small matrices on devices where the resident block limit could be reached before the thread limit we assign multiple operations to a single thread block for a batch of n matrices of dimensions m n kernels can be launched using thread blocks of size m b where each thread block handles b operations performance figures and show the performance of our batched qr for square and rectangular matrices with a panel width of tuned for the gpu we compare against the vendor implementation in cublas as well as the high performance library magma we can see that our proposed version performs well for rectangular matrices with column size of and starts losing ground against magma for the larger square matrix sizes where the blocked algorithm starts to batched qr and svd algorithms r v p m figure one step of the qr 
factorization where a panel p is factored to produce a triangular factor r and reflectors v which are used to update the trailing submatrix our qr dp magma qr dp cublas qr dp our qr sp magma qr sp cublas qr sp our qr dp magma qr dp cublas qr dp our qr sp magma qr sp cublas qr sp matrix size a batched qr kernel performance for square matrices matrix rows b batched qr kernel performance for rectangular matrices with a fixed column size of figure comparing batched qr kernels for matrices of varying size on a gpu in single and double precision show its performance benefits a nested implementation where our kernel can be used to factor relatively large panels in a blocked algorithm will likely show some additional performance improvements for the large square matrices but we leave that as future work register memory jacobi in this section we will discuss the first batched svd kernel where the matrix data is hosted in registers and analyze the performance of the resulting kernel implementation in this implementation to avoid repeated global memory accesses we attempt to fit the matrix in register memory using the same layout as the panel in the qr factorization one row per batched qr and svd algorithms dp performance sp performance occupancy dp occupancy sp occupancy matrix size a kernel performance in and achieved occupancy matrix size b the effect of increasing the matrix size on the occupancy of the register kernel figure performance of the batched register memory svd on a gpu for matrices of varying size in single and double precision arithmetics thread however the number of registers that a thread uses has an impact on occupancy which can potentially lead to lower performance in addition once the register count exceeds the limit set by the gpu s compute capability the registers spill into local memory which resides in cached slow global memory since we store an entire matrix row in the registers of one thread we use the serial jacobi algorithm to compute the svd where column pairs are processed by the threads one at a time the bulk of the work lies in the computation of the gram matrix g atij aij line of algorithm and in the update of the columns line since the gram matrix is symmetric this boils down to three dot products which are executed as parallel reductions within the warp using warp shuffles the computation of the rotation matrix as well as the convergence test is performed redundantly in each thread finally the column update is done in parallel by each thread on its own register data as with the qr kernel we keep occupancy up for the smaller matrix sizes by assigning multiple svd operations to a single block of threads with each operation assigned to a warp to avoid unnecessary synchronizations performance we generate batches of test matrices with varying condition numbers using the latms lapack routine and calculate performance based on the total number of rotations needed for convergence figures and show the performance on a gpu of the batched svd kernel and the effect increased register usage has on occupancy profiling the kernel we see that the gram matrix computation takes about cycles column rotations take about cycles and the redundantly computed convergence test and rotation matrices dominate at cycles the fact that the redundant portion of the computation dominates means that it is preferable to assign as few threads as possible when processing column pairs due to the low occupancy for the larger matrix sizes and the register spills to local memory for matrices larger than 
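For reference, a serial host-side sketch of the one-sided Jacobi iteration that the register kernel implements is given below, following the structure of the Jacobi SVD listing earlier in the section: a 2x2 Gram matrix from three dot products, a rotation that orthogonalizes the column pair, and a column update. It is a NumPy stand-in for checking results, not the CUDA kernel itself, and the tolerance and sweep limit are illustrative defaults.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Serial one-sided Jacobi SVD: returns (U, sigma, V) with A ~= U @ diag(sigma) @ V.T."""
    U = A.astype(float).copy()
    m, n = U.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for i in range(n - 1):
            for j in range(i + 1, n):
                # 2x2 Gram matrix of the column pair (three dot products,
                # done as warp reductions on the GPU)
                a = U[:, i] @ U[:, i]
                b = U[:, j] @ U[:, j]
                g = U[:, i] @ U[:, j]
                denom = np.sqrt(a * b)
                conv = abs(g) / denom if denom > 0 else 0.0
                off = max(off, conv)
                if conv <= tol:
                    continue
                # rotation angle that zeroes the off-diagonal entry of the Gram matrix
                zeta = (b - a) / (2.0 * g)
                t = 1.0 if zeta == 0 else np.sign(zeta) / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                # column update (done entirely in registers by the GPU kernel)
                Ui, Uj = U[:, i].copy(), U[:, j].copy()
                U[:, i], U[:, j] = c * Ui - s * Uj, s * Ui + c * Uj
                Vi, Vj = V[:, i].copy(), V[:, j].copy()
                V[:, i], V[:, j] = c * Vi - s * Vj, s * Vi + c * Vj
        if off <= tol:
            break
    sigma = np.linalg.norm(U, axis=0)
    U = U / np.where(sigma > 0.0, sigma, 1.0)
    return U, sigma, V
```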
it is obvious that the register approach will not suffice for larger matrix sizes this leads us to our next implementation based on the slower but more shared memory warp warp step warp batched qr and svd algorithms warp step step step step step step figure distribution of column pairs to warps at each step of a sweep shared memory jacobi while the register based svd performs well for very small matrix sizes we need a kernel that can handle larger sizes and maintain reasonably high occupancy this leads us to building a kernel based on shared memory the next level of the gpu memory hierarchy this section discusses the implementation details of this kernel and analyze its performance when compared with the register kernel implementation in this version the matrix is stored entirely in shared memory which is limited to at most kb per thread block on current generation gpus using the same thread assignment as the register based kernel would lead to very poor occupancy due to the high shared memory consumption where potentially only a few warps will be active in a multiprocessor instead we exploit the inherent parallelism of the jacobi to assign a warp to a pair of columns there are warps processing an matrix stored in shared memory there are a total of n pairs of columns so we must generate all pairings in n steps with each step processing pairs in parallel there are many ways of generating these pairs including round robin and ring ordering we implement the round robin ordering using shared memory to keep track of the column indexes of the pairs with the first warp in the block responsible for updating the index list after each step figure shows this ordering for a matrix with columns when the number of matrix rows exceeds the size of the warp the assignment no longer allows us to use fast warp reductions which would force us to use even more resources as the reductions would now have to be done in shared memory instead we assign multiple rows to a thread serializing a portion of the reduction over those rows until warp reductions can be used this follows our observation in section to assign as few threads as possible to process column pairs frees up valuable resources and increases the overall performance of the reduction row padding is used to keep the rows at multiples of the warp size and column padding is used to keep the number of columns even kernels can then be launched using threads to process each matrix figures and show examples of the thread allocation and reductions for a matrix using a theoretical warp size of batched qr and svd algorithms shared memory serial reduction shufflexor lane lane lane lane lane lane lane lane lane lane lane lane lane lane lane warp warp warp warp a matrix columns assigned in pairs to multiple warps and stored in shared memory lane b parallel reduction of a column of data in shared memory using register shuffles after an initial serial reduction step figure shared memory kernel implementation details performance figures and show the performance of the parallel shared svd kernel compared to the serial register svd kernel on a gpu we can see the improved growth in performance in the shared memory kernel due to the greater occupancy as well as the absence of any local memory transactions looking at the double precision occupancy we notice two dips in occupancy at matrix sizes and as the number of resident blocks become limited by the limits of the device dropping to and then resident blocks performance increases steadily from there as we increase the 
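The round-robin ordering used to generate all n(n-1)/2 column pairs in n-1 steps can be sketched as follows, assuming an even number of columns (the kernel guarantees this by column padding). On the GPU the first warp of the block updates this index list in shared memory after each step; the sketch only illustrates the schedule itself.

```python
def round_robin_pairs(n):
    """All n*(n-1)/2 column pairs generated in n-1 steps of a round-robin
    (tournament) schedule, n even; every step pairs all columns at once,
    which is what lets one warp be assigned to each pair within a step."""
    idx = list(range(n))
    steps = []
    for _ in range(n - 1):
        steps.append([(idx[i], idx[n - 1 - i]) for i in range(n // 2)])
        idx = [idx[0]] + [idx[-1]] + idx[1:-1]   # rotate everything except the first entry
    return steps

# Example: round_robin_pairs(4) -> [[(0, 3), (1, 2)], [(0, 2), (3, 1)], [(0, 1), (2, 3)]]
```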
number of threads assigned to the operation until we reach a matrix size of where we reach the block limit of threads to handle larger sizes we must use a blocked version of the algorithm or the randomized svd as we see in sections and respectively global memory block jacobi when we can no longer store the entire matrix in shared memory we have to operate on the matrix in the slower global memory instead of repeatedly reading and updating the columns one at a time block algorithms that facilitate cache reuse have been developed the main benefit of the block jacobi algorithm is its high degree of parallelism however since we implement a batched routine for independent operations we will use the serial block jacobi algorithm for individual matrices and rely on the parallelism of the batch processing the parallel version where multiple blocks are processed simultaneously can still be used when the batch size is very small but we will focus on the serial version in this section we will discuss the implementation details for two global memory block jacobi algorithms that differ only in the way block columns are orthogonalized and compare their performance with parallel streamed calls to the cusolver library routines gram matrix block jacobi svd the block jacobi algorithm is very similar to the vector algorithm orthogonalizing pairs of blocks columns instead of vectors the first method of orthogonalizing pairs of block columns is based p on the svd of their gram matrix during the sweep each pair of m k block columns ai and batched qr and svd algorithms reg dp occupancy reg sp occupancy dp register kernel dp smem kernel sp register kernel sp smem kernel occupancy smem dp occupancy smem sp occupancy matrix size a shared memory kernel performance in compared to the register kernel matrix size b comparison of the occupancy achieved by the register and shared memory kernels figure performance of the batched shared memory svd on a gpu for matrices of varying size in single and double precision arithmetics p aj p p p t p p p t p aij p singular vectors of gij or p updating apij uij ij is orthogonalized by forming a gram matrix gij ai aj ai aj aij and generating a block rotation matrix p uij computed as the left equivalently its eigenvectors since it is symmetric positive definite orthogonalizes the block columns since we have t p t uij ij ij t p p t apij apij uij uij p p gij uij p where is a diagonal matrix of the singular values of gij orthogonalizing all pairs of block columns until the entire matrix is orthogonal will give us the left singular vectors as the normalized columns and the singular values as the corresponding column norms if the right singular vectors are needed we can accumulate the action of the block rotation matrices on the identity matrix for our batched implementation we use highly optimized batched syrk and gemm routines from magma to compute g and to apply the block rotations while the svd is computed by our shared memory batched kernel since different matrices will converge in different numbers of sweeps we keep track of the convergence of each operation l by computing the norm el of the entries of g scaled by its diagonal entries while this term is an inexact approximation of the terms of the full matrix in each sweep it is still a good indication of convergence and will cost us at most an extra cheap sweep since the final sweep will not actually perform any rotations within the svd of the entire batched operation will then converge when e max el where is our convergence 
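A minimal NumPy sketch of one Gram-matrix block-Jacobi step described above is shown here, with comments noting which batched GPU routine each line stands in for; the scaled off-diagonal norm it returns is the per-pair convergence indicator e_l. Names and shapes are illustrative.

```python
import numpy as np

def gram_block_step(Ai, Aj):
    """One Gram-matrix block-Jacobi step on a pair of block columns.
    Returns the rotated blocks and the scaled off-diagonal norm e_l."""
    Aij = np.hstack([Ai, Aj])
    G = Aij.T @ Aij                 # batched SYRK on the GPU
    U, s, _ = np.linalg.svd(G)      # left singular vectors of G (equivalently its
                                    # eigenvectors, since G is symmetric PSD),
                                    # computed by the shared-memory batched SVD kernel
    Aij = Aij @ U                   # batched GEMM applies the block rotation
    d = np.sqrt(np.diag(G))
    E = G / np.outer(d, d)          # entries of G scaled by its diagonal
    e_l = np.linalg.norm(E - np.diag(np.diag(E)))
    k = Ai.shape[1]
    return Aij[:, :k], Aij[:, k:], e_l
```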
tolerance. This gives us the Gram-matrix path of the batched block Jacobi algorithm (see the listing below) for computing the SVD of a batch of matrices in global memory. It is worth noting that the computation of the Gram matrix can be optimized by taking advantage of the special structure of G, but since the bulk of the computation lies in the SVD of G, this would not result in any significant performance gain.

Direct block Jacobi SVD. The Gram-matrix method is an indirect way of orthogonalizing block columns and may fail to converge if the matrix is very ill-conditioned. Such matrices can be handled by directly orthogonalizing the columns using their SVD.

Algorithm: batched block Jacobi SVD
  while e > epsilon do
    e_l <- 0
    for each pair of block columns A_ij = [A_i, A_j] do
      if method = gram then
        G <- batchSyrk(A_ij)
      else
        [A_ij, G] <- batchQr(A_ij)
      e_l <- max(e_l, scaledOffdiag(G))
      U <- batchSvd(G)
      A_ij <- batchGemm(A_ij, U)
    e <- max_l e_l

Since the block columns are rectangular, we first compute their QR decomposition, followed by the SVD of the triangular factor. Overwriting the block column $A^p_{ij}$ by the orthogonal factor $Q$ and multiplying it by the left singular vectors of $R$, scaled by the singular values, gives us the new block column:
$$A^p_{ij} = Q^p_{ij} R^p_{ij} = Q^p_{ij}\left(U^p_{ij}\,\Sigma^p_{ij}\,V^{p\,T}_{ij}\right) \quad\Longrightarrow\quad \tilde A^p_{ij} = Q^p_{ij}\,U^p_{ij}\,\Sigma^p_{ij}.$$
If the right singular vectors are needed, we can accumulate the action of $V^p_{ij}$ on the identity matrix. For our batched implementation we use the batched QR routine developed above and GEMM routines from MAGMA to multiply the orthogonal factor by the left singular vectors, while the SVD is computed by our shared-memory batched kernel. The same convergence test used in the Gram-matrix method can be applied to the triangular factor, since the triangular factor should be close to a diagonal matrix when a pair of block columns is orthogonal. This gives us the direct path of the batched block Jacobi algorithm for computing the SVD of a batch of matrices in global memory.

Performance. Profiles of the different computational kernels involved in the batched block algorithms (for the block width used in our tests) are shown in the accompanying figures: specifically, the percentages of total execution time spent determining convergence and performing memory operations, matrix multiplications, QR decompositions, and the SVD of the Gram matrix. For the Gram-matrix approach the SVD is the most costly phase even for the larger operations, while the QR and SVD decompositions take almost the same time for the larger matrices in the direct approach. The performance of the batched block Jacobi SVD using both methods is also shown, together with a comparison of our batched SVD routine against a batched routine that uses the cuSOLVER SVD routine on concurrent streams on a GPU; increasing the number of streams for cuSOLVER showed little to no performance benefit, highlighting the performance limitations of routines that are bound by kernel launch overhead. The matrices are generated randomly using the LATMS LAPACK routine with a prescribed condition number. The Gram-matrix approach fails to converge in single precision for these types of matrices, whereas the direct approach always converges; however, the Gram-matrix approach performs better when it is applicable for the larger matrices, owing to the strong performance of the matrix multiplications. The performance of the block algorithm can be improved by preprocessing the matrix using QR and LQ decompositions to decrease the number of sweeps required for convergence, as well as by adaptively selecting pairs of block columns based on the computed off-diagonal norms of their Gram matrices; these changes are beyond the scope of this paper and will be the focus of future work.
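For comparison with the Gram-matrix route, the following NumPy sketch shows the direct orthogonalization of one block-column pair described above: QR of the pair, SVD of the triangular factor, and the update Q U Sigma. The NumPy calls stand in for the batched GPU kernels named in the comments, and the convergence indicator shown here is an illustrative scaled off-diagonal norm of R.

```python
import numpy as np

def direct_block_step(Ai, Aj):
    """One direct block-Jacobi step: QR of the block-column pair, SVD of the
    triangular factor, and update Q @ (U * s); intended for ill-conditioned
    matrices where the Gram-matrix route can fail to converge."""
    Aij = np.hstack([Ai, Aj])
    Q, R = np.linalg.qr(Aij)        # batched QR kernel on the GPU
    U, s, _ = np.linalg.svd(R)      # shared-memory batched SVD kernel
    Aij = Q @ (U * s)               # batched GEMM from MAGMA
    # convergence indicator on R: R tends to a diagonal matrix as the pair
    # of block columns becomes orthogonal
    offdiag = R - np.diag(np.diag(R))
    e_l = np.linalg.norm(offdiag) / max(np.linalg.norm(np.diag(R)), np.finfo(float).tiny)
    k = Ai.shape[1]
    return Aij[:, :k], Aij[:, k:], e_l
```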
Figure: profile of the different phases of the block Jacobi SVD for matrices of varying size on a GPU in double precision ((a) Gram-matrix batched block Jacobi SVD profile; (b) direct batched block Jacobi SVD profile). Single precision exhibits similar behavior.

Randomized SVD

As mentioned earlier, we are often interested in a rank-k approximation of a matrix A. We can compute this approximation by first determining the singular value decomposition of the full m x n matrix A and then truncating the n - k smallest singular values together with their corresponding singular vectors. However, when the matrix has low numerical rank k, we can obtain the approximation using very fast randomization methods. This section discusses some details of the algorithm and compares its performance with the full SVD computed using our block Jacobi kernels.

Figure: batched block Jacobi performance for matrices of varying size on a GPU in single and double precision arithmetic: (a) batched block Jacobi SVD performance; (b) comparison between streamed cuSOLVER and the batched block Jacobi.

Algorithm: batched randomized SVD
  procedure RSVD(A, k, p)
    [m, n] <- size(A)
    Omega <- rand(n, k + p)
    Y <- batchGemm(A, Omega)
    [Q, R_Y] <- batchQr(Y)
    B <- batchGemm(Q^T, A)
    [Q_B, R_B] <- batchQr(B^T)
    [U_R, S, V_R] <- batchSvd(R_B^T)
    U <- batchGemm(Q, U_R)
    V <- batchGemm(Q_B, V_R)

Implementation. When the singular values of a matrix decay rapidly, we can compute an approximate SVD using a simple two-phase randomization method. The first phase determines an approximate orthogonal basis $Q$ for the columns of $A$, ensuring that $A \approx Q Q^T A$. When the numerical rank $k$ of $A$ is low, we can be sure that $Q$ has a small number of columns as well. By drawing $k + p$ sample vectors $y = Aw$ from random input vectors $w$, we can obtain a reliable approximate basis for $A$, which can then be orthogonalized. This boils down to computing the matrix $Y = A\Omega$, where $\Omega$ is an $n \times (k + p)$ random Gaussian sampling matrix, and then computing the QR decomposition $Y = Q R_Y$, where $Q$ is the desired approximate orthogonal basis. The second phase uses the fact that $A \approx Q Q^T A$ to compute the matrix $B = Q^T A$, so that we now have $A \approx Q B$. Forming the SVD of $B = \tilde U S V^T$, we finalize our approximation $A \approx Q \tilde U S V^T = U S V^T$. For the wide $(k + p) \times n$ matrix $B$, we can first compute a QR decomposition of its transpose, followed by the SVD of the upper triangular factor. The listing above shows that the core computations for the randomized method are matrix multiplications, QR decompositions, and singular value decompositions of small matrices, so using the batched routines from the previous sections it is straightforward to form the required batched randomized SVD. More robust randomized SVD algorithms would employ randomized subspace iteration to obtain a better basis $Q$ for the columns of $A$; they rely on these same core kernels, but are not discussed further here.

Performance. The profiling of the different kernels used in the randomized batched routine, for determining the top singular values and vectors of randomly generated low-rank matrices (built with the LATMS LAPACK routine), is shown in the corresponding figure. The miscellaneous portion includes random number generation using the cuRAND library's default random number generator with a Gaussian distribution, batched transpose operations, and memory operations. We can see that all of the kernels play almost equally important roles
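A compact NumPy sketch of the two-phase randomized scheme in the listing above is given here for reference; the comments note which batched GPU routine each step corresponds to, and the truncation to the leading k components at the end is an illustrative choice rather than part of the listing.

```python
import numpy as np

def randomized_svd(A, k, p=8, rng=None):
    """Two-phase randomized SVD sketch: sample Y = A @ Omega, orthogonalize
    Q = qr(Y), form B = Q^T A, take the SVD of B via the QR of its transpose,
    and lift the factors back."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # Gaussian sampling matrix
    Y = A @ Omega                             # batchGemm
    Q, _ = np.linalg.qr(Y)                    # batched QR
    B = Q.T @ A                               # batchGemm, (k+p) x n
    Qb, Rb = np.linalg.qr(B.T)                # batched QR of the wide matrix's transpose
    Ur, s, Vrt = np.linalg.svd(Rb.T)          # small SVD (shared-memory kernel)
    U = Q @ Ur                                # lift left factors back: U = Q @ Ur
    V = Qb @ Vrt.T                            # right factors: V = Q_B @ Vr
    return U[:, :k], s[:k], V[:, :k]
```

Since B^T = Qb Rb and Rb^T = Ur s Vrt, we have B = Ur s (Qb @ Vrt.T)^T, so A is approximately (Q Ur) s (Qb Vrt^T)^T, which is what the last two lines assemble.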
in the performance of the randomized routine as the matrix size grows while keeping the computed rank the same figure shows the performance of the batched batched qr and svd algorithms randomized svd of operations and figure compares the runtimes of the direct block onesided jacobi routine with the randomized svd on a gpu for the same set of matrices showing that significant time savings can be achieved even for relatively small blocks total time misc gemm svd qr matrix size figure profile of the different phases of the batched randomized svd for matrices of varying size on a gpu in double precision single precision exhibits similar behavior application to hierarchical matrix compression as an application of the batched kernels presented we consider the problem of hierarchical matrices this is a problem of significant importance for building hierarchical matrix algorithms and in fact was our primary motivation for the development of the batched kernels hierarchical matrices have received substantial attention in recent years because of their ability to store and perform algebraic operations in near linear complexity rather than the o and o that regular dense matrices require the effectiveness of hierarchical matrices comes from dp randomized svd sp randomized svd time dp randomized svd dp direct block svd sp randomized svd sp direct block svd matrix size a batched randomized svd performance matrix size b comparison between the batched block jacobi and the batched randomized svd figure batched randomized svd performance for matrices of varying size on a gpu in single and double precision for the first singular values and vectors batched qr and svd algorithms t a basis tree u of an leaf nodes are stored explicitly whereas inner nodes are represented implicitly using the transfer matrices b leaves of matrix tree for a simple hierarchical matrix red blocks represent dense leaves and green blocks are low rank leaves figure the basis tree and matrix tree leaves of a simple the fact they can approximate a matrix by a quad of blocks where many of the blocks in the regions have a rapidly decaying spectrum and can therefore be by numerically low rank representations it is these low rank representations at different levels of the hierarchical tree that reduce the memory footprint and operations complexity of the associated matrix algorithms hackbush shows that many of the large dense matrices that appear in scientific computing such as from the discretization of integral operators schur complements of discretized pde operators and covariance matrices can be well approximated by these hierarchical representations reviewing and analyzing hierarchical matrix algorithms is beyond the scope of this paper here we focus on the narrow task of compressing hierarchical matrices this compression task may be viewed as a generalization of the compression low rank approximation of large dense matrices to the case of hierarchical matrices for large dense matrices one way to perform the compression is to generate a single exact or approximate svd u t and truncate the spectrum to the desired tolerance to produce a truncated or compressed representation t for hierarchical matrices the equivalent operations involve batched svds on small blocks with one batched kernel call per level of the tree in the hierarchical representation the size of the batch in every such call is the number of nodes at the corresponding level in the tree compression algorithms with controllable accuracy are important practically because it is 
often the case that the hierarchical matrices generated by analytical methods can be compressed with no significant loss of accuracy even more importantly when performing matrix operations such as additiona and multiplication the apparent ranks of the blocks often grow and have to be recompressed regularly during the operations to prevent superlinear growth in memory requirements representation for our application we use the memory efficient variant of hierarchical matrices which exhibit linear complexity in time and space for many of its core operations in the format a hierarchical matrix is actually represented by three trees batched qr and svd algorithms row and column basis column trees u and v that organize the row and column indices of the matrix hierarchically each node represents a set of basis vectors for the row and column spaces of the blocks of a nodes at the leaves of the tree store these vectors explicitly while inner nodes store only transfer matrices that allow us to implicitly represent basis vectors in terms of their children a basis tree with this relationship of the nodes is called a nested basis for example in a binary row basis tree u with transfer matrices e we can explicitly compute the basis vectors for a node i with children and at level l as l l ui figure shows an example of a binary basis tree a matrix tree for the hierarchical blocking of a formed by a dual traversal of the nodes of the two basis trees a leaf is determined when the block is either small enough and stored as an m m dense matrix or when a low rank approximation of the block meets a specified accuracy tolerance for the latter case the node is stored as a kl kl coupling matrix s at each level l of the tree where kl is the rank at level the block ats of the matrix where t is the index set of a node in the row basis tree u and s is the index set of a node in the column basis v is then approximated as ats ut sts vst figure shows the leaves of the matrix quadtree of a simple hierarchical matrix for the case of symmetric matrices the u and v trees are identical our numerical results below are from a symmetric covariance matrix compression the compression of a symmetric ah represented by the two trees u with its transfer e with its transfer matrices e e in matrices e and s involves generating a new optimal basis tree u a truncation phase and a new se that expresses the contents of the matrix blocks in this new basis in a projection phase e e e we present a version of the truncation algorithm that generates a memory efficient basis u from a representation of the matrix in a given u e basis more sophisticated algebraic compression algorithms that involve the use of s in the truncation phase in order to generate a more efficient basis will be the subject of future work the truncation phase computes the svd of the nodes of the basis tree u level by level with e we have an explicit all nodes in a level being processed in parallel to produce the new basis u representation of the basis vectors at the leaves so we can compute the svd of all leaf nodes in parallel with our batched kernels and truncate the singular vectors whose singular values are lower than our relative compression threshold truncating the node to the relative threshold using the e svd will give us an approximation of the leaf such that with the new leaf nodes we t fd u d and d is the leaf level can compute projection matrices in a tree t where each node i tid u i i sweeping up the tree we process the inner nodes while preserving the nested 
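Under one plausible reading of the truncation and projection steps described above, the per-node work can be sketched as follows: an SVD-based truncation of an explicit leaf basis to a relative threshold, the projection matrix T = U_new^T U_old it produces, and the transformation of a coupling matrix by the projection matrices of its row and column nodes. Function names are illustrative, and on the GPU each of these steps is executed for all nodes of a level in a single batched call.

```python
import numpy as np

def truncate_leaf_basis(U_leaf, rel_tol):
    """SVD-based truncation of one explicit leaf basis: keep the left singular
    vectors whose singular values are at least rel_tol times the largest one,
    and return the projection matrix T = U_new^T @ U_old used later on."""
    W, s, _ = np.linalg.svd(U_leaf, full_matrices=False)
    keep = s >= rel_tol * s[0]
    U_new = W[:, keep]
    T = U_new.T @ U_leaf
    return U_new, T

def project_coupling(S_ts, T_t, T_s):
    """Projection phase for one low-rank block: S_new = T_t @ S_ts @ T_s^T."""
    return T_t @ S_ts @ T_s.T
```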
basis property using the relationship of a node i with children and at level l we have l l l l l l e e u e u t e u el e l t ei u u after forming the t e matrices using batched multiplication we compute their svd t e qsw t using the batched svd kernel and truncate as we did for the leaves to form the batched qr and svd algorithms g truncated t e matrices as el t g e i sei w fi t t ei q i el e e l the block rows of q e are the new transfer matrices at level l of our compressed nested where e basis and t are the projection matrices for level l the key computations involved in this truncation phase consist then of one batched svd involving the leaves of the tree followed by a sequence of batched svds one per level of the tree involving the transfer matrices and data from the lower levels the projection phase consists of transforming the coupling matrices in the matrix tree using the generated projection matrices of the truncation phase for each coupling matrix sts we compute a new coupling matrix sets tt sts tst using batched multiplications this phase of the operation consumes much less time than the truncation phase on gpus because of substantial efficiencies in executing regular arithmetically intensive operations on them results as an illustration of the effectiveness of the algebraic compression procedure we generate covariance matrices of various sizes for a spatial gaussian process with n observation points placed on a random perturbation of a regular discretization of the unit square and an isotropic exponential kernel with correlation length of hierarchical representations of the formally dense n n covariance matrices are formed analytically by first clustering the points in a using a mean split giving us the hierarchical index sets of the basis tree the basis vectors and transfer nodes are generated using chebyshev interpolation the matrix tree is constructed using a dual traversal of the basis tree and the coupling matrices are generated by evaluating the kernel at the interpolation points the approximation error of the constructed matrix is then controlled by varying the number of interpolation points and by varying the leaf admissibility condition during the dual tree traversal an approximation error of has been used in the following tests e h has been used to maintain the accuracy of and a relative truncation error h the compressed matrices figure shows the memory consumption before and after compression of hierachical covariance matrices with leaf size and initial rank corresponding to an chebyshev grid the dense part remains untouched while the low rank part of the representation sees a substantial decrease in memory consumption after compression with minimal loss of accuracy figure shows the expected asymptotic linear growth in time of the compression algorithm and shows the effect of using the randomized svd with samples instead of the full svd as computed by the shared memory kernel figure shows another example where the admissibility condition is weakened to generate a coarser matrix tree with an increased rank of corresponding to an chebyshev grid and the randomized svd with samples also reduces compression time when compared to the full svd using the direct block jacobi kernels conclusions and future work in this paper we described the implementation of efficient batched kernels for the qr decomposition and randomized singular value decomposition of low rank matrices hosted on the gpu our batched qr kernel provides significant performance improvements for small matrices 
over existing state of the art libraries and our batched svd routines are the first of their kind on the gpu with performance exceeding on a batch of matrices of size in batched qr and svd algorithms dp dense portion dp original low rank dp compressed low rank sp dense portion sp original low rank sp compressed low rank dp full svd sp full svd compression time s memory consumption gb dp randomized svd sp randomized svd problem size a memory savings problem size b compression time using randomized svd with samples and the full svd using the shared memory kernel figure compression results for sample covariance matrices generated from spatial statistics on a gpu in single and double precision using a relative frobenius norm threshold of and initial rank dp full svd sp full svd dp randomized svd sp randomized svd compression time s problem size figure compression time for a coarser matrix tree with initial rank comparing the randomized svd with samples and the full svd precision we illustrated the power of these kernels on a problem involving the algebraic compression of hierarchical matrices stored entirely in gpu memory and demonstrated a compression algorithm yielding significant memory savings on practical problems in the future we plan to investigate alternatives to the jacobi algorithm for the svd of the small blocks in the randomized algorithm and improve the performance of the blocked algorithms using preconditioning and adaptive block column pair selection we also plan to develop a suite of hierarchical matrix operations suited for execution on modern gpu and manycore architectures batched qr and svd algorithms acknowledgments we thank the nvidia corporation for providing access to the gpu used in this work references halko martinsson and j tropp finding structure with randomness probabilistic algorithms for constructing approximate matrix decompositions siam review vol no pp golub and van loan matrix computations johns hopkins university press trefethen and bau numerical linear algebra society for industrial and applied mathematics demmel and veselic jacobi s method is more accurate than qr siam journal on matrix analysis and applications vol no pp haidar dong tomov luszczek and dongarra a framework for batched and factorization algorithms applied to block householder in isc ser lecture notes in computer science kunkel and ludwig vol springer pp haidar dong luszczek tomov and dongarra optimization for performance and energy for batched matrix computations on gpus in proceedings of the workshop on general purpose processing using gpus ser new york ny usa acm pp wilt the cuda handbook a comprehensive guide to gpu programming pearson education volkov better performance at lower occupancy proceedings of the gpu technology conference gtc vol charara keyes and ltaief batched triangular dense linear algebra kernels for very small matrix sizes on gpus submitted to acm transactions on mathematical software online available http anderson ballard demmel and keutzer qr decomposition for gpus in parallel distributed processing symposium ipdps ieee international may pp kotas and barhen singular value decomposition utilizing parallel algorithms on graphical processors in oceans kona sept pp kang and lee improving performance of convolutional neural networks by separable filters on gpu berlin heidelberg springer berlin heidelberg pp badolato paula and farias many svds on gpu for image mosaic assemble in international symposium on computer architecture and high performance computing workshop oct pp 
tomov nath ltaief and dongarra dense linear algebra solvers for multicore with gpu accelerators in proc of the ieee ipdps atlanta ga ieee computer society april pp doi nvidia cublas library user guide http nvidia online available http cheng grossman and mckercher professional cuda c programming ser wiley kurzak ltaief dongarra and badia scheduling dense linear algebra operations on multicore processors concurrency and computation practice and experience vol no pp online available http b zhou and brent on parallel implementation of the jacobi algorithm for singular value decompositions in parallel and distributed processing proceedings euromicro workshop on jan pp zhou and brent a parallel ring ordering algorithm for efficient jacobi svd computations journal of parallel and distributed computing vol no pp and svd algorithms for distributed memory systems i hypercubes and rings parallel algorithms and applications vol no pp svd algorithms for distributed memory systems ii meshes parallel algorithms and applications vol no pp and new dynamic orderings for the parallel svd algorithm parallel processing letters vol no nvidia cusolver library user guide http nvidia online available http batched qr and svd algorithms and efficient in the parallel svd algorithm parallel vol no pp online available http hackbusch and khoromskij a sparse arithmetic part ii application to problems computing vol no pp hackbusch khoromskij and sauter on in lectures on applied mathematics bungartz hoppe and zenger eds springer berlin heidelberg pp hackbusch a sparse matrix arithmetic based on part i introduction to computing vol no pp hierarchical matrices algorithms and analysis ser springer series in computational mathematics berlin springer vol and garcke approximating gaussian processes with in european conference on machine learning springer pp grasedyck and hackbusch construction and arithmetics of computing vol no pp
analytical and simplified models for dynamic analysis of short skew bridges under moving loads feb nguyena goicoleaa a group of computational mechanic school of civil engineering upm spain abstract skew bridges are common in highways and railway lines when non perpendicular crossings are encountered the structural effect of skewness is an additional torsion on the bridge deck which may have a considerable effect making its analysis and design more complex in this paper an analytical model following beam theory is firstly derived in order to evaluate the dynamic response of skew bridges under moving loads following a simplified model is also considered which includes only vertical beam bending the natural frequencies eigenmodes and orthogonality relationships are determined from the boundary conditions the dynamic response is determined in time domain by using the exact integration both models are validated through some numerical examples by comparing with the results obtained by fe models a parametric study is performed with the simplified model in order to identify parameters that significantly influence the vertical dynamic response of the skew bridge under traffic loads the results show that the grade of skewness has an important influence on the vertical displacement but hardly on the vertical acceleration of the bridge the torsional stiffness really has effect on the vertical displacement when the skew angle is large the span length reduces the skewness effect on the dynamic behavior of the skew bridge keywords skew bridge bridge modelling modal analysis moving load corresponding author email addresses khanh nguyen goicolea preprint submitted to engineering structures february introduction skew bridges are common in highways and railway lines when non perpendicular crossings are encountered the structural effect of the skewness is an additional torsion on the bridge deck which may have a considerable effect making its analysis and design more complex a large research effort using the analytical numerical as well as experimental approaches have been made during the last decades in order to better understand the behavior of this type of bridge under the static and dynamic loadings special attention is given in researches related to the highway skew bridge subjected to earthquake loadings in fact the first work on this subject was reported in by ghobarah and tso in which a solution based on the beam model capable of capturing both flexural and torsional modes was proposed to study the dynamic response of the skewed highway bridges with intermediate supports maragakis and jennings obtained the earthquake response of the skew bridge modelling the bridge deck as a rigid body using the finite element fe models the socalled stick model is firstly introduced by wakefield et al the stick model consists of a beam element representing the bridge deck rigid or flexible beam elements for the and an array of translational and rotational springs for the substructure of the bridge this type of model is then successfully used in the later works despite its simplicity the stick model can provide reasonably good approximations for the preliminary assessment more sophisticated models using the shell and beam elements are also proposed to study this subject regarding the behavior of the skew bridges under the traffic loads the most of the work about this subject has been performed on the fe models using the combination of shell and beam elements and assisted by experimental testing the fe models give a 
good approximation but require the end user more effort to introduce information in modelling the structure such as element types and sizes dimension material properties connection types etc therefore its use is limited in determined case studies and challenged for a parametric study as monte carlo simulations or large number of case studies a possible alternative is to develop an analytical solution that is able to capture the behavior of the skew bridge and to give a sufficient accuracy the advantage of the analytical solution is that the data input is much simpler general information of structure such as mass span length flexural and torsional stiffness and therefore its use is more easy for the end user and of course is able for parametric study in this context the main objective of this work is to derive an analytical solution based on the beam theory for the skew bridge under the moving loads after that a simplified model is proposed in order to assimilate the effect of the skewness of the support on the vertical vibration of the bridge an exact integration in the time domain is used to solve the differential equations both models are validated through some numerical examples by comparing with the results obtained by fe models a parametric study is performed with the simplified model in order to identify parameters that significantly influence the vertical dynamic response of the skew bridge under traffic loads formulation of problem a skew bridge as shown in fig is considered to study in this work the line of abutment support forms with the orthogonal line of the centreline an angle defined as angle of skewness the length of bridge is taken as the length the bridge is idealized using following assumptions the bridge deck is modelled as beam supported at the ends and has a linear elastic behavior the bridge deck is very stiff in the horizontal xy plane so the flexural deflection in y direction will be neglected the bending stiffness ei torsional stiffness gj and mass per unit length m are constant over the length warping and distortion effects in the torsion of the bridge deck is small enough to be neglected y x longitudinal axis deck width b abutment m r ei gj bridge l figure a skew bridge in plane view and bridge model s sketch with these assumptions the bending of the bridge in xz plane and its twisting about the x axis are the principal types of deformation of the bridge deck the governing equations of motion for transverse and torsional vibration under transverse and torsional loads are p x t gj mt x t ei where r is the radius of gyration u x t and x t are the transverse deflection and torsional rotation of the bridge deck p x t and mt x t are the transverse and torsional loads applied on the bridge at distance x and at time t respectively the external damping mechanism is introduced by the familiar term and is assumed to be proportional to the mass c natural frequencies and mode shapes using the modal superposition technique the solution for free vibrations of the bridge deck can be decoupled into an infinite set of modal generalized coordinates and mode shapes as u x t x t x x qn t x pn t x in which x and x the nth flexural and torsional mode shape and qn t and pn t are the generalized flexural and torsional coordinates at nth mode shape and are assumed to be t the governing equations for free vibrations can be rewritten for each mode of vibration as x x ei x x gj the solutions of the above equations can found in many textbooks on dynamic and can be expressed in the following 
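For reference, the coupled bending-torsion equations of motion implied by the assumptions above can be written in the standard form below (a reconstruction, with the mass-proportional damping c applied to both the translational and the rotary inertia terms; all symbols are as defined above):

\[
m\,\frac{\partial^2 u(x,t)}{\partial t^2} + c\,\frac{\partial u(x,t)}{\partial t} + EI\,\frac{\partial^4 u(x,t)}{\partial x^4} = p(x,t),
\]
\[
m r^2\,\frac{\partial^2 \theta(x,t)}{\partial t^2} + c\,r^2\,\frac{\partial \theta(x,t)}{\partial t} - GJ\,\frac{\partial^2 \theta(x,t)}{\partial x^2} = m_t(x,t).
\]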
form x sin x cos x sinh x cosh x x sin x cos x where and are six constants that are determined by the boundary conditions the boundary conditions for the problem are shown in fig the bridge is at the ends by abutments therefore at the support lines there are not the vertical displacement u t u l t rotation about the x axis t l t and bending moment in x axis t l t using the change of coordinates as shown in fig the following relationships are obtained figure coordinate systems cos sin mx sin my cos gj sin ei cos hence the boundary conditions for the problem can be written as l d sin dx d l sin l cos dx d gj sin ei cos dx dx d gj l sin ei l cos dx dx cos from these six conditions a homogeneous system of equations is obtained as ax where x t is the vector of six constants to be determined and the matrix a is expressed as sin cos sinh cosh cos sin cosh sinh sin cos sinh cosh sin cos cos with sin cos and sin the eigenvalues are calculated by solving det a it is noted that the determinant of the matrix p a can be expressed in a function of unique variable the extraction of the eigenvalues can be performed by using any symbolic matical program maple or matlab in fact in this study the symbolic calculation implemented in matlab is used to extract the values of for desired modes used in the dynamic calculation the eigenvector corresponding to the nht mode is obtained by applying singular value decomposition to the matrix a orthogonality relationship in order to apply the modal superposition technique for solving the forced vibration problems in the skew bridges it is necessary to determine the orthogonality relationship between the mode shapes on the basis of the equations these equations can be reformulated by multiplying both sides of these by an arbitrary mode x and x respectively and integrating with respect to x over the length l one obtains z l z x x dx l x x z l z l cos cos cos sin x x dx x x dx by means of using the integration by parts of the side of the equations twice for eq and once for eq and applying the boundary conditions derived for the problem gives ei z l x x dx gj tan l l gj tan l l z z l x x dx z l l x x dx x x dx interchanging the indices n by m in the equation and subtracting from its original form which gives the following relations for any n m gj tan l l l l z l x x dx gj tan l l l l z l x x dx next subtracting the equation from the equation gives rise to m z l x x dx mr z l x x dx due to the fact that and rl x x dx and rl x x dx for any n m the condition established in eq will be fulfilled when m z l z l x x dx x x dx which corresponds to the orthogonality relationship of the skew bridge vibration induced by a moving load and a convoy of moving loads once the natural frequencies and the associated mode shapes are found and the orthogonality relationship between the modes is known it is possible to apply the modal superposition technique for obtaining the response of the skew bridge due to a moving load the vertical load and the twisting moment apply on the bridge deck can be determined as p x t p x vt mt x t p x vt l cot p x vt e k where p is the magnitude of the moving load is the dirac delta function k and e is the load eccentricity respect to the mass centre of the bridge deck section the first part of the right side of eq is due to the skewness of the bridge and the second part is due to the load eccentricity using the modal superposition technique and applying the orthogonality relationship the differential equations in the generalized coordinates are uncoupled t t qn 
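The frequency-extraction procedure described above (scan the determinant of the characteristic matrix for sign changes, refine each root, and read the mode-shape coefficients off the null vector obtained from the SVD) can be sketched as follows. To keep the example self-contained, the 4x4 characteristic matrix of a plain simply supported beam is used as a stand-in for the 6x6 skew-bridge matrix A built from the boundary conditions above; its roots beta*L = n*pi are known, which makes the procedure easy to check. Function names and the scanning grid are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def char_matrix_ss_beam(beta, L):
    """4x4 characteristic matrix of a plain simply supported beam
    (u = 0 and u'' = 0 at x = 0 and x = L)."""
    s, c = np.sin(beta * L), np.cos(beta * L)
    sh, ch = np.sinh(beta * L), np.cosh(beta * L)
    return np.array([[0.0,  1.0, 0.0, 1.0],    # u(0)   = 0
                     [0.0, -1.0, 0.0, 1.0],    # u''(0) = 0
                     [s,    c,   sh,  ch],     # u(L)   = 0
                     [-s,  -c,   sh,  ch]])    # u''(L) = 0

def natural_frequencies(L, EI, m, n_modes=5):
    """Scan det A(beta) for sign changes, refine each root with brentq, and take
    the mode-shape coefficients as the null vector of A (last right singular vector)."""
    det = lambda b: np.linalg.det(char_matrix_ss_beam(b, L))
    grid = np.linspace(1e-3, (n_modes + 1) * np.pi / L, 4000)
    vals = np.array([det(b) for b in grid])
    betas, coeffs = [], []
    for b0, b1, d0, d1 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if d0 * d1 < 0.0 and len(betas) < n_modes:
            b = brentq(det, b0, b1)
            betas.append(b)
            coeffs.append(np.linalg.svd(char_matrix_ss_beam(b, L))[2][-1])
    omegas = np.array(betas) ** 2 * np.sqrt(EI / m)   # rad/s; f = omega / (2*pi)
    return np.array(betas), omegas, coeffs
```

For the skew bridge, one substitutes the 6x6 matrix A of the boundary conditions for char_matrix_ss_beam; the scanning and null-vector steps are unchanged.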
t r l t pn t r l p vt x dx p l cot p e vt x dx k cot in order to solve the differential equations several techniques can be applied in this work the solution of eq is obtained by using the integration method based on the interpolation of excitation which has the advantage that it gives an exact solution and a highly efficient numerical procedure the solution of eq at time i can be determined as awi b cqi and its velocity is given by wi b c qi i h vt p l cot rl where w qn pn t q r l p e vt t x dx mr x dx cot n n and a b c d are the coefficients that depend on the structure parameters and on the time step detail formulations can be found in appendix a a a moving load b a convoy of moving loads figure moving loads for the case that the bridge is forced by a convoy of moving loads as shown in fig b the uncoupled differential equations in the generalized coordinates for each mode of vibration n are given as np x pk vt dk rl x dx np x vt dk pk l cot t pn t pk e rl k mr x dx t t qn t where np is the number of moving loads dk is the distance between the first load and the k th load pk is the magnitude of the k th load and the solution of eq is obtained in similar way as in the case of a moving load attention needs to be paid in the determination of the modal loads in the right side of eq for the loads that do not enter the bridge vt dk or leave the bridge vt dk l the modal loads associated with those loads are zero a simplified model in this part of the work a simplified model is developed in order to assimilate the effect of the skewness of the support on the vertical vibration of the skew bridges it is well known that the skewness of the supports causes the torsional moment on the bridge even for the vertical centric loads those torsional moments in turn have a certain influence on the bending moment in particular a negative bending moment is introduced at the supports as shown in fig making that for the purpose of vertical flexure the skew beam behaves like as an beam or in other words as a beam with rotational support with stiffness as shown in fig it is noted that the negative bending moments at the supports change with the load position on the bridge therefore the stiffness of the rotational support is also changed and can be different at different supports in order to simplify the calculation the stiffness of the rotational support are considered the same in both supports with this assumption the stiffness of the rotational support can be determined as l in additions to the previously adopted assumptions the following additional assumptions are used for the simplified model only the vertical vibration is taken into account in the model the load eccentricity is not considered the bridge deck is modelled by the beam theory figure a diagram of bending moment of a skew bridge under a static load b simplified model adopted for skew bridge natural frequencies and mode shapes the governing equation for the free vibration of the simplified model is similar to eq the solution of this equation is given in the determination of the frequencies and its correspondent mode shapes is solving the homogeneous system of equations bj where j t is a vector containing the four mode shape coefficients b is the characteristic matrix that can be determined by applying the boundary conditions for the simplified model proposed in this study the boundary conditions are there is not vertical displacement at the supports u t u l t l equilibrium of moments at the supports ei ei l l l l therefore the characteristic matrix 
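A reference sketch of the modal-superposition response to a single moving load is given below. It uses the closed-form sine modes of a right (non-skew) simply supported beam and a generic ODE solver in place of the piecewise-exact recursion whose coefficients are collected in the appendix, so it only illustrates the structure of the calculation; the damping ratio and the output grid are illustrative parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

def midspan_response_moving_load(P, v, L, EI, m, zeta=0.025, n_modes=5):
    """Mid-span deflection history under a single load P moving at speed v,
    by modal superposition with the sine modes of a simply supported beam."""
    n = np.arange(1, n_modes + 1)
    omega = (n * np.pi / L) ** 2 * np.sqrt(EI / m)     # circular frequencies (rad/s)
    Mn = m * L / 2.0                                   # modal mass of each sine mode

    def rhs(t, y):
        q, qd = y[:n_modes], y[n_modes:]
        x = v * t
        phi = np.sin(n * np.pi * x / L) if 0.0 <= x <= L else np.zeros(n_modes)
        qdd = (P * phi / Mn) - 2.0 * zeta * omega * qd - omega ** 2 * q
        return np.concatenate([qd, qdd])

    t_end = 1.5 * L / v
    sol = solve_ivp(rhs, (0.0, t_end), np.zeros(2 * n_modes),
                    max_step=1e-3, dense_output=True)
    t = np.linspace(0.0, t_end, 2000)
    q = sol.sol(t)[:n_modes]
    phi_mid = np.sin(n * np.pi / 2.0)                  # mode shapes at mid-span
    return t, phi_mid @ q
```

A convoy is handled in the same way by summing one such modal load term per axle, each delayed by d_k / v and active only while that axle is on the span, as described above.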
b is obtained as in which cos cos sin sinh cosh sin cos cosh sinh sinh cosh the procedure to obtain the eigenvalues and eigenvector is similar to the previously described in section orthogonality relationship similar to the analysis in section the equation can be rewritten using the boundary conditions of the simplified model as ei z l x x l l z l x x dx interchanging the indices n and m in eq and subtracting the resulting equation from its original form gives z z l x x dx or m m l x x dx which is the orthogonality relationship between the mode shapes for the simplified model by a moving load and a convoy of moving loads the dynamic response of the bridge under moving loads is obtained by using the same way described for the analytical model in section the only difference is that the torsional response is eliminated in the calculation numerical validations two numerical examples are used in order to validate the proposed models the results obtained by proposed models are compared with those obtained by finite element fe simulations for each example a fe model is developed in the program feap built with beam element stick model a moving load or a convoy of moving loads is applied to the nodal forces along the centreline axis using amplitude functions the dynamic responses in fe models are obtained by solving in the time domain using the modal superposition technique with a time step of for all examples the first five modes of vibration are considered in the calculation and a constant damping ratio is assumed for all considered modes attention should be paid to select the total number of modes of vibration considered in the fe models since the first five modes of vibration obtained by fe model are not always corresponding to the first five modes obtained by analytical and simplified models a b figure cross sections a for example b for example example a skew slab bridge under a moving load a skew slab bridge is considered in this example the skew angle of the bridge is the bridge is the cross section of the bridge is shown in fig a and the following geometric and mechanical characteristics are used in the calculation elastic modulus e is with poisson coefficient properties of the cross section i j m and r damping ratio is the bridge is subjected to the action of a moving load of kn with a constant speed of the frequencies of the first five modes considered in the calculation are extracted and listed in table for all models it can be noted that there is a very good agreement in the natural frequency between the analytical simplified and fe models in fact the maximum difference in the frequency between models does not exceed the similar agreement is also observed with the dynamic responses in terms of vertical displacement and acceleration at the for three models as shown in fig from this result it can be remarked that the proposed simplified model is enable to simulate the vertical dynamic response of the skew bridge table frequencies of first five modes of vibration of different models in hz modes anal model simpl model fe model description st mode in fe model mode in fe model mode in fe model mode in fe model mode in fe model example a skew bridge under a convoy of moving loads this example attempts to simulate the dynamic response of a railway bridge under an hslm train which is the desired application of the proposed displacement mm anal model simpl model fe model times s a acceleration anal model simpl model fe model times s b figure dynamic responses at the under a moving load a 
displacement b acceleration analytical and simplified methods presented in this paper the studied bridge is a typical bridge designed for and has a cross section as shown in fig b a skew angle of is considered the bridge is the geometric and mechanical properties of the bridge s cross section used in the calculation are elastic modulus e with poisson coefficient of i j m and r damping ratio is the train consists of intermediate coaches a power coach and a end coach on either sides of the train in total the train has axles with a load of the dynamic analysis are carried out for different train speeds ranging from to in increment of the vertical displacement and acceleration at the are obtained and compared between the models the envelope of maximum vertical displacement and acceleration are also depicted for all models in order to validate the proposed analytical and simplified model presented in this paper table gives the natural frequencies of the first five modes of vibration considered in the calculation it is known that for the bridge the train velocities of resonance can be estimated using the following formula vi d with i i where is the fundamental frequency d is the regular distance between load axles and is m for the train according with this eq the first three resonance peaks occur at train velocities of almost and the dynamic response at the train speed of is shown in fig it can be observed that at this train speed near the second critical speed the responses are amplified by each axle passing the bridge the envelope curves for the maximum vertical displacement and acceleration at the are shown in fig it can be noted in fig that in the considered range of train velocities two peaks of response both displacement and acceleration occur at speeds of and which are closed to the predicted critical trains therefore it can be remarked that the estimation of the train velocities of resonance proposed by is still valid for the skew bridge furthermore from both figs and it can be concluded that the results obtained using the analytical and simplified model agree well with the ones obtained using the fe model it should be noted that the time consumed for the calculation using the analytical or simplified model is approximately times faster than the ones using the fe model the cpu time required for completing a analysis using the analytical model was s while s was the time for fe model in a standard pc equipped with intel xeon processor of ghz and gb of ram table frequencies of first five modes of vibration of different models in hz modes anal model simpl model fe model description mode in fe model mode in fe model mode in fe model mode in fe model mode in fe model parametric study in this part of the paper three parametric studies are performed using the simplified model in order to identify parameters that influence significantly the vertical dynamic response of the skew bridge under the moving loads in each study the value of the studied parameter are changed the dynamic responses under the train corresponding to each value of the parameter are obtained and depicted in function of the studied parameter the basic properties of the skew bridge in example are adopted in this section effect of skew angle figure shows how the maximum dynamic responses vary with the skew angle when the bridge is forced by the train it can be observed from fig that the skewness has an important influence on the maximum vertical displacement at the of the bridge in general the displacement decreases as the skew 
angle increases a sharp change in slope can be observed at the skew angle of from this value of the skew angle the displacement decreases more quickly furthermore the changing in the train velocity of resonance is also observed when the skewness is changed in fact the train velocity anal model simpl model fe model displacement mm times s a anal model simpl model fe model acceleration times s b figure dynamic responses at the under the train at velocity of a displacement b acceleration of resonance increases as the skewness increases regarding the maximum acceleration at the the skew angle does not has pronounced influence on it the acceleration hardly increases when the skew angle grows effect of torsional to flexural stiffness ratio for this study the torsional stiffness gj is changed with respect to the flexural stiffness ei such that the ratio between gj and ei varies in a range from to figure shows the variation of the maximum dynamic responses at the as a function of the torsional to flexural stiffness ratio it can be observed that the maximum vertical displacement increases slightly displacement mm anal model simpl model fe model velocity a acceleration anal model simpl model fe model velocity b figure envelope of the maximum response at the under the train a displacement b acceleration as the ratio increases while the maximum acceleration is barely changed it should be noted that the skew angle used for this study is constant and is this skew angle is in a range from to in which the skewness has small influence on the dynamic response of the bridge as mentioned in the preceding section and shown in fig a as a result of this the torsional stiffness does not have a pronounced influence in the vertical deflection for small skew angles for larger skew angle for example the torsional stiffness has a noticeable effect on the maximum vertical displacement as shown in fig a the maximum acceleration is almost completely unaffected by the torsional stiffness for acceleration displacement mm m k y ocit skew l e angle v m k y ocit skew l e angle v a b figure effect of skewness on the dynamic responses a displacement b acceleration both skew angles selected see fig b and b displacement mm acceleration km y ocit el v a km y ocit l ve b figure effect of torsional to flexural stiffness ratio on the dynamic responses for a skew angle of a displacement b acceleration effect of the span length in this part of the paper the influence of the span length on the dynamic response of the skew bridge is carried out the span length is changed from m to m in increment of in order to obtain a consistent comparison between the results obtained from the parametric study the cross section of the bridge is redesigned for each span length using the design criteria that the ratio between the depth of the cross section h and the span length acceleration displacement mm km y ocit el v a m k y ocit l e v b figure effect of torsional to flexural stiffness ratio on the maximum dynamic responses for a skew angle of a displacement b acceleration l is constant and is this ratio is usually applied in the railway bridge design the depth of the cross section will be changed with the bridge s length the other dimensions of the cross section are considered as unmodified the basic properties of the cross section needed for the parametric study are listed in table table principal properties of the bridge for the parametric study l m h m ei gj m the first natural frequency corresponding to each span length is obtained and depicted 
in fig a for different skew angles varying from to and the variation of magnitude of the first natural frequency between the skew angle of and for each span length is also obtained and shown in fig b it can be observed that the variation of frequency for each span length is generated by the skewness effect this variation is greater when the span length is shorter and decreases almost linearly with span length therefore it can be remarked that the span length decreases the skewness effect on the bridge in term of the natural frequency variation first natural frequency hz span length m a span length m b figure influence of the span length on the natural frequency of the skew bridge a first natural frequency b variation of frequency it is well known that the dynamic response of a bridge under the traffic loads depends on the properties of the vehicle traveling on the bridge and on the proper characteristics of the bridge in this parametric study the traffic loads are unmodified but the characteristics of the bridge are changed with the span length therefore the comparison of dynamic responses in term of displacement and acceleration in at the determined train velocity is not consistent for a consistent comparison the peak corresponding to the second train velocity of resonance for each span length is compared in particular the dynamic amplification factor daf of the vertical displacement and the maximum vertical acceleration at are used to compare and are depicted in fig it can be observed that the daf decreases as the span length increases there is not a reduction of variation of magnitude of daf of the displacement for different skew angles when the span length increases however the reduction of variation of magnitude of the maximum acceleration can be observed for different skew angles for which it can be remarked that the span length reduces the skewness effect on the dynamic response of the bridge in term of the vertical acceleration acceleration daf span length m a span length m b figure maximum dynamic responses at of the skew bridge at the peak corresponding to the second velocity of resonance for different skew angles a dynamic amplification factor of displacement b acceleration conclusions in this paper an analytical model for determining the dynamic response of the skew bridge under the moving loads is presented and a simplified model is also proposed the modal superposition technique is used in both models to decompose the differential equation of motions the natural frequencies and mode shapes and the orthogonality relationship are determined from the boundary conditions the modal equations are solved by the exact integration and therefore the both models are highly accurate robust and computationally efficient the proposed models have been validated with results obtained from the fe models using the same modal superposition method furthermore from the results obtained in this paper the following conclusions are made the estimation of the train velocities of resonance proposed by is still valid for the skew bridge the grade of skewness of the bridge plays important role in the dynamic behavior of the bridge in term of the vertical displacement the maximum vertical displacement decreases as the skew angle increases the vibration of bridge in term of the vertical acceleration is hardly affected by the skewness there is a critical skew angle from which the effect of the skewness is more noticeable for the cross section used in the parametric study the critical skew angle is the 
torsional stiffness really has important influence on the vibration of the bridge in term of the vertical displacement when the skew angle is larger than the critical skew angle the vertical acceleration is unaffected by the torsional stiffness the span length reduces the skewness effect on the dynamic behavior of the skew bridge in term of the natural frequency and acceleration appendix parameters for the exact integration p sin cos b sin cos sin sin cos p sin sin b p c cos sin cos where p p n sin cos acknowledgement the authors are grateful to the support of mineco of spanish government through the project edinpf ref and to the support provided by the technical university of madrid spain references kollbrunner basler torsion in strucutres an engineering approach berlin manterola bridges design calculation and construction in spanish colegio de ingenieros de caminos canales y puertos madrid spain a ghobarah tso seismic analysis of skewed highway bridges with intermediate supports earthquake engineering structural dynamics maragakis jennings analytical models for the rigid body motions earthquake engineering structural dynamics january wakefield nazmy billington analysis of seismic failure in skew rc bridge journal of structural engineering meng lui seismic analysis and assessment of a skew highway bridge engineering structures meng lui liu dynamic response of skew highway bridges journal of earthquake engineering nielson desroches analytical seismic fragility curves for typical bridges in the central and southeastern united states earthquake spectra pekcan seismic response of skewed rc bridges earthquake engineering and engineering vibration kaviani zareian taciroglu seismic behavior of reinforced concrete bridges with abutments engineering structures yang werner desroches seismic fragility analysis of skewed bridges in the central southeastern united states engineering structures meng lui refined stick model for dynamic analysis of skew highway bridges journal of bridge engineering nouri ahmadi influence of skew angle on continuous composite girder bridge journal of bridge engineering deng phares greimann shryack hoffman behavior of curved and skewed bridges with integral abutments journal of constructional steel research mallick raychowdhury seismic analysis of highway skew bridges with nonlinear interaction transportation geotechnics bishara liu skew composite bridges journal of structural engineering helba j kennedy skew composite bridges ultimate load canadian journal of civil engineering khaloo mirzabozorg load distribution factors in simply supported skew bridges journal of bridge engineering menassa mabsout tarhini frederick influence of skew angle on reinforced concrete slab bridges journal of bridge engineering ashebo chan yu evaluation of dynamic loads on a skew box girder continuous bridge part i field test and modal analysis engineering structures he sheng scanlon linzell yu skewed concrete box girder bridge static and dynamic testing and analysis engineering structures chopra dynamics of structures theory and applications to earthquake engineering edition prentice hall taylor element analysis program url http cen en actions on structures part traffic loads on bridges rue de stassart brussels
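To make the resonance estimate v_i = d * n0 / i and the speed-sweep envelopes discussed above concrete, the following Python fragment gives a minimal numerical sketch. It is not the analytical, simplified, or FE model of the paper: the deck is idealized as a simply supported beam with a single flexural mode (so the skew and torsional coupling studied above is not reproduced), a plain central-difference integrator stands in for the exact integration of the appendix, and every numerical value (span, stiffness, mass, damping, axle load, axle spacing, number of axles, speed range) is an illustrative placeholder rather than a figure taken from the paper.

import numpy as np

# --- illustrative bridge and train data (placeholder values, not the paper's) ---
L_span  = 25.0       # span length [m]
EI      = 7.0e10     # flexural stiffness [N*m^2]
m_bar   = 18000.0    # mass per unit length [kg/m]
zeta    = 0.02       # modal damping ratio
P_axle  = 170.0e3    # axle load [N]
d_axle  = 18.7       # regular distance between load axles [m]
n_axles = 26         # number of axles in the train

# first mode of a simply supported beam
omega1 = (np.pi / L_span) ** 2 * np.sqrt(EI / m_bar)   # circular frequency [rad/s]
n0 = omega1 / (2.0 * np.pi)                            # fundamental frequency [Hz]

# resonance train speeds v_i = d * n0 / i, i = 1, 2, 3
v_res = [d_axle * n0 / i for i in range(1, 4)]
print("fundamental frequency n0 = %.2f Hz" % n0)
print("estimated resonance speeds [km/h]:", ["%.0f" % (3.6 * v) for v in v_res])

def peak_midspan_displacement(v, dt=1.0e-3):
    """Peak midspan deflection (first mode only) for a train of equal axle
    loads crossing at speed v [m/s]; central-difference time integration."""
    x0 = -d_axle * np.arange(n_axles)                  # axle positions at t = 0 [m]
    t_end = (L_span + d_axle * n_axles) / v + 2.0      # crossing time plus free vibration
    q_prev, q, peak = 0.0, 0.0, 0.0
    for k in range(int(t_end / dt)):
        x = x0 + v * (k * dt)                          # current axle positions
        on = (x > 0.0) & (x < L_span)                  # axles currently on the span
        # generalized load of mode 1, with mode shape phi(x) = sin(pi*x/L)
        F = (2.0 * P_axle / (m_bar * L_span)) * np.sum(np.sin(np.pi * x[on] / L_span))
        # central difference for  q'' + 2*zeta*omega1*q' + omega1^2*q = F
        q_next = (dt * dt * (F - omega1 ** 2 * q) + 2.0 * q
                  - (1.0 - zeta * omega1 * dt) * q_prev) / (1.0 + zeta * omega1 * dt)
        q_prev, q = q, q_next
        peak = max(peak, abs(q))                       # mode shape equals 1 at midspan
    return peak

# sweep of train speeds, analogous to the displacement envelopes discussed above
speeds_kmh = np.arange(120.0, 421.0, 20.0)
envelope = [peak_midspan_displacement(s / 3.6) for s in speeds_kmh]
print("speed of maximum midspan displacement: %.0f km/h"
      % speeds_kmh[int(np.argmax(envelope))])

With these placeholder values the displacement envelope peaks close to the predicted resonance speeds, which is the qualitative behaviour reported above for the skew bridge; the sketch is only meant to show how such an envelope is assembled, not to reproduce the paper's results.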
efficient pac learning from the crowd apr pranjal avrim nika yishay abstract in recent years crowdsourcing has become the method of choice for gathering labeled training data for learning algorithms standard approaches to crowdsourcing view the process of acquiring labeled data separately from the process of learning a classifier from the gathered data this can give rise to computational and statistical challenges for example in most cases there are no known computationally efficient learning algorithms that are robust to the high level of noise that exists in crowdsourced data and efforts to eliminate noise through voting often require a large number of queries per example in this paper we show how by interleaving the process of labeling and learning we can attain computational efficiency with much less overhead in the labeling cost in particular we consider the realizable setting where there exists a true target function in f and consider a pool of labelers when a noticeable fraction of the labelers are perfect and the rest behave arbitrarily we show that any f that can be efficiently learned in the traditional realizable pac model can be learned in a computationally efficient manner by querying the crowd despite high amounts of noise in the responses moreover we show that this can be done while each labeler only labels a constant number of examples and the number of labels requested per example on average is a constant when no perfect labelers exist a related task is to find a set of the labelers which are good but not perfect we show that we can identify all good labelers when at least the majority of labelers are good introduction over the last decade research in machine learning and ai has seen tremendous growth partly due to the ease with which we can collect and annotate massive amounts of data across various domains this rate of data annotation has been made possible due to crowdsourcing tools such as amazon mechanical turktm that facilitate individuals participation in a labeling task in the context of classification a crowdsourced model uses a large pool of workers to gather labels for a given training data set that will be used for the purpose of learning a good classifier such learning environments that involve the crowd give rise to a multitude of design choices that do not appear in traditional learning environments these include how does the goal of learning from the crowd differs from the goal of annotating data by the crowd what challenges does the high amount of noise typically found in curated data sets wais et kittur et ipeirotis et pose to the learning algorithms how do learning and labeling processes interplay how many labels are we willing to take per example and how much load can a labeler handle rutgers university carnegie mellon university avrim supported in part by nsf grants and this work was done in part while the author was visiting the simons institute for the theory of computing carnegie mellon university nhaghtal supported in part by nsf grants and and a microsoft research fellowship this work was done in part while the author was visiting the simons institute for the theory of computing blavatnik school of computer science university mansour this work was done while the author was at microsoft research herzliya supported in part by a grant from the science foundation isf by a grant from united binational science foundation bsf and by the israeli centers of research excellence program center no in recent years there have been many exciting works addressing 
various theoretical aspects of these and other questions slivkins and vaughan such as reducing noise in crowdsourced data dekel and shamir task assignment badanidiyuru et et in online or offline settings karger et and the role of incentives ho et in this paper we focus on one such aspect namely how to efficiently learn and generalize from the crowd with minimal cost the standard approach is to view the process of acquiring labeled data through crowdsourcing and the process of learning a classifier in isolation in other words a typical learning process involves collecting data labeled by many labelers via a crowdsourcing platform followed by running a passive learning algorithm to extract a good hypothesis from the labeled data as a result approaches to crowdsourcing focus on getting high quality labels per example and not so much on the task further down in the pipeline naive techniques such as taking majority votes to obtain almost perfect labels have a cost per labeled example that scales with the data size namely log m queries per label where m is the training data size and is the desired failure probability this is undesirable in many scenarios when data size is large furthermore if only a small fraction of the labelers in the crowd are perfect such approaches will inevitably fail an alternative is to feed the noisy labeled data to existing passive learning algorithms however we currently lack computationally efficient pac learning algorithms that are provably robust to high amounts of noise that exists in crowdsourced data hence separating the learning process from the data annotation process results in high labeling costs or suboptimal learning algorithms in light of the above we initiate the study of designing efficient pac learning algorithms in a crowdsourced setting where learning and acquiring labels are done in tandem we consider a natural model of crowdsourcing and ask the fundamental question of whether efficient learning with little overhead in labeling cost is possible in this scenario we focus on the classical pac setting of valiant where there exists a true target classifier f f and the goal is to learn f from a finite training set generated from the underlying distribution we assume that one has access to a large pool of labelers that can provide noisy labels for the training set we seek algorithms that run in polynomial time and produce a hypothesis with small error we are especially interested in settings where there are computationally efficient algorithms for learning f in the consistency model the realizable pac setting additionally we also want our algorithms to make as few label queries as possible ideally requesting a total number of labels that is within a constant factor of the amount of labeled data needed in the realizable pac setting we call this o overhead or cost per labeled example furthermore in a realistic scenario each labeler can only provide labels for a constant number of examples hence we can not ask too many queries to a single labeler we call the number of queries asked to a particular labeler the load of that labeler perhaps surprisingly we show that when a noticeable fraction of the labelers in our pool are perfect all of the above objectives can be achieved simultaneously that is if f can be efficiently pac learned in the realizable pac model then it can be efficiently pac learned in the noisy crowdsourcing model with a constant cost per labeled example in other words the ratio of the number of label requests in the noisy crowdsourcing model 
to the number of labeled examples needed in the traditional pac model with a perfect labeler is a constant and does not increase with the size of the data set additionally each labeler is asked to label only a constant number of examples o load per labeler our results also answer an open question of dekel and shamir regarding the possibility of efficient noise robust pac learning by performing labeling and learning simultaneously when no perfect labelers exist a related task is to find a set of the labelers which are good but not perfect we show that we can identify the set of all good labelers when at least the majority of labelers are good overview of results we study various versions of the model described above in the most basic setting we assume that a large percentage say of the labelers are perfect they always label according to the target function f the remaining of the labelers could behave arbitrarily and we make no assumptions on them since the perfect labelers are in strong majority a straightforward approach is to label each example with the majority vote over a few randomly chosen labelers to produce the correct label on every instance with high probability however such an approach leads to a query bound of o log m per labeled example where m is the size of the training set and is the acceptable probability of failure in other words the cost per labeled example is o log m and scales with the size of the data set another easy approach is to pick a few labelers at random and ask them to label all the examples here the cost per labeled example is a constant but the approach is infeasible in a crowdsourcing environment since it requires a single or a constant number of labelers to label the entire data set yet another approach is to label each example with the majority vote of o log labelers while the labeled sample set created in this way only has error of it is still unsuitable for being used with pac learning algorithms as they are not robust to even small amounts of noise if the noise is heterogeneous so the computational challenges still persist nevertheless we introduce an algorithm that performs efficient learning with o cost per labeled example and o load per labeler theorem informal let f be a hypothesis class that can be pac learned in polynomial time to error with probability using samples then f can be learned in polynomial time using o samples in a crowdsourced setting with o cost per labeled example provided a fraction of the labelers are perfect furthermore every labeler is asked to label only example notice that the above theorem immediately implies that each example is queried only o times on average as opposed to the data size dependent o log m cost incurred by the naive majority vote style procedures we next extend our result to the setting where the fraction of perfect labelers is significant but might be less than say here we again show that f can be efficiently pac learned using o queries provided we have access to an expert that can correctly label a constant number of examples we call such queries that are made to an expert golden queries when the fraction of perfect labelers is close to say we show that just one golden query is enough to learn more generally when the fraction of the perfect labelers is some we show that o golden queries is sufficient to learn a classifier efficiently we describe our results in terms of but we are particularly interested in regimes where theorem informal let f be a hypothesis class that can be pac learned in polynomial 
time to error with probability using samples then f can be learned in polynomial time using o samples in a crowdsourced setting with o cost per labeled example provided more than an fraction of the labelers are perfect for some constant furthermore every labeler is asked to label only o examples and the algorithm uses at most golden queries the above two theorems highlight the importance of incorporating the structure of the crowd in algorithm design being oblivious to the labelers will result in noise models that are notoriously hard for instance if one were to assume that each example is labeled by a single random labeler drawn from the crowd one would recover the malicious misclassification noise of rivest and sloan getting computationally efficient learning algorithms even for very simple hypothesis classes has been a long standing open problem in this space our results highlight that by incorporating the structure of the crowd one can efficiently learn any hypothesis class with a small overhead finally we study the scenario when none of the labelers are perfect here we assume that the majority of the labelers are good that is they provide labels according to functions that are all to the target function in this scenario generating a hypothesis of low error is as hard as agnostic nonetheless we show that one can detect all of the good labelers using expected o log n queries per labeler where n is the target number of labelers desired in the pool theorem informal assume we have a target set of n labelers that are partitioned into two sets good and bad furthermore assume that there are at least good labelers who always provide labels according to this can happen for instance when all the labelers label according to a single function f that is from f functions that are to a target function f the set of bad labelers always provide labels according to functions that are at least away from the target then there is a polynomial time algorithm that identifies with probability at least all the good labelers and none of the bad labelers using expected o log queries per labeler related work crowdsourcing has received significant attention in the machine learning community as mentioned in the introduction crowdsourcing platforms require one to address several questions that are not present in traditional modes of learning the work of dekel and shamir shows how to use crowdsourcing to reduce the noise in a training set before feeding it to a learning algorithm our results answer an open question in their work by showing that performing data labeling and learning in tandem can lead to significant benefits a large body of work in crowdsourcing has focused on the problem of task assignment here workers arrive in an online fashion and a requester has to choose to assign specific tasks to specific workers additionally workers might have different abilities and might charge differently for the same task the goal from the requester s point of view is to finish multiple tasks within a given budget while maintaining a certain minimum quality ho et et there is also significant work on dynamic procurement where the focus is on assigning prices to the given tasks so as to provide incentive to the crowd to perform as many of them as possible within a given budget badanidiyuru et singla and krause unlike our setting the goal in these works is not to obtain a generalization guarantee or learn a function but rather to complete as many tasks as possible within the budget the work of karger et al also studies the 
problem of task assignment in offline and online settings in the offline setting the authors provide an algorithm based on belief propagation that infers the correct answers for each task by pooling together the answers from each worker they show that their approach performs better than simply taking majority votes unlike our setting their goal is to get an approximately correct set of answers for the given data set and not to generalize from the answers furthermore their model assumes that each labeler makes an error at random independently with a certain probability we on the other hand make no assumptions on the nature of the bad labelers another related model is the recent work of steinhardt et al here the authors look at the problem of extracting top rated items by a group of labelers among whom a constant fraction are consistent with the true ratings of the items the authors use ideas from matrix completion to design an algorithm that can recover the top rated items with an fraction of the noise provided every labeler rates items and one has access to ratings from a trusted expert their model is incomparable to ours since their goal is to recover the top rated items and not to learn a hypothesis that generalizes to a test set our results also shed insights into the notorious problem of pac learning with noise despite decades of research into pac learning noise tolerant polynomial time learning algorithms remain elusive there has been substantial work on pac learning under realistic noise models such as the massart noise or the tsybakov noise models bousquet et however computationally efficient algorithms for such models are known in very restricted cases awasthi et in contrast we show that by using the structure of the crowd one can indeed design polynomial time pac learning algorithms even when the noise is of the type mentioned above more generally interactive models of learning have been studied in the machine learning community cohn et dasgupta balcan et koltchinskii hanneke zhang and chaudhuri yan et we describe some of these works in appendix a model and notations let x be an instance space and y be the set of possible labels a hypothesis is a function f x y that maps an instance x x to its classification y we consider the realizable setting where there is a distribution over x y and a true target function in hypothesis class more formally we consider a distribution d over x y and an unknown hypothesis f f where errd f we denote the marginal of d over x by the error of a hypothesis f with respect to distribution d is defined as errd f pr x f x f x f x in order to achieve our goal of learning f well with respect to distribution d we consider having access to a large pool of labelers some of whom label according to f and some who do not formally labeler i is defined by its corresponding classification function gi x y we say that gi is perfect if errd gi we consider a distribution p that is uniform over all labelers and let errd gi be the fraction of perfect labelers we allow an algorithm to query labelers on instances drawn from our goal is to design learning algorithms that efficiently learn a low error classifier while maintaining a small overhead in the number of labels we compare the computational and statistical aspects of our algorithms to their pac counterparts in the realizable setting in the traditional pac setting with a realizable distribution denotes the number of samples needed for learning that is is the total number of labeled samples drawn from the realizable 
distribution d needed to output a classifier f that has errd f with probability we know from the vc theory anthony and bartlett that for a hypothesis class f with d and no additional furthermore we assume that efficient algorithms assumptions on f o d ln ln for the realizable setting exist that is we consider an oracle of that for a set of labeled instances s returns a function f f that is consistent with the labels in s if one such function exists and outputs none otherwise given an algorithm in the noisy setting we define the average cost per labeled example of the algorithm denoted by to be the ratio of the number of label queries made by the algorithm to the number of labeled examples needed in the traditional realizable pac model the load of an algorithm denoted by is the maximum number of label queries that have to be answered by an individual labeler in other words is the maximum number of labels queried from one labeler when p has an infinitely large support when the number of labelers is fixed such as in section we define the load to simply be the number of queries answered by a single labeler moreover we allow an algorithm to directly query the target hypothesis f on a few o instances drawn from we call these golden queries and denote their total number by given a set of labelers l and an instance x x we define majl x to be the label assigned to x by the majority of labelers in moreover we denote by x the fraction of the labelers in l that agree with the label majl x given a set of classifiers h we denote by maj h the classifier that for each x returns prediction majh x given a distribution p over labelers and a set of labeled examples s we denote by the distribution p conditioned on labelers that agree with labeled samples x y we consider s to be small typically of size o note that we can draw a labeler from by first drawing a labeler according to p and querying it on all the labeled instances in therefore when p has infinitely large support the load of an algorithm is the maximum size of s that p is ever conditioned on the concepts of total number of queries and load may be seen as analogous to work and depth in parallel algorithms where work is the total number of operations performed by an algorithm and depth is the maximum number of operations that one processor has to perform in a system with infinitely many processors a baseline algorithm and a for improvement in this section we briefly describe a simple algorithm and the approach we use to improve over it consider a very simple baseline algorithm for the case of baseline draw a sample of size m from and label each x s by majl x where l p k for k o ln m is a set of randomly drawn labelers return classifier of s that is the baseline algorithm queries enough labelers on each sample such that with probability all the labels are correct then it learns a classifier using this labeled set it is clear that the performance of baseline is far from being desirable first this approach takes log more labels than it requires samples leading to an average cost per labeled example that increases with the size of the sample set moreover when perfect labelers form a small majority of the labelers o the number of labels needed to correctly label an instance increases drastically perhaps even more troubling is that if the perfect labelers are in minority s may be mislabeled and of s may return a classifier that has large error or no classifier at all in this work we improve over baseline in both aspects in section we improve the log average 
cost per labeled example by interleaving the two processes responsible for learning a classifier and querying labels in particular baseline first finds high quality labels labels that are correct with high probability and then learns a classifier that is consistent with those labeled samples however interleaving the process of learning and acquiring high quality labels can make both processes more efficient at a high level for a given classifier h that has a larger than desirable error one may be able to find regions where h performs particularly poorly that is the classifications provided by h may differ from the correct label of the instances in turn by focusing our effort for getting high quality labels on these regions we can output a correctly labeled sample set using less label queries overall these additional correctly labeled instances from regions where h performs poorly can help us improve the error rate of h in return in section we introduce an algorithm that draws on ideas from boosting and a probabilistic filtering approach that we develop in this work to facilitate interactions between learning and querying in section we remove the dependence of label complexity on using o golden queries at a high level instances where only a small majority of labelers agree are difficult to label using queries asked from labelers but these instances are great test cases that help us identify a large fraction of imperfect labelers that is we can first ask a golden query on one such instance to get its correct label and from then on only consider labelers that got this label correctly in other words we first test the labelers on one or very few tests questions if they pass the tests then we ask them real label queries for the remainder of the algorithm if not we never consider them again an interleaving algorithm in this section we improve over the average cost per labeled example of the baseline algorithm by interleaving the process of learning and acquiring high quality labels our algorithm facilitates the interactions between the learning process and the querying process using ideas from classical pac learning and adaptive techniques we develop in this work for ease of presentation we first consider the case where say and introduce an algorithm and techniques that work in this regime in section we show how our algorithm can be modified to work with any value of for convenience we assume in the analysis below that distribution d is over a discrete space this is in fact without loss of generality since using uniform convergence one can instead work with the uniform distribution over an unlabeled sample multiset of size o drawn from here we provide an overview of the techniques and ideas used in this algorithm boosting in general boosting algorithms schapire freund freund and schapire provide a mechanism for producing a classifier of error using learning algorithms that are only capable of producing classifiers with considerably larger error rates typically of error p for small in particular early work of schapire in this space shows how one can combine classifiers of error p to get a classifier of error o for any p theorem schapire for any p and distribution d consider three classifiers classifier such that errd p classifier such that p where dc di for distributions dc and di that denote distribution d conditioned on x x f x and x x f x respectively classifier such that p where is d conditioned on x x x then errd maj as opposed to the main motivation for boosting where the learner only has 
access to a learning algorithm of error p in our setting we can learn a classifier to any desired error rate p as long as we have a sample set of mp correctly labeled instances the larger the error rate p the smaller the total number of label queries needed for producing a correctly labeled set of the appropriate size we use this idea in algorithm in particular we learn classifiers of error o using sample sets of size o that are labeled by majority vote of o log labelers using fewer label queries overall than baseline probabilistic filtering given classifier the second step of the classical boosting algorithm requires distribution d to be reweighed based on the correctness of this step can be done by a filtering process as follows take a large set of labeled samples from d and divide them to two sets depending on whether or not the instances are mislabeled by distribution in which instances mislabeled by make up half of the weight can be simulated by picking each set with probability and taking an instance from that set uniformly at random to implement filtering in our setting however we would need to first get high quality labels for the set of instances used for simulating furthermore this sample set is typically large since at least mp random samples from d are needed to simulate that has half of its weight on the points that mislabels which is a p fraction of the total points in where p o getting high quality m labels for such a large sample set requires o ln label queries which is as large as the total number of labels queried by baseline algorithm f ilter s h let si and n log for x s do for t n do draw a random labeler i p and let yt gi x if t is odd and maj t h x then break end let si si x reaches this step when for all t maj t h x end return si in this work we introduce a probabilistic filtering approach called f ilter that only requires o label queries o cost per labeled example given classifier and an unlabeled sample set s f ilter s returns a set si s such that for any x s that is mislabeled by x si with probability at least moreover any x that is correctly labeled by is most likely not included in si this procedure is described in detail in algorithm here we provide a brief description of its working for any x s f ilter queries one labeler at a time drawn at random until the majority of the labels it has acquired so far agree with x at which point f ilter removes x from consideration on the other hand if the majority of the labels never agree with x f ilter adds x to the output set si consider x s that is correctly labeled by since each additional label agrees with x f x with probability with high probability the majority of the labels on x will agree with f x at some point in which case f ilter stops asking for more queries and removes x as we show in lemma this happens within o queries most of the time on the other hand for x that is mislabeled by h a labeler agrees with x with probability clearly for one set of random labelers snapshot of the labels queried by f the majority label agrees with x with a very small probability as we show in lemma even when considering the progression of all labels queried by f ilter throughout the process with probability the majority label never agrees with x therefore x is added to si with probability another key technique we use in this work is in short this means that as long as we have the correct label of the sampled points and we are in the realizable setting more samples never hurt the algorithm although this seems trivial at first it 
does play an important role in our approach in particular our probabilistic filtering procedure does not necessarily simulate but a distribution d such that x x for all x where and are the densities of and d respectively at a high level sampling m instances from d simulates a process that samples m instances from and then adds in some arbitrary instances this is formally stated below and is proved in appendix lemma given a hypothesis class f consider any two discrete distributions d and d such that for all x x c d x for an absolute constant c and both distributions are labeled according to f there exists a constant such that for any and with probability over a labeled sample set s of size drawn from d of s has error of at most with respect to distribution with these techniques at hand we present algorithm at a high level the algorithm proceeds in three phases one for each classifier used by theorem in phase the algorithm learns such that errd in phase the algorithm first filters a set of size o into the set si and takes an m additional set sc of samples then it queries o log labelers on each instance in si and sc to get their correct labels with high probability next it partitions these instances to two different sets based on whether or not made a mistake on them it then learns on a sample set w that is drawn by weighting these two sets equally as we show in lemma in phase the algorithm learns on a sample set drawn from conditioned on and disagreeing finally the algorithm returns maj case algorithm uses oracle of runs in time poly d ln and with cost per labeled probability returns f f with errd f using o log example golden queries and load note that when log the above cost per labeled sample is o theorem we start our analysis of algorithm by stating that c abel s labels s correctly with probability this is direct application of the hoeffding bound and its proof is omitted lemma for any unlabeled sample set s and s c abel s with probability for all x y s y f x note that as a direct consequence of the above lemma phase of algorithm achieves error of o lemma in algorithm with probability errd algorithm i nterleaving b oosting b y p robabilistic f iltering for input given a distribution a class of hypotheses f parameters and phase let c abel for a set of sample of size from let of phase let si f ilter for a set of samples of size drawn from let sc be a sample set of size drawn from let sall c abel si sc let wi x y sall y x and let wc sall wi draw a sample set w of size from a distribution that equally weights wi and wc let of w phase let c abel for a sample set of size drawn from conditioned on x x let of return maj c abel s for x s do let l p k for a set of k o log labelers drawn from p and s s x maj l x end return next we prove that f ilter removes instances that are correctly labeled by with good probability and retains instances that are mislabeled by with at least a constant probability lemma given any sample set s and classifier h for every x s if h x f x then x f ilter s h with probability if h x f x then x f ilter s h with probability proof for the first claim note that x si only if maj t h x for all t n consider t n time step since each random query agrees with f x h x with probability independently majority of n o log labels are correct with probability at least therefore the probability that the majority label disagrees with h x f x at every time step is at most in the second claim we are interested in the probability that there exists some t n for which maj t h x f x this is the 
same as the probability of return in biased random walks also called the probability of ruin in gambling feller where we are given a random walk that takes a step to the right with probability and takes a step to the left with the remaining probability and we are interested in the probability that this walk ever crosses the origin to the left while taking n or even infinitely the probability that many steps using the probability of return for random see theorem walks maj t f x ever is at most therefore for each x such that h x f x x si with probability at least in the remainder of the proof for ease of exposition we assume that not only errd as per lemma but in fact errd this assumption is not needed for the correctness of the results but it helps simplify the notation and analysis as a direct consequence of lemma and application of the chernoff bound we deduce that with high probability w i w c and si all have size the next lemma whose proof appears in appendix c formalizes this claim lemma with probability exp w i w c and si all have size the next lemma combines the probabilistic filtering and techniques to show that has the desired error o on lemma let dc and di denote distribution d when it is conditioned on x x f x and x x f x respectively and let di dc with probability proof consider distribution d that has equal probability on the distributions induced by wi and wc and let x denote the density of point x in this distribution relying on our technique see lemma it is sufficient to show that for any x x x for ease of presentation we assume that lemma holds with equality errd is exactly with probability let d x x dc x and di x be the density of instance x in distributions d dc and di respectively note that for any x such that x f x we have d x dc x similarly for any x such that x f x we have d x di x let nc x ni x mc x and mi x be the number of occurrences of x in the sets sc si wc and wi respectively for any x there are two cases if x f x then there exist absolute constants and according to lemma such that e nc x d x e mc x mc x d x e c m c m dc x x dc x m where the second and sixth transitions are by the sizes of wc and and the third transition is by the fact that if h x f x mc x nc x if x f x then there exist absolute constants and according to lemma such that e ni x e mi x mi x d x d x e x di x c d x i where the second and sixth transitions are by the sizes of wi and the third transition is by the fact that if h x f x mi x ni x and the fourth transition holds by part of lemma using the guarantees of lemma with probability the next claim shows that the probabilistic filtering step queries a few labels only at a high level this is achieved by showing that any instance x for which x f x contributes only o queries with high probability on the other hand instances that mislabeled may each get log queries but because there are only few such points the total number of queries these instances require is a lower order term lemma let s be a sample set drawn from distribution d and let h be such that errd h with probability exp f ilter s h makes o label queries proof using chernoff bound with probability exp the total number of points in s where h disagrees with f is o the number of queries spent on these points is at most o log o next we show that for each x such that h x f x the number of queries taken until a majority of them agree with h x is a constant let us first show that this is the case in expectation let ni be the expected number of labels queried until we have i more correct labels 
than incorrect ones then since with probability at least we receive one more correct label and stop and with probability we get a wrong label in which case we have to get two more correct labels in future moreover since we have to get one more correct label to move from to and then one more solving these we have that therefore the expected total number of queries is at most o next we show that this random variable is also let lx be a random variable that indicates the total number of queries on x before we have one more correct label than incorrect labels note that lx is an unbounded random variable therefore concentration bounds such as hoeffding or chernoff do not work here instead to show that lx is we prove that the bernstein inequality see theorem holds that is as we show in appendix d for any x the bernstein inequality is statisfied by the fact that for any i e lx e lx i i therefore over all instances p in s lx o with probability exp finally we have all of the ingredients needed for proving our main theorem proof of theorem we first discuss the number of label queries algorithm makes the total number of labels queried by phases and is attributed to the labels queried by c abel and c abel which is o m log m by lemma sc o almost surely therefore c abel si sc contributes o log labels moreover as we showed in lemma f ilter queries o labels surely so the total number of labels queried by algorithm is at most o log this leads to cost per labeled example log it remains to show that maj has error on since c abel and c abel return correctly labeled sets errd and where is distribution d conditioned on x x x as we showed in lemma with probability using the boosting technique of schapire described in theorem we conclude that maj has error on the general case of any in this section we extend algorithm to handle any value of that does not necessarily satisfy we show that by using o golden queries it is possible to efficiently learn any function class with a small overhead there are two key challenges that one needs to overcome when o first we can no longer assume that by taking the majority vote over a few random labelers we get the correct label of an instance therefore c abel s may return a highly noisy labeled sample set this is problematic since efficiently learning and using oracle of crucially depends on the correctness of the input labeled set second f ilter s no longer filters the instances correctly based on the classification error of in particular f ilter may retain a constant fraction of instances where is in fact correct and it may throw out instances where was incorrect with high probability therefore the guarantees of lemma fall apart immediately we overcome both of these challenges by using two key ideas outlined below pruning as we alluded to in section instances where only a small majority of labelers are in agreement are great for identifying and pruning away a noticeable fraction of the bad labelers we call these instances good test cases in particular if we ever encounter a good test case x we can ask a golden query y f x and from then on only consider the labelers who got this test correctly p x y note that if we make our golden queries when x at least an fraction of the labelers would be pruned this can be repeated at most o times before the number of good labelers form a strong majority in which case algorithm succeeds the natural question is how would we measure x using few label queries interestingly c abel s can be modified to detect such good test cases by measuring the 
empirical agreement rate on a set l of o log labelers this is shown in procedure p rune and abel as part algorithm that is if x we take majl x to be the label otherwise we test and prune the labelers and then restart the procedure this ensures that whenever we use a sample set that is labeled by p rune and abel we can be certain of the correctness of the labels this is stated in the following lemma and proved in appendix lemma for any unlabeled sample set s with probability either p rune and abel s prunes the set of labelers or s p rune and abel s is such that for all x y s y f x as an immediate result the first phase of algorithm succeeds in computing such that errd moreover every time p rune and abel prunes the set of labelers the total fraction of good labeler among all remaining labelers increase as we show after o prunings the set of good labelers is guaranteed to form a large majority in which case algorithm for the case of can be used this is stated in the next lemma and proved in appendix lemma for any with probability the total number of times that algorithm is restarted as a result of pruning is o robust the filtering step faces a completely different challenge any point that is a good test case can be filtered the wrong way however instances where still a strong majority of the labelers agree are not affected by this problem and will be filtered correctly therefore as a first step we ensure that the total number of good test cases that were not caught before f ilter starts is small for this purpose we start the algorithm by calling c abel on a sample of size o log and if no test points were found in this set then with high probability the total fraction of good test cases in the underlying distribution is at most since the fraction of good test cases is very small one can show that except for an fraction the noisy distribution constructed by the filtering process will for the purposes of boosting satisfy the conditions needed for the technique here we introduce a robust version of the technique to argue that the filtering step will indeed produce of error o lemma robust lemma given a hypothesis class f consider any two discrete distributions d and d such that except for an fraction of the mass under d we have that for all x x c d x for an absolute constant c and both distributions are labeled according to f there exists a constant such that for any and with probability over a labeled sample set s of size drawn from d of s has error of at most with respect to by combining these techniques at every execution of our algorithm we ensure that if a good test case is ever detected we prune a small fraction of the bad labelers and restart the algorithm and if it is never detected our algorithm returns a classifier of error theorem any suppose the fraction of the perfect labelers is and let for small enough constant c algorithm uses oracle of runs in time poly d ln uses a training set of size algorithm b oosting b y p robabilistic f iltering for any input given a distribution and p a class of hypothesis f parameters and phase if run algorithm and quit let for small enough c and draw of o log examples from the distribution p rune and abel phase let p rune and abel for a set of sample of size from let of phase let si f ilter for a set of samples of size drawn from let sc be a sample set of size drawn from let sall p rune and abel si sc let wi x y sall y x and let wc sall wi draw a sample set w of size from a distribution that equally weights wi and wc let of w phase let p rune and abel for 
a sample set of size drawn from d conditioned on x x let of return maj p rune and abel s for x s do let l p k for a set of k o log labelers drawn from p if x then get a golden query y f x restart algorithm with distribution p x and else s s x majl x end end return o size and with probability returns f f with errd f using o golden queries load of per labeler and a total number of queries o log log m log note that when log d the cost per labeled query is o and log proof sketch let b x x be the set of good test cases and let d b be the total density on such points note that if with high probability includes one such point in which case p rune and abel identifies it and prunes the set of labelers therefore we can assume that by lemma it is easy to see that phase and phase of algorithm succeed in producing and such that errd and it remains to show that phase of algorithm also produces such that consider the filtering step of phase first note that for any x b the guarantees of f ilter expressed in lemma still hold let d be the distribution that has equal probability on the distributions induced by wi and wc and is used for simulating similarly as in lemma one can show that for any x b x x since d b we have that b therefore d and satisfy the conditions of the robust lemma lemma where the fraction of bad points is at most hence we can argue that the remainder of the proof follows by using the boosting technique of schapire described in theorem no perfect labelers in this section we consider a scenario where our pool of labelers does not include any perfect labelers unfortunately learning f in this setting reduces to the notoriously difficult agnostic learning problem a related task is to find a set of the labelers which are good but not perfect in this section we show how to identify the set of all good labelers when at least the majority of the labelers are good we consider a setting where the fraction of the perfect labelers is arbitrarily small or we further assume that at least half of the labelers are good while others have considerably worst performance more formally we are given a set of labelers gn and a distribution d with an unknown target classifier f we assume that more than half of these labelers are good that is they have error of on distribution on the other hand the remaining labelers which we call bad have error rates on distribution we are interested in identifying all of the good labelers with high probability by querying the labelers on an unlabeled sample set drawn from this model presents an interesting community structure two good labelers agree on at least fraction of the data while a bad and a good labeler agree on at most of the data note that the rate of agreement between two bad labelers can be arbitrary this is due to the fact that there can be multiple bad labelers with the same classification function in which case they completely agree with each other or two bad labelers who disagree on the classification of every instance this structure serves as the basis of algorithm and its analysis here we provide an overview of its working and analysis algorithm g ood l abeler d etection input given n labelers parameters and let g n be a graph on n vertices with no edges take set q of ln n random pairs of nodes from for i j q do if disagree i j then add edge i j to g end let c be the set of connected components of g each with nodes s for i n c and c c do take one node j c if disagree i j add edge i j to end return the largest connected component of g disagree i j take set s ln 
samples from return gi x x theorem informal suppose that any good labeler i is such that errd gi furthermore assume that errd gj for any j n and let the number of good labelers be at least then algorithm returns the set of all good labeler with probability using an expected load of o ln per labeler we view the labelers as nodes in a graph that has no edges at the start of the algorithm in step the algorithm takes o n random pairs of labelers and estimates their level of disagreement by querying them on an unlabeled sample set of size o ln and measuring their empirical disagreement by an application of chernoff bound we know that with probability for any i j n d isagree i j pr gi x gj x therefore for any pair of good labelers i and j tested by the algorithm d isagree i j and for any pair of labelers i and j that one is good and the other is bad d isagree i j therefore the connected components of such a graph only include labelers from a single community next we show that at step of algorithm with probability there exists at least one connected component of size of good labelers to see this we first prove that for any two good labelers i and j the probability of i j existing is at least let vg be the set of nodes corresponding to good labelers for i j vg we have ln ln n ln pr i j g n n by the properties of random graphs with very high probability there is a component of size in a random graph whose edges exists with probability for janson et therefore with probability there is a component of size over the vertices in vg finally at step the algorithm considers smaller connected components and tests whether they join any of the bigger components by measuring the disagreement of two arbitrary labelers from these at this point all good labelers form one single connected component of size so the algorithm succeeds in identifying all good labelers next we briefly discuss the expected load per labeler in algorithm each labeler participates in o pairs of disagreement tests in expectation each requiring o ln queries so in expectation each labeler labels o ln instances references anthony and bartlett neural network learning theoretical foundations cambridge university press pranjal awasthi maria florina balcan nika haghtalab and ruth urner efficient learning of linear separators under bounded noise in proceedings of the conference on computational learning theory colt pages pranjal awasthi balcan nika haghtalab and hongyang zhang learning and compressed sensing under asymmetric noise in proceedings of the conference on computational learning theory colt pages ashwinkumar badanidiyuru robert kleinberg and yaron singer learning on a budget posted price mechanisms for online procurement in proceedings of conference on economics and computation ec pages acm ashwinkumar badanidiyuru robert kleinberg and aleksandrs slivkins bandits with knapsacks dynamic procurement for crowdsourcing in the workshop on social computing and user generated content with acm ec balcan beygelzimer and langford agnostic active learning in proceedings of conference on machine learning icml pages acm bousquet boucheron and lugosi theory of classification a survey of recent advances esaim probability and statistics cohn atlas and ladner improving generalization with active learning machine learning sanjoy dasgupta coarse sample complexity bounds for active learning in proceedings of the annual conference on neural information processing systems nips ofer dekel and ohad shamir vox populi collecting labels from a crowd in proceedings 
of the conference on computational learning theory colt willliam feller an introduction to probability theory and its applications volume john wiley sons yoav freund boosting a weak learning algorithm by majority in proceedings of the conference on computational learning theory colt volume pages yoav freund and robert e schapire a generalization of learning and an application to boosting in european conference on computational learning theory pages springer hanneke rates of convergence in active learning the annals of statistics ho shahin jabbari and jennifer wortman vaughan adaptive task assignment for crowdsourced classification proceedings of the international conference on machine learning icml panagiotis g ipeirotis foster provost and jing wang quality management on amazon mechanical turk in proceedings of the international conference on knowledge discovery and data mining kdd pages acm svante janson tomasz luczak and andrzej rucinski random graphs volume john wiley sons david r karger sewoong oh and devavrat shah iterative learning for reliable crowdsourcing systems in proceedings of the annual conference on neural information processing systems nips pages david r karger sewoong oh and devavrat shah task allocation for reliable crowdsourcing systems operations research aniket kittur ed h chi and bongwon suh crowdsourcing user studies with mechanical turk in proceedings of the sigchi conference on human factors in computing systems pages acm koltchinskii rademacher complexities and bounding the excess risk in active learning journal of machine learning research ronald l rivest and robert sloan a formal model of hierarchical information and computation robert e schapire the strength of weak learnability machine learning adish singla and andreas krause truthful incentives in crowdsourcing tasks using regret minimization mechanisms in proceedings of the international conference on world wide web pages acm aleksandrs slivkins and jennifer wortman vaughan online decision making in crowdsourcing markets theoretical challenges acm sigecom exchanges jacob steinhardt gregory valiant and moses charikar avoiding imposters and delinquents adversarial crowdsourcing and peer prediction in proceedings of the annual conference on neural information processing systems nips pages long sebastian stein alex rogers and nicholas r jennings efficient crowdsourcing of unknown experts using bounded bandits artificial intelligence valiant a theory of the learnable communications of the acm paul wais shivaram lingamneni duncan cook jason fennell benjamin goldenberg daniel lubarov david marin and hari simons towards building a workforce with mechanical turk presented at the nips workshop on computational social science and the wisdom of crowds pages songbai yan kamalika chaudhuri and tara javidi active learning from imperfect labelers in proceedings of the annual conference on neural information processing systems nips pages chicheng zhang and kamalika chaudhuri active learning from weak and strong labelers in proceedings of the annual conference on neural information processing systems nips pages a additional related work more generally interactive models of learning have been studied in the machine learning community the most popular among them is the area of active learning cohn et dasgupta balcan et koltchinskii hanneke in this model the learning algorithm can adaptively query for the labels of a few examples in the training set and use them to produce an accurate hypothesis the goal is to use as few label 
queries as possible the number of labeled queries used is called the label complexity of the algorithm it is known that certain hypothesis classes can be learned in this model using much fewer labeled queries than predicted by the vc theory in particular in many instances the label complexity scales only logarithmically in as opposed to linearly in however to achieve computational efficiency the algorithms in this model rely on the fact that one can get perfect labels for every example queried this would be hard to achieve in our model since in the worst case it would lead to each labeler answering log many queries in contrast we want to keep the query load of a labeler to a constant and hence the techniques developed for active learning are insufficient for our purposes furthermore in noisy settings most work on efficient active learning algorithms assumes the existence of an empirical risk minimizer erm oracle that can minimize training error even when the instances aren t labeled according to the target classifier however in most cases such an erm oracle is hard to implement and the improvements obtained in the label complexity are less drastic in such noisy scenarios another line of work initiated by zhang and chaudhuri models related notions of weak and strong labelers in the context of active learning the authors study scenarios where the label queries to the strong labeler can be reduced by querying the weak and potentially noisy labelers more often however as discussed above the model does not yield relevant algorithms for our setting as in the worst case one might end up querying for high quality labels leading to a prohibitively large load per labeler in our setting the work of yan et al studies a model of active learning where the labeler abstains from providing a label prediction more often on instances that are closer to the decision boundary the authors then show how to use the abstentions in order to approximate the decision boundary our setting is inherently different since we make no assumptions on the bad labelers b proof of lemma first notice that because d and d are both labeled according to f f for any f f we have x x x x x c d x x x c errd f f x x therefore if f then errd f let we have pr f errs f f pr f errs f errd f s s the claim follows by the fact that o c c proof of lemma let us first consider the expected size of sets si w i and w c using lemma we have o e similarly o e si e w i similarly o e si e w c the claim follows by the chernoff bound d remainder of the proof of lemma we prove that the bernstein inequality holds for the total number of queries made before their majority agrees with f x let lx be the random variable denoting the number of queries the algorithm makes on instance x for which h x f x consider the probability that lx for some that is maj t f x for the first time when t this is at most the probability that maj f x by chernoff bound we have that pr lx pr maj f x exp exp for each i we have i e lx e lx x x pr lx e lx i i x i x ki i where the last inequality is done by integration this satisfies the bernstein condition stated in theorem therefore x lx e lx o exp pr therefore the total number of queries over all points in x s where h x f x is at most o with very high probability e probability lemmas theorem probability of ruin feller consider a player who starts with i dollars against an adversary that has n dollars the player bets one dollar in each gamble which he wins with probability the probability that the player ends up with no money at any 
point in the game is p p theorem bernstein inequality let xn be independent random variables with expectation supposed that for some positive real number l and every k e xi k e xi k then v u n n x ux p e xi pr e xi exp for t xi f omitted proofs from section in this section we prove theorem and present the proofs that were omitted from section theorem restated suppose the fraction of the perfect labelers is and let algorithm uses oracle of runs in time poly d ln uses a training set of size o size and with probability returns f f with errd f using o golden queries load of per labeler and a total number of queries log log m log o note that when log d the cost per labeled query is o and log proof of lemma by chernoff bound with probability for every x s we have that x x where l is the set of labelers p rune and abel s queries on x hence if x is such that x then it will be identified and the set of labelers is pruned otherwise majl x agrees with the good labelers and x gets labeled correctly according to the target function proof of lemma recall that c for some small enough constant c each time p rune and abel s is called by hoeffding bound it is guaranteed that with probability for each x s x x where l is the set of labelers p rune and abel s queries on x hence when we issue a golden query for x such that x and prune away bad labelers we are guaranteed to remove at least an fraction of the labelers furthermore no good labeler is ever removed hence the fraction of good labelers increases from to so in o calls the fraction of the good labelers surpasses and we switch to using algorithm therefore with probability overall the total number of golden queries is o proof of lemma let b be the set of points that do not satisfy the condition that x c d x notice that because d and d are both labeled according to f f for any f f we have x x x x x x x x x c d x x x c errd f f therefore if f then errd f let we have pr f errs f f pr f errs f errd f s s the claim follows by the fact that o proof of theorem c recall that c for a small enough constant c let b x x be the set of good test cases and and let d b be the total density on such points note that if with high probability includes one such point in which case p rune and abel identifies it and prunes the set of labelers therefore we can assume that by lemma it is easy to see that errd we now analyze the filtering step of phase as in section our goal is to argue that consider distribution d that has equal probability on the distributions induced by wi and wc and let x denote the density of point x in this distribution we will show that for any x b we have that d x x since d b we have that b therefore d and satisfy the conditions of the robust lemma lemma where the fraction of bad points is at most hence we now show that for any x b x x the proof is identical to the one in lemma for ease of representation we assume that errd is exactly let d x x dc x and di x be the density of instance x in distributions d dc and di respectively note that for any x such that x f x we have d x dc x similarly for any x such that x f x we have d x di x let nc x ni x mc x and mi x be the number of occurrences of x in the sets sc si wc and wi respectively for any x there are two cases if x f x then there exist absolute constants and according to lemma such that d x mc x e mc x e nc x d x e c m c m dc x x dc x m where the second and sixth transitions are by the sizes of wc and and the third transition is by the fact that if h x f x mc x nc x if x f x then there exist absolute 
constants and according to lemma such that mi x e mi x e ni x d x d x e m m di x x c d x i where the second and sixth transitions are by the sizes of wi and the third transition is by the fact that if h x f x mi x ni x and the fourth transition holds by part of lemma finally we have that where is distribution d conditioned on x x x using the boosting technique of schapire describe in theorem we conclude that maj has error on the label complexity claim follows by the fact that we restart algorithm at most o times take an additional o log high quality labeled set and each run of algorithm uses the same label complexity as in theorem before getting restarted
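To make the first phase of the labeler-identification algorithm analyzed above concrete, the following is a minimal Python sketch of the pairwise empirical disagreement test and of the grouping of labelers into agreement components (the connected components used in the proof). The labeler interface query(labeler, x), the unlabeled-sample source draw_samples, the number of tested pairs, and the acceptance threshold are hypothetical placeholders introduced only for illustration; in the analysis the per-pair sample size is the O(ln(n/delta)) bound stated above and the threshold must sit strictly between the disagreement rate of two good labelers and that of a good/bad pair (the exact constants are not reproduced here).

import random

def empirical_disagreement(labeler_i, labeler_j, samples, query):
    # fraction of unlabeled samples on which the two labelers disagree
    disagreements = sum(1 for x in samples if query(labeler_i, x) != query(labeler_j, x))
    return disagreements / len(samples)

def largest_agreement_component(labelers, draw_samples, query,
                                num_pairs, sample_size, threshold):
    # union-find over labelers; an edge is added whenever the empirical
    # disagreement of a randomly chosen pair falls below the threshold
    parent = {l: l for l in labelers}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for _ in range(num_pairs):
        i, j = random.sample(labelers, 2)
        samples = draw_samples(sample_size)
        if empirical_disagreement(i, j, samples, query) <= threshold:
            union(i, j)

    components = {}
    for l in labelers:
        components.setdefault(find(l), []).append(l)
    # the largest component is the candidate set of good labelers
    return max(components.values(), key=len)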
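Similarly, the prune-and-label step used in the lemmas above can be sketched as follows: the sampled labelers vote on a point, a vote that is far from unanimous marks the point as informative, one golden query is spent on it, and every labeler contradicting the returned label is pruned; otherwise the majority label is trusted. The function names, the binary-label convention, and the split threshold are assumptions made for illustration, not the authors' exact constants.

def prune_and_label(x, labelers, query, golden_query, threshold):
    # one round: vote, then either prune against a golden label or accept the majority
    votes = {l: query(l, x) for l in labelers}
    positive = sum(1 for v in votes.values() if v == 1)
    split = min(positive, len(labelers) - positive) / len(labelers)

    if split > threshold:
        # heavy disagreement: informative point, spend one perfect (golden) label
        truth = golden_query(x)
        labelers = [l for l in labelers if votes[l] == truth]
        return truth, labelers

    majority = 1 if positive >= len(labelers) / 2 else 0
    return majority, labelers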
automated identification of trampoline skills using computer vision extracted pose estimation paul connolly guenole silvestre and chris bleakley sep school of computer science university college dublin belfield dublin ireland abstract a novel method to identify trampoline skills using a single video camera is proposed herein conventional computer vision techniques are used for identification estimation and tracking of the gymnast s body in a video recording of the routine for each frame an open source convolutional neural network is used to estimate the pose of the athlete s body body orientation and joint angle estimates are extracted from these pose estimates the trajectories of these angle estimates over time are compared with those of labelled reference skills a nearest neighbour classifier utilising a mean squared error distance metric is used to identify the skill performed a dataset containing skill examples with distinct skills performed by adult male and female gymnasts was recorded and used for evaluation of the system the system was found to achieve a skill identification accuracy of for the dataset introduction originating in the trampolining became a competitive olympic sport in sydney in competition athletes perform a routine consisting of a series of skills performed over a number of jumps the skills are scored by human judges according to the trampoline code of points fig although more explicit and objective judging criteria have been introduced in recent years the scores awarded can still vary between judges leading to highly contentious final decisions eliminating human error by means of reliable automated judging of trampoline routines is desirable herein we describe a first step towards this goal a novel automated system for identification of trampoline skills using a single video camera identification of skills is necessary prior to judging since different skills are scored in different ways while still a challenging problem identification of trampoline skills from video has been enabled by recent advances in human pose estimation in andriluka et improved accuracy over approaches was achieved with the introduction of convolutional neural network convnet based estimation estimators such as this rely on new convnet algorithms coupled with recent gains in gpu performance in addition the introduction of larger more varied general pose datasets sapp and taskar johnson and everingham leveraging annotation has vastly increased the quantity of training data available to the best of the authors knowledge no previous work has been reported on identification of trampolining skills from video the closest previous work on identification of trampoline skills required the gymnast to wear a motion capture suit containing inertial sensors helten et wearing special suits is cumbersome and is not allowed in competition due to the strict rules regarding gymnast attire fig previous work on automated judging of rhythmic gymnastics from video was reported in et however their method differs from the method used in this work the algorithm proposed herein consists of a number of stages the bounding box of the gymnast is extracted using conventional image processing techniques the pose of the athlete is subsequently determined allowing body orientation and joint angles to be estimated the angle trajectories over time are compared with those obtained for reference skills the skill performed is identified as the nearest neighbour in the reference dataset based on a mean square error metric the 
system was evaluated using a large number of video recordings capturing the movements of male and female gymnasts performing trampoline routines a wide variety of skills lighting conditions and backgrounds were recorded the gymnasts did not wear special clothes or markers the camera was placed to the performance in the same position as a human judge the structure of the paper is as follows in section background information on trampolining is given in section further detail is provided on approaches to analysis of sporting movement and pose estimation using video recordings the proposed algorithm is described in section section discusses the experimental procedure and organisation of the dataset the experimental results and discussion are provided in section conclusions including future work follow in section background a trampoline routine consists of a sequence of high continuous rhythmic rotational jumps performed without hesitation or intermediate straight bounces the routine should show good form execution height and maintenance of height in all jumps so as to demonstrate control of the body during the flying phase a competition routine consists of such jumps referred to in this work as skills for simplicity a straight bounce is taken to be a skill a competitor can perform a variable number of straight bounces before the beginning of a routine so called while an optional straight bounce can be taken after completing a routine to control height before the gymnast is required to stop completely skills involve and landing in one of four positions feet seat front or back rotations about the body s longitudinal and lateral axes are referred to as twist and somersault rotations respectively skills combine these rotations with a body shape tuck pike straddle or straight these and landing positions and shapes are illustrated in figure the score for a performance is calculated as the sum of four metrics degree of difficulty tariff execution horizontal displacement and time of flight degree of difficulty is scored based on the difficulty of the skill performed for example a full somersault is awarded more points than a somersault the tariff assigned is found by a simple based on skill identification examples of tariff scores can be seen in table the execution score is allocated based on how well the skill was judged to be performed the horizontal displacement and the time of flight are measured electronically using force plates on the legs of the trampoline related work one of the problems with the capture of trampoline skills is the large performance space elite performers can reach up to in height tracking such a large volume is prohibitively difficult for many existing motion capture solutions including devices such as the microsoft kinect in helten et inertial sensors were used to measure body point acceleration and orientation the gymnast was required to wear a body suit containing ten inertial measurement units the sensor data streams were transformed into a feature sequence for classification for each skill a motion template was learned the feature sequence of the unknown trampoline motions were compared with a set of skill templates using a variant of dynamic time warping the best accuracy achieved was over skill types a survey of methods for general human motion representation segmentation and recognition can be found in weinland et in et judging of rhythmic gymnastics skills from video was investigated the movement of the gymnast was tracked using optical flow velocity field 
information was extracted across all frames of a skill and projected into a velocity covariance eigenspace similar movements were found to trace out unique but similar trajectories new video recordings were classified based on their distance from reference trajectories the system s specificity was approximately and the sensitivity was approximately for the skills considered a b c d e f g h figure and landing positions a feet b seat c front and d back trampoline shapes e tuck f pike g straddle and h straight human pose estimation is the process of estimating the configuration of the body typically from a single image robust pose estimation has proven to be a powerful starting point for obtaining pose estimates for human bodies an overview of the pose estimation problem and proposed methods can be found in sigal poppe methods have been successful for images in which all the limbs of the subject are visible however they are unsuitable for the view of a trampoline routine where are inherent convnet based systems do not assume a particular explicit body model since they learn the mapping between image and body pose these machine learning based techniques provide greater robustness to variations in clothing and accessories than approaches the mpii benchmark andriluka et has been used to access the accuracy of pose estimators the approach described in pishchulin et achieved an accuracy of whereas the convnet based method proposed in newell et achieved the work described herein differs from previous work in that the system performs skill identification for trampolining using a single monocular video camera the work takes advantage of recently developed high accuracy open source convnet based pose estimators the stacked hourglass network newell et and monocap zhou et methods were selected for estimation and filtering of the pose respectively in the stacked hourglass network pose estimates are provided by a convnet architecture where features are processed across all scales and consolidated to best capture the spatial relationships of the body parts repeated steps of pooling and upsampling in conjunction with intermediate supervision previous methods in monocap pose is estimated via an algorithm over a sequence of images with pose predictions conveniently the joint location uncertainties can be marginalized out during inference proposed algorithm the complete algorithm is illustrated in figure video is recorded and to reduce resolution and remove audio after the body extraction stage identifies and tracks the convex hull of the athlete over all video frames the video is segmented according to the detected bounces the feature extraction stage estimates the pose of the athlete and from this the body orientation and joint angles in each frame based on the extracted feature angles classification is performed to identify the skill in our experiments the accuracy segment bounces record footage label ground truth skill downsample video subtract background track gymnast dim blur background save video frames estimate pose filter pose temporally calculate angles identify skill annotation classification body extraction feature extraction calculate accuracy evaluation figure flow chart illustrating the proposed method of the system was evaluated by comparing the detected skills to manually marked ground truth the algorithm stages are explained in more detail in the following sections body extraction the top of the trampoline is identified based on its hue characteristics and is presented as a best guess on a 
user interface that allows the position to be fine tuned the gymnast is tracked by assuming that they are the largest moving object above the trampoline a background subtractor generates a foreground mask for each frame all static image components over multiple frames are taken to be part of the background the camera is assumed to be static without changing focus during the recording the foreground mask is eroded for one iteration and dilated for ten iterations with a kernel the largest segment of this morphed mask is taken to be the silhouette of the gymnast the method of moments is used to determine the centroid of this silhouette the video is segmented into individual skills based on the position of this centroid a peak detection algorithm identifies the local minima of the vertical position of the centroid these local minima are taken to indicate the start and end frames of a skill a threshold is applied to peaks between the local minima to identify the start and end jumps of the routine the convex hull of the silhouette is used to generate a bounding box for the athlete s image the bottom of the bounding box is compared to the position of the top of the trampoline to detect the contact phase of a bounce examples of the application of this method are shown in figure images of the body are saved for frames in which the athlete is not in contact with the trampoline the maximum size of the bounding box across all frames of the routine is found each image is squarely cropped to this size centred on the centroid of the gymnast based on the extracted foreground mask the background of each image is blurred and darkened this helps to reduce the number of incorrect pose estimates a b c d figure processed images a original frame b background model c foreground mask d body silhouette and convex hull after erosion and dilation i t r l elbow r l shoulder r l hip r l knee r l leg torso twist table feature angles by name and index feature extraction the stacked hourglass network and monocap are used for pose estimation and filtering respectively the pose estimator generates pose predictions for joint locations the pose estimator is then used to filter the pose predictions across the sequence of images from the smoothed pose the joint angles and orientation angles that represent the athlete s body position are calculated these feature angles are denoted as i for i m where m is the total number of feature angles each of the m feature angles is part of a time series t where t is the frame number t t the angles are listed in table and example trajectories can be seen in figure twist around the body s longitudinal axis is estimated from the distance between the pose points labelled as right and left shoulder the shoulder separation in the image is at a maximum when the gymnast s back or front is facing the camera and is approximately zero when sideways to the camera by finding the maximum separation over the whole routine the separation can be normalised to a value between in this way the angle does not depend on the size of the performer right angle deg torso with vertical left elbow shoulder hip knee leg with vertical twist angle torso twist time s figure the motion sequence of a tuck jump with the estimated angles shown beneath classification the m feature angle trajectories are compared to those in a labelled reference set by calculation of the mean squared error mse the observed skill is identified as equivalent to the reference giving the minimum mse the feature angle trajectories of the references 
ri t are aligned through by means of interpolation so as to have the same number of data points t as the observed angle trajectory i t mse t x m x r i t i t t m t i experimental procedure data acquisition the procedure for data collection was submitted to and approved by the ucd office of research ethics videos of routines were recorded at training sessions and competitions of the ucd trampoline club consent was sought from members of the ucd trampoline club prior to recording video for the purposes of the project routines were collected in ucd sports centre under normal sports hall lighting conditions the background was not modified typically consisting of a brick wall or nets the routines were recorded at a resolution of at frames per second fps using a consumer grade camera with a shutter speed of to reduce motion blur the camera was positioned at the typical location and viewing angle of the judging panel all bounces were within the field of view of the camera the video was subsequently downsampled to maintaining a aspect ratio and audio was removed these steps significantly reduced data file size and processing time while maintaining usable resolution the videos were manually annotated with labels by means of a custom built web interface datasets the resulting dataset consists of routines by adult athletes male and female totalling minutes of video this contained distinct skills and skill examples the names and distribution of these skills are summarised in table the accuracy of the identification algorithm was tested using cross validation the skills with fewer than examples were not included in the test leaving n distinct skills in each iteration of the evaluation a subset of examples of each skill were randomly selected from the database each subset was split evenly to give the number of reference examples s r and the number of test examples s t the total size of the reference set was n s r skill examples the test set was of the same size the average accuracy over iterations of the evaluation is reported herein results and discussion the average accuracy of the system was for the distinct skills listed as included in classification in table the confusion matrix for the experiment is shown in figure it was noted that subject identification can sometimes incorrectly focus on people in the background particularly during seat front and back landings when the gymnast becomes obscured by the trampoline bed this causes errors in trampoline contact detection resulting in frames without an obvious subject being presented to the pose estimator the resulting angles are not representative of the skill performed this can also cause errors in jump segmentation due to incorrect centroid extraction jump segmentation failed in cases significant confusion in skill identification occurs between fpf pike jumps shown in figure and fsf straddle jumps shown in figure from a view it is difficult to distinguish these movements another area of confusion is between the tuck and pike shape of the barani skill bri the features which distinguish these shapes are the angles of the hip and knees the tuck shape in this skill is often performed loosely this results in the angle of the hip being similar to that of the pike shape for identification the angle of the knees becomes the deciding feature and may be overwhelmed by noise from other features use of a support vector machine might improve classification accuracy for example the difficulty in estimating the wrist and ankle joints for the pose estimator can lead 
to noise in the angles for the elbows and knees weighting these features as less important might improve overall accuracy tariff occurrences straight bounce tuck jump pike jump straddle jump half twist jump full twist jump seat drop half twist to seat drop seat half twist to seat to feet from seat half twist to feet from seat front drop to feet from front back drop to feet from back half twist to feet from back front somersault tuck front somersault pike barani tuck barani pike barani straight crash dive back somersault tuck back somersault pike back somersault straight back somersault to seat tuck lazy back cody tuck back half barani ball out tuck rudolph rudi full front full back ftf fpf fsf fsst fssp brit brip bris cdi bsst bssp bsss bstt lbk cdyt bha bbot rui ffr fub ftf fpf fsf brit brip cdi bsst bssp bsss bstt ftf fpf fsf brit brip cdi bsst bssp bsss bstt code ground truth skill skill name identified skill figure confusion matrix showing the relative errors for each skill this is the average of iterations of cross validation table skill dataset excluded from classification it is likely that accuracy could be improved by increasing the amount of data current pose estimation algorithms take a single image as input it seems likely that performance could be improved by tracking pose over a video sequence adding a second video camera pointed towards the front of the gymnast would likely improve accuracy by allowing greater discrimination of motion parallel to the axis between the subject and the first camera however there are issues regarding the extra user effort in setting up the second camera and in synchronisation of the two devices modern trampoline judging systems incorporate force plates for detection of the centrality of landing on the trampoline bed fusing such information with the video data could possibly also result in improved accuracy body extraction was performed at fps on a core intel ghz cpu estimation of pose using the stacked hourglass network ran at fps on an ubuntu with an nvidia titan x pascal gpu and a core intel ghz cpu with default parameter settings execution of the monocap algorithm ran at fps on the same machine also with default parameters conclusion a system for identifying trampolining skills using a single monocular video camera was developed the system incorporated algorithms for background subtraction erosion and dilation pose estimation pose filtering and classification the system was found to provide accuracy in identifying the distinct skills present in a dataset contain skill examples in future work we plan to extend the classification algorithms to perform automated execution judging references andriluka et andriluka pishchulin gehler and schiele b human pose estimation new benchmark and state of the art analysis in ieee conference on computer vision and pattern recognition cvpr pages et escalona and olivieri automatic recognition and scoring of olympic rhythmic gymnastic movements human movement science fig fig trampoline code of points accessed helten et helten brock and seidel classification of trampoline jumps using inertial sensors sports engineering johnson and everingham johnson and everingham learning effective human pose estimation from inaccurate annotation in ieee conference on computer vision and pattern recognition cvpr pages newell et newell yang and deng j stacked hourglass networks for human pose estimation corr pishchulin et pishchulin andriluka gehler and schiele b poselet conditioned pictorial structures in ieee conference on 
computer vision and pattern recognition cvpr pages poppe poppe human motion analysis an overview computer vision and image understanding special issue on vision for interaction sapp and taskar sapp and taskar b modec multimodal decomposable models for human pose estimation in ieee conference on computer vision and pattern recognition cvpr pages sigal sigal human pose estimation accessed weinland et weinland ronfard and boyer a survey of methods for action representation segmentation and recognition computer vision and image understanding zhou et zhou zhu leonardos derpanis and daniilidis sparseness meets deepness human pose estimation from monocular video in ieee conference on computer vision and pattern recognition cvpr pages
parsing methods streamlined sep luca breveglieri stefano crespi reghizzi angelo morzenti dipartimento di elettronica informazione e bioingegneria deib politecnico di milano piazza leonardo da vinci milano italy email this paper has the goals of unifying parsing with parsing to yield a single simple and consistent framework and of producing provably correct parsing methods deterministic as well as tabular ones for extended grammars ebnf represented as networks departing from the traditional way of presenting as independent algorithms the deterministic lr the ll and the general tabular earley parsers we unify them in a coherent minimalist framework we present a simple general construction method for ebnf elr parsers where the new category of convergence conflicts is added to the classical and conflicts we prove its correctness and show two implementations by deterministic machines and by machines the latter to be also used for earley parsers then the beatty s theoretical characterization of ll grammars is adapted to derive the extended ell parsing method first by minimizing the elr parser and then by simplifying its state information through using the same notations in the elr case the extended earley parser is obtained since all the parsers operate on compatible representations it is feasible to combine them into mixed mode algorithms categories and subject descriptors parsing of formal languages theory general terms language parsing algorithm syntax analysis syntax analyzer additional key words and phrases extended bnf grammar ebnf grammar deterministic parsing parser parser elr recursive descent parser parser ell tabular earley parser october see formal languages and compilation crespi reghizzi breveglieri morzenti springer london edition planned parsing methods streamlined contents introduction preliminaries derivation for machine nets grammar call sites machine activation and parsing construction of elr parsers base closure and kernel of a elr condition elr versus classical lr definitions parser algorithm simplified parsing for bnf grammars parser implementation using an indexable stack related work on parsing of ebnf grammars deterministic parsing property and pilot compaction merging properties of compact pilots candidate identifiers or pointers unnecessary ell condition discussion stack contraction and predictive parser parser graph predictive parser computing left derivations parser implementation by recursive procedures direct construction of parser graph equations defining the prospect sets equations defining the guide sets tabular parsing string recognition syntax tree construction conclusion appendix parsing methods streamlined introduction in many applications such as compilation program analysis and document or natural language processing a language defined by a formal grammar has to be processed using the classical approach based on syntax analysis or parsing followed by translation efficient deterministic parsing algorithms have been invented in the s and improved in the following decade they are described in compiler related textbooks for instance aho et al grune and jacobs crespi reghizzi and efficient implementations are available and widely used but research in the last decade or so has focused on issues raised by technological advances such as parsers for datadescription languages more efficient general tabular parsing for grammars and probabilistic parsing for natural language processing to mention just some leading research lines this paper has the goals of unifying 
parsing with parsing and to a minor extend also with tabular earley parsing to yield a single simple and consistent framework and of producing provably correct parsing methods deterministic and also tabular ones for extended grammars ebnf represented as networks we address the first goal compiler and language developers and those who are familiar with the technical aspects of parsing invariably feel that there ought to be room for an improvement of the classical parsing methods annoyingly similar yet incompatible notions are used in lr ll and tabular earley parsers moreover the parsers are presented as independent algorithms as indeed they were when first invented without taking advantage of the known grammar inclusion properties ll k lr k to tighten and simplify constructions and proofs this may be a consequence of the excellent quality of the original presentations particularly but not exclusively knuth rosenkrantz and stearns earley by distinguished scientists which made revision and systematization less necessary our first contribution is to conceptual economy and clarity we have reanalyzed the traditional deterministic parsers on the view provided by beatty s beatty rigorous characterization of ll grammars as a special case of the lr this allows us to show how to transform parsers into the parsers by merging and simplifying parser states and by anticipating parser decisions the result is that only one set of technical definitions suffices to present all parser types moving to the second goal the best known and tabular parsers do not accept the extended form of bnf grammars known as ebnf or ecfg or also as regular rightpart rrpg that make use of regular expressions and are widely used in the language reference manuals and are popular among designers of parsers using recursive descent methods we systematically use a graphic representation of ebnf grammars by means of networks also known as syntax charts brevity prevents to discuss in detail the long history of the research on parsing methods for ebnf grammars it suffices to say that the existing deterministic method is and very popular at least since its use in the elegant recursive descent pascal compiler wirth where ebnf grammars are represented by a network of machines a formalism already in use since at least lomet on the other hand there have been numerous interesting but proposals for methods for ebnf grammars which operate under more or less general assumptions and are implemented by various parsing methods streamlined types of parsers more of that in section but a recent survey hemerik concludes rather negatively what has been published about parsing theory is so complex that not many feel tempted to use it it is a striking phenomenon that the ideas behind recursive descent parsing of ecfgs can be grasped and applied immediately whereas most of the literature on parsing of rrpgs is very difficult to access tabular parsing seems feasible but is largely unexplored we have decided to represent extended grammars as transition diagram systems which are of course equivalent to grammars since any regular expression can be easily mapped to its recognizer moreover transition networks which we dub machine nets are often more readable than grammar rules we stress that all our constructions operate on machine nets that represent ebnf grammars and unlike many past proposals do not make restrictive assumptions on the form of regular expressions for ebnf grammars most past methods had met with a difficulty how to formulate a condition that 
ensures deterministic parsing in the presence of recursive invocations and of cycles in the transition graphs here we offer a simple and rigorous formulation that adds to the two classical conditions neither nor conflicts a third one no convergence conflict our parser is presented in two variants which use different devices for identifying the right part or handle typically a substring of unbounded length to be reduced a deterministic pushdown automaton and an implementation to be named machine using unbounded integers as pointers into the stack the device is also used in our last development the tabular parser for ebnf grammars at last since all parser types described operate on uniform assumptions and use compatible notations we suggest the possibility to combine them into algorithms after half a century research on parsing certain facts and properties that had to be formally proved in the early studies have become obvious yet the endemic presence of errors or inaccuracies listed in hemerik in published constructions for ebnf grammars warrants that all new constructions be proved correct in the interest of readability and brevity we first present the enabling properties and the constructions also relying on significant examples then for the properties and constructions that are new we provide correctness proofs in the appendix the paper is organized as follows section sets the terminology and notation for grammars and transition networks section presents the construction of elr parsers section derives ell parsers first by transformation of parsers and then also directly section deals with tabular earley parsers preliminaries the concepts and terminology for grammars and automata are classical see aho et al crespi reghizzi and we only have to introduce some specific notations a bnf or grammar g is specified by the terminal alphabet the set of nonterminal symbols v the set of rules p and the starting symbol or axiom an element of v is called a grammar symbol a rule has the form a where a the left part is a nonterminal and the right part is a possibly empty denoted by string over parsing methods streamlined v two rules such as a and a are called alternative and can be shortened to a an extended bnf ebnf grammar g generalizes the rule form by allowing the right part to be a regular expression over v a is a formula that as operators uses union written concatenation kleene star and parentheses the language defined by a is denoted r for each nonterminal a we assume without any loss of generality that g contains exactly one rule a a derivation between strings is a relation u a v u w v with u w and v possibly empty strings and w r the derivation is leftmost respectively rightmost if u does not contain a nonterminal resp if v does not contain a nonterminal a series of derivations is denoted by a derivation a a v with v is called for a derivation u v the reverse relation is named reduction and denoted by v u the language l g generated by grammar g is the set l g x s x the language l a generated by nonterminal a is the set la g x a x a language is nullable if it contains the empty string a nonterminal that generates a nullable language is also called nullable following a tradition dating at least to lomet we are going to represent a grammar rule a as a graph the graph of a finite automaton to be named ma that recognizes a regular expression the collection m of all such graphs for the set v of nonterminals is named a network of finite machines and is a graphic representation of a grammar see fig this 
has advantages it offers a pictorial representation permits to directly handle ebn f grammars and maps quite nicely on the parser implementation in the simple case when contains just terminal symbols machine ma recognizes the language la g but if contains a nonterminal b machine ma has a edge labeled with b this can be thought of as the invocation of the machine mb associated to rule b and if nonterminals b and a coincide the invocation is recursive it is convenient although not necessary to assume that the machines are deterministic at no loss of generality since a nondeterministic finite state machine can always be made deterministic definition recursive net of finite deterministic machines g be an ebn f grammar with nonterminal set v s a b and grammar rules s a b rs ra rb denote the regular languages over alphabet v respectively defined by the ms ma mb are the names of finite deterministic machines that accept the corresponding regular languages rs ra rb as usual we assume the machines to be reduced in the sense that every state is reachable from the initial state and reaches a final state the set of all machines the machine net is denoted by prevent confusion the names of the states of any two machines are made different by appending the machine name as subscript the set of states of machine ma is qa to avoid confusion we call machines the finite automata of grammar rules and we reserve the term automaton for the pushdown automaton that accepts language l g parsing methods streamlined machine net m ebnf grammar g t e me t a t e a mt e fig ebn f grammar g axiom e and machine network m me mt axiom me of the running example initial respectively final states are tagged by an incoming resp outgoing dangling dart qa the initial state is and the s set of final states is fa qa the state set of the net is the union of all states q ma qa the transition function of every machine is denoted by the same symbol at no risk of confusion as the state sets are disjoint state qa of machine ma the symbol r ma qa or for brevity r qa denotes the regular language of alphabet v accepted by the machine starting from state qa if qa is the initial state language r ra includes every string that labels a path qualified as accepting from the initial to a final state disallowing reentrance into initial states to simplify the parsing algoc rithms we stipulate that for every machine ma no edge qa exists where qa qa and c is a grammar symbol in words no edges may enter the initial state the above normalization ensures that the initial state is not visited again within a computation that stays inside machine ma clearly any machine can be so normalized by adding one state and a few transitions with a negligible overhead such minor adjustment greatly simplifies the reduction moves of parsers for an arbitrary several algorithms such as and described for instance in crespi reghizzi produce a machine recognizing the corresponding regular language in practice the used in the right parts of grammars are so simple that they can be immediately translated by hand into an equivalent machine we neither assume nor forbid that the machines be minimal with respect to the number of states in facts in translation it is not always desirable to use the minimal machine because different semantic actions may be required in two states that would be indistinguishable by pure language theoretical definitions if the grammar is purely bn f then each right part has the form where every alternative is a finite string therefore ra is a finite 
language and any machine for ra has an acyclic graph which can be made into a tree if we accept to have a machine in the general case the graph of machine ma representing rule a is not acyclic in any case there is a mapping between the strings in language ra and the set of accepting paths of machine ma therefore the net m ms ma is essentially a notational variant of a grammar g as witnessed by the common practice to include both ebn f productions and syntax diagrams in language specifications we indifferently denote the language as l g or as l m parsing methods streamlined we need also the terminal language defined by the net starting from a state qa of machine ma possibly other than the initial one n o l ma qa l qa y r qa and y in the formula is a string over terminals and nonterminals accepted by machine ma starting from state qa the derivations originating from produce the terminal strings of language l qa in particular from previous definitions it follows that l ma l la g and l ms l l m l g example running example the ebn f grammar g and machine net m are shown in figure the language generated can be viewed as obtained from the language of strings by allowing character a to replace a substring all machines are deterministic and the initial states are not reentered most features needed to exercise different aspects of parsing are present iteration branching multiple final states and the nullability of a nonterminal to illustrate we list a language defined by its net and component machines along with their aliases r me r t r mt r e l me l l g l m a a a a a a a a a l mt l lt g a a a a to identify machine states an alternative convention quite used for bn f grammars relies on marked grammar rules for instance the states of machine mt have the aliases t a e t e t e a t e where the bullet is a character not in we need to define the set of initial characters for the strings recognized starting from a given state definition set of initials ini qa ini l qa a a l qa the set can be computed by applying the following logical clauses until a fixed point is reached let a be a terminal a b c nonterminals and qa ra states of the same machine the clauses are a a ini qa if edge qa ra b a ini qa if edge qa ra a ini b a ini qa if edge qa ra l is nullable a ini ra to illustrate we have ini ini a parsing methods streamlined derivation for machine nets for machine nets and ebn f grammars the preceding definition of derivation which models a rule such as e t as the infinite set of bn f alternatives e t t t has shortcomings because a derivation step such as e t t t replaces a nonterminal e by a string of possibly unbounded length thus a computation inside a machine is equated to just one derivation step for application to parsing a more analytical definition is needed to split such large step into a series of state transitions we recall that a bn f grammar is rl if the rule form is a a b or a where a and b v every machine can be represented by an equivalent rl grammar that has the machine states as nonterminal symbols grammar each machine ma of the net can be replaced with an equivalent right linear rl grammar to be used to provide a rigorous semantic for the derivations constructed by our parsers it is straightforward to write the rl grammar named equivalent with respect to the regular language r ma the nonterminals of are the states of qa and the axiom is there exists a rule pa x ra if an edge x pa ra is in and the empty rule pa if pa is a final state notice that x can be a nonterminal b of the original 
grammar therefore a rule of may have the form pa b ra which is still rl since the first symbol of the right part as a terminal symbol for grammar with this provision the identity is viewed l r ma clearly holds next for every rl grammar of the net we replace with each symbol b v occurring in a rule such as pa b ra and thus we obtain rules of the form pa ra the resulting bn f grammar is denoted and named the grammar of the net it has terminal alphabet nonterminal set q and axiom the right parts have length zero or two and may contain two nonterminal symbols thus the grammar is not rl obviously and g are equivalent they both generate language l g example grammar of the running example a as said in grammars we choose to name nonterminals by their alias states an instance of rule is using instead of g we obtain derivations the steps of which are elementary state transitions instead of an entire on a machine an example should suffice example derivation for grammar g the classical leftmost derivation e t t at a e a g g g g parsing methods streamlined is expanded into the series of truly atomic derivation steps of the grammar a a a a a a a a a we may also work and consider reductions such as a a as said the grammar is only used in our proofs to assign a precise semantic to the parser steps but has otherwise no use as a readable specification of the language to be parsed clearly an rl grammar has many more rules than the original ebn f grammar and is less readable than a syntax diagram or machine net because it needs to introduce a plethora of nonterminal names to identify the machine states call sites machine activation and b an edge labeled with a nonterminal qa ra is named a call site for machine mb and ra is the corresponding return state parsing can be viewed as a process that at call sites activates a machine which on its own graph performs scanning operations and further calls until it reaches a final state then performs a reduction and returns initially the axiom machine ms is activated by the program that invokes the parser at any step in the derivation the nonterminal suffix of the derived string contains the current state of the active machine followed by the return points of the suspended machines these are ordered from right to left according to their activation sequence example derivation and machine return points looking at derivation we find a machine me is active and its current state is previously machine mt was suspended and will resume in state an earlier activation of me was also suspended and will resume in state upon termination of mb when ma resumes in return state ra the collection of all the first legal tokens that can be scanned is named the set of this activation of mb this intuitive concept is made more precise in the following definition of a candidate by inspecting the next token the parser can avoid invalid machine call actions for uniformity when the input string has been entirely scanned we assume the next token to be the special character string terminator or a or or simply a candidate since we exclusively deal with lookahead length is a pair hqb ai in q the intended meaning is that token a is a legal token for the current activation of machine mb we reformulate the classical knuth notion of closure function for a machine net and we use it to compute the set of legal candidates definition closure functions the initial activation of machine ms is encoded by candidate let c be a set candidates initialized to the closure of c is the function parsing methods 
streamlined defined by applying the following clauses until a fixed point is reached c closure c bi closure c if c c b if hq ai closure c and edge q r in m and b ini l r a thus closure functions compute the set of machines reachable from a given call site through one or more invocations without any intervening state transition for conciseness we group together the candidates that have the same state and we write q ak instead of hq i hq ak i the collection ak is termed set and by definition it can not be empty we list a few values of closure function for the grammar of ex function closure a a parsing we show how to construct deterministic parsers directly for ebn f grammars represented by machine nets as we deviate from the classical knuth s method which operates on pure bn f grammars we call our method elr instead of lr for brevity whenever some passages are identical or immediately obtainable from classical ones we do not spend much time to justify them on the other hand we include correctness proofs of the main constructions because past works on extended bn f parsers have been found not rarely to be flawed hemerik at the end of this section we briefly compare our method with older ones an elr parser is a deterministic pushdown automaton dp da equipped with a set of states named macrostates for short to avoid confusion with net states an consists of a set of for brevity candidate the automaton performs moves of two types a shift action reads the current input character a token and applies the p da function to compute the next then the token and the next are pushed on the stack a reduce action is applied when the grammar symbols from the stack top match a recognizing path on a machine ma and the current token is admitted by the set for the parser to be deterministic in any configuration if a shift is permitted then reduction should be impossible and in any configuration at most one should be possible a reduce action grows the syntax forest pops the matched part of the stack and pushes the nonterminal symbol recognized and the next the p da accepts the input string if the last move reduces to the axiom and the input is exhausted the latter condition can be expressed by saying that the special character is the current token the presence of convergent paths in a machine graph complicates reduction moves because two such paths may require to pop different stack segments reduction handles this difficulty is acknowledged in past research on methods for ebnf grammars but the proposed solutions differ in the technique used and in generality to implement such reduction moves we enrich the stack organization with pointers which enable the parser to trace back a recognizing path while popping the stack such a pointer parsing methods streamlined can be implemented in two ways as a bounded integer offset that identifies a candidate in the previous stack element or as an unbounded integer pointer to a distant stack element in the former case the parser still qualifies as dp da because the stack symbols are taken from a a finite set not so in the latter case where the pointers are unbounded integers this organization is an indexable stack to be called a and will be also used by earley parsers construction of elr parsers given an ebn f grammar represented by a machine net we show how to construct an elr parser if certain conditions are met the method operates in three phases from the net we construct a df a to be called a pilot a pilot state named includes a non empty set of candidates of pairs of states 
and terminal tokens set the pilot is examined to check the conditions for deterministic parsing the check involves an inspection of the components of each and of the transitions outgoing from it three types of failures may occur or conflicts respectively signify that in a parser configuration both a shift and a reduction are possible or multiple reductions a convergence conflict occurs when two different parser computations that share a character lead to the same machine state if the test is passed we construct the deterministic p da the parser through using the pilot df a as its control and adding the operations needed for managing reductions of unbounded length at last it would be a simple exercise to encode the p da in a programming language for a candidate hpa and a terminal or nonterminal symbol x the under x qualified as depending on x being a terminal or is x hpa x hqa if edge pa qa exists the empty set otherwise for a set c of candidates the shift under a symbol x is the union of the shifts of the candidates in algorithm construction of the elr pilot graph the pilot is the df a named p defined by set r of pilot alphabet is the union v of the terminal and nonterminal alphabets initial is the set closure set r and the r v r are computed starting from by the following steps do its traditional lengthy name is recognizer of viable lr prefixes known as go to function named theta to avoid confusion with the traditional name delta of the transition function of the net also parsing methods streamlined r for all the i r symbols x v do i closure i x if i then x add the i i to the graph of if i r then add the i to the set end if end if end for while r base closure and kernel of a for every i the set of candidates is partitioned into two subsets base and closure the base includes the candidates hq i q is not an initial state clearly for the i computed at line of the algorithm the base coincides with the pairs computed by i x the closure contains the remaining candidates of i hq i i q is an initial state the initial has an empty base by definition all other have a base while the closure may be empty the kernel of a is the projection on the first component q q hq i a particular condition that may affect determinism occurs when for two states that belong to the same i the outgoing transitions are defined under the same grammar symbol definition multiple transition property and convergence a pilot i has the multiple transition property m t p if it includes two candidates hq and hr such that for some grammar symbol x both transitions q x and r x are defined such a i and the pilot transition i x are called convergent if q x r x a convergent transition has a convergence conflict if the sets overlap if to illustrate we consider two examples example pilot of the running example the pilot graph p of the ebn f grammar and net of example see figure is shown in figure in each the top and bottom parts contain the base and the closure respectively when either part is missing the side of the shows which part is present has no base and has no closure the tokens are grouped by state and final states are evidenced by encircling none of the edges of the graph is convergent parsing methods streamlined t a a t a a p a a a e a t a t a a t a a a e a a fig elr pilot graph p of the machine net in figure two such as and having the same kernel differing just for some sets are called some simplified parser constructions to be later introduced rely on kernel equivalence to reduce the number of we observe that for any two i 
and i and for any grammar symbol x the i x and i x are either both defined or neither one and are kernelequivalent to illustrate the notion of convergent transition with and without conflict we refer to figure where and have convergent transitions the latter with a conflict parsing methods streamlined elr condition the presence of a final candidate in a tells the parser that a reduction move ought to be considered the set specifies which tokens should occur next to confirm the decision to reduce for a machine net more than one reduction may be applied in the same final state and to choose the correct one the parser stores additional information in the stack as later explained we formalize the conditions ensuring that all parser decisions are deterministic definition elr condition a grammar or machine net meets condition elr if the corresponding pilot satisfies the following conditions condition every i satisfies the next two clauses no conflict a for all candidates hq i q is final and for all edges i i a no conflict for all candidates hq hr i q and r are final condition no transition of the pilot graph has a convergence conflict the pilot in figure meets conditions and and no edge of is convergent elr versus classical lr definitions first we discuss the relation between this definition and the classical one knuth in the case that the grammar is bn f each nonterminal a has finitely many alternatives a since the alternatives do not contain star or union operations a straightforward nondeterministic n f a machine na has an acyclic graph shaped as a tree with as many legs originating from the initial state as there are alternative rules for a clearly the graph of na satisfies the no reentrance hypothesis for the initial state in general na is not minimal no in the classical lr pilot machine can exhibit the multiple transition property and the only requirement for parser determinism comes from clauses and of def in our representation machine ma may differ from na in two ways first we assume that machine ma is deterministic out of convenience not of necessity thus consider a nonterminal c with two alternatives c if e then i if e then i else determinization has the effect of normalizing the alternatives by left factoring the longest common prefix of using the equivalent ebn f grammar c if e then i else i second we allow and actually recommend that the graph of ma be minimal with respect to the number of states in particular the final states of na are merged together by state reduction if they are undistinguishable for the df a ma also a state of ma may correspond to multiple states of na such state reduction may cause some pilot edges to become convergent therefore in addition to checking conditions and def imposes that any convergent edge be free from conflicts since this point is quite subtle we illustrate it by the next example example and convergent transitions consider the equivalent bn f and ebn f grammars and the corresponding machines in fig after determinizing the states s s s s of machine ns are equivalent parsing methods streamlined grammars bnf ebnf s abc abd bc ae s ab c d bc ae a as a as machine nets b a c a a ns b d ms c e c a a c d b b b e net common part a na ma s fig bn f and ebn f grammars and networks ns na and ms ma na and are merged into the state of machine ms turning our attention to the lr and elr conditions we find that the bn f grammar has a conflict caused by the derivations below and s a e a s e a a b c e on the other hand for the ebn f grammar the pilot p ms ma shown 
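To make the clauses of the condition concrete, here is a hedged sketch of how they can be tested mechanically once a pilot is at hand. The pilot fragment, the edges and the underlying net transitions below are hand-written, hypothetical data (the m-state named I9 is deliberately conflicting); they are not taken from the paper's figures. Terminals are lower-case, nonterminals upper-case, and a candidate is written (net state, is_final, lookahead set).

    # Sketch of an ELR(1) check on an already-built pilot: no shift-reduce
    # conflict, no reduce-reduce conflict, no convergence conflict.
    PILOT = {
        "I0": [("0S", False, frozenset({"$"})), ("0A", False, frozenset({"c"}))],
        "I2": [("1A", False, frozenset({"c"})), ("0A", False, frozenset({"c"}))],
        "I5": [("2A", True,  frozenset({"c"}))],
        "I9": [("2A", True,  frozenset({"c"})),          # deliberately conflicting:
               ("3A", True,  frozenset({"b", "c"}))],    # two reductions share 'c'
    }
    EDGES = {("I0", "a"): "I2", ("I2", "a"): "I2", ("I2", "A"): "I5"}
    DELTA = {("0S", "A"): "1S", ("1S", "c"): "2S",        # the underlying net
             ("0A", "a"): "1A", ("1A", "A"): "2A", ("0A", "b"): "3A"}

    def elr_conflicts(pilot, edges, delta):
        issues = []
        for name, cands in pilot.items():
            shifted = {x for (src, x) in edges if src == name and x.islower()}
            finals = [(q, la) for q, fin, la in cands if fin]
            for q, la in finals:                              # shift-reduce clause
                if la & shifted:
                    issues.append(f"{name}: shift-reduce conflict on {sorted(la & shifted)}")
            for i in range(len(finals)):                      # reduce-reduce clause
                for j in range(i + 1, len(finals)):
                    if finals[i][1] & finals[j][1]:
                        issues.append(f"{name}: reduce-reduce conflict")
        for (src, x), dst in edges.items():                   # convergence clause
            arriving = {}
            for q, _, la in pilot[src]:
                r = delta.get((q, x))
                if r is not None:
                    arriving.setdefault(r, []).append(la)
            for r, las in arriving.items():
                if len(las) > 1 and any(a & b for k, a in enumerate(las) for b in las[k + 1:]):
                    issues.append(f"{src} --{x}--> {dst}: convergence conflict at {r}")
        return issues

    print(elr_conflicts(PILOT, EDGES, DELTA))
    # -> ['I9: reduce-reduce conflict']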
in fig has the two convergent edges highlighted as arrows one with a conflict and the c other without arc violates the elr condition because the of and in are not disjoint notice in that for explanatory purposes in the two candidates and ei deriving from a convergent transition with no conflict have been kept separate and are not depicted as only one candidate with a single as it is usually done with candidates that have the same state we observe that in general a in the machines of the net has the effect in the pilot automaton to transform violations into convergence conflicts next we prove the essential property that justifies the practical value of our theoretical development an ebn f grammar is elr if and only if the equivalent rightlinearized grammar defined in sect is lr t heorem let g be an ebn f grammar represented by a machine net m and let be the equivalent grammar then net m meets the elr condition if and only if grammar meets the lr condition the proof is in the appendix parsing methods streamlined a a b a a e b e c convergent s e e e e c conflictual d e b d c e e s e a e a e e fig pilot graph of machine net ms ma in fig the edges are convergent although this proposition may sound intuitively obvious to knowledgeable readers we believe a formal proof is due past proposals to extend lr definitions to the ebn f albeit often with restricted types of regular expressions having omitted formal proofs have been later found to be inaccurate see sect the fact that conflicts are preserved by the two pilots is quite easy to prove the less obvious part of the proof concerns the correspondence between convergence conflicts in the elr pilot and conflicts in the grammar to have a grasp without reading the proof the convergence conflict in fig corresponds to the conflict in the of fig we address a possible criticism to the significance of theorem that starting from an ebn f grammar several equivalent bn f grammars can be obtained by removing the regular expression operations in different ways such grammars may or may not be lr a fact that would seem to make somewhat arbitrary our definition of elr which is based on the highly constrained form we defend the significance and generality of our choice on two grounds first our original grammar specification is not a set of s but a set of machines df a and the choice to transform the df a into a grammar is standard and almost obliged because as already shown by heilbrunner the other standard form would exhibit conflicts in most cases second the same author proves that if a grammar equivalent to g is lr then every grammar equivalent to g provided it is not ambiguous is lr besides he shows that this definition of elr grammar dominates all the parsing methods streamlined preexisting alternative definitions we believe that also the new definitions of later years are dominated by the present one to illustrate the discussion it helps us consider a simple example where the machine net is elr by theorem is lr yet another equivalent grammar obtained by a very natural transformation has conflicts example a phrase s has the structure e s e where a construct e has either the form bn en or bn en e with n the language is defined by the elr net below s e ms e b e f me b f mf b f e on the contrary there is a conflict in the equivalent grammar s e ss e e f e f bef b bb b caused by the indecision whether to reduce to b or shift the grammar postpones any reduction decision as long as possible and avoids conflicts parser algorithm given the pilot df a of an elr grammar 
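For readers who want to experiment with the theorem, the right-linearized grammar can be generated mechanically. The sketch below reflects my reading of the construction referred to above (every net state becomes a nonterminal; q --a--> r gives q -> a r; q --B--> r gives q -> 0_B r, with 0_B the initial state of machine M_B; a final state gives an empty rule), applied to a toy net of my own rather than the paper's grammar.

    # Sketch: machine net -> right-linearized BNF grammar (toy example).
    NET = {
        "S": ("0S", {"2S"}, {("0S", "A"): "1S", ("1S", "c"): "2S"}),
        "A": ("0A", {"2A", "3A"},
              {("0A", "a"): "1A", ("1A", "A"): "2A", ("0A", "b"): "3A"}),
    }

    def right_linearize(net):
        rules = []
        for nt, (init, finals, delta) in net.items():
            for (p, x), r in delta.items():
                head = net[x][0] if x in net else x   # a nonterminal is replaced by its initial state
                rules.append((p, (head, r)))
            for f in finals:
                rules.append((f, ()))                 # empty rule for a final state
        return rules

    for lhs, rhs in right_linearize(NET):
        print(lhs, "->", " ".join(rhs) if rhs else "eps")

Feeding the printed BNF rules to an ordinary LR(1) constructor is one way to cross-check, on small cases, a pilot built directly on the net.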
or machine net we explain how to obtain a deterministic pushdown automaton dp da that recognizes and parses the sentences at the cost of some repetition we recall the three sorts of abstract machines involved the net m of df a s ms ma with state set q q states are drawn as circular nodes the pilot df a p with set r states are drawn as rectangular nodes and the dp da a to be next defined as said the dp da stores in the stack the series of entered during the computation enriched with additional information used in the parsing steps moreover the are interleaved with terminal or nonterminal grammar symbols the current the one on top of stack determines the next move either a shift that scans the next token or a reduction of a topmost stack segment also called reduction handle to a nonterminal identified by a final candidate included in the current the absence of conflicts makes the choice between shift and reduction operations deterministic similarly the absence of conflicts allows the parser to uniquely identify the final state of a machine however this leaves open the problem to determine the stack segment to be reduced for that two designs will be presented the first uses a finite pushdown alphabet the second uses unbounded integer pointers and strictly speaking no longer qualifies as a pushdown automaton first we specify the pushdown stack alphabet since for a given net m there are finitely many different candidates the number of is bounded and the number of candidates in any is also bounded by cmax the dp da stack elements parsing methods streamlined are of two types grammar symbols and stack sms an sms denoted by j contains an ordered set of triples of the form state candidate identifier cid named stack candidates specified as hqa cidi where qa q and cid cmax or cid for readability a cid value will be prefixed by a marker the parser makes use of a surjective mapping from the set of sms to the set of denoted by with the property that j i if and only if the set of stack candidates of j deprived of the candidate identifiers equals the set of candidates of i for notational convenience we stipulate that identically subscripted symbols jk and ik are related by jk ik as said in the stack the sms are interleaved with grammar symbols algorithm elr parser as dp da a let j j ak j k be the current stack where ai is a grammar symbol and the top element is j k initialization the analysis starts by pushing on the stack the sms s s hq for every candidate hq thus the initial of the pilot shift move let the top sms be j i j and the current token be a assume that by inspecting i the pilot has decided to shift and let i a i the shift move does push token a on stack and get next token push on stack the sms j computed as follows o n a j hqa hqa is at position i in j qa qa o n thus j i notice that the last condition in implies that qa is a state of the base of i reduction move state the stack is j j ak j k and let the corresponding be i i j i with i assume that by inspecting i k the pilot chooses the reduction candidate c hqa i k where qa is a final but state let tk hqa i j k be the only stack candidate such that the current token a from ik a cid chain starts which links tk to a stack candidate hpa i j k and so on until a stack candidate th j h is reached that has cid therefore its state is initial th the reduction move does grow the syntax forest by applying reduction ak a pop the stack symbols in the following order j k ak j k j h execute the nonterminal shift move i h a see below reduction move initial 
state it differs from the preceding case in that the chosen candidate is c the parser move grows the syntax forest by the reduction a and performs the nonterminal shift move corresponding to i k a parsing methods streamlined machine ma pilot i i a pushdown stack j a j a qa qa b qa hqa a hqa i hqa a hqa a plus completion in the parser stack with pointers fig schematization of the shift move qa qa nonterminal shift move it is the same as a shift move except that the shifted symbol a is a nonterminal the only difference is that the parser does not read the next input token at line of shift move acceptance the parser accepts and halts when the stack is the move is the nonterminal shift defined by s and the current token is for shift moves we note that the computed by alg may contain multiple stack candidates that have the same state this happens whenever edge i i a is convergent it may help to look at the situation of a shift move eq and schematized in figure example parsing trace the step by step execution of the parser on input string a produces the trace shown in figure for clarity there are two parallel tracks the input string progressively replaced by the nonterminal symbols shifted on the stack and the stack of stack the stack has one more entry than the scanned prefix the suffix yet to be scanned is to the right of the stack inside a stack element j k each is identified by its ordinal position starting from for the first a value in a cid field of element j k encodes a pointer to the of j k to ease reading the parser trace simulation the number appears framed in each stack element is denoted etc the final candidates are encircled etc and to avoid clogging sets are not shown as they are only needed for convergent transitions which do not occur here but the can always be found by inspecting the pilot graph fig highlights the shift moves as dashed forward arrows that link two topmost stack candidates for instance the first from the figure top terminal shift on is e the first nonterminal shift is the third one on e after the null reduction similarly for all the other shifts fig highlights the reduction handles by means of solid backward arrows that link the candidates involved for instance see reduction e t the stack configuration above shows the chain of three pointers and these three stack elements form the handle and are popped and finally the initial pointer this stack element is the reduction origin and is not popped the initial pointer marks the initial candidate which is obtained by means of a closure operation applied to candidate in the same stack element see the dotted arrow that links them from which the subsequent shift on t starts see the dashed arrow in the stack configuration below the solid on candidate parsing methods streamlined string to be parsed with and stack contents stack base effect after a initialisation of the stack shift on a reduction a t and shift on t reduction t t e and shift on e e e reduction e t and shift on t t shift on a a t a t a t reduction e and shift on e e a shift on e a shift on e shift on t reduction e t and shift on t reduction t e and accept without shifting on e fig parsing steps for string a grammar and elr pilot in figures and the name jh of a sms maps onto the corresponding jh ih at the shift of token see the effect on the line below highlights the null reduction e which does not pop anything from the stack and is immediately followed by a shift on we observe that the parser must store on stack the scanned grammar symbols because parsing 
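The way the cid chain bounds a reduction handle can be seen on a tiny, hand-built stack. The sketch below (toy machine M_A for A -> 'a' A | 'b', my own example, with hypothetical stack contents) illustrates only this piece of bookkeeping, not the full parser.

    # Each stack m-state holds stack candidates (state, cid); cid is the index
    # of the linked candidate in the m-state two places below, and None plays
    # the role of the marker used in the text for an initial state.
    stack = [
        [("0A", None)],                  # J0: machine M_A activated here
        "a",
        [("1A", 0), ("0A", None)],       # J1: '1A' linked to candidate 0 of J0;
                                         #     the closure re-opens M_A (cid None)
        "b",
        [("3A", 1)],                     # J2: 'b' shifted from candidate 1 of J1
    ]

    def handle(stack, cand_index):
        """Follow the cid chain from the chosen final candidate back to an
        initial state; return the handle length and the origin element index."""
        pos, cid, length = len(stack) - 1, cand_index, 0
        while stack[pos][cid][1] is not None:
            cid = stack[pos][cid][1]     # jump to the linked candidate ...
            pos -= 2                     # ... in the m-state below the symbol
            length += 1
        return length, pos

    length, origin = handle(stack, 0)    # the pilot chose the final candidate ('3A', 1)
    print(f"pop {length} symbol(s), reduce to A, shift A from element {origin}")
    # -> pop 1 symbol(s), reduce to A, shift A from element 2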
methods streamlined in a reduction move at step they may be necessary for selecting the correct reduction handle and build the subtree to be added to the syntax forest returning to the example the execution order of reductions is e t e t tt t pasting together the reductions we obtain this syntax tree where the order of reductions is displayed e t e t t e a clearly the reduction order matches a rightmost derivation but in reversed order it is worth examining more closely the case of convergent edges returning to figure notice that the stack candidates linked by a cid chain are mapped onto state candidates that have the same machine state here the stack candidate hqa is linked via to hqa the set of such candidates is in general a superset of the set included in the stack candidate due to the possible presence of convergent transitions the two sets and coincide when no convergent transition is taken on the pilot automaton at parsing time an elr grammar with convergent edges is studied next example the net pilot graph and the trace of a parse are shown in figure where for readability the cid values in the stack candidates are visualized as backward pointing arrows stack contains two candidates which just differ in their lookahead sets in the corresponding pilot the two candidates are the targets of a convergent transition highlighted in the pilot graph simplified parsing for bnf grammars for lr grammars that do not use regular expressions in the rules some features of the elr parsing algorithm become superfluous we briefly discuss them to highlight the differences between extended and basic parsers since the graph of every machine is a tree there are no edges entering the same machine state which rules out the presence of convergent edges in the pilot moreover the alternatives of a nonterminal a are recognized in distinct final states a a of machine ma therefore if the candidate chosen by the parser is a the dp da simply pops stack elements and performs the reduction a since pointers to a preceding stack candidate are no longer needed the stack coincide with the pilot ones second for a related reason the interleaved grammar symbols are no longer needed on the stack because the pilot of an lr grammar has the property that all the edges entering an carry the same label therefore the reduction handle is uniquely determined by the final candidate of the current parsing methods streamlined machine net a ms a e d ma b e a b pilot graph a b e d d convergent edge a d a d parse traces a b d d a e d d a d d reductions b e a above and a a d s below fig elr net pilot with convergent edges double line and parsing trace of string a b e d parsing methods streamlined the simplifications have some effect on the formal notation for bn f grammars the use of machine nets and their states becomes subjectively less attractive than the classical notation based on marked grammar rules parser implementation using an indexable stack before finishing with parsing we present an alternative implementation of the parser for ebn f grammars algorithm where the memory of the analyzer is an array of elements such that any element can be directly accessed by means of an integer index to be named a the reason for presenting the new implementation is twofold this technique is compatible with the implementation of tabular parsers sect and is potentially faster on the other hand a as is more general than a pushdown stack therefore this parser can not be viewed as a pure dp da as before the elements in the are of two alternating 
types vsms and grammar symbols a vsms denoted by j is a set of triples named candidates the form of which hqa elemidi simply differs from the earlier stack candidates because the third component is a positive integer named element identifier instead of a cid notice also that now the set is not ordered the surjective mapping from to pilot is denoted as before each elemid points back to the element containing the initial state of the current machine so that when a reduction move is performed the length of the string to be reduced and the reduction handle can be obtained directly without inspecting the stack elements below the top one clearly the value of elemid ranges from to the maximum height algorithm elr parser automaton a using a let j j ak j k be the current stack where ai is a grammar symbol and the top element is j k initialization the analysis starts by pushing on the stack the sms s s hq for every candidate hq thus the initial of the pilot shift move let the top vsms be j i j and the current token be a assume that by inspecting i the pilot has decided to shift and let i a i the shift move does push token a on stack and get next token push on stack the sms j more precisely k k and j k j computed as follows o n a j hqa ji hqa ji is in j qa qa o n ki thus j i reduction move state the stack is j j ak j k and let the corresponding be i i j i with i assume that by inspecting i k the pilot chooses the reduction candidate c hqa i k where qa is a final but state let tk hqa hi j k be the only stack candidate such that the current token a parsing methods streamlined the reduction move does grow the syntax forest by applying reduction ak a pop the stack symbols in the following order j k ak j k j h execute the nonterminal shift move i h a see below reduction move initial state it differs from the preceding case in that the chosen candidate is c ki the parser move grows the syntax forest by the reduction a and performs the nonterminal shift move corresponding to i k a nonterminal shift move it is the same as a shift move except that the shifted symbol a is a nonterminal the only difference is that the parser does not read the next input token at line of shift move acceptance the parser accepts and halts when the stack is the move is the nonterminal shift defined by s and the current token is although the algorithm uses a stack it can not be viewed as a p da because the stack alphabet is unbounded since a vsms contains integer values example parsing trace figure to be compared with figure shows the same step by step execution of the parser as in example on input string a the graphical conventions are the same in every stack element of type vsms the second field of each candidate is the elemid index which points back to some inner position of the stack elemid is equal to the current stack position for all the candidates in the closure part of a stack element and points to some previous position for the candidates of the base as in figure the reduction handles are highlighted by means of solid backward pointers and by a dotted arrow to locate the candidate to be shifted soon after reducing notice that now the arrows span a longer distance than in figure as the elemid goes directly to the origin of the reduction handle the forward shift arrows are the same as those in fig and are here not shown the results of the analysis the execution order of the reductions and the obtained syntax tree are identical to those of example we illustrate the use of the on the previous example of a net featuring a 
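The gain over the cid chain is that the origin of a reduction is reached in one step. The following hand-built, hypothetical stack (same toy machine as in the earlier sketch, A -> 'a' A | 'b') illustrates only the elemId bookkeeping.

    # In the vector-stack variant each candidate carries the absolute index of
    # the stack element where its machine was activated, so a reduction can
    # jump to the origin directly instead of walking a chain of links.
    stack = [
        [("0A", 0)],                     # element 0: M_A activated here
        "a",
        [("1A", 0), ("0A", 2)],          # element 2: '1A' was started at element 0,
                                         #            the closure re-opens M_A here
        "b",
        [("3A", 2)],                     # element 4: final state, machine begun at element 2
    ]

    def reduction_origin(stack):
        _, elem_id = stack[-1][0]                 # the chosen final candidate
        symbols = [x for x in stack[elem_id + 1:] if isinstance(x, str)]
        return elem_id, symbols

    origin, symbols = reduction_origin(stack)
    print(f"reduce {' '.join(symbols)} -> A, then shift A from element {origin}")
    # -> reduce b -> A, then shift A from element 2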
convergent edge net in figure the parsing traces are shown in figure where the integer pointers are represented by backward pointing solid arrows related work on parsing of ebnf grammars over the years many contributions have been published to extend knuth s lr k method to ebn f grammars the very number of papers each one purporting to improve over previous attempts testifies that no optimal solution has been found most papers usually start with critical reviews of related proposals from which we can grasp the difficulties and motivations then perceived the following discussion particularly draws from the later papers morimoto and sassa kannapinn hemerik the first dichotomy concerns which format of ebn f specification is taken as input either a grammar with regular expressions in the right parts or a grammar with finiteautomata f a as right parts since it is nowadays perfectly clear that and f a are interchangeable notations for regular languages the distinction is no longer relevant yet some authors insisted that a language designer should be allowed to specify syntax constructs by arbitrary s even ambiguous ones which allegedly permit a more flexible parsing methods streamlined stack base string to be parsed with and stack contents with indices effect after a initialisation of the stack shift on a t reduction e t and shift on t a reduction a t and shift on t e reduction t t e and shift on e shift on a t a t t reduction e and shift on e e a shift on e a a shift on e e shift on t reduction e t and shift on t reduction t e and accept without shifting on e fig tabulation of parsing steps of the parser using a for string a generated by the grammar in figure having the elr pilot of figure parsing methods streamlined pilot graph a b e d d a d convergent edge a d parse traces a b d d a e d d a d d reductions b e a above and a a d s below fig parsing steps of the parser using a for string a b e d recognized by the net in figure with the pilot here reproduced for convenience mapping from syntax to semantics in their view which we do not share transforming the original s to df a s is not entirely satisfactory others have imposed restrictions on the s for instance limiting the depth of kleene star nesting or forbidding common subexpressions and so on although the original motivation to simplify parser construction has since vanished it is fair to say that the s used in language reference manuals are typically very simple for the reason of avoiding parsing methods streamlined obscurity others prefer to specify right parts by using the very readable graphical notations of syntax diagrams which are a pictorial variant of fa diagrams whether the f a s are deterministic or does not really make a difference either in terms of grammar readability or ease of parser generation even when the source specification includes n f a s it is as simple to transform them into df a s by the standard construction as it is to leave the responsibility of removing to the algorithm that constructs the lr automaton pilot on the other hand we have found that an inexpensive normalization of f a s disallowing reentrance into initial states def pays off in terms of parser construction simplification assuming that the grammar is specified by a net of f a s two approaches for building a parser have been followed a transform the grammar into a bn f grammar and apply knuth s lr construction or b directly construct an elr parser from the given machine net it is generally agreed that approach b is better than approach a because the 
transformation adds inefficiency and makes it harder to determine the semantic structure due to the additional structure added by the transformation morimoto and sassa but since approach a leverages on existing parser generators such as bison it is quite common for language reference manuals featuring syntax chart notations to include also an equivalent bn f lr or even lalr grammar in celentano a systematic transformation from ebn f to bn f is used to obtain for an ebn f grammar an elr parser that simulates the classical knuth s parser for the bn f grammar the technical difficulty of approach b which all authors had to deal with is how to identify the left end of a reduction handle since its length is variable and possibly unbounded a list of different solutions can be found in the already cited surveys in particular many algorithms including ours use a special shift move sometimes called stackshift to record into the stack the left end of the handle when a new computation on a net machine is started but whenever such algorithms permit an initial state to be reentered a conflict between and normal shift is unavoidable and various devices have been invented to arbitrate the conflict some add states to control how the parser should dig into the stack chapman lalonde while others sassa and nakata use counters for the same purpose not to mention other proposed devices unfortunately it was shown in kannapinn that several proposals do not precisely characterize the grammars they apply to and in some cases may fall into unexpected errors motivated by the mentioned flaws of previous attempts the paper by lee and kim aims at characterizing the lr k property for ecf g grammars defined by a network of fa s although their definition is intended to ensure that such grammars can be parsed from left to right with a of k symbols the authors admit that the subject of efficient techniques for locating the left end of a handle is beyond the scope of this paper after such a long history of interesting but proposals our finding of a rather simple formulation of the elr condition leading to a naturally corresponding parser was rather unexpected our definition simply adds the treatment of convergent edges to knuth s definition the technical difficulties were well understood since long and we combined existing ideas into a simple and provably correct solution of course more experimental work would be needed to evaluate the performance of our algorithms parsing methods streamlined on grammars deterministic parsing a simpler and very flexible parsing method traditionally called ell applies if an elr grammar satisfies further conditions although less general than the elr this method has several assets primarily the ability to anticipate parsing decisions thus offering better support for translation and to be implemented by a neat modular structure made of recursive procedures that mirror the graphs of network machines in the next sections our presentation rigorously derives step by step the properties of topdown deterministic parsers as we add one by one some simple restrictions to the elr condition first we consider the single transition property and how it simplifies the shiftreduce parser the number of is reduced to the number of net states convergent edges are no longer possible and the chain of stack pointers can be disposed second we add the requirement that the grammar is not and we obtain the traditional predictive parser which constructs the syntax tree in at last the direct construction of ell parsers sums 
up historical note in contrast to the twisted story of elr methods early efforts to develop parsing algorithms for ebn f grammars have met with remarkable success and we do not need to critically discuss them but just to cite the main references and to explain why our work adds value to them deterministic parsers operating topdown were among the first to be constructed by compilation pioneers and their theory for bn f grammars was shortly after developed by rosenkrantz and stearns knuth a sound method to extend such parsers to ebn f grammars was popularized by wirth compiler systematized in the book lewi et al and included in widely known compiler textbooks aho et al however in such books deterministic parsing is presented before methods and independently of them presumably because it is easier to understand on the contrary this section shows that parsing for ebn f grammars is a corollary of the elr parser construction that we have just presented of course for pure bn f grammars the relationship between lr k and ll k grammar and language families has been carefully investigated in the past see in particular beatty building on the concept of multiple transitions which we introduced for elr analysis we extend beatty s characterization to the ebn f case and we derive in a minimalist and provably correct way the ell parsing algorithms as a of our unified approach we mention in the conclusion the use of heterogeneous parsers for different language parts property and pilot compaction given an elr net m and its elr pilot p recall that a has the multipletransition property m t p if two identically labeled state transitions originate from two candidates present in the for brevity we also say that such violates the property st p the next example illustrates several cases of violation example violations of st p extended left to right leftmost with length of equal to one parsing methods streamlined a machine net a n a ms b n mn n pilot graph p a n a b n b b a n b b b b fig elr net of ex with multiple candidates in the base of three cases are examined first grammar s n n a n b generating the deterministic k language an bm n m is represented by the net in fig top the presence of two candidates in and also in other reveals that the parser has to carry on two simultaneous attempts at parsing until a reduction takes place which is unique since the pilot satisfies the elr condition there are neither nor conflicts nor convergence conflicts second the net in figure illustrates the case of multiple candidates in the base of a that is entered by a convergent edge third grammar s b a s c a has a pilot that violates st p yet it contains only one candidate in each base on the other hand if every base contains one candidate the parser configuration space can be reduced as well as the range of parsing choices furthermore st p entails that there are no convergent edges in the pilot next we show that having the same kernel qualified for brevity as kernelidentical can be safely coalesced to obtain a smaller pilot that is equivalent to the original one and bears closer resemblance to the machine net we hasten to say that such transformation does not work in general for an elr pilot but it will be proved to be correct under the st p hypothesis parsing methods streamlined merging the merging operation coalesces two kernelidentical and suitably adjusts the pilot graph then possibly merges more it is defined as follows algorithm m erge replace by a new denoted by where each lookahead set is the union of the corresponding 
ones in the merged hp hp i and hp i and the becomes the target for all the edges that entered or x x x i i or i for each pair of edges from and labeled x the target which clearly are are merged if x x then call m erge x x clearly the merge operation terminates with a graph with fewer nodes the set of all is an equivalence class by applying the merge algorithm to the members of every equivalence class we construct a new graph to be called the compact pilot and denoted by example compact pilot we reproduce in figure the machine net the original elr pilot and in the bottom part the compact pilot where for convenience the have been renumbered notice that the sets have expanded in the in the first row is the union of the corresponding in the merged and we are going to prove that this loss of precision is not harmful for parser determinism thanks to the stronger constraints imposed by st p we anticipate from section that the compact pilot can be directly constructed from the machine net saving some work to compute the larger elr pilot and then to merge nodes properties of compact pilots we are going to show that we can safely use the compact pilot as parser controller p roperty let p and c be respectively the elr pilot and the compact pilot of a net m satisfying st p the elr parsers controlled by p and by c are equivalent they recognize language l m and construct the same syntax tree for every x l m p roof we show that for every the compact pilot has no elr conflicts since the merge operation does not change the kernel it is obvious that every c for the compact pilot the number of is the same as for the lr and lalr pilots which are historical simpler variants of lr parsers not considered here see for instance crespi reghizzi however neither lr nor lalr pilots have to comply with the st p condition a weaker hypothesis would suffice that every has at most one candidate in its base but for simplicity we have preferred to assume also that convergent edges are not present parsing methods streamlined machine net a t me mt t e pilot graph p a t a t a a a e a t a t a a a a a a e a a t a compact pilot graph a a a a t a t t e a a a a fig from top to bottom machine net m elr pilot graph p and compact pilot c the equivalence classes of are and the of c are named to evidence their correspondence with the states of net parsing methods streamlined satisfies st p therefore conflicts can be excluded since they involve two candidates in a base a condition which is ruled out by st p next suppose by contradiction that a conflict between a final a occurs in but neither in nor in since hqa ai is state qa and a shift of qb qb in the base of it must also be in the bases of or and an a labeled edge originates from or therefore the conflict was already there in one or both next suppose by contradiction that has a new initial conflict between an outgoing a labeled edge and an candidate ai by definition of merge ai is already in the set of say moreover for any two and for any symbol x v either both x and x are defined or neither one and the are therefore an a labeled edge originates from which thus has a conflict a contradiction last suppose by contradiction that contains a new initial conflict between and for some terminal character a that is in both sets clearly in one of the merged say there are in the closure the candidates i with a and i with a and in there are in the closure the candidates i with a and i with a to show the contradiction we recall how are computed let the bases be respectively hqc i for and hqc i for 
then any character in say comes in two possible ways it is present in or it is a character that follows some state that is in the closure of focusing on character a the second possibility is excluded because a whereas the elements brought by case are necessarily the same for and it remains that the presence of a in and in comes from its presence in and in hence a must be also in a contradiction the parser algorithms differ only in their controllers p and first take a string accepted by the parser controlled by since any created by merge encodes exactly the same cases for reduction and for shift as the original merged the parsers will perform exactly the same moves for both pilots moreover the chains of candidate identifiers are clearly identical since the candidate offset are not affected by merge therefore at any time the parser stacks store the same elements up to the merge relation and the compact parser recognizes the same strings and constructs the same tree second suppose by contradiction that an illegal string is recognized using the compact parser for this string consider the first parsing time where p stops in error in whereas c is able to move from if the move is a terminal shift the same shift is necessarily present in if it is a reduction since the candidate identifier chains are identical the same string is reduced to the same nonterminal therefore also the following nonterminal shift operated by c is legal also for p which is a contradiction we have thus established that the parser controlled by the compact pilot is equivalent to the original one candidate identifiers or pointers unnecessary thanks to the st p property the parser can be simplified to remove the need for cid s or stack pointers we recall that a cid was needed to find the reach of a reduction move into the stack elements parsing methods streamlined were popped until the cid chain reached an initial state the sentinel under st p hypothesis that test is now replaced by a simpler device to be later incorporated in the final ell parser alg with reference to alg only shift and reduction moves are modified first let us focus on the situation when the old parser with j on top of stack and hqa performs the shift of x terminal or from state qa which is necessarily the shift would require to compute and record a cid into the sms j to be pushed on stack in the same situation the pointerless parser cancels from the element j all candidates other than qa since they correspond to discarded parsing alternatives notice that the canceled candidates are necessarily in hence they contain only initial states their elimination from the stack allows the parser to uniquely identify the reduction to be made when a final state of machine ma is entered by a simple rule keep popping the stack until the first occurrence of initial state is found second consider a shift from an initial state which is necessarily in in this case the pointerless parser leaves j unchanged and pushes j x on the stack the other candidates present in j can not be canceled because they may be the origin of future nonterminal shifts since cid s are not used by this parser a stack element is identical to an of the compact pilot thus a shift move first updates the top of stack element then it pushes the input token and the next we specify only the moves that differ from alg algorithm pointerless parser ap l let the pilot be compacted are denoted ki and stack symbols hi the set of candidates of hi is weakly included in ki shift move let the current character be a h 
be the top of stack element containing a candidate hqa let qa qa and k a k be respectively the state transition and transition to be applied the shift move does if qa qa is not initial eliminate all other candidates from h set h equal to push a on stack and get next token push h k on the stack reduction move state let the stack be h h ak h k assume that the pilot chooses the reduction candidate hqa k k where qa is a final but state let h h be the topmost stack element such that k h the move does grow the syntax forest by applying the reduction ak a and pop the stack symbols h k ak h k h h execute the nonterminal shift move k h a reduction move initial state it differs from the preceding case in that for the chosen reduction candidate the state is initial and final reduction a is applied to grow the syntax forest then the parser performs the nonterminal shift move k k a nonterminal shift move it is the same as a shift move except that the shifted symbol is a nonterminal the only difference is that the parser does not read the next input token at line of shift move clearly this reorganization removes the need of cid s or pointers while preserving correctness parsing methods streamlined p roperty if the elr pilot of an ebn f grammar or machine net satisfies the st p condition the pointerless parser ap l of alg is equivalent to the elr parser a of alg p roof there are two parts to the proof both straightforward to check first after parsing the same string the stacks of parsers a and ap l contain the same number k of stack elements respectively j j k and k k k and for every pair of corresponding elements the set of states included in k i is a subset of the set of states included in j i because alg may have discarded a few candidates furthermore we claim the next relation for any pair of stack elements at position i and i in j i candidate hqa points to hpa in j i hqa k i and the only candidate in k i is hpa in j i candidate hqa points to in j i hqa k i and k i equals the projection of j i on hstate then by the specification of reduction moves a performs reduction ak a ap l performs the same reduction example pointerless parser trace such a parser is characterized by having in each stack element only one machine state plus possibly a few initial ones therefore the parser explores only one possible reduction at a time and candidate pointers are not needed given the input string a figure shows the execution trace of a pointerless parser for the same input as in figure parser using cid s the graphical conventions are unchanged the of the compact pilot in each cell is framed is denoted etc the final candidates are encircled etc and the are omitted to avoid clogging so a candidate appears as a pure machine state of course here there are no pointers instead the initial candidates canceled by alg from a mstate are striked out we observe that upon starting a reduction the initial state of the active machine might in principle show up in a few stack elements to be popped for instance the first from the figure top reduction e t of machine mt pops three stack elements namely and and the one popped last contains a striked out candidate that would be initial for machine mt but the real initial state for the reduction remains instead unstriked below in the stack in element which in fact is not popped and thus is the origin of the shift on t soon after executed we also point out that alg does not cancel any initial candidates from a stack element if a shift move is executed from one such candidate see the shift 
move case of alg the motivation for not canceling is twofold first these candidates will not cause any early stop of the series of pop moves in the reductions that may come later or said differently they will not break any reduction handle for instance the first from the figure top shift on of machine mt keeps both initial candidates and in the parsing methods streamlined string to be parsed with and stack contents stack base effect after a initialisation of the stack a shift on reduction e e shifts on e and reduction e t and shift on t t shift on a t a a t a a e shift on a t reduction a t and shift on t e reduction t t e and shifts on e and t reduction e t and shift on t e reduction t e and accept without shitting on e fig steps of the pointerless parsing algorithm apl the candidates are canceled by the shift moves as explained in algorithm parsing methods streamlined stack as the shift originates from the initial candidate this candidate will instead be canceled it will show striked when a shift on e is executed soon after reduction t t e as the shift originates from the candidate second some of such initial candidates may be needed for a subsequent nonterminal shift move to sum up we have shown that condition st p permits to construct a simpler parser which has a reduced stack alphabet and does not need pointers to manage reductions the same parser can be further simplified if we make another hypothesis that the grammar is not ell condition our definition of deterministically parsable grammar or network comes next definition ell a machine net m meets the ell condition if the following three clauses are satisfied there are no derivations the net meets the elr condition the net has the single transition property st p condition can be easily checked by drawing a graph denoted by g that has as nodes the initial states of the net and has an edge if in machine ma there exists an b edge ra or more generally a path a a a b k qk ra where all the nonterminals ak are nullable the net is if and only if graph g contains a circuit actually left recursive derivations cause in all but one situations a violation of clause or of def the only case that would remain undetected is a derivation involving the axiom and caused by of the form s or a a s k three different types of derivations are illustrated in figure the first two cause violations of clause or so that only the third needs to be checked on graph g the graph contains a on node it would not be difficult to formalize the properties illustrated by the examples and to restate clause of definition as follows the net has no derivation of the form involving the axiom and not using this apparently weaker but indeed equivalent condition is stated by beatty in his definition of ll grammars discussion to sum up ell grammars as defined here are elr grammars that do not allow derivations and satisfy the property no multiple candidates in bases and hence no convergent transitions this is the historical acronym ell has been introduced over and over in the past by several authors with slight differences we hope that reusing again the acronym will not be considered an abuse parsing methods streamlined a a b a me x b a e mx a conflict in caused by a derivation that makes use of ab a b b a ms a ma a b a derivation through a nonterminal other than the axiom has the effect of creating two candidates in the base of and thus violates clause a a a a a me e a clauses and are met and the derivation is entirely contained in a e a fig nets violating ell conditions top with 
violates clause middle on a axiom violates clause bottom axiom e is undetected by clauses and a more precise reformulation of the definitions of ll and ell grammars which have accumulated in half a century to be fair to the perhaps most popular definition of ll grammar we contrast it with ours there are marginal contrived examples where parsing methods streamlined a violation of st p caused by the presence of multiple candidates in the base of a does not hinder a parser from working deterministically beatty a typical case is the lr grammar s a a b b a c b c c one of the has two candidates in the base ha c ai hb c bi the choice between the alternatives of s is determined by the following character a or b yet the grammar violates st p it easy to see that a necessary condition for such a situation to occur is that the language derived from some nonterminal of the grammar in question consists only of the empty string a v such that la g however this case can be usually removed without any penalty nor a loss of generality in all grammar applications in fact the grammar above can be simplified and the equivalent grammar s a b a a b b is obtained which complies with st p stack contraction and predictive parser the last development to be next presented transforms the already compacted pilot graph into the control flow graph of a predictive parser the way the latter parser uses the stack differs from the previous models the and pointerless parsers more precisely now a terminal shift move which always executes a push operation is sometimes implemented without a push and sometimes with multiple pushes the former case happens when the shift remains inside the same machine the predictive parser does not push an element upon performing a terminal shift but it updates the top of stack element to record the new state multiple pushes happen when the shift determines one or more transfers from the current machine to others the predictive parser performs a push for each transfer the essential information to be kept on stack is the sequence of machines that have been activated and have not reached a final state where a reduction occurs at each parsing time the current or active machine is the one that is doing the analysis and the current state is kept in the top of stack element previous activations of the same or other machines are in the suspended state for each suspended machine ma a stack entry is needed to store the state qa from where the machine will resume the computation when control is returned after performing the relevant reductions the main advantage of predictive parsing is that the construction of the syntax tree can be anticipated the parser can generate the left derivation of the input parser graph moving from the above considerations first we slightly transform the compact pilot graph c and make it isomorphic to the original machine net the new graph is named parser graph p cf g because it represents the blueprint of parser code the first step of the transformation splits every of c that contains multiple candidates into a few nodes that contain only one candidate second the nodes are coalesced and the original sets are combined into one the third step creates new edges named call edges whenever a machine transfers control to another machine at last each call edge is labeled with a set of characters named guide set which is a summary of the information needed for the parsing decision to transfer control to another machine definition parser graph every node of the p cf g denoted by f is 
identified by a state q of machine net m parsing methods streamlined and denoted without ambiguity by q moreover every node qa where qa fa is final consists of a pair hqa where set named set is the union of the sets of every candidate hqa i existing in the compact pilot graph c hqa i c the edges of f are of two types named shift and call x there exists in f a shift edge qa ra with x terminal or if the same edge is in machine ma there exists in f a call edge qa where is a nonterminal possibly differa ent from a if qa ra is in ma hence necessarily in some k of c there exist candidates hqa and and the k contains candidate hra i the call edge label named guide is recursively defined as follows holds b if and only if any conditions below hold b ini l is nullable and b ini l ra and l ra are both nullable and b in f a call edge and b relations are not recursive and respectively consider that b is generated by called by ma or by ma but starting from state ra or that b follows ma rel is recursive and traverses the net as far as the chain of call sites activated we observe that rel determines an inclusion relation between any two concatenated call edges qa we also write gui qa instead of qa we next extend the definition of guide set to the terminal shift edges and to the dangling a darts that the final nodes for each terminal shift edge p q labeled with a a we set gui p q a for each dart that tags a final node containing a candidate hfa with fa fa we set gui fa then in the p cf g all edges except the nonterminal shifts can be interpreted as conditional instructions enabled if the current character cc belongs to the associated guide set a terminal shift edge labeled with a is enabled by a predicate cc a or cc a for uniformity a call edge labeled with represents a conditional procedure invocation where the enabling predicate is cc a final node dart labeled with is interpreted as a conditional instruction to be executed if cc the remaining p cf g edges are nonterminal shifts which are interpreted as unconditional instructions we show that for the same state such predicates never conflict with one another p roperty for every node q of the p cf g of a grammar satisfying the ell condition the guide sets of any two edges originating from q are disjoint although traditionally the same word has been used for both and parsers the set definitions differ and we prefer to differentiate their names it is the same as a predictive parsing table element in aho et al parsing methods streamlined p roof since every machine is deterministic identically labeled shift edges can not originate from the same node and it remains to consider the cases of and edge pairs a consider a shift edge qa ra with a and two call edges qa and qa first assume by contradiction a if a comes from rel then in the pilot qa a there are two base candidates a condition that is ruled out by st p if it comes from rel in the with base qa k there is a conflict between the shift of a and the reduction which owes its a to a path from ra in machine ma if it comes from rel in k there is the same conflict as before though now reduction owes ahead a to some other machine that previously invoked machine ma finally it may come from rel which is the recursive case and defers the three cases before to some other machine that is immediately invoked by machine mb there is either a violation of st p or a conflict second assume by contradiction a and a since a comes from one of four relations twelve combinations should be examined but since they are similar to the 
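Operationally, the property amounts to a pairwise-disjointness test per node of the parser control-flow graph. The sketch below runs that test on hand-written, hypothetical guide sets (the node named bad is deliberately in violation); it is an illustration of the check only, not of how the guide sets are obtained.

    NODE_GUIDES = {
        # node -> list of (outgoing edge or dart, its guide set)
        "0E": [("call T", {"a", "("}), ("dart: final, prospect", {"$", ")"})],
        "1T": [("shift a", {"a"}), ("shift (", {"("})],
        "bad": [("call X", {"a", "b"}), ("shift a", {"a"})],   # deliberately overlapping
    }

    def disjointness_violations(node_guides):
        bad = []
        for node, guides in node_guides.items():
            for i in range(len(guides)):
                for j in range(i + 1, len(guides)):
                    common = guides[i][1] & guides[j][1]
                    if common:
                        bad.append((node, guides[i][0], guides[j][0], sorted(common)))
        return bad

    print(disjointness_violations(NODE_GUIDES))
    # -> [('bad', 'call X', 'shift a', ['a'])]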
previous argumentation we deal only with one namely case clearly qa a violates st p the converse of property also holds which makes the condition of having disjoint guide sets a characteristic property of ell grammars we will see that this condition can be checked easily on the p cf g with no need to build the elr pilot automaton p roperty if the guide sets of a p cf g are disjoint then the net satisfies the ell condition of definition the proof is in the appendix example for the running example the p cf g is represented in figure with the same layout as the machine net for comparability in the p cf g there are new nodes only node in this example which derive from the initial candidates excluding those containing the axiom extracted from the closure part of the of having added such nodes the closure part of the c nodes except for node becomes redundant and has been eliminated from p cf g node contents as said prospect sets are needed only in final states they have the following properties final states that are not initial the prospect set coincides with the corresponding set of the compact pilot this is the case of nodes and a state such as the prospect set is the union of the sets of every candidate that occurs in for instance i takes from and from solid edges represent the shift ones already present in the machine net and pilot graph dashed edges represent the call ones labeled by guide sets and how to compute them is next illustrated guide set of call edge and of is a since from state both characters can be shifted guide set of edge includes the terminals e since is in mt language l is nullable and ini a since from a call edge goes out to with prospect set a parsing methods streamlined compact pilot graph machine network t me t t a a t t e a a a a mt a a e a a parser graph me mt t t a a e a a a fig the parser graph f of the running example from the net and the compact pilot in accordance with prop the terminal labels of all the edges that originate from the same node do not overlap predictive parser it is straightforward to derive the parser from the p cf g its nodes are the pushdown stack elements the top of stack element identifies the state of the active machine while inner stack elements refer to states of suspended machines in the correct order of suspension there are four sorts of moves a scan move associated with a terminal shift edge reads the current character cc as the corresponding machine would do a call move associated with a call edge checks the enabling predicate saves on stack the return state and switches to the invoked machine without consuming cc a return move is triggered when the active machine enters a final state whose prospect set includes cc the active state is set to the return state of the most recently suspended machine a recognizing move terminates parsing algorithm predictive recognizer a stack elements are the states of p cf g f at the beginning the stack contains the initial candidate i hqa i be the top element meaning that the active machine ma is in state qa moves are next specified this way parsing methods streamlined move cc if the shift edge qa ra exists then scan the next character and replace the stack top by hra i the active machine does not change move b if there exists a call edge qa such that cc let qa ra be the corresponding nonterminal shift edge then pop push element hra i and push element i move if qa is a final state and cc is in the prospect set associated with qa in the p cf g f then pop move if ma is the axiom machine qa is a final 
state and cc then accept and halt any other case reject the string and halt from prop it follows that for every parsing configuration at most one move is possible the algorithm is deterministic computing left derivations to construct the syntax tree the algorithm is next extended with an output function thus turning the dp da into a pushdown transducer that computes the left derivation of the input string using the grammar the output actions are specified in table i and are so straightforward that we do not need to prove their correctness moreover we recall that the syntax tree for grammar is essentially an encoding of the syntax tree for the original ebn f grammar g such that each node has at most two child nodes table derivation steps computed by predictive parser current character is cc parser move output derivation step b scan move for transition qa ra qa b ra call move for call edge qa and transition qa ra qa ra return move for state qa fa qa example running example trace of predictive parser for input x a parsing methods streamlined stack x i predicate left derivation a a i i a scan i i a a a i i a a a i i a scan i i a i i a i i scan a i i a i accept a a a a for the original grammar the corresponding derivation is e t e t a parser implementation by recursive procedures predictive parsers are often implemented using recursive procedures each machine is transformed into a syntactic procedure having a cf g matching the corresponding p cf g subgraph so that the current state of a machine is encoded at runtime by the program counter parsing starts in the axiom procedure and successfully terminates when the input has been exhausted unless an error has occurred before the standard runtime mechanism of procedure invocation and return automatically implements the call and return moves an example should suffice to show that the procedure is mechanically obtained from the p cf example recursive descent parser for the p cf g of figure the syntactic procedures are shown in figure the can be optimized in several ways direct construction of parser graph we have presented and justified a series of rigorous steps that lead from an elr pilot to the compact parser and finally to the parser graph however for a human wishing to design a predictive parser it would be tedious to perform all those steps we therefore provide a simpler procedure for checking that an ebn f grammar satisfies the ell condition the procedure operates directly on the grammar parser control flow graph and does not require the construction of the elr pilot it uses a set of recursive equations defining the prospect and guide sets for all the states and edges of the p cf g the equations are interpreted as instructions to compute iteratively the guide sets after which the ell check simply verifies according to property that the guide sets are disjoint equations defining the prospect sets parsing methods streamlined recursive descent parser machine network procedure t state if cc a then cc next else if cc then cc next state if cc a then call e else error end if state if cc then cc next else error end if else error end if state if cc a then return else error end if end procedure t me t a mt e recursive descent parser program ell p arser cc next call e if cc then accept else reject end if end program procedure e optimized while cc a do call t end while if cc then return else error end if end procedure fig main program and syntactic procedures of a recursive descent parser ex and fig function next is the programming interface to the 
lexical analyzer or scanner function error is the messaging interface x if the net includes shift edges of the kind pi q then the prospect set of state q is x i pi a if the net includes nonterminal shift edge qi ri and the corresponding call edge in the pcfg is qi then the prospect set for the initial state of machine ma is ini l ri if n ullable l ri then else a qi notice that the two sets of rules apply in an exclusive way to disjoints sets of nodes because the normalization of the machines disallows into initial states equations defining the guide sets a for each call edge qa associated with a nonterminal shift edge qa ra parsing methods streamlined such that possibly other call edges depart from state the guide set gui qa of the call edge is defined as follows see also conditions ini l if n ullable then ini l ra else gui qa if n ullable n ullable l ra then else s gui for a final state fa fa the guide set of the tagging dart equals the prospect set gui fa a for a terminal shift edge qa ra with a the guide set is simply the shifted terminal a gui qa ra a a computation starts by assigning to the prospect set of initial state of axiom machine the all other sets are initialized to empty then the above rules are repeatedly applied until a fixpoint is reached notice that the rules for computing the prospect sets are consistent with the definition of set given in section furthermore the rules for computing the guide sets are consistent with the definition of parser control flow graph provided in section example running example computing the prospect and guide sets the following table shows the computation of most prospect and guide sets for the p cf g of figure the computation is completed at the third step prospect sets of guide sets of a a a a a a a a a a a a a a example guide sets in a grammar the grammar s n n a n b of example violates the single transition property as shown in figure hence it is not ell this can be verified by computing the guide sets on the p cf have a a the gui a and gui gui a guide sets on the edges departing from states and are not disjoint tabular parsing the lr and ll parsing methods are inadequate for dealing with nondeterministic and ambiguous grammars the seminal work earley introduced a algorithm parsing methods streamlined for recognizing strings of any grammar even of an ambiguous one though it did not explicitly present a parsing method a means for constructing the potentially numerous parsing trees of the accepted string later efficient representations of parse forests have been invented which do not duplicate the common subtrees and thus achieve a complexity for the tree construction algorithm until recently aycock and borsotti earley parsers performed a complete recognition of the input before constructing any syntax tree grune and jacobs concerning the possibility of directly using extended bn f grammars earley himself already gave some hints and later a few parser generators have been implemented but no authoritative work exists to the best of our knowledge another discussion concerns the pros and cons of using since this issue is out of scope here and a few experimental studies see aycock and horspool have indicated that algorithms can be faster at least for programming languages we present the simpler version that does not use this is in line with the classical theoretical presentations of the earley parsers for bn f grammars such as the one in actually our focus in the present work is on programming languages that are nonambiguous formal notations 
unlike natural languages therefore our variant of the earley algorithm is well suited for ebn f grammars though possibly nondeterministic and the related procedure for building parse trees does not deal with multiple trees or forests string recognition our algorithm is straightforward to understand if one comes from the vector stack implementation of the elr parser of section when analyzing a string x xn with x n or x with x n the algorithm uses a vector e n or e respectively of n elements called earley vector every vector element e i contains a set of pairs h px j i that consist of a state px of the machine mx for some nonterminal x and of an integer index j with j i n that points back to the element e j that contains a corresponding pair with the initial state of the same machine mx this index marks the position in the input string from where the currently assumed derivation from nonterminal x may have started before introducing our variant of the earley algorithm for ebn f grammars we preliminarily define the operations completion and terminalshift completion e i with index i n do loop that computes the closure operation for each pair that launches machine mx x for each pair h p j i e i and x q v q p q do add pair h i i to element e i end for nested loops that compute the nonterminal shift operation for each final pair that enables a shift on nonterminal x parsing methods streamlined for each pair h f j i e i and x v such that f fx do for each pair that shifts on nonterminal x x for each pair h p l i e j and q q p q do add pair h q l i to element e i end for end for while some pair has been added notice that in the completion operation the nullable nonterminals are dealt with by a combination of closure and nonterminal shift operations terminalshift e i with index i n loop that computes the terminal shift operation for each preceding pair that shifts on terminal xi x for each pair h p j i e i and q q p q do add pair h q j i to element e i end for the algorithm below for earley syntactic analysis uses completion and t erminalshif algorithm earley syntactic analysis analyze the terminal string x for possible acceptance define the earley vector e n with n e h i initialize the first elem e for i to n do initialize all e n e i end for completion e complete the first elem e i while the vector is not finished and the previous elem is not empty while i n e i do terminalshift e i put into the current elem e i completion e i complete the current elem e i end while example ebn f grammar with nullable nonterminal parsing methods streamlined the earley acceptance condition if the following h f i e n with f fs a string x belongs to language l g if and only of the earley acceptance condition is true figure lists a ebn f grammar and shows the corresponding machine net the string a a b b a a is analyzed its syntax tree and analysis trace are in figures and respectively the edges in the latter figure ought to be ignored as they are related with the syntax tree construction discussed later machine network extended grammar ebnf a a b b b a ms a a b b a b a ma ab a b c bba c mb fig b b a ebn f grammar g and network of example s a a b b b b a a fig syntax tree for the string a a b b a a of example the following lemma correlates the presence of certain pairs in the earley vector elements with the existence of a leftmost derivation for the string prefix analyzed up to that parsing methods streamlined a fig a b b a a tabular parsing trace of string a a b b a a with the machine net in figure point and together 
with the associated corollary provides a proof of the correctness of the algorithm l emma if it holds h qa j i e i which implies inequality j i with qa qa state qa belongs to the machine ma of nonterminal a then it holds h j i e j and the grammar admits a leftmost derivation xi qa if j i or qa if j i the proof is in the appendix c orollary if the earley acceptance condition is satisfied if h fs i e n with fs fs then the ebnf grammar g admits a derivation s x x l s and string x belongs to language l g the following lemma which is the converse of lemma states the completeness of the earley algorithm l emma take an ebnf grammar g and a string x xn of length n that belongs to language l g in the grammar consider any leftmost derivation d of a prefix xi i n of x that is d xi qa w with qa qa and w the two points below apply if it holds w w rb z for some rb qb then it holds j j i and a pb qb such that the machine net has an arc pb rb and grammar admits parsing methods streamlined two leftmost derivations xj pb z and xi qa so that derivation d decomposes as follows d pb rb d xj pb z xj xi qa rb z xj rb z xi qa w a as an arc pb rb in the net maps to a rule pb rb in grammar this point is split into two steps the second being the crucial one a if it holds w then it holds a s nonterminal a is the axiom qa qs and h qa i e i b if it also holds xi l g the prefix also belongs to language l g then it holds qa fs fs state qa fs is final for the axiomatic machine ms and the prefix is accepted by the earley algorithm limit cases if it holds i then it holds xi if it holds j i then it holds xi and if it holds x so n then both cases hold j i if the prefix coincides with the whole string x i n then step implies that string x which by hypothesis belongs to language l g is accepted by the earley algorithm which therefore is complete the proof is in the appendix syntax tree construction the next procedure buildtree bt builds the parse tree of a recognized string through processing the vector e constructed by the earley algorithm function bt is recursive and has four formal parameters nonterminal x v state f and two nonnegative indices j and nonterminal x is the root of the sub tree to be built state f is final for machine mx it is the end of the computation path in mx that corresponds to analyzing the substring generated by indices j and i satisfy the inequality j i n they respectively specify the left and right ends of the substring generated by x x xi g x if j i if j i g grammar g admits derivation s xn or s and the earley algorithm accepts string x thus element e n contains the final axiomatic pair h f i to build the tree of string x with root node s function bt is called with parameters bt s f n then the function will recursively build all the subtrees and will assemble them in the final tree function bt returns the syntax tree in the form of a parenthesized string with brackets labeled by the root nonterminal of each sub tree the commented code follows buildtree x f j i x is a nonterminal f is a final state of mx and j i n return as parenthesized string the syntax tree rooted at node x node x will have a list c of terminal and nonterminal child nodes either list c will remain empty or it will be filled from right to left c set to the list c of child nodes of x parsing methods streamlined q f set to f the state q in machine mx k i set to i the index k of vector e walk back the sequence of term nonterm shift in mx while q do while current state q is not initial x a try to backwards recover a terminal shift move p 
q check if node x has terminal xk as its current child leaf h k p qx such that if then x h p j i e h net has p q c xk c concatenate leaf xk to list c end if y b try to backwards recover a nonterm shift oper p q check if node x has nonterm y as its current child node y v e fy h j h k i p qx if then y h e h i e k h p j i e h net has p q recursively build the subtree of the derivation y xk if h k or y if h k g g and concatenate to list c the subtree of node y c buildtree y e h k c end if q p shift the current state q back to p k h drag the current index k back to h end while return c x return the tree rooted at node x figure reports the analysis trace of example and also shows solid edges that correspond to iterations of the while loop in the procedure and dashed edges that match the recursive calls notice that calling function bt with equal indices i and j means building a subtree of the empty string which may be made of one or more nullable nonterminals this happens in the figure witch shows the tree of the bt calls and of the returned subtrees for example with the call bt b notice that since a leftmost derivation uniquely identifies a syntax tree the conditions in the two mutually exclusive conditionals inside the while loop are always satisfied in only one way otherwise the analyzed string would admit several distinct left derivations and therefore it would be ambiguous as previously remarked nullable terms are dealt with by the earley algorithm through a chain of closure and nonterminal shift operations an optimized version of the algorithm can be defined to perform the analysis of nullable terminals in a single step along the same parsing methods streamlined bt s a a b b b a b a s bt b b b a b bt b b fig calls and return values of buildtree for example lines as defined by aycock and horspool further work by the same authors also defines optimized procedures for building the parse tree in the presence of nullable nonterminals which can also be adjusted and applied to our version of the earley algorithm conclusion we hope that this extension and conceptual compaction of classical parser construction methods will be appreciated by compiler and language designers as well as by instructors starting from syntax diagrams which are the most readable representation of grammars in language reference manuals our method directly constructs deterministic parsers which are more general and accurate than all preceding proposals then we have extended to the ebn f case beatty s old theoretical comparisons of ll versus lr grammars and exploited it to derive general deterministic parsers through stepwise simplifications of parsers we have evidenced that such simplifications are correct if and derivations are excluded for completeness to address the needs of ebn f grammars we have included an accurate presentation of the tabular earley parsers including syntax tree generatio our goal of coming up with a minimalist comprehensive presentation of parsing methods for extended bn f grammars has thus been attained to finish we mention a practical development there are circumstances that suggest or impose to use separate parsers for different language parts identified by a grammar partition by the sublanguages generated by certain subgrammars the idea of grammar partition dates back to korenjak who wanted to reduce the size of an lr pilot by decomposing the original parser into a family of subgrammar parsers and to thus reduce the number of candidates and parser size reduction remains a goal of parser 
partitioning in the domain of natural language processing see meng et al in such projects the component parsers are all homogeneous whereas we are more interested in heterogeneous partitions which use different parsing algorithms why should one want to diversify the algorithms used for different language parts first a language may contain parsing methods streamlined parts that are harder to parse than others thus a simpler ell parser should be used whenever possible limiting the use of an elr parser or even of an earley one to the sublanguages that warrant a more powerful method second there are examples of language embedding such as sql inside c where the two languages may be biased towards different parsing methods although heterogeneous parsers can be built on top of legacy parsing programs past experimentation of parsers and psaila has met with practical rather than conceptual difficulties caused by the need to interface different parser systems our unifying approach looks promising for building seamless heterogeneous parsers that switch from an algorithm to another as they proceed this is due to the homogeneous representation of the parsing stacks and tables and to the exact formulation of the conditions that enable to switch from a more to a less general algorithm within the current approach mixed mode parsing should have little or no implementation overhead and can be viewed as a pragmatic technique for achieving greater parsing power without committing to a more general but less efficient algorithm such as the generalized lr parsers initiated by tomita aknowledgement we are grateful to the students of formal languages and compiler courses at politecnico di milano who bravely accepted to study this theory while in progress we also thank giorgio satta for helpful discussions of earley algorithms references a ho l am s ethi and u llman j compilers principles techniques and tools prenticehall englewoof cliffs nj aycock j and b orsotti a early action in an earley parser acta informatica aycock j and h orspool practical earley parsing comput j b eatty on the relationship between the ll and lr grammars journal of the acm c elentano a lr parsing technique for extended grammars computer languages c hapman lalr parser generation for regular right part grammars acta inf c respi r eghizzi formal languages and compilation springer london c respi eghizzi and p saila grammar partitioning and modular deterministic parsing computer languages e arley j an efficient parsing algorithm commun acm g a note on a proposed lalr parser for extended grammars inf process lett g rune and jacobs parsing techniques a practical guide vrije universiteit amsterdam g rune and jacobs parsing techniques a practical guide ed springer london h eilbrunner on the definition of elr k and ell k grammars acta informatica h emerik towards a taxonomy for ecfg and rrpg parsing in language and automata theory and applications dediu ionescu and eds lncs vol springer k annapinn reconstructing lr theory to eliminate redundance with an application to the construction of elr parsers in german thesis technical university of berlin k nuth on the translation of languages from left to right information and control k nuth syntax analysis acta informatica k orenjak j a practical method for constructing lr k processors commun acm l a l onde constructing lr parsers for regular right part grammars acta informatica l ee and k im characterization of extended lr k grammars inf process lett parsing methods streamlined l ewi d e v laminck h uens 
and h uybrechts a programming methodology in compiler construction amsterdam i and ii l omet b a formalization of transition diagram systems acm m eng l uk x u and w eng glr parsing with multiple grammars for natural language queries acm trans on asian language information processing june m orimoto and s assa yet another generation of lalr parsers for regular right part grammars acta informatica r introduction to formal languages dover new york rosenkrantz j and s tearns properties of deterministic parsing information and control s assa and n akata i a simple realization of for regular right part grammars inf process lett t omita efficient parsing for natural language a fast algorithm for practical systems kluwer boston w irth algorithms data structures programs publishers englewood cliffs nj parsing methods streamlined appendix proof of theorem let g be an ebn f grammar represented by machine net m and let be the equivalent grammar then net m meets the elr condition if and only if grammar meets the lr condition p roof let q be the set of the states of clearly for any rule x y z of it holds x q y q and z q q let p and be the elr and lr pilots of g and respectively preliminarily we study the correspondence between their transition functions and and their it helps us compare the pilot graphs for the running example in figures denoted by i and i and and for example in figure notice that in the of the lr pilot the candidates are denoted by marked rules with a we observe that since grammar is bn f the graph of has the property that all the edges entering the same have identical labels in contrast the identicallabel property which is a form of locality does not hold for p and therefore a of p is possibly split into several of due to the very special grammar form the following mutually exclusive classification of the of is exhaustive is initial is intermediate if every candidate in has the form pa y qa is a sink reduction if every candidate has the form pa y qa for instance in figure the intermediate are numbered from to and the sink reduction from to we say that a candidate hqx of p corresponds to a candidate of of the form hpx s qx if the sets are identical if then two mstates i and of p and respectively are called correspondent if the candidates in and in correspond to each other moreover we arbitrarily define as correspondent the initial and to illustrate in the running example a few pairs of correspondent are and the following straightforward properties of correspondent will be needed l emma the mapping defined by the correspondence relation from the set containing and the intermediate of to the set of the of p is total and onto surjective for any terminal or nonterminal symbol s and for any correspondent i and transition i s i is defined transition i s is defined and i i is intermediate moreover i i are correspondent let state fa be final candidate hfa is in i actually in its base a correspondent contains candidates hpa s fa and hfa let state be final initial with a candidate is in i a correspondent contains candidate let state be final candidate is in candidate is in parsing methods streamlined a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a fig pilot graph of the grammar of the running example see ex parsing methods streamlined and for any initial state it holds for any pair of correspondent i and i for every alternative of p roof of lemma first to prove that the mapping is onto assume by contradiction that a i such that hqa i has no 
correspondent intermediate in clearly there exists a such that the kernels of and are identical because the grammar has the same derivations as m vs sect and it remains that the of a correspondent candidate in and differ the definition of set for a bn f grammar not included is the traditional one it is easy to check that our def of closure function computes exactly the same sets a contradiction item we observe that the bases of i and are identical therefore any candidate with s qb computed computed by def matches a candidate in i s is by the traditional closure function for therefore if i s is defined also i defined and yields an intermediate moreover the next are clearly s is a sink reduction spondent since their bases are identical on the other hand if i then i s is undefined s item consider an edge pa fa entering state fa by item consider j and as correspondent that include the predecessor state pa resp within the candidate s includes hpa and hqa t pa then j s includes hfa and j hpa s fa hence also j s includes hfa item if candidate is in i it is in thus contains a candidate hpb such that closure hpb therefore some intermediate contains a candidate hqb s pb in the base and candidate in its closure the converse reasoning is analogous item this case is obvious item we consider the case where the i is not initial and the state results from the closure operation applied to a candidate of the other cases where i is the initial or the state results from the closure operation applied to a candidate of can be similarly dealt with some machine a mb includes the edge pb qb pb for every correspondent state there are some suitable x and rb such that rb x pb pb qb for every alternative of nonterminal this concludes the proof of the lemma part if of theorem we argue that a violation of the elr condition in p implies an lr conflict in three cases need to be examined shift reduce conflict consider a conflict in i hfb a i where fb is final and i a is defined a by lemma items and there exists a correspondent such that i thus proving that the same conflict is in is defined and hfb a i i similarly consider a conflict in i a i where is final and initial and i a is defined parsing methods streamlined a by lemma items and there exists a correspondent such that i thus proving that the same conflict is in is defined and a i i reduce reduce conflict consider a conflict in i hfa a i hfb a i where fa and fb are final by item of the lemma the same conflict exists in a hfa a i hfb a i similarly a conflict in i a i hfb a i where and fb are final corresponds by items and to a conflict in a i hfb a i by item the same holds true for the special case x convergence conflict consider a convergence conflict i i where i hpa a i hqa a i pa x qa x ra hra a i first if neither pa nor qa are the initial state both candidates are in the base of i by x item there are correspondent intermediate and transition with hpa x ra a i hpa x ra a i therefore ra contains two reduction candidates with identical which is a conflict quite similarly if arbitrarily qa then candidate hpa a i is in the base of i and necessarily contains a candidate c hs such that closure c a i therefore for some t and y there exists a correspondent of i such that ht y s and x ra a i hence it holds x and x ra a i i part only if of theorem we argue that every lr conflict in entails a violation of the elr condition in shift reduce conflict the conflict occurs in a such that hfb a i a is defined by items and or of the lemma the correspondent i and i i contains hfb a i 
and the move i a is defined thus resulting in the same conflict reduce reduce conflict first consider a having the form hfa a i hfb a i where fa and fb are final by item of the lemma the correspondent i contains the candidates i hfa a i hfb a i and has the same conflict second consider a having the form hfa a i a i where fa is final by items and the same conflict is in the correspondent parsing methods streamlined at last consider a conflict in a sink reduction hpa x ra a i hqa x ra a i ra i such that contains candidates hpa then there exist a and a transition x ra a i and hqa x ra a i therefore the correspondent i x contains candidate hra a i and there are a and a transition such that it holds hpa x ra a i hqa x ra a i since is not a sink reduction state let us call i its correspondent state then pa is initial then hpa a i by virtue of item pa is not initial then there exists a candidate hta z pa a i notice that the is the same because we are still in the same machine ma and hpa a i i a similar reasoning applies to state qa therefore hpa a i i x and i i has a convergence conflict this concludes the if and only if parts and the theorem itself as a second example to illustrate convergence conflicts the pilot graph equivalent to p of figure is shown in figure proof of property if the guide sets of a p cf g are disjoint then the machine net satisfies the ell condition of definition p roof since the ell condition consists of the three properties of the elr pilot absence of left recursion st p absence of multiple transitions and absence of and conflicts we will prove that the presence of disjoint guide sets in the p cf g implies that all these three conditions hold we prove that if a grammar represented as a net is left recursive then its guide sets are not disjoint if the grammar is left recursive then n such that in the p cf g there are n call edges then since a v such that la g it holds a and ai v such that there is a a shift edge pa and a gui hence the guide sets for these two edges in the p cf g are not disjoint notice that the presence of left recursion due to rules of the kind a x a with x a nullable nonterminal can be ruled out because this kind of left recursion leads to a conflict as discussed in section and shown in figure we prove that the presence of multiple transitions implies that the guide sets in the p cf g are not all disjoint this is done by induction through starting from the initial of the pilot automaton which has an empty base and showing that all reachable of the pilot automaton satisfy the single transition property st p unless the guide sets of the p cf g are not all disjoint we also note that the transitions from the satisfying st p lead to the base of which is a singleton set we call such singleton base and we say that they satisfy the singleton base property sbp induction base assume there exists a multiple transition from the initial closure i hence includes n candidates i i parsing methods streamlined a a b a b e a e b a a b b b b b e b c d c c e e c c e c e c e c e e c c sink a e c d e c e b a a grammar for ex a s b s b s c s s s c e a a a a a a a e b sink c c e conflict convergence conflict fig part of the traditional with marked rules pilot of the grammar for ex the reducec reduce conflict in sink matches the convergence conflict of the edge of p fig i and for some h and k with h k n the machine net inx x cludes transitions pxh and pxk with x being a terminal symbol x a or being a nonterminal one x z v let us first consider the case x a and assume the 
candidate i derives from candidate i through a possibly iterated closure operation then the p cf g includes the call edges the inclusions a gui gui hold and there are two nona disjoint guide sets gui pxh a and gui a assuming instead that there is a candidate such that both i and i are derived by the closure operation through distinct paths then the p cf g includes the call edges with and parsing methods streamlined with thus the following inclusions hold a gui gui and a gui gui therefore the two guide sets gui and gui are not disjoint let us now consider the case x z v then if the candidate i derives from i the p cf g includes the call edges as well as and hence ini gui and ini gui gui and the two guide sets gui and gui are not disjoint the case of both candidates i and i deriving from a common candidate through distinct sequences of closure operations is similarly dealt with and leads to the conclusion that in the p cf g there are two call edges originating from state the guide sets of which are not disjoint this concludes the induction base inductive step consider a singleton base i such that a multiple transition i x is defined since i has a singleton base of the two candidates from where the multiple transition originates at least one is in the case of both candidates in the closure is at all similar to the one treated in the base case of the induction therefore we consider the case of one candidate in the base with a state qa and one in the closure with an initial state then if x a the p cf g has states ra ry and with n and yn y shift edges a a qa ra and ry and call edges qa such that a ini gui gui a and a a a gui qa ra thus the two guide sets gui qa ra and gui qa are not disjoint otherwise if x z v the p cf g includes the nonterminal shift z z edges qa ra and ryn with ryn qyn and the call edges qa and also the two call edges qa and then it holds gui qa gui qa ini z and the two guide sets gui qa and gui qa are not disjoint this concludes the induction we prove that the presence of or conflicts implies that the guide sets are not disjoint we can assume that all are singleton base hence if there are two conflicting candidates at most one of them is in the base as in the point above so a first we consider conflicts and the two cases of candidates being both in the closure or only one i if both candidates are in the closure then there are n candidates i i such that the candidates i and i are conflicting hence for some a it holds a gui and a gui if the candidate i derives from candidate i through a sequence of closure operations then the p cf g includes the chain of call edges with a gui gui therefore a gui gui and these two guide sets are not disjoint if instead there is a candidate such that both i and i are obtained from that candidate through distinct paths it can be shown that in the p cf g there are two call edges departing from state the guide sets of which are not parsing methods streamlined disjoint as in the point above ii the case where one candidate is in the base and one is in the closure is quite similar to the one in the previous point there are a candidate hfa and n candidates i i such that the candidates hfa and i are conflicting hence for some a it holds a gui fa and a gui the p cf g includes the call edges fa whence a gui fa gui fa and these two guide sets are not disjoint b then we consider conflicts and the three cases that arise depending on whether the conflicting candidates are both in the closure or only one or whether there is one candidate that is both shift and 
reduction i if there are two conflicting candidates both in the closure of i then either the closure of i includes n candidates i i and the p cf g includes the call edges or the closure includes three candidates i and i such that i and i derive from through distinct chains of closure operations we first consider a linear chain of closure operations from i to i let us first consider the case where i is a reduction candidate hence is a final state and a such that a and q a qxn such that q then it holds a gui gui therefore a gui gui and these two guide sets are not disjoint let us then consider the symmetric case where is a final state hence i is a reduction candidate a and a such that a and q such that q then it holds a gui gui therefore a ii a gui q gui and these two guide sets are not disjoint the other case where two candidates i and i derive from a third one through distinct chains of closure operations is treated in a similar way and leads to the conclusion that in the p cf g there are two call edges departing from state the guide sets of which are not disjoint if there are two conflicting candidates one in the base and one in the closure then we consider the two cases that arise depending on whether the reduction candidate is in the closure or in the base similarly to point b i above let us first consider the case where the base contains the candidate hpa the closure includes n candidates i i state is final hence i is a reduction candidate and a such that a a and q qa such that pa q then since is final it holds n ullable xn and a gui a whence a gui pa q gui pa and these two guide sets are not disjoint let us then consider the symmetric case where the base contains a candidate hfa with fa fa hence hfa is a reduction candidate the closure includes n candidates i i and a parsing methods streamlined a such that a and q qxn such that q then it holds a gui gui fa therefore a gui fa gui fa and these two guide sets are not disjoint iii if there is one candidate hpa that is both shift and reduce then pa fa a and a such that a and qa qa such that pa qa then it a holds a gui pa gui pa qa and so there are two guide sets on edges departing from the same p cf g node that are not disjoint this concludes the proof of the property proof of lemma if it holds h qa j i e i which implies inequality j i with qa qa state qa belongs to the machine ma of nonterminal a then it holds h j i e j and the grammar admits a leftmost derivation xi qa if j i or qa if j p roof by induction on the sequence of insertion operations performed in the vector e by the analysis algorithm base e and the property stated by the lemma is trivially satisfied e and induction we examine the three operations below terminalshift if hqa ji e i results from a terminalshift operation then it holds qa such that qa xi qa and hqa ji e i as well as ji e j and qa by the inductive hypothesis hence xi qa closure if ii e i then the property is trivially satisfied ii e i and nonterminalshift suppose first that j i if hqfa ji e i then ji e j and xi qfa by the inductive hypothesis furthermore if hqb ki e j then ki e k and xj qb by the inductive hypothesis also if qb a qb then qb qb and since hqb ki is added to e i it is eventually proved that hqb ki e i implies ki e k and xj qb xj qb xj xi qfa qb xi qb if j i then the same reasoning applies but there is not any terminalshift since the derivation does not generate any terminal in particular for the nonterminalshift case we have that if hqfa ii e i then ii e i and qfa by the inductive hypothesis and the 
rest follows as before but with k i and qb the reader may easily complete by himself with the two remaining proof cases where k j i or k j i which are combinations this concludes the proof of the lemma parsing methods streamlined proof of lemma take an ebnf grammar g and a string x xn of length n that belongs to language l g in the grammar consider any leftmost derivation d of a prefix xi i n of x that is d xi qa w with qa qa and w the two points below apply if it holds w w rb z for some rb qb then it holds j j i and a pb qb such that the machine net has an arc pb rb and grammar admits two leftmost derivations xj pb z and xi qa so that derivation d decomposes as follows d pb rb d d xj pb z xj xi qa rb z xj rb z xi qa w a as an arc pb rb in the net maps to a rule pb rb in grammar this point is split into two steps the second being the crucial one a if it holds w then it holds a s nonterminal a is the axiom qa qs and h qa i e i b if it also holds xi l g the prefix also belongs to language l g then it holds qa fs fs state qa fs is final for the axiomatic machine ms and the prefix is accepted by the earley algorithm limit cases if it holds i then it holds xi if it holds j i then it holds xi and if it holds x so n then both cases hold j i if the prefix coincides with the whole string x i n then step implies that string x which by hypothesis belongs to language l g is accepted by the earley algorithm which therefore is complete p roof by induction on the length of the derivation s x base since the thesis case is satisfied by taking i induction we examine a few cases x xi qa w xi ra w because qa ra then the closure operation adds to e i the item ii hence the thesis case holds by taking for j the value i for qa the value for qb the value ra and for w the value ra xi qa w xi ra w because qa ra then the operation terminalshift e i adds to e i the item hra ji hence the thesis case holds by taking for j the same value and for qa the value ra xi qa w xi w because qa fa fa and fa then z qa fs fs and xi fs xi then the current derivation step is the last one and the string is accepted because the pair hfs e i by the inductive hypothesis w qb z then by applying the inductive hypothesis to xj the following b facts hold k k j pc qc such that xk pc z pc qc parsing methods streamlined a xj pb pb qb and the vector e is such that hpc hi e k ki e k hpb ki e j ji e j and hqa ji e i then the nonterminal shift operation in the completion procedure adds to e i the pair hqb ki a first we notice that from xj pb pb qb and xi fa b it follows that xi qb next since pc qc it holds xk pc z xk qc z therefore xk xi qb qc z and the thesis holds by taking for j the value k for qa the value qb for qb the value qc and for w the value qc z the above situation is schematized in figure ak hpc hi hpb ki hpa ji ki ji hqb ki ek fig aj ej schematic trace of tabular parsing this concludes the proof of the lemma ai ei
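To make the tabular analysis concrete, the following is a minimal Python sketch of the Earley-style recognizer described in the tabular parsing section (the Earley vector, the Completion and TerminalShift operations, and the acceptance test). The encoding of the machine net — net[X] = (initial state, set of final states, edge map) — and all identifiers are illustrative assumptions, not the paper's notation; pairs are stored here as triples (nonterminal, state, origin) so that the machine owning each state is explicit.

```python
# Sketch of the tabular (Earley-style) recognizer for an EBNF machine net.
# Assumed encoding (illustrative): net[X] = (q0, finals, delta), where q0 is
# the initial state of machine M_X, finals its set of final states, and delta
# maps a state to a list of (label, next_state) edges; a label is either a
# terminal symbol or a nonterminal (a key of net).

def earley_recognize(net, axiom, x):
    n = len(x)
    E = [set() for _ in range(n + 1)]
    E[0].add((axiom, net[axiom][0], 0))            # initial pair <0_S, 0>

    def completion(i):
        changed = True
        while changed:                             # repeat until fixpoint
            changed = False
            for (A, p, j) in list(E[i]):
                _, finals_A, delta_A = net[A]
                # closure: a nonterminal edge p --X--> q launches machine M_X
                for (lab, _q) in delta_A.get(p, []):
                    if lab in net:
                        item = (lab, net[lab][0], i)
                        if item not in E[i]:
                            E[i].add(item); changed = True
                # nonterminal shift: p final in M_A, shift A in element E[j]
                if p in finals_A:
                    for (B, r, l) in list(E[j]):
                        for (lab, s) in net[B][2].get(r, []):
                            if lab == A and (B, s, l) not in E[i]:
                                E[i].add((B, s, l)); changed = True

    def terminal_shift(i):
        for (A, p, j) in list(E[i - 1]):
            for (lab, q) in net[A][2].get(p, []):
                if lab == x[i - 1]:
                    E[i].add((A, q, j))

    completion(0)
    i = 1
    while i <= n and E[i - 1]:                     # stop on an empty element
        terminal_shift(i)
        completion(i)
        i += 1
    # acceptance: a final axiomatic pair <f_S, 0> belongs to E[n]
    return any(A == axiom and p in net[axiom][1] and j == 0
               for (A, p, j) in E[n])

# Example with an illustrative toy grammar S -> 'a' S 'b' | 'c':
# net = {"S": (0, {3}, {0: [("a", 1), ("c", 3)], 1: [("S", 2)], 2: [("b", 3)]})}
# earley_recognize(net, "S", "aacbb")   # -> True
```

Nullable nonterminals are handled, as in the paper, by the interplay of closure and nonterminal shift within the same element, since a machine whose initial state is also final produces a final pair with origin equal to the current index.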
similarity rasmus pagh ninh pham francesco and morten mar it university of copenhagen denmark abstract we present an algorithm for computing similarity joins based on hashing lsh in contrast to the filtering methods commonly suggested our method has provable subquadratic dependency on the data size further in contrast to straightforward implementations of known algorithms on external memory our approach is able to take significant advantage of the available internal memory whereas the time complexity of classical algorithms includes a factor of n where is a parameter of the lsh used the complexity of our algorithm merely includes a factor where n is the data size and m is the size of internal memory our algorithm is randomized and outputs the correct result with high probability it is a simple recursive procedure and we believe that it will be useful also in other computational settings such as parallel computation keywords similarity join locality sensitive hashing cache aware cache oblivious introduction the ability to handle noisy or imprecise data is becoming increasingly important in computing in database settings this kind of capability is often achieved using similarity join primitives that replace equality predicates with a condition on similarity to make this more precise consider a space u and a distance function d u u the similarity join of sets r s u is the following given a radius r compute the set r s x y r s d x y r this problem occurs in numerous applications such as web deduplication document clustering data cleaning as such applications arise in datasets the problem of scaling up similarity join for different metric distances is getting more important and more challenging many known similarity join techniques prefix filtering positional filtering inverted filtering are based on filtering techniques the research leading to these results has received funding from the european research council under the eu framework programme erc grant agreement no in part supported by university of padova project and by miur of italy project amanda while working at the university of padova supported by the danish national research foundation sapere aude program that often but not always succeed in reducing computational costs if we let n these techniques generally require n comparisons for worstcase data another approach is hashing lsh where candidate output pairs are generated using collisions of carefully chosen hash functions the lsh is defined as follows definition fix a distance function d u u for positive reals r c and c a family of functions h is r cr if for uniformly chosen h h and all x y u if d x y r then pr h x h y if d x y cr then pr h x h y we say that h is monotonic if pr h x h y is a function of the distance function d x y we also say that h uses space s if a function h h can be stored and evaluated using space lsh is able to break the n barrier in cases where for some constant c the number of pairs in r s is not too large in other words there should not be too many pairs that have distance within a factor c of the threshold the reason being that such pairs are likely to become candidates yet considering them does not contribute to the output for notational simplicity we will talk about far pairs at distance greater than cr those that should not be reported near pairs at distance at most r those that should be reported and pairs at distance between r and cr those that should not be reported but the lsh provides no collision guarantees our contribution in this paper we study 
similarity join methods based on lsh that is we are interested in minimizing the number of operations where a block of b points from u is transferred between an external memory and an internal memory with capacity for m points from u our main result is the first algorithm for similarity join that has provably dependency on the data size n and at the same time inverse polynomial dependency on m in essence where previous methods have an overhead factor of either or we obtain an overhead of where is a parameter of the lsh employed strictly improving both we show theorem consider r s u let n assume log n m n and that there exists a monotonic r cr family of functions with respect to distance measure d using space b and with let log log then there exists a randomized algorithm computing r s d with probability o using n n m b mb mb the hides polylog n factors we conjecture that the bound in theorem is close to the best possible for the class of signature based algorithms that work by generating a set of lsh values from a and monotonic family and checking all pairs that collide our conjecture is based on an informal argument given in full in section we describe a input where it seems significant advances are required to beat theorem asymptotically further we observe that for m n our bound coincides with the optimal bound of reading the input and when m our bound coincides with the bounds of the best known internal memory algorithms it is worth noting that whereas most methods in the literature focus on a single or a few distance measure our method works for an arbitrary space and distance measure that allows lsh hamming manhattan euclidean jaccard and angular metric distances since our approach makes use of lsh as a black box the problem of reporting the complete join result with certainty would require major advances in lsh methods see for recent progress in this direction a primary technical hurdle in the paper is that we can not use any kind of strong concentration bounds on the number of points having a particular value since hash values of an lsh family may be correlated by definition another hurdle is duplicate elimination in the output stemming from pairs having multiple lsh collisions however in the context of algorithms it is natural to not require the listing of all near pairs but rather we simply require that the algorithm enumerates all such near pairs more precisely the algorithm calls for each near pair x y a function emit x y this is a natural assumption in external memory since it reduces the complexity in addition it is desired in many applications where join results are intermediate results pipelined to a subsequent computation and are not required to be stored on external memory our upper bound can be easily adapted to list all instances by increasing the complexity of an unavoidable additive term of organization the organization of the paper is as follows in section we briefly review related work section describes our algorithms including a approach and the main results a solution its analysis and a randomized approach to remove duplicates section provides some discussions on our algorithms with some real datasets section concludes the paper related work in this section we briefly review lsh the computational model and some similarity join techniques hashing lsh lsh was originally introduced by indyk and motwani for similarity search problems in high dimensional data this technique obtains a sublinear o n time complexity by increasing the gap of collision probability between 
near points and far points using the lsh family as defined in definition the gap of collision probability is polynomial with an exponent of log log dependent on it is worth noting that the standard lshs for metric distances including hamming jaccard and angular distances are monotonic these common lshs are and use space comparable to that required to store a point except the lsh of which requires space n o we do not explicitly require the hash values themselves to be particularly small however using universal hashing we can always map to small bit strings while introducing no new collisions with high probability thus we assume that b hash values fit in one memory block computational model we study algorithms for similarity join in the external memory model which has been widely adopted in the literature see the survey by vitter the external memory model consists of an internal memory of m words and an external memory of unbounded size the processor can only access data stored in the internal memory and move data between the two memories in blocks of size b for simplicity we will here measure block and internal memory size in units of points from u such that they can contain b points and m points respectively the complexity of any algorithm is defined as the number of blocks moved between the two memories by the algorithm the approach makes explicit use of the parameters m and b to achieve its complexity whereas the one does not explicitly use any model parameters the latter approach is desirable as it implies optimality on all levels of the memory hierarchy and does not require parameter tuning when executed on different physical machines note that the model assumes that the internal memory is ideal in the sense that it has an optimal cachereplacement policy such policy can evict the block that is used furthest in the future and can place a block anywhere in the cache full associativity similarity join techniques we review some of similarity join techniques most closely related to our work similarity join a popular approach is to make use of indexing techniques to build a data structure for one relation and then perform queries using the points of the other relation the indexes typically perform some kind of filtering to reduce the number of points that a given query point is compared to see indexing can be space consuming in particular for lsh but in the context of similarity join this is not a big concern since we have many queries and thus can afford to construct each hash table on the fly on the other hand it is clear that similarity join techniques will not be able to take significant advantage of internal memory when n m indeed the query complexity stated in is o thus the complexity of using indexing for similarity join will be high the indexing technique of can be adapted to compute similarity joins more efficiently by using the fact that many points are being looked up in the hash tables this means that all lookups can be done in a batched fashion using sorting this results in a dependency on n that is where is a parameter of the lsh family generic joins when n is close to m the can be improved by using general join operators optimized for this case it is easy to see that when is an integer a nested loop join requires n m b our algorithm will make use of the following result on cacheoblivious nested loop joins theorem he and luo given a similarity join condition the join of relations r and s can be computed by a algorithm in o b mb this number of suffices to generate the result in 
memory but may not suffice to write it to disk we note that a similarity join can be part of a join involving more than two relations for the class of acyclic joins where the variables compared in join conditions can be organized in a tree structure one can initially apply a full reducer that removes tuples that will not be part of the output this efficiently reduces any acyclic join to a sequence of binary joins handling cyclic joins is much harder see and outside the scope of this paper our algorithms in this section we describe our efficient algorithms we start in section with a algorithm it uses an lsh family where the value of the collision probability is set to be a function of the internal memory size section presents our main result a recursive and algorithm which uses the lsh with a approach and does not make any assumption on the value of collision probability section describes the analysis and section shows how to reduce the expected number of times of emitting near pairs algorithm asimjoin we will now describe a simple algorithm called asimjoin which achieves the worst case bounds as stated in theorem asimjoin relies on an r cr family of hash functions with the following properties and for a suitable value given an arbitrary monotonic r cr family h the family can be built by concatenating hash functions from for simplicity we assume that is an integer and thus the probabilities and can be exactly obtained nevertheless the algorithm and its analysis can be extended algorithm asimjoin r s r s are the input sets repeat log n times associate to each point in r and s a counter initially set to repeat l times choose uniformly at random use to partition r and s in buckets rv sv of points with the hash value v for each hash value v generated in the previous step for simplicity we assume that split rv and sv into chunks ri v and si v of size at most for every chunk ri v of rv load in memory ri v for every chunk si v of sv do load in memory si v compute ri v si v and emit all near pairs for each far pair increment the associated counters by remove from si v and ri v all points with the associated counter larger than and write si v back to external memory write ri v back to external memory to the general case by increasing the complexity by a factor at most in the worst case in practical scenarios this factor is a small constant asimjoin assumes that each point in r and s is associated with a counter initially set to this counter can be thought as an additional dimension of the point which hash functions and comparisons do not take into account the algorithm repeats l times the following procedure a hash function is randomly drawn from the r cr family and it is used for partitioning the sets r and s into buckets of points with the same hash value we let rv and sv denote the buckets respectively containing points of r and s with the same hash value then the algorithm iterates through every hash value and for each hash value v it uses a double nested loop for generating all pairs of points in rv sv the double nested loop loads consecutive chunks of rv and sv of size at most the outer loop runs on the smaller set say rv while the inner one runs on the larger one say sv for each pair x y the algorithm emits the pair if d x y r increases by counters associated with x and y if d x y cr or ignores the pair if r d x y cr every time the counter of a point exceeds the point is considered to be far away from all points and will be removed from the bucket chunks will be moved back in memory when they 
are no more needed the entire asimjoin algorithm is repeated log n times to find all near pairs with high probability the following theorem shows the bounds of the approach theorem consider r s u and let n be sufficiently large assume there exists a monotonic r cr family of functions with respect to distance measure d with and for a suitable value with probability the asimjoin algorithm enumerates all near pairs using n n m b mb mb proof we observe that the cost of steps that is of n n b for r and s according to a hash function is l sort n m n l m repetitions we now consider the cost of an iteration of the loop in step for a given hash value when the size of one bucket say rv is smaller than we are able to load the whole rv into the internal memory and then load consecutive blocks of sv to execute join operations hence the cost of this step is at most the total cost of the log n iterations of step among all possible hash values where at least one bucket has size smaller than n n is at most l n b m b the cost of step when both buckets rv and sv are larger than is bm this means that the amortized cost of each pair in rv sv is bm therefore the amortized cost of all iterations of step when there are no bucket size less than can be upper bounded by multiplying the total number of generated pairs by bm based on this observation we classify and enumerate generated pairs into three groups near pairs pairs and far pairs we denote by cn ccn and cf the respective size of each group and upper bound these quantities to derive the proof number of near pairs by definition lsh gives a lower bound on the probability of collision of near pairs it may happen that the collision probability of near pairs is thus two near points might collide in all l repetitions of step and in all log n repetitions of step this means that cn log n note that this bound is a deterministic worst case bound number of pairs any pair from r s appears in a bucket with probability at most due to monotonicity of our lsh family since we have l repetitions each pair collides at most in expectation in other words the expected number of pair collisions among l repetitions is at most by using the chernoff bound exercise with log n independent l repetitions in step we have pr ccn log n pr ccn log n we let sort n o log be shorthand for the complexity of sorting n points number of far pairs if x r s is far away from all points the expected number of collisions of x in l hash table including duplicates is at most since then the point is removed by step hence the total number of examined far pairs is cf lm log n therefore by summing the number of near pairs cn pairs ccn and far pairs cf and multiplying these quantities by the amortized complexity bm we upper bound the cost of all iterations of step when there are no buckets of size less than is n m n b bm bm with probability at least by summing all the previous bounds we get the claimed bound with high probability we now analyze the probability to enumerate all near pairs consider one iteration of step a near pair is not emitted if at least one of the following events happen the two points do not collide in the same bucket in each of the l iterations of step this happens with probability l one of the two points is removed by step because it collides with more than far points by the markov s inequality and since there are at most n far points the probability that x collides with at least points in the l iterations is at most then this event happens with probability at most therefore a near pair 
does not collide in one iteration of step with probability at most and never collides in the log n iterations with probability at most log n then by an union bound it follows that all near pairs there are at most n of them collide with probability at least and the theorem follows as already mentioned in the introduction a near pair x y can be emitted many times during the algorithm since points x and y can be hashed on the same value in p x y l rounds of step where p x y denotes the actual collision probability a simple approach for avoiding duplicates is the following for each near pair found during the iteration of step the pair is emitted only if the two points did not collide by all hash functions used in the previous i rounds the check starts from the hash function used in the previous round and backtracks until a collision is found or there are no more hash functions this approach increases the worst case complexity by a factor section shows a more efficient randomized algorithm that reduces the number of replica per near pair to a constant this technique also applies to the algorithm described in the next section algorithm osimjoin r s r s are the input sets and is the recursion depth if then swap the references to the sets such that if or then compute r s using the algorithm of theorem and return pick a random sample s of points from s or all points if compute containing all points of r that have distance smaller than cr to at least half points in s compute s using the algorithm of theorem repeat l times choose h h uniformly at random use h to partition and s in buckets rv sv of points with hash value v for each v where rv and sv are nonempty recursively call osimjoin rv sv algorithm osimjoin the above algorithm uses an r cr family of functions with and for partitioning the initial sets into smaller buckets which are then efficiently processed in the internal memory using the nested loop algorithm if we know the internal memory size m this lsh family can be constructed by concatenating hash functions from any given primitive r cr family without knowing m in the setting such family can not be built therefore we propose osimjoin a algorithm that efficiently computes the similarity join without knowing the internal memory size m and the block length osimjoin uses as a a given monotonic r cr family the value of and can be considered constant in a practical scenario as common in settings we use a recursive approach for splitting the problem into smaller and smaller subproblems that at some point will fit the internal memory although this point is not known in the algorithm we first give a high level description of the algorithm and an intuitive explanation we then provide a more detailed description and analysis osimjoin receives in input the two sets r and s of similarity join and a parameter denoting the depth in the recursion tree initially that is used for recognizing the base case let n and denote with log n and n two global values that are kept invariant in the recursive levels and computed using the initial input size n for simplicity we assume that and are integers and further assume without loss of generality that the initial size n is a power of two note that if is not an the monotonicity requirement can be relaxed to the following pr h x h y pr h h y for every two pairs x y and y where d x y r and d y a monotonic lsh family clearly satisfies this assumption integer the last iteration in step can be performed with a random variable l such that e l osimjoin works as follows if 
the problem is currently at the recursive level n or the recursion ends and the problem is solved using the nested loop described in theorem otherwise the following operations are executed by exploiting sampling the algorithm identifies a subset of r containing almost all points that are near or to a constant fraction of points in s steps then we compute s using the cacheoblivious of theorem and remove points in from r step subsequently the algorithm repeats l times the following operations a hash function is extracted from the r cr family and used for partitioning r and s into buckets denoted with rv and sv with any hash value v steps then the join rv sv is computed recursively by osimjoin step the explanation of our approach is the following by recursively partitioning input points with hash functions from h the algorithm decreases the probability of collision between two far points in particular the collision probability of a far pair is at the recursive level on the other hand by repeating the partitioning times in each level the algorithm guarantees that a near pair is enumerated with constant probability since the probability that a near pair collide is at the recursive level it deserves to be noticed that the collision probability of far and near pairs at the recursive level is and respectively which are asymptotically equivalent to the values in the algorithm in other words the partitioning of points at this level is equivalent to the one in the algorithm with collision probability for a far pair finally we observe that when a point in r becomes close to many points in s it is more efficient to detect and remove it instead of propagating it down to the base cases this is due to the fact that the collision probability of very near pairs is always large close to and the algorithm is not able to split them into subproblems that fit in memory complexity and correctness of osimjoin analysis of complexity we will bound the expected number of of the algorithm rather than the worst case this can be converted to an high probability bound by running log n parallel instances of our algorithm without loss of generality we assume that the optimal cache replacement splits the cache into log n parts that are assigned to each instance the total execution stops when the first parallel instance terminates which with probability at least is within a logarithmic factor of the expected bound logarithmic factors are absorbed in the for notational simplicity in this section we let r and s denote the initial input sets and let and denote the subsets given in input to a particular recursive subproblem note that due to step can denote a subset of r but also of s similarly for we also let denote the sampling of in step and with the subset of computed in step lemma says that two properties of the choice of random sample in step are almost certain and the proof relies on chernoff bounds on the choice of in the remainder of the paper we assume that lemma holds and refer to this event as a holding with probability o lemma with probability at least o over the random choices in step the following bounds hold for every subproblem osimjoin cr proof let x be a point which is to at most one sixth of the points in the point x enters if there are at least cnear points in and this happens for a chernoff bound theorem with probability at most each point of appears in at most li subproblems and there are at most n points in r therefore with probability n we have that in every subproblem osimjoin no point with at most 
points in is in hence each point in has at least points in and the bound in equation follows we can similarly show that with probability n we have that in every subproblem osimjoin all points with at least points in are in then each point in has far points in and equation follows to analyze the number of for subproblems of size more than m we bound the cost in terms of different types of collisions of pairs in r s that end up in the same subproblem of the recursion we say that x y is in a particular subproblem osimjoin if x y observe that a pair x y is in a subproblem if and only if x and y have colliding hash values on every step of the call path from the initial invocation of osimjoin definition given q r s let ci q be the number of times a pair in q is in a call to osimjoin at the level of recursion we also let ci k q with k log m denote the number of times a pair in q is in a call to osimjoin at the level of recursion where the smallest input set has size in if k log m and in m if k log m the count is over all pairs and with multiplicity so if x y is in several subproblems at the level all these are counted next we bound the complexity of osimjoin in terms of ci r s and ci k r cr s for any i we will later upper bound the expected size of these quantities in lemma and then get the claim of theorem lemma let and m log n given that a holds the complexity of osimjoin r s is c r s c r s log m i i k x x n x cr b mb proof to ease the analysis we assume that no more than of internal memory is used to store blocks containing elements of r and s respectively since the model assumes an optimal cache replacement policy this can not decrease the complexity also internal memory space used for other things than data input and output buffers the recursion stack of size at most is less than by our assumption that m log n as a consequence we have that the number of for solving a subproblem osimjoin where and is o including all recursive calls this is because there is space dedicated to both input sets and only for reading the input are required by charging the cost of such subproblems to the writing of the inputs in the parent problem we can focus on subproblems where the largest set has size more than we notice that the cost of steps is dominated by other costs by our assumption that the set fits in internal memory which implies that it suffices to scan data once to implement these steps this cost is clearly negligible with respect to the remaining steps and thus we ignore them we first provide an upper bound on the complexity required by all subproblems at a recursive level above let osimjoin i be a recursive call at the recursive level for i the cost of loop join in step in osimjoin i is o m b by theorem we can ignore the o term since it is asymptotically negligible with respect to the cost of each iteration of step which is upper bounded later by than pairs and thus equation we have that contains more the cost of step in osimjoin i is o m b this means that we can bound the total cost of all executions of step at level i of the recursion with o ci r s m b since each near pair x y appears in ci x y subproblems at level i the second major part of the complexity is the cost of preparing recursive calls in osimjoin i steps in fact in each iteration of step the cost is which includes the cost of hashing and of sorting to form buckets since each point of and is replicated in l subproblems in step we have that each point of the initial sets r and s is replicated times at level i since the average cost per 
entry is have that the total cost for preparing recursive calls at level i is n by summing the above terms we have that the total complexity of all subproblems in the recursive level is upper bounded by r s c i n mb b we now focus our analysis to bound the complexity required by all subproblems at a recursive level below let again osimjoin i be a recursive call at the recursive level for i we observe that part of the cost of a subproblem at level i can be upper bounded by a suitable function of collisions among far points in osimjoin i more specifically consider an iteration of step in a subproblem at level i then the cost for preparing the recursive calls and for performing step in each subproblem at level i generated during the iteration can be upper bounded as b bm since each near pair in is found in step in at most one subproblem at level i generated during the iteration since we have that we easily get that the above bound can be ten as b min m we observe that this bound holds even when i in this case the cost includes all required for solving the subproblems at level called in the iteration and which are solved using the nested loop in theorem see step by lemma we have that the above quantity can be upper bounded with the number of far collisions between and getting cr b min m recall that ci k q denotes the number of times a pair in q is in a call to osimjoin at the level of recursion where the smallest input set has size in if k log m and in m if k log m then the total cost for preparing the recursive calls in steps in all subproblems at level i and for performing step in all subproblems at level i log m x ci k r cr s l the l factor in the above bound follows since far collisions at level i are used for amortizing the cost of step for each one of the l iterations of step to get the total complexity of the algorithm we sum the complexity required by each recursive level we bound the cost of each level as follows for a level i we use the bound in equation for a level i we use the bound we note that the true input size of a subproblem is and not however the expected value of ci k r cr s is computed assuming the worst case where there are no close pairs an thus in equation for level i we use the bound given in equation to which we add the first term in equation since the cost of step at level is not included in equation note that the addition of equations and gives a weak upper bound for level the lemma follows we will now analyze the expected sizes of the terms in lemma clearly each pair from r s is in the top level call so the number of collisions is n but in lower levels we show that the expected number of times that a pair collides either decreases or increases geometrically depending on whether the collision probability is smaller or larger than or equivalently depending on whether the distance is greater or smaller than the radius r the lemma follows by expressing the number of collisions of the pairs at the recursive level as a branching process lemma given that a holds for each i we have i e ci r s cr cr e ci r s r r li e ci r s n i for any k log m e ci k r s cr proof let x r and y we are interested in upper bounding the number of collisions of the pair at the recursive level we envision the problem as branching process more specifically a galtonwatson process see where the expected number of children recursive calls that preserve a particular collision is pr h x h y for random h it is a standard fact from this theory that the expected population size at generation i number of 
times x y is in a problem at recursive level i is pr h x h y i theorem if d x y cr we have that pr h x h y and each far pair appears at most i times in expectation at level i from which follows equation moreover since the probability of collisions is monotonic in the distance we have that pr h x h y if d x y r and pr h x h y if r d x y cr from which follow equations and in order to get the last bound we observe that each entry of r and s is i replicated li l is the total times at level i thus we have that n maximum number of far collisions in subproblems at level i where the smallest input set has size in each one of these collisions survives up to level i with probability and thus the expected number of these collisions is n i we are now ready to prove the complexity of osimjoin as claimed in theorem by the linearity of expectation and lemma we get that the expected complexity of osimjoin is m e ci k r s e ci r s log x x n x cr b k m b where note that ci log m r cr s ci r cr s we have cr n and ci r s ci r s ci r r s by plugging in the bounds on the expected number of collisions given in lemma we get the claimed result analysis of correctness the following lemma shows that osimjoin outputs with probability o all near pairs as claimed in theorem lemma let r s u and n executing o n dent repetitions of osimjoin r outputs r s with probability at least o we now argue that a pair x y with d x y r is output with probability log n let xi ci x y be the number of subproblems at the level i containing x y by applying branching process we get that e xi pr h x h y i if pr h x h y then in fact there is positive constant probability that x y survives indefinitely does not go extinct since at every branch of the recursion we eventually compare points that collide under all hash functions on the path from the root call this implies that x y is reported with a positive constant probability in the critical case where pr h x h y we need to consider the variance of xi which by theorem is equal to where is the variance of the number of children hash collisions in recursive calls if is integer the number of children in our branching process follows a binomial distribution with mean this implies that also in the case where is not integer it is easy to see that the variance is bounded by that is we have var xi which by chebychev s inequality means that for some integer j i o x pr xi j x var xi pj since we have e xi pr xi j then pr xi j and since pr xi is with j this implies that pr xi i furthermore the recursion depth o log n implies the probability that a near pair is found is log n thus by repeating o n times we can make the error probability o for a particular pair and o for the entire output by applying the union bound removing duplicates given two near points x and y the definition of lsh requires their collision probability p x y pr h x h y if p x y our osimjoin algorithm can emit x y many times as an example suppose that the algorithm ends in one recursive call then the pair x y is expected to be in the same bucket for p x y l iterations of step and thus it is emitted p x y l times in expectation moreover if the pair is not emitted in the first recursive level the expected number of emitted pairs increases as p x y l i since the pair x y is contained in p x y l i subproblems at the recursive level a simple solution requires to store all emitted near pairs on the external memory and then using a sorting algorithm for removing repetitions where is the expected however this approach requires average 
replication of each emitted pair which can dominate the complexity of osimjoin a similar issue appears in the algorithm asimjoin as well a near pair is emitted at most times since there is no recursion and the partitioning of the two input sets is repeated only times if the collision probability pr h x h y can be explicitly computed in o time and no for each pair x y it is possible to emit each near pair once in expectation without storing near pairs on the external memory we note that the collision probability can be computed for many metrics including hamming and jaccard and angular distances for the algorithm the approach is the following for each near pair x y that is found at the recursive level with i the pair is emitted with probability p x y l i otherwise we ignore it for the algorithm the idea is the same but a near pair is emitted with probability p x y with theorem the above approaches guarantee that each near pair is emitted with constant probability in both asimjoin and osimjoin proof the claim easily follows for the algorithm indeed the two points of a near pair x y have the same hash value in p x y l in expectation of the repetitions of step therefore by emitting the pair with probability p x y l we get the claim we now focus on the algorithm where the claim requires a more articulated proof given a near pair x y let gi and hi be random variables denoting respectively the number of subproblems at level i containing the pair x y and the number of subproblems at level i where x y is not found by the nested loop join algorithm in theorem let also ki be a random variable denoting the actual number of times the pair x y is emitted at level i we have followings properties e ki hi gi hi p x y l i since a near pair is emitted with probability p x y l i only in those subproblems where the pair is found by the join algorithm e gi p x y l i since a near pair is in the same bucket with probability p x y i it follows from the previous analysis based on standard branching since each pair exists at the beginning of the algorithm since each pair surviving up to the last recursive level is found by the nested loop join algorithm i hp we are interested in upper bounding e ki by induction that e l x ki e hl p x y l l for any l for l the first call to osimjoin and note that e the equality is verified since e e e e e we now consider a generic level l since a pair propagates in a lower recursive level with probability p x y we have e gl e e gl p x y le thus gl hl e kl e e kl hl e p x y l l e e p x y l p x y l l by exploiting the inductive hypothesis we get e l x ki e kl e since we have e x hp i ki e hl p x y l l ki and the claim follows we observe that the proposed approach is equivalent to use an lsh where p x y for each near pair finally we remark that this approach does not avoid replica of the same near pair when the algorithm is repeated for increasing the collision of near pairs thus the probability of emitting probability a pair is at least as shown in the second part of section and o n repetitions of osimjoin suffices to find all pairs with high probability however the expected number of replica of a given near pair becomes o n even with the proposed approach discussion we will argue informally that our complexity of theorem is close to the optimal for simple arguments we split the complexity of our algorithms in two parts n n m b mb mb we now argue that are necessary first notice that we need o per hash function for transferring data between memories computing and writing hash values to 
disk to find collisions second since each brings at most b points in order to compute the distance with m points residing in the internal memory we need to examine m n pairs this means that when the collision probability of far pairs and the number of collisions of far pairs is at most m n in expectation we only need o to detect such far pairs now we consider the case where there are n pairs at distance cr due to the monotonicity of lsh family the collision probability for each such pair must be o to ensure that o suffices to examine such pairs in turn this means that the collision probability for near pairs within distance r must be at most o so we need repetitions different hash functions to expect at least one collision for any near pair then a data set can be given so that we might need to examine for each of the hash functions a constant fraction the pairs in r s whose collision probability is constant for example this can happen if r and s include two clusters of very near points one could speculate that some pairs could be marked as finished during computation such that we do not have to compute their distances again however it seems hard to make this idea work for an arbitrary distance measure where there may be very little structure for the output set hence the o m b additional per repetition is needed in order to argue that the term is needed we consider the case where all pairs in r s have distance r for a value small enough to make the collision probability of pair at distance r indistinguishable from the collision probability of pair at distance then every pair in r s must be brought into the internal memory to ensure the correct result which requires this holds for any algorithm enumerating or listing the near pairs therefore there does not exist an algorithm that beats the quadratic dependency on n for such input sets unless the distribution of the input is known beforehand however when is subquadratic regarding n a potential approach to achieve subquadratic dependency in expectation for similarity join problem is filtering invalid pairs based on their distances currently method is the only way to do this cdf of pairwise distances cdf of pairwise similarities jaccard similarity cosine similarity distance distance jaccard and cosine similarity and distance a enron email dataset b mnist dataset fig the cumulative distributions of pairwise similarities and pairwise distances on samples of points from enron email and mnist datasets we note that values decrease on the of figure while they increase in figure note that when m n our cost is o as we would expect since just reading the input is optimal at the other extreme when b m our bound matches the time complexity of internal memory techniques when are bounded by m n then our algorithm achieves subquadratic dependency on such an assumption is realistic in some datasets as shown in the experimental evaluation section to complement the above discussion we will evaluate our complexity by computing explicit constants and then evaluating the total number of spent by analyzing real datasets performing these simulated experiments has the advantage over real experiments that we are not impacted by any properties of a physical machine we again split the complexity of our algorithms in two parts n n m b mb mb and carry out experiments to demonstrate that the first term often dominates the second term in real datasets in particular we depict the cumulative distribution function cdf in scale of all pairwise distances and all pairwise 
similarities jaccard and cosine on two commonly used datasets enron and as shown in figure since the enron data set does not have a fixed data size per point we consider a version of the data set where the dimension has been reduced such that each vector has a fixed size https http data set metric r cr enron jaccard enron cosine mnist n n standard lsh nested loop asimjoin fig a comparison of cost for similarity joins on the standard lsh nested loop and asimjoin algorithms figure shows an inverse polynomial relationship with a small exponent m between similarity threshold s and the number of pairwise similarities greater than the degree of the polynomial is particularly low when s this setting s is commonly used in many applications for both jaccard and cosine similarities similarly figure also shows a monomial relationship between the distance threshold r and the number of pairwise distances smaller than in turn this means that the number of pairs is not much greater than cm in other words the second term is often much smaller than the first term finally for the same data sets and metrics we simulated the algorithm with explicit constants and examined the cost to compare with a standard nested loop method section and a lower bound on the standard lsh method section we set the cache size m which is reasonable for judging a number of cache misses since the size ratio between cpu caches and ram is in that order of magnitude in general such setting allows us to investigate what happens when the data size is much larger than fast memory for simplicity we use b since all methods contain a multiplicative factor on the complexity the values of were computed using good lsh families for the specific metric and parameters r and these parameters are picked according to figure such that the number of pairs are only an order of magnitude larger than the number of near pairs the complexity used for nested loop join is n b here we assume both sets have size n the complexity for the standard lsh approach is lower bounded by sort n this complexity is a lower bound on the standard sorting based approaches as it lacks the additional cost that depends on how lsh distributes the points since m we can bound the of the sorting complexity and use sort n since points read and written twice the complexity of our approach is stated in theorem the computed in figure show that the complexity of our algorithm is lower than that of all instances examined nested loop suffers from quadratic dependency on n while the standard lsh bounds lack the dependency on m overall the cost indicates that our algorithm is practical on the examined data sets conclusion in this paper we examine the problem of computing the similarity join of two relations in an external memory setting our new algorithm of section and algorithm of section improve upon current state of the art by around a factor of unless the number of pairs is huge more than n m we believe this is the first algorithm for similarity join and more importantly the first subquadratic algorithm whose performance improves significantly when the size of internal memory grows it would be interesting to investigate if our approach is also practical this might require adjusting parameters such as our bound is probably not easy to improve significantly but interesting open problems are to remove the error probability of the algorithm and to improve the implicit dependence on dimension in b and m note that our work assumes for simplicity that the unit of m and b is number of points 
but in general we may get tighter bounds by taking into account the gap between the space required to store a point and the space for hash values also the result in this paper is made with general spaces in mind and it is an interesting direction to examine if the dependence on dimension could be made explicit and improved in specific spaces references alexandr andoni and piotr indyk hashing algorithms for approximate nearest neighbor in high dimensions in proceedings of focs pages arvind arasu venkatesh ganti and raghav kaushik efficient exact joins in proceedings of vldb pages roberto bayardo yiming ma and ramakrishnan srikant scaling up all pairs similarity search in proceedings of www pages andrei broder steven glassman mark manasse and geoffrey zweig syntactic clustering of the web computer networks moses charikar similarity estimation techniques from rounding algorithms in proceedings of stoc pages surajit chaudhuri venkatesh ganti and raghav kaushik a primitive operator for similarity joins in data cleaning in proceedings of icde page mayur datar nicole immorlica piotr indyk and vahab mirrokni localitysensitive hashing scheme based on distributions in proceedings of socg pages devdatt dubhashi and alessandro panconesi concentration of measure for the analysis of randomized algorithms cambridge university press matteo frigo charles e leiserson harald prokop and sridhar ramachandran algorithms in proceedings of focs pages aristides gionis piotr indyk and rajeev motwani similarity search in high dimensions via hashing in proceedings of vldb pages theodore e harris the theory of branching processes courier dover publications bingsheng he and qiong luo joins in proceedings of cikm pages monika rauch henzinger finding web pages a evaluation of algorithms in proceedings of sigir pages piotr indyk and rajeev motwani approximate nearest neighbors towards removing the curse of dimensionality in proceedings of stoc pages hung ngo christopher and atri rudra skew strikes back new developments in the theory of join algorithms sigmod record andrzej pacuk piotr sankowski karol wegrzycki and piotr wygocki localitysensitive hashing without false negatives for l in proceedings of cocoon pages rasmus pagh hashing without false negatives in proceedings of soda pages jeffrey scott vitter algorithms and data structures for external memory now publishers chuan xiao wei wang xuemin lin and jeffrey xu yu efficient similarity joins for near duplicate detection in proceedings of www pages mihalis yannakakis algorithms for acyclic database schemes in proceedings of vldb pages
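To make the recursive OSimJoin procedure described in the sections above concrete, the following is a minimal, self-contained Python sketch of its control flow. It is not the authors' implementation: the Hamming metric and the bit-sampling hash family, the sample size of 32, the in-memory test `len(R) * len(S) <= mem`, and the duplicate-suppression rule (a pair found at recursion level i is emitted with probability 1/(p·L)^i, one plausible reading of the stripped exponents) are all assumptions made for illustration, and block transfers and the cache are not modelled.

```python
import random
from itertools import product


def hamming(x, y):
    """Hamming distance between two equal-length bit tuples."""
    return sum(a != b for a, b in zip(x, y))


def make_bit_sampling_hash(dims, k):
    """One function from a concatenated bit-sampling LSH family for Hamming space."""
    coords = [random.randrange(dims) for _ in range(k)]
    return lambda x: tuple(x[c] for c in coords)


def collision_probability(x, y, dims, k):
    """Exact collision probability of the family above for the pair (x, y)."""
    return (1.0 - hamming(x, y) / dims) ** k


def osimjoin(R, S, level, *, r, c, L, depth, mem, dims, k, out):
    """Sketch of the recursive similarity join: keep the smaller set first, solve
    small or deep subproblems with a nested loop, strip points of R that are within
    distance c*r of at least half of a sample of S, then repeat L times: draw a hash
    function, bucket both sets by hash value, and recurse on matching buckets."""
    if len(R) > len(S):
        R, S = S, R
    if len(R) * len(S) <= mem or level >= depth:      # base case: nested-loop join
        for x, y in product(R, S):
            if hamming(x, y) <= r:
                # duplicate suppression: emit with probability 1 / (p * L) ** level
                p = collision_probability(x, y, dims, k)
                if random.random() < 1.0 / max(p * L, 1e-12) ** level:
                    out.append((min(x, y), max(x, y)))
        return
    sample = random.sample(S, min(len(S), 32))
    heavy = {x for x in R
             if sum(hamming(x, y) <= c * r for y in sample) >= len(sample) / 2}
    for x, y in product(heavy, S):                    # join the "heavy" points directly
        if hamming(x, y) <= r:
            out.append((min(x, y), max(x, y)))
    R = [x for x in R if x not in heavy]
    for _ in range(L):
        h = make_bit_sampling_hash(dims, k)
        buckets_r, buckets_s = {}, {}
        for x in R:
            buckets_r.setdefault(h(x), []).append(x)
        for y in S:
            buckets_s.setdefault(h(y), []).append(y)
        for v in buckets_r.keys() & buckets_s.keys():
            osimjoin(buckets_r[v], buckets_s[v], level + 1, r=r, c=c, L=L,
                     depth=depth, mem=mem, dims=dims, k=k, out=out)


if __name__ == "__main__":
    random.seed(0)
    dims = 64
    pts = [tuple(random.randrange(2) for _ in range(dims)) for _ in range(200)]
    out = []
    osimjoin(pts[:100], pts[100:], 0, r=20, c=1.5, L=4, depth=3,
             mem=512, dims=dims, k=6, out=out)
    print(len(out), "near-pair reports")
```

The bit-sampling family is used here only because its collision probability (1 − d/dims)^k can be evaluated exactly, which is what the de-duplication step above requires.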
8
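The simulated cost comparison in the discussion section of the similarity-join paper above can be reproduced with a few lines of arithmetic. The formulas below are our reading of the stripped expressions — a blocked nested-loop join costs about n²/(MB) + n/B I/Os, a sort-based LSH pipeline is lower-bounded by roughly n^ρ repetitions of a 2n/B pass, and the leading term of the recursive algorithm behaves like (n/M)^ρ · n/B — and the parameter values are placeholders rather than the ones used in the paper.

```python
def nested_loop_ios(n, m, b):
    """Blocked nested-loop join: hold ~M points, stream the other set once per block."""
    return n * n / (m * b) + n / b


def lsh_sort_lower_bound_ios(n, b, rho):
    """Sort-based LSH join, assuming roughly n**rho repetitions and a 2n/B pass each."""
    return (n ** rho) * 2 * n / b


def recursive_join_leading_term_ios(n, m, b, rho):
    """Leading term of the cache-oblivious algorithm's bound as read above."""
    return (n / m) ** rho * n / b


if __name__ == "__main__":
    n, m, b = 10 ** 8, 10 ** 6, 1   # placeholder sizes; the paper's own values are not recoverable here
    for rho in (0.3, 0.5):
        print(f"rho={rho}: nested-loop={nested_loop_ios(n, m, b):.2e}  "
              f"LSH>= {lsh_sort_lower_bound_ios(n, b, rho):.2e}  "
              f"recursive~{recursive_join_leading_term_ios(n, m, b, rho):.2e}")
```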
jan completion by derived double centralizer marco porta liran shaul and amnon yekutieli abstract let a be a commutative ring and let a be a weakly proregular ideal in a if a is noetherian then any ideal in it is weakly proregular suppose m is a compact generator of the category of cohomologically complexes we prove that the derived double centralizer of m is isomorphic to the completion of a the proof relies on the mgm equivalence from psy and on derived morita equivalence our result extends earlier work of dgi and efimov ef introduction let a be a commutative ring we denote by d mod a the derived category of given m d mod a we define m exta m homd mod a m m i this is a graded with the yoneda multiplication which we call the ext algebra of m suppose we choose a resolution p m the resulting dg aalgebra b enda p is called a derived endomorphism dg algebra of m it turns out see proposition that the dg algebra b is unique up to quasil i isomorphism and of course its cohomology h b h b is canonically isomorphic to exta m as graded consider the derived category dgmod b of left dg we can view p as an object of dgmod b and thus like in we get the graded aalgebra extb p this is the derived double centralizer algebra of m by corollary the graded algebra extb p is independent of the resolution p m up to isomorphism let a be an ideal in a the functor can be right derived giving a triangulated functor from d mod a to itself a complex m d mod a is called a cohomologically complex if the canonical morphism m m is an isomorphism the full triangulated category on the cohomologically torsion complexes is denoted by d mod a it is known that when a is finitely generated the category d mod a is compactly generated for instance by the koszul complex k a a associated to a finite generating sequence a an of a date december key words and phrases adic completion derived functors derived morita theory mathematics subject classification primary secondary this research was supported by the israel science foundation and the center for advanced studies at bgu marco porta liran shaul and amnon yekutieli b the completion of a this is a commutative let us denote by a b if a is finitely generated then the ring a b is and in it there is the ideal b a a b complete a weakly proregular sequence in a is a finite sequence a of elements of a whose koszul cohomology satisfies certain vanishing conditions see definition this concept was introduced by ajl and schenzel sc an ideal a in a is called weakly proregular if it can be generated by a weakly proregular sequence it is important to note that if a is noetherian then any finite sequence in it is weakly proregular so that any ideal in a is weakly proregular but there are some fairly natural examples see ajl example b and psy example here is our main result repeated as theorem in the body of the paper theorem let a be a commutative ring let a be a weakly proregular ideal in a and let m be a compact generator of d mod a choose a resolution p m and define b enda p then there is a unique isomorphism b of graded extb p a our result extends earlier work of dgi and efimov ef see remark for a discussion let us say a few words on the proof of theorem we use derived morita theory to find an isomorphism of graded algebras between extb p and exta n op where n a the necessary facts about derived morita theory are recalled in section we then use mgm equivalence recalled in section to prove that b b exta n a exta a acknowledgments we wish to thank bernhard keller john greenlees alexander efimov 
maxim kontsevich vladimir hinich and peter for helpful discussions we are also grateful to the anonymous referee for a careful reading of the paper and constructive remarks weak proregularity and mgm equivalence let a be a commutative ring and let a be an ideal in it we do not assume that a is noetherian or complete there are two operations on associated to this data the completion and the for an c m an m its completion is the m m i element m m is called an element if a m for i the elements form the submodule m of m let us denote by mod a the category of so we have additive functors and from mod a to itself the functor is left exact whereas is neither left exact nor right exact an is called complete if the canonical homomorphism m m is bijective some texts would say that m is complete and separated and m is if the canonical homomorphism m m is bijective if the ideal a is finitely generated then the functor is idempotent namely for any module m its completion m is complete there are counterexamples to that for infinitely generated ideals see ye example the derived category of mod a is denoted by d mod a the derived functors d mod a d mod a completion by derived double centralizer exist the left derived functor is constructed using resolutions and the right derived functor is constructed using resolutions this means that for any complex p the canonical morphism p p is an isomorphism and for any complex i the canonical morphism i i is an isomorphism the relationship between the derived functors and was first studied in ajl where the duality was established following the paper gm a complex m d mod a is called a cohomologically complex if r the canonical morphism m m is an isomorphism the complex m is called a cohomologically complete complex if the canonical morl phism m m is an isomorphism we denote by d mod a and d mod a the full subcategories of d mod a consisting of cohomologically atorsion complexes and cohomologically complete complexes respectively these are triangulated subcategories very little can be said about the functors and and about the corresponding triangulated categories d mod a and d mod a in general however we know a lot when the ideal a is weakly proregular before defining weak proregularity we have to talk about koszul complexes recall that for an element a a the koszul complex is k a a a a concentrated in degrees and given a finite sequence a an of elements of a the koszul complex associated to this sequence is k a a k a k a an this is a complex of finitely generated free concentrated in degrees there is a canonical isomorphism of k a a a where a is the ideal generated by the sequence a for any i let ai ain if j i then there is a canonical homomorphism of complexes pj i k a aj k a ai which in corresponds to the surjection aj ai thus for every k z we get an inverse system of k h k a ai with transition homomorphisms hk pj i hk k a aj hk k a ai of course for k the inverse limit equals the a completion of a what turns out to be crucial is the behavior of this inverse system for k for more details please see psy section an inverse system mi of abelian groups with transition maps pj i mj mi is called if for every i there exists j i such that pj i is zero definition let a be a finite sequence in a the sequence a is called a weakly proregular sequence if for every k the inverse system is an ideal a in a is called a a weakly proregular ideal if it is generated by some weakly proregular sequence the etymology of the name weakly proregular sequence and the history of related 
concepts are explained in ajl and sc marco porta liran shaul and amnon yekutieli if a is a regular sequence then it is weakly proregular more important is the following result theorem ajl if a is noetherian then every finite sequence in a is weakly proregular so that every ideal in a is weakly proregular here is another useful fact theorem sc let a be a weakly proregular ideal in a ring a then any finite sequence that generates a is weakly proregular these theorems are repeated with different proofs as psy theorem and psy corollary respectively as the next theorem shows weak proregularity is the correct condition for the derived torsion functor to be suppose a is a finite sequence that generates the ideal a a consider the infinite dual koszul complex i a a lim homa k a a a given a complex m there is a canonical morphism m a a m in d mod a theorem sc the sequence a is weakly proregular iff the morphism is an isomorphism for every m d mod a the following theorem which is psy theorem plays a central role in our work theorem mgm equivalence let a be a commutative ring and a a weakly proregular ideal in it for any m d mod a one has m d mod a and m d mod a the functor d mod a d mod a is an equivalence with remark slightly weaker versions of theorem appeared previously they are ajl theorem and sc theorem the difference is that in these earlier results it was assumed that the ideal a is generated by a sequence an that is weakly proregular and moreover each ai has bounded torsion this extra condition certainly holds when a is noetherian for the sake of convenience in the present paper we quote psy regarding derived completion and torsion it is tacitly understood that in the noetherian case the results of ajl and sc suffice the derived double centralizer in this section we define the derived double centralizer of a dg module see remarks and for a discussion of this concept l and related literature let k be a commutative ring and let a ai be a dg associative and unital but not necessarily commutative given left dg m completion by derived double centralizer and n we denote by homia m n the of homomorphisms of degree i we get a dg m homa m n homia m n with the usual differential the object enda m homa m m is a dg since the left actions of a and enda m on m commute we see that m is a left dg module over the dg algebra a enda m the category of left dg is denoted by dgmod a the set of morphisms homdgmod a m n is precisely the set of in the dg homa m n note that dgmod a is a abelian category let dgmod a be the homotopy category of dgmod a so that dgmod a m n homa m n the derived category dgmod a is gotten by inverting the in dgmod a the categories dgmod a and dgmod a are and triangulated if a happens to be a ring ai for i then dgmod a c mod a the category of complexes in mod a and dgmod a d mod a the usual derived category for m n dgmod a we define extia m n dgmod a m n i and exta m n m extia m n definition let a be a dg and m dgmod a define exta m exta m m this is a graded with the yoneda multiplication composition of morphisms in dgmod a we call exta m the ext algebra of m there is a canonical homomorphism of graded h enda m exta m if m is either or then this homomorphism is bijective definition let a be a dg and m a dg choose a resolution p m in dgmod a the dg b enda p is called a derived endomorphism dg algebra of m note that there are isomorphisms of graded h b exta p exta m the dependence of the derived endomorphism dg algebra b enda p on the resolution p m is explained in the next proposition 
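Before that proposition, it may help to write the definitions just made in display form; the symbols below (the derived category of DG modules, the shift, the qualifier "semi-free" on the resolution) reconstruct notation that is stripped in the text above.

```latex
% Ext groups and the Ext algebra of a DG module M over the DG algebra A:
\operatorname{Ext}^{i}_{A}(M,N) \;=\; \operatorname{Hom}_{\tilde{\mathsf{D}}(\mathrm{DGMod}\,A)}\bigl(M,\, N[i]\bigr),
\qquad
\operatorname{Ext}_{A}(M,N) \;=\; \bigoplus_{i\in\mathbb{Z}} \operatorname{Ext}^{i}_{A}(M,N),
\qquad
\operatorname{Ext}_{A}(M) \;=\; \operatorname{Ext}_{A}(M,M).

% Derived endomorphism DG algebra: choose a semi-free resolution P \to M and set
B \;=\; \operatorname{End}_{A}(P) \;=\; \operatorname{Hom}_{A}(P,P),
\qquad
\mathrm{H}(B) \;\cong\; \operatorname{Ext}_{A}(P) \;\cong\; \operatorname{Ext}_{A}(M).
```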
proposition let m be a dg and let p m and p m be resolutions in dgmod a define b enda p and b enda p then there is a dg b and a dg b p with dg b b and b b and dg b p p and p p marco porta liran shaul and amnon yekutieli proof choose a p p in dgmod a lifting the given quasiisomorphisms to m this can be done of course let l cone h p dgmod a the mapping cone of so as graded l p p p h i d and the differential is dl dp where is viewed as a degree homomorphism p p of course l is an acyclic dg module takeh q p p and let b be the triangular matrix graded algebra b b q with the obvious matrix multiplication this makes sense because there is a canonical isomorphism of dg algebras b enda p note that b is a subalgebra of enda l we make b into a dg algebra with differential db denda l the projections b b and b b on the diagonal entries are dg algebra because their kernels are the acyclic complexes homa p l and homa l p respectively now under the restriction functor dgmod b dgmod b we have p and likewise p p consider the exact sequence i h l p l in in dgmod b there is an induced distinguished triangle dgmod b but l is acyclic so is an isomorphism finally let us choose a resolution p in dgmod b then induces a p in dgmod b corollary in the situation of proposition there is an isomorphism of graded extb p extb p proof since b b is a of dg algebras it follows that the restriction functor dgmod b dgmod b is an equivalence of triangulated categories therefore we get an induced isomorphism of graded algebras extb p extb p similarly we get a graded isomorphism extb p extb p definition let m be a dg and let p m be a resolution in dgmod a the graded extb p is called a derived double centralizer of m remark the uniqueness of the graded extb p provided by corollary is sufficient for the purposes of this paper see theorem it is possible to show by a more detailed calculation that the isomorphism provided by corollary is in fact canonical it does not depend on the choices made in the proof of proposition the let us choose a resolution q p in dgmod b and define the dg algebra c endb q then c should be called a double endomorphism dg algebra of m of course h c extb p as graded algebras there should be a canonical dg algebra homomorphism a we tried to work out a comprehensive treatment of derived endomorphism algebras and their iterates using the methods and did not get very far hence it is not included in the paper we expect that a full treatment is only possible in terms of completion by derived double centralizer remark derived endomorphism dg algebras and the double derived ones were treated in several earlier the papers including dgi jo and ef these papers do not mention any uniqueness properties of these dg algebras indeed as far as we can tell they just pick a convenient resolution p m and work with the dg algebra enda p cf subsection of dgi where this issue is briefly discussed the most detailed treatment of derived endomorphism dg algebras that we know is in keller s paper ke in ke section the concept of a lift of a dg module is introduced the pair b p from definition is called a standard lift in ke it is proved that lifts are unique up to this is basically what is done in our proposition but there is no statement regarding uniqueness of these also there is no discussion of derived double centralizers supplement on derived morita equivalence derived morita theory goes back to rickard s paper ri which dealt with rings and tilting complexes further generalizations can be found in ke bv jo for our purposes in 
section we need to know certain precise details about derived morita equivalence in the case of dg algebras and compact generators specifically formula for the functor f appearing in theorem and hence we give the full proof here let e be a triangulated category with infinite direct sums recall that an object m e is called compact or small if for any collection nz of objects of e the canonical homomorphism m m nz home m nz home m is bijective the object m is called generator of e if for any nonzero object n e there is some i z such that home m n i as in section we consider a commutative ring k and a dg a the next lemma seems to be known but we could not find a reference lemma let e be a triangulated category with infinite direct sums let f g dgmod a e be triangulated functors that commute with infinite direct sums and let f g be a morphism of triangulated functors assume that f a g a is an isomorphism then is an isomorphism in proof suppose we are given a distinguished triangle m m m dgmod a such that two of the three morphisms and are isomorphisms then the third is also an isomorphism since both functors f g commute with shifts and direct sums and since is an isomorphism it follows that is an isomorphism for any free dg s next consider a dg module p choose any z zj of p this gives rise to an exhaustive ascending filtration pj of p by dg submodules with for every j we have a distinguished triangle pj pj in dgmod a where pj is the inclusion since pj is a free dg module by induction we conclude that is an isomorphism for every j the marco porta liran shaul and amnon yekutieli telescope construction see bn remark gives a distinguished triangle m m pj pj p with pj this shows that is an isomorphism finally any dg module m admits a p m with p semifree therefore is an isomorphism let e be a be a full triangulated subcategory of dgmod a which is closed under infinite direct sums and let m fix a resolution p m in dgmod a and let b enda p so b is a derived endomorphism dg algebra of m definition since p dgmod a b there is a triangulated functor g dgmod b op dgmod a g n n p which is calculated by resolutions in dgmod b op warning p is usually not over b the functor g commutes with infinite direct sums and g b p m in dgmod a therefore g n e for every n dgmod b op because p is over a there is a triangulated functor f dgmod a dgmod b op f l homa p l we have f m b in dgmod b op f p lemma the functor f e dgmod b op commutes with infinite direct sums if and only if m is a compact object of proof we know that dgmod a m l j hj rhoma m l hj f l functorially for l dgmod a so m is compact relative to e if and only if the functors hj f commute with direct sums in but that is the same as asking f to commute with direct sums in theorem let a be a dg let e be a be a full triangulated subcategory of dgmod a which is closed under infinite direct sums and let m be a compact generator of choose a resolution p m in dgmod a and define b enda p then the functor f e dgmod b op from is an equivalence of triangulated categories with the functor g from proof let us write d a dgmod a etc we begin by proving that the functors f and g are adjoints take any l d a and n d b op we have to construct a bijection homd a g n l homd b op n f l completion by derived double centralizer which is bifunctorial choose a resolution q n in dgmod b op since the dg p is we have a sequence of isomorphisms of homd a g n l rhoma g n l homb op q homa p l homa q p l rhomb op n f l homd b op n f l the only choice made was in the resolution q n so all is 
bifunctorial the corresponding morphisms f g and g f are denoted by and respectively next we will prove that g is fully faithful we do this by showing that for every n the morphism n f n in d b op is an isomorphism we know that g factors via the full subcategory e d a and therefore using lemma we know that the functor f g commutes with infinite direct sums so by lemma it suffices to check for n b but in this case is the canonical homomorphism of dg b op b homa p b p which is clearly bijective it remains to prove that the essential image of the functor g is take any l e and consider the distinguished triangle g f l l in e in which l e is the mapping cone of applying f and using we get a l distinguished triangle f l f l f therefore f but rhoma m l f l and therefore homd a m i for every i since m is a generator of e we get hence is an isomorphism and so l is in the essential image of the main theorem this is our interpretation of the completion appearing in efimov s recent paper ef that is attributed to kontsevich cf remark below for a comparison to ef and to similar results in recent literature here is the setup for this section a is a commutative ring and a is a weakly proregular ideal in a we do not assume that b a the completion of a is noetherian nor complete let a b b a and let b a a a which is an ideal of a since the ideal a is finitely generated b is complete and hence as a ring a b is it follows that the a b complete the full subcategory d mod a d mod a is triangulated and closed under infinite direct sums the results of sections and are invoked with k a recall the koszul complex k a a associated to a finite sequence a in a see section it is a bounded complex of free and hence it is a dg the next result was proved by several authors see bn proposition ln corollary ii and ro proposition proposition let a be a finite sequence that generates a then the koszul complex k a a is a compact generator of d mod a of course there are other compact generators of d mod a theorem let a be a commutative ring let a be a weakly proregular ideal in a and let m be a compact generator of d mod a choose some resolution p m in c mod a and let b enda p then extib p for b all i and there is a unique isomorphism of p a marco porta liran shaul and amnon yekutieli recall that the dg b is a derived endomorphism dg algebra of m definition and the graded extb p is a derived double centralizer of m definition we need a few lemmas before proving the theorem lemma let m be a compact object of d mod a then m is also compact in d mod a so it is a perfect complex of proof choose a finite sequence a that generates a by psy corollary there is an isomorphism of functors a a where a a is the infinite dual koszul complex therefore the functor commutes with infinite direct sums let n d mod a and consider the function r hom homd mod a m n homd mod a m n given a morphism m n in d mod a define r m n since the functor is idempotent theorem the function is an r inverse to hom so the latter is bijective let nz be a collection of objects of d mod a due to the fact that m is a compact object of d mod a and to the observations above we get isomorphisms m m homd mod a m nz homd mod a m nz z homd mod a m z m nz nz homd mod a m z z m m nz homd mod a m z we see that m is also compact in d mod a consider the contravariant functor d d mod b d mod b op defined by choosing an injective resolution a i over a and letting d homa i lemma the functor d induces a duality a contravariant equivalence between the full subcategory of d mod b 
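Written out, the theorem just stated is the following; the vanishing range and the target of the isomorphism, stripped above, are restored from the abstract and from the proof given below.

```latex
% Main theorem: A commutative, \mathfrak{a} \subseteq A weakly proregular,
% M a compact generator of D(Mod A)_{\mathfrak{a}\text{-}\mathrm{tor}},
% P \to M a resolution, B := End_A(P).
\operatorname{Ext}^{i}_{B}(P) \;=\; 0 \quad \text{for all } i \neq 0,
\qquad
\operatorname{Ext}^{0}_{B}(P) \;\cong\; \widehat{A}
\quad \text{(a unique isomorphism of } A\text{-algebras),}
```

where \widehat{A} denotes the \mathfrak{a}-adic completion of A.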
consisting of objects perfect over a and the full subcategory of d mod b op consisting of objects perfect over a proof take m d mod b which is perfect over a it is enough to show that the canonical homomorphism of dg m d d m homa homa m i i is a for this we can forget the structure and just view this as a homomorphism of dg choose a resolution p m where p is a bounded complex of finitely generated projective we can replace m with p in equation and after that we can replace i with a now it is clear that this is a c lemma let m and n be complexes of we write m b m and n n completion by derived double centralizer c and l m c m c are isomor the morphisms m m b m phisms the homomorphism c n b homd mod a m n b hom homd mod a m is bijective proof the morphism is an isomorphism by psy proposition by theorem the complex m is cohomologically complete and therefore c is also cohomologically complete but this means that l is an isomorphism m b m b take a morphism m n in d mod a by part we know that and l are isomorphisms so we can define b n c m lb n the function is an inverse to hom proof of theorem we shall calculate extb p indirectly by lemma we know that m and hence also p is perfect over a so according to lemma there is an isomorphism of graded extb op d p op extb p next we note that d p homa p i homa p a f a in dgmod b op here f is the functor from therefore we get an isomorphism of graded extb op d p extb op f a let n a d mod a we claim that f a f n in dgmod b op r to see this we first note that the canonical morphism n a in d mod a can be represented by an actual dg module homomorphism n a say by replacing n with a resolution of it consider the induced homomorphism homa p n homa p a of dg b op like in the proof of lemma it suffices to show that this is a of dg this is true since by gm duality psy theorem the canonical morphism r rhom rhoma m n rhoma m a in d mod a is an isomorphism we conclude that there is a graded isomorphism extb op f a extb op f n take e d mod a in theorem since f e dgmod b op is an equivalence and e is full in d mod a we see that f induces an isomorphism of graded exta n extb op f n b in the next step is to use the mgm equivalence we know that n d mod a and the functor induces an isomorphism of graded b exta n exta a marco porta liran shaul and amnon yekutieli b by lemma the homoit remains to analyze the graded exta a morphism b a i b b hom homd mod a a rhomd mod a a a i b for i and the is bijective for every i therefore extia a b b homomorphism a exta a is bijective combining all the steps above we see that extib p for i and there is bop but a b is commutative so a bop b an isomorphism p b regarding the uniqueness since the image of the ring homomorphism a a b is dense and a is b complete it follows that the only automorb is the identity therefore the isomorphism p b phism of a a that we produced is unique remark to explain how surprising this theorem is take the case p m k a a the koszul complex associated to a sequence a an that generates the ideal a as a free forgetting the grading and the differential we have p a the grading of p depends on n only it is an exterior algebra the differential of p is the only place where the sequence a enters similarly the dg algebra b enda p is a graded matrix algebra over a of size the differential of b is where a is expressed forgetting the differentials working with the graded p classical morita theory tells us that endb p a as graded furthermore p is a projective b so we even have extb p a however the theorem tells us that for the 
structure of p we have b thus we get a transcendental outcome the completion a b by extb p a homological operation with finite input basically finite linear algebra over a together with a differential remark our motivation to work on completion by derived double centralizer came from looking at the recent paper ef by efimov the main result of ef is theorem about the completion of the category d qcoh x of a noetherian scheme x along a closed subscheme y this idea is attributed to kontsevich corollary of ef is a special case of our theorem it has the extra assumptions that the ring a is noetherian and regular it has finite global cohomological dimension after writing the first version of our paper we learned that a similar result was proved by dgi in that paper the authors continue the work of dg on derived completion and torsion their main result is theorem which is a combination of mgm equivalence and derived morita equivalence in an abstract setup that includes algebra and topology the manifestation of this main result in commutative algebra is dgi proposition that is also a special case of our theorem the ring a is noetherian and the quotient ring is regular recall that our theorem only requires the ideal a to be weakly proregular and there is no regularity condition on the rings a and the word regular has a double meaning here it is quite possible that the methods of dgi or ef can be pushed further to remove the regularity conditions from the rings a and completion by derived double centralizer however it is less likely that these methods can handle the case assuming only that the ideal a is weakly proregular references ajl bn bv dg dgi ef gm jo ke ln psy ri ro sc ye alonso jeremias and lipman local homology and cohomology on schemes ann sci ens correction availabe online at http bokstedt and neeman homotopy limits in triangulated categories compositio math bondal and van den bergh generators and representability of functors in commutative and noncommutative geometry moscow math j dwyer and greenless complete modules and torsion modules american j math no dwyer greenlees and iyengar duality in algebra and topology advances math efimov formal completion of a category along a subcategory eprint at http greenlees and may derived functors of completion and local homology algebra recollement for differential graded algebras algebra keller deriving dg categories ann sci ecole norm sup lipman and neeman and boundedness of the twisted inverse image functor illinois j math number porta shaul and yekutieli on the homology of completion and torsion to appear in algebras and representation theory rickard derived equivalences as derived functors london math soc rouquier dimensions of triangulated categories journal of schenzel proregular sequences local cohomology and completion math scand yekutieli on flatness and completion for infinitely generated modules over noetherian rings comm algebra issue department of mathematics ben gurion university be er sheva israel address porta shaul shlir yekutieli amyekut
0
ndt neual decision tree towards fully functioned neural graph han xiao dec abstract though traditional algorithms could be embedded into neural architectures with the proposed principle of xiao the variables that only occur in the condition of branch could not be updated as a special case to tackle this issue we multiply the conditioned branches with dirac symbol then approximate dirac symbol with the continuous functions in this way the gradients of variables could be worked out in the process approximately making a fully functioned neural graph within our novel principle we propose the neural decision tree ndt which takes simplified neural networks as decision function in each branch and employs complex neural networks to generate the output in each leaf extensive experiments verify our theoretical analysis and demonstrate the effectiveness of our model introduction inspired by brain science neural architectures have been proposed in mcculloch pitts this branch of artificial intelligence develops from single perception casper et to deep complex network lecun et achieving several critical successes such as alphago silver et notably all the operators matrix multiply function convolution etc in traditional neural networks are numerical and continuous which could benefit from algorithm rumelhart et recently methods hungarian algorithm algorithm searching are embedded into neural architectures in a dynamically manner opening a new chapter for intelligence system xiao state key laboratory of intelligent technology and systems national laboratory for information science and technology department of computer science and technology tsinghua university beijing pr china correspondence to han xiao proceedings of the th international conference on machine learning stockholm sweden pmlr copyright by the author s generally neural graph is defined as the intelligence architecture which is characterized by both logics and neurons with this proposed principle from the seminal work we attempt to tackle image classification specifically regarding this task the overfull categories make too much burden for classifiers which is a normal issue for datasets such as imagenet deng et we conjecture that it would make effects to roughly classify the samples with decision tree then category the corresponding samples with strong neural network in each leaf because in each leaf there are much fewer categories to predict the attribute split in traditional decision trees random forest etc is oversimplified for precise zhou feng thus we propose the method of neural decision tree ndt which applies neural network as decision function to strengthen the performance regarding the calculus procedure of ndt the basic principle is to treat the logic flow if for while in the sense of programming language as a dynamic process which is illustrated in figure this figure demonstrates the classification of four categories sun moon car and pen where an if structure is employed to split the samples into two branches where the fully connected networks generate the results respectively in the forward propagation our methodology activates some branch according to the condition of if then dynamically constructs the graph according to the instructions in the activated branch in this way the calculus graph is constructed as a and continuous structure where backward propagation could be performed conventionally demonstrated in figure b generally we should note that the repeat for while could be treated as performing if in multiple times which 
could also be tackled by the proposed principle thus all the traditional algorithms could be embedded into neural architectures for more details please refer to xiao however as a special challenge of this paper the variables that are only introduced in the condition of branch could not be updated in the backward propagation because they are outside the dynamically constructed graph for the example of w in figure thus to make a completely functioned neural graph this paper attempts to tackle this issue in an ndt neual decision tree towards fully functioned neural graph cantly which illustrates the effectiveness of our methodology the most important conclusion is that our model is differentiable which verifies our theory and provides the novel methodology of fully functioned neural graph contributions we complete the principle of neural graph which characterizes the intelligence systems with both logics and neurons also we provide the proof that neural graph is turing complete which makes a learnable turing machine for the theory of computation to tackle the issue of overfull categories we propose the method of neural decision tree ndt which takes simplified neural networks as decision function in each branch and employs complex neural networks to generate the output in each leaf our model outperforms other baselines extensively verifying the effectiveness of our theory and method organization in the section our methodology and neural architecture are discussed in the section we specific the implementation of fully functioned neural graph in detail in the section we provide the proof that neural graph is turing complete in the section we conduct the experiments for performance and verification in the section we briefly introduce the related work in section we list the potential future work from a developing perspective finally in the section we conclude our paper and publish our codes figure illustration of how logic flow is processed in our methodology referring to b we process the structure of a in a dynamically manner theoretically we construct the graph according to the active branch in the forward propagation when the forward propagation has constructed the graph according to logic instructions the backward propagation would be performed as usual in a continuous and graph practically the dynamically constructed process corresponds to batch operations the samples with id are activated in if branch while those with id are tackled in else branch after the end of instruction the hidden representations are joined as the classified results proximated manner simply we multiply the symbols inside the branch with dirac function or specifically regarding figure we reform f cn etwork img as f cn etwork img in the if branch and perform the corresponding transformation in the else branch where is multiplication the forward propagation would not be modified by the reformulation while as to the backward process we approximate the dirac symbol with a continuous function to work out the gradients of condition which solves this issue it is noted that in this paper the continuous function is we conduct our experiments on public benchmark datasets mnist and cifar experimental results illustrate that our model outperforms other baselines extensively and methodology first we introduce the overview of our model then we discuss each component specifically last we discuss our model from the ensemble perspective architecture our architecture is illustrated in figure which is composed by three customized 
components namely feature condition and target network firstly the input is transformed by feature network and then the hidden features are classified by decision tree component composed by hierarchal condition networks secondly the target networks predict the categories for each sample in each leaf finally the targets are joined to work out the cross entropy objective the process is exemplified in algorithm feature network to extract the abstract features with deep neural structures we introduce the feature network which is often a stacked cnn and lstm condition network to exactly each sample we employ a simplified neural network as condition network which is usually a or with the function of tanh this layer is only applied in the inner nodes of decision tree actually the effectiveness of traditional decision tree stems from the information gain ndt neual decision tree towards fully functioned neural graph t plef j pright j pntotal j li j pntotal pntotal j li j pntotal where cn is short for condition network and li j is the adhoc label vector of sample where the true label position is and otherwise by simple computations we have pntotal t li j ln plef j ntotal pntotal li j ln pright j ntotal where ig is short for inf ogain as discussed in introduction we approximate the dirac symbol as a continuous function specifically as thus the gradient of condition network could be deducted as s cn s cn figure the neural architecture of ndt depth the input is classified by decision tree component with the condition networks then the target networks predict the categories for each sample in each leaf notably the tree component takes advantages of subbatch technique while the targets are joined in batch to compute the cross entropy objective splitting rules which could not be learned by condition networks directly thus we involve an objective item for each decision node to maximize the information gain as max inf ogain nright x right p ln pright j ntotal j where n is the corresponding count is the feature number and p is the corresponding probabilistic distribution of features regarding the derivatives relative to dirac symbol we firstly reformulate the information gain in the form of dirac symbol as nlef t nright nx total actually all the reduction could be performed automatically within the proposed principle that to multiply the symbols inside the branch with dirac function pntotal specifically as an example of the count nlef t target network to finally predict the category of each sample we apply a complex network as the target network which often is a stacked convolution one for image or an lstm for sentence analysis from ensemble perspective nlef t x lef t t p ln plef j ntotal j nx total where s is the sign function ndt could be treated as an ensemble model which ensembles many target networks with the hard branching condition networks currently there exist two branches of ensemble methods namely split by features or split by samples both of which increases the difficulty of single classifier however ndt splits the data by categories which means single classifier deals with a simpler task the key point is the split purity of condition networks because the branching reduces the sample numbers for each leaf relatively to single classifier if our model keeps the sample number per category ndt could make more effects for an example of one leaf the sample number reduces to while the category number reduces from to with similarly sufficient samples our model deals with which is much easier than thus 
our model benefits from the strengthen of single classifier ndt neual decision tree towards fully functioned neural graph algorithm neural decision tree ndt to implement logic component we propose two batch operations namely and operation take the example of figure c to begin there are five samples in the batch in the forward pass once processing in the branch according to the condition the batch is split into two each of which is respectively tackled by the instructions in the corresponding branch simultaneously after processed by two branches the are joined into one batch according to the original order in the backward propagation the gradients of joined batch are split into two parts which correspond to two when the process has propagated through two branches the gradients of two are joined again to form the gradients of stacked cnn theoretically a sample in some means the corresponding branch is activated for this sample and the other branch is deactivated on the other word the hidden representations of this sample connect to the activated branch rather than the deactivated one thus the logic components perform our proposed principle in the manner of and operation notably if there is no variable that is only introduced in the condition it is unnecessary to update the condition which makes corresponding neural graph an exact method dynamical graph construction previously introduced neural graph is the intelligence architecture which is characterized by both logics and neurons mathematically the component of neurons are continuous functions such as matrix multiply hyperbolic tangent tanh convolution layer etc which could be implemented as mathematical operations obviously simple principal implementation for mode is easy and direct but practically all the latest training methods take the advantages of batched mode hence we focus on the batched implementation of neural graph in this section conventionally neural graph is composed by two styles of variable namely symbols such as w in figure and atomic types such as the integer d in algorithm line in essence symbolic variables originate from the weights between neurons while the atomic types are introduced by the embedded traditional algorithms therefore regarding the component of logics there exist two styles and logic components which are differentiated in implementation symbolspecific logics indicates the condition involves the symbols such as line in algorithm while logics means there are only atomic types in the condition such as line in algorithm however our proposed principle that dynamically constructing neural graph could process both the situations to implement logic component we propose a more flexible batch operation namely allocatebatch take the example of hungarian layer xiao the hungarian algorithm deals with the similarity matrix to provide the alignment information according to which the dynamic links between symbols are dynamically allocated shown in figure of xiao thus the forward and backward propagation could be performed in a continuous calculus graph simply in the forward pass we record the allocated dynamic links of each sample in the batch while in the backward pass we propagate the gradients along these dynamic links obviously the logic components perform our proposed principle in the manner of operation the traditional algorithms are a combination of branch if and repeat for while repeat could be treated as performing branch in multiple times thus the three batch operations namely and operation could process all 
the traditional algorithms such as resolution method searching label propagation pca bandit mab adaboost neural graph is turing complete actually if neural graph could simulate the turing machine it is turing complete turing machine is composed by four parts a tape head state register and a finite table of instructions correspondingly ndt neual decision tree towards fully functioned neural graph symbols are based on tensor arrays which simulate the celldivided tape process indicate where to variables record the state last the logic flow if while for constructs the finite instruction table in summary neural graph is turing complete specifically neural graph is a learnable turing machine rather than a static one learnable turing machine could adjust the according to data and environment traditional computation models focus on static algorithms while neural graph takes advantages of data and perception to strengthen the rationality of behaviors experiment in the section we verify our model on two datasets mnist lecun et and cifar krizhevsky we first introduce the experimental settings in section then in section we conduct performance experiments to testify our model last in section to further verify our theoretical analysis that ndt could reduce the category number of leaf nodes we perform a case study to justify our assumption experimental setting there exist three customized networks in our model that the feature condition and target network we simply apply identify mapping as feature network regarding the condition network we apply a fully connected perceptions with the for mnist and for cifar regarding the target network we also employ a fully connected perceptions with the for mnist for and for to train the model we leverage adadelta zeiler as our optimizer with as moment factor and we train the model until convergence but at most rounds regarding the batch size we always choose the largest one to fully utilize the computing devices notably the of approximated continuous function is performance verification mnist the mnist dataset lecun et is a classic benchmark dataset which consists of handwritten digit images x pixels in size organized into classes to with training and test samples we select some representative and competitive baselines modern we know the feature and target network are too oversimplified for this task but this version targets at an exemplified model which still could verify our conclusions we will perform a complex feature and target network in the version table performance evaluation on mnist dataset methods single target network cnn deep belief net svm rbf kernel random forest accuracy ndt depth architecture with dropout and relus classic linear classifier svm with rbf kernel deep belief nets and a standard random forest with trees we could observe that ndt will beat all the baselines verifying our theory and justifying the effectiveness of our model compared to single target network ndt promotes point which illustrates the ensemble of target network is effective compared to random forest that is also a method ndt promotes point which demonstrates the neurons indeed strengthen the decision trees cifar the dataset krizhevsky is also a classic benchmark for overfull category classification which consists of color natural images x pixels in size from classes with training and test images several representative baselines are selected as network in network nin lin et fitnets rao et deep supervised network dsn lee et srivastava et springenberg et exponential linear units 
elu clevert et fitresnets mishkin matas gcforest zhou feng and deep resnet he et we could conclude that ndt will beat all the strong baselines which verifies the effectiveness of neural decision trees and justifies the theoretical analysis compared to single target network ndt promotes point which illustrates the ensemble of target network is effective compared with gcforest the performance improves points which illustrates that neurons empower the decision trees more effectively than direct ensembles compared with resnet that is the strongest baseline we promote the results over points which justifies ndt neual decision tree towards fully functioned neural graph figure case study for ndt in mnist with depth a and depth b the left tables are the test sample numbers that correspond to leaf node and category for example the sliced means there are test samples of category in leaf node a we slice the main component of a leaf and draw the corresponding decision trees in the right panel notably x indicates the empty class table performance evaluation error on cifar methods nin dsn fitnets elu fitresnets resnet gcforest random forest single target network ndt depth our assumption that ndt could reduce the category number of leaf nodes to enhance the intelligence systems case study to further testify our assumption that ndt could reduce the category number of leaf nodes we perform a case study in mnist we make a statistics of test samples for each leaf node illustrated in figure the item of table means leaf node has how many samples in category for example the in the first row and second column means that there are test samples of category are into leaf node a correspondingly we draw the decision trees in the right panel with labeled categories which specifically illustrates the decision process of ndt for a complete verification we vary the depth of ndt with and firstly we could clearly draw the conclusion from figure that each leaf node needs to predict less categories which justifies our assumption for example in the bottom figure the node a only needs to predict the category which is a single classification and the node h only needs to predict the categories which is a four classification because small classification is less difficult than large one our target network in the leaf could perform better which leads to performance promotion in a manner secondly from figure split purity could be worked out generally the tanh achieves a decent split purity indeed the most difficult leaf nodes d in the top and h in the bottom are not perfect but others gain a competitive split purity statistically the main component or the sliced grid takes share of total samples which in a large probability ndt would perform better than accuracy in this case ndt neual decision tree towards fully functioned neural graph finally we discuss the depth from the top to the bottom of figure the categories are further split for example the node b in the top is split into c and d in the bottom which means that the category and are further in this way deep neural decision tree is advantageous but much deeper ndt makes less sense because the categories have been already split well there would be mostly no difference for or however considering the efficiency and consuming resources we suggest to apply a suitable depth or theoretically about where is the total category number related work in this section we briefly introduce three lines of related work image recognition decision tree and neural graph convolution layer is 
necessary in current neural architectures for image recognition almost every model is a convolutional model with different configurations and layers such as springenberg et and dsn lee et empirically deeper network produces better accuracy but it is difficult to train much deeper network for the issue of gradients glorot bengio recently there emerge two ways to tackle this problem srivastava et and residual network he et inspired by lstm network applies and for each layer which allow information to flow across layers along the computation path without attenuation for a more direct manner residual network simply employs identity mappings to connect relatively top and bottom layers which propagates the gradients more effectively to the whole network notably achieving the performance residual network resnet is the strongest model for image recognition temporarily decision tree is a classic paradigm of artificial intelligence while random forest is the representative methodology of this branch during recent years completely random tree forest has been proposed such as iforest liu et for anomaly detection however with the popularity of deep neural network lots of researches focus on the fusion between neurons and random forest for example richmond et converts cascaded random forests to convolutional neural network welbl leverages random forests to initialize neural network specially as the model gcforest zhou feng allocates a very deep architecture for forests which is experimentally verified on several tasks notably all of this branch could not jointly train the neurons and decision trees which is the main disadvantage to jointly fuse neurons and logics xiao proposes the basic principle of neural graph which could embed traditional algorithms into neural architectures the seminal paper merges the hungarian algorithm with neurons as hungarian layer which could effectively recognize sentence pairs however as a special case the variables only introduced in the condition could not be updated which is a disadvantage for characterizing complex systems thus this paper focuses on this issue to make a fully functioned neural graph future work we list three lines of future work design new components of neural graph implement a script language for neural graph and analyze the theoretical properties of learnable turing machine this paper exemplifies an approach to embed decision tree into neural architectures actually many traditional algorithms could promote intelligence system with neurons for example neural searching could learn the heuristic rules from data which could be more effective and less resource consuming for a further example we could represent the data with deep neural networks and conduct label propagation upon the hidden representations where the propagation graph is constructed by method because the label propagation and deep neural networks are trained jointly the performance promotion could be expected in fact a fully functioned neural graph may be extremely hard and complex to implement thus we expect to publish a script language for modeling neural graph and also a library that includes all the mainstream intelligence methods based on these instruments neural graph could be more convenient for practical usage finally as we discussed neural graph is turing complete making a learnable turing machine we believe theoretical analysis is necessary for compilation and ability of neural graph take an example do the learnable and static turing machine have the same ability take a further 
example could our brain excel turing machine if not some excellent neural graphs may gain advantages over biological brain because both of them are learnable turing machines if it could the theoretical foundations of intelligence should be reformed take the final example what is the best computation model for intelligence conclusion this paper proposes the principle of fully functioned neural graph based on this principle we design the neural decision tree ndt for image recognition experimental results on benchmark datasets demonstrate the effectiveness of our proposed method ndt neual decision tree towards fully functioned neural graph references casper m mengel m fuhrmann c herrmann e appenrodt b schiedermaier p reichert m bruns t engelmann c and grnhage perceptrons an introduction to computational geometry clevert djorkarn unterthiner thomas and hochreiter sepp fast and accurate deep network learning by exponential linear units elus computer science deng jia dong wei socher richard li li kai and li imagenet a hierarchical image database in computer vision and pattern recognition cvpr ieee conference on pp ieee glorot xavier and bengio yoshua understanding the difficulty of training deep feedforward neural networks in proceedings of the thirteenth international conference on artificial intelligence and statistics pp he kaiming zhang xiangyu ren shaoqing and sun jian deep residual learning for image recognition pp he kaiming zhang xiangyu ren shaoqing and sun jian deep residual learning for image recognition in computer vision and pattern recognition pp krizhevsky alex learning multiple layers of features from tiny images lecun bottou bengio and haffner gradientbased learning applied to document recognition proceedings of the ieee lecun yann bengio yoshua and hinton geoffrey deep learning nature lee chen yu xie saining gallagher patrick zhang zhengyou and tu zhuowen nets eprint arxiv pp lin min chen qiang and yan shuicheng network in network computer science liu fei tony kai ming ting and zhou zhi hua isolation forest in eighth ieee international conference on data mining pp mcculloch warren s and pitts walter a logical calculus of ideas imminent in nervous activity mishkin dmytro and matas jiri all you need is a good init rao jinfeng he hua and lin jimmy estimation for answer selection with deep neural networks in acm international on conference on information and knowledge management pp richmond david kainmueller dagmar yang michael myers eugene and rother carsten relating cascaded random forests to deep convolutional neural networks for semantic segmentation computer science rumelhart hinton and williams j learning internal representations by error propagation mit press silver david huang aja maddison chris guez arthur sifre laurent driessche george van den schrittwieser julian antonoglou ioannis panneershelvam veda and lanctot marc mastering the game of go with deep neural networks and tree search nature springenberg jost tobias dosovitskiy alexey brox thomas and riedmiller martin striving for simplicity the all convolutional net eprint arxiv srivastava rupesh kumar greff klaus and schmidhuber jrgen training very deep networks computer science welbl johannes casting random forests as artificial neural networks and profiting from it in german conference on pattern recognition pp springer xiao han hungarian layer logics empowered neural architecture arxiv preprint zeiler matthew adadelta an adaptive learning rate method computer science zhou zhi hua and feng ji deep forest towards an 
alternative to deep neural networks
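The branching trick described in the row above is easiest to see in code. The sketch below is a minimal, illustrative reading of it, not the authors' implementation: a single decision node whose hard if/else is replaced by multiplying each branch's output with a smoothed indicator (a sigmoid standing in for the Dirac symbol), so that gradients reach the condition network's weights. All names, dimensions, and the use of plain NumPy are assumptions made for the example; per the text, the paper's forward pass splits the batch hard per branch and uses the continuous approximation only when computing gradients.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d, h, k = 8, 16, 4                           # input dim, hidden width, classes (illustrative)

w_cond = rng.normal(size=d) * 0.1            # condition network: a single linear unit
W1 = {b: rng.normal(size=(d, h)) * 0.1 for b in ("left", "right")}
W2 = {b: rng.normal(size=(h, k)) * 0.1 for b in ("left", "right")}

def leaf_logits(x, branch):
    """Target network of one leaf: a small one-hidden-layer classifier."""
    return np.tanh(x @ W1[branch]) @ W2[branch]

def forward(x):
    """Soft-gated tree: g approximates the 0/1 branch indicator (the 'Dirac symbol')."""
    g = sigmoid(x @ w_cond)                  # close to 1 -> left branch, close to 0 -> right branch
    return g[:, None] * leaf_logits(x, "left") + (1.0 - g)[:, None] * leaf_logits(x, "right")

x = rng.normal(size=(5, d))
print(forward(x).shape)                      # (5, 4): per-sample class logits
```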
9
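The decision-node objective described in the preceding text (maximize information gain, with the left/right counts written via the Dirac symbol so they can be smoothed) can likewise be made concrete. The extracted text garbles the exact normalizations, so the snippet below simply assumes the standard entropy-based gain computed from soft counts; it is an illustration of the idea, not the paper's exact formula.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps))

def soft_information_gain(cond_scores, labels_onehot):
    """Information gain of one decision node, with the hard left/right counts
    replaced by sums of the smoothed indicator g = sigmoid(score)."""
    g = sigmoid(cond_scores)                       # soft membership of each sample in the left branch
    n_total = len(cond_scores)
    n_left, n_right = g.sum(), (1.0 - g).sum()
    p_parent = labels_onehot.mean(axis=0)
    p_left = (g[:, None] * labels_onehot).sum(axis=0) / max(n_left, 1e-12)
    p_right = ((1.0 - g)[:, None] * labels_onehot).sum(axis=0) / max(n_right, 1e-12)
    return (entropy(p_parent)
            - (n_left / n_total) * entropy(p_left)
            - (n_right / n_total) * entropy(p_right))

# Toy check: scores that separate the two classes give a high gain, random scores give none.
labels = np.eye(2)[[0, 0, 0, 1, 1, 1]]
print(soft_information_gain(np.array([8., 8., 8., -8., -8., -8.]), labels))   # close to ln 2
print(soft_information_gain(np.zeros(6), labels))                             # close to 0
```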
alignment elimination from adams grammars institute of computer science university of tartu liivi tartu estonia jun abstract adams extension of parsing expression grammars enables specifying indentation sensitivity using two grammar constructs indentation by a binary relation and alignment this paper proposes a transformation of adams grammars for elimination of the alignment construct from the grammar the idea that alignment could be avoided was suggested by adams but no process for achieving this aim has been described before acm subject classification formal definitions and theory processors grammars and other rewriting systems keywords and phrases parsing expression grammars indentation grammar transformation introduction parsing expression grammars peg introduced by ford serve as a modern framework for specifying the syntax of programming languages and are an alternative to the classic grammars cfg the core difference between cfg and peg is that descriptions in cfg can be ambiguous while pegs are inherently deterministic a syntax specification written in peg can in principle be interpreted as a parser for that syntax in the case of left recursion this treatment is not straightforward but doable see formally a peg is a quadruple g n t s where n is a finite set of t is a finite set of terminals is a function mapping each to its replacement corresponding to the set of productions of cfg s is the start expression corresponding to the start symbol of cfg so n eg and s eg where the set eg of all parsing expressions writable in g is defined inductively as follows eg the empty string a eg for every a t the terminals x eg for every x n the pq eg whenever p eg q eg concatenation eg whenever p eg q eg choice p eg whenever p eg negation or lookahead p eg whenever p eg repetition all constructs of peg except for negation are direct analogues of constructs of the ebnf form of cfg but their semantics is always deterministic so p repeats parsing of p until failure and always tries to parse p first q is parsed only if p fails for example the expression consumes the input string ab entirely while only consumes its first character the corresponding ebnf expressions ab a and a ab are equivalent both can match either a or ab from the input string negation p tries to parse p and fails if p succeeds nestra licensed under creative commons license alignment elimination from adams grammars if p fails then p succeeds with consuming no input other constructs of ebnf like repetition p and optional occurrence p can be introduced to peg as syntactic sugar languages like python and haskell allow the syntactic structure of programs to be shown by indentation and alignment instead of the more conventional braces and semicolons handling indentation and alignment in python has been specified in terms of extra tokens indent and dedent that mark increasing and decreasing of indentation and must be generated by the lexer in haskell rules for handling indentation and alignment are more sophisticated both these languages enable to locally use a different layout mode where indentation does not matter which additionally complicates the task of formal syntax specification adams and proposed an extension of peg notation for specifying indentation sensitivity and argued that it considerably simplifies this task for python haskell and many other languages in this extension expression p for example denotes parsing of p while assuming a greater indentation than that of the surrounding block in general parsing expressions may be 
equipped with binary relations as was in the example that must hold between the baselines of the local and the current indentation block in addition denotes parsing of p while assuming the first token of the input being aligned positioned on the current indentation baseline for example the do expressions in haskell can be specified by doexp istmts stmts do istmts stmts stmt stmt stmt here istmts and stmts stand for statement lists in the indentation and relaxed mode respectively in the indentation mode a statement list is indented marked by in the second production and all statements in it are aligned marked by in the relaxed mode however relation is used to indicate that the indentation baseline of the contents can be anything technically is the binary relation containing all pairs of natural numbers terminals do and are also equipped with to meet the haskell requirement that subsequent tokens of aligned blocks must be indented more than the first token alignment construct provides fulcra for disambiguating the often large variety of indentation baseline candidates besides simplicity of this grammar extension and its use a strength of it lies in the fact that grammars can still serve as parsers the rest of the paper is organized as follows section formally introduces additional constructs of peg for specifying code layout defines their semantics and studies their semantic properties in sect a process of eliminating the alignment construct from grammars is described section refers to related work and sect concludes indentation extension of peg adams and extend pegs with the indentation and alignment constructs we propose a slightly different extension with three rather than two extra constructs our approach agrees with that implemented by adams in his indentation package for haskell whence calling the grammars in our approach adams grammars is justified all differences between the definitions in this paper and in are listed and discussed in subsect let n denote the set of all natural numbers and let b tt ff the boolean domain denote by x the set of all subsets of set x and let x denote the set of all binary relations on set x x x x standard examples are n consisting of all pairs n m of natural numbers such that n m and n the identity nestra relation consisting of all pairs of equal natural numbers the indentation extension also makes use of n the relation containing all pairs of natural numbers whenever x and y x denote y x x y y x the image of y under relation the inverse relation of is defined by x y y x and the composition of relations and by x z x y y z finally denote x x x x x x adams grammars extend the definition of eg given in sect with the following three additional clauses p eg for every p eg and n indentation p eg for every p eg and n token position eg for every p eg alignment parsing of an expression p means parsing of p while assuming that the part of the input string corresponding to p forms a new indentation block whose baseline is in relation to the baseline of the surrounding block baselines are identified with column numbers the position construct p missing in determines how tokens of the input can be situated the current indentation baseline finally parsing an expression means parsing of p while assuming the first token of the input being positioned on the current indentation baseline unlike the position operator this construct does not affect processing the subsequent tokens inspired by the indentation package we call the relations that determine token positioning the 
indentation baseline token modes in the token mode for example tokens may appear only to the right of the indentation baseline applying the position operator with relation to parts of haskell grammar to be parsed in the indentation mode avoids indenting every single terminal in the example in sect also indenting terminals with is inadequate for do expressions occurring inside a block of relaxed mode but the position construct can be easily used to change the token mode for such blocks to we call a peg extended with these three constructs a peg recall from sect that n and t denote the set of and terminal symbols of the grammar respectively and n eg is the production function concerning the semantics of peg each expression parses an input string of terminals w t in the context of a current set of indentation baseline candidates i n and a current alignment flag indicating whether the next terminal should be aligned or not b b assuming a certain token mode n parsing may succeed fail or diverge if parsing succeeds it returns as a result a new triple containing the rest of the input a new set i of baseline candidates updated according to the information gathered during parsing and a new alignment flag this result is denoted by i if parsing fails there is no result in a triple form failure is denoted by triples of the form w i b t n b are behaving as operation states of parsing as each parsing step may use these data and update them we will write state t n b as we never deal with different terminal sets dependence on t is not explicitly marked and denote by state the set of possible results of parsing s s state the assertion that parsing expression e in grammar g with input string w in the context of i and b assuming token mode results in o state is denoted by e g w i b o the formal definition below must be interpreted inductively an assertion of the form g e s o is valid iff it has a finite derivation by the following ten rules g s s for every a t a g w i b o holds in two cases alignment elimination from adams grammars if o i ff for i i such that w ai ai denotes a occurring at column i and either b ff and i i i i i or b tt and i i i i if o and there are no and i such that w ai with either b ff and i i or b tt and i i for every x n x g s o holds if x g s o holds for every p q eg pq g s o holds in two cases if there exists a triple such that p g s and q g o if p g s and o for every p q eg g s o holds in two cases if there exists a triple such that p g s and o if p g s and q g s o for every p eg p g s o holds in two cases if p g s and o s if there exists a triple such that p g s and o for every p eg p g s o holds in two cases if p g s and o s if there exists a triple such that p g s and p g o for every p eg and n p g w i b o holds in two cases if there exists a triple i such that p g w i b i and o i i if p g w i b and o for every p eg and n p g s o holds if p g s o holds for every p eg g w i b o holds in two cases if there exists a triple i such that p g w i tt i and o i b if p g w i tt and o the idea behind the conditions i i and i i occurring in clause is that any column i where a token may appear is in relation with the current indentation baseline known to be in i if no alignment flag is set and coincide with the indentation baseline otherwise for the same reason consuming a token in column i restricts the set of allowed indentations to i or i depending on the alignment flag in both cases the alignment flag is set to ff in clause for p the set i of allowed indentation is replaced by i as the 
local indentation baseline must be in relation with the current indentation baseline known to be in i after successful parsing of p with the resulting set of allowed local indentations being i the set of allowed indentations of the surrounding block is restricted to i clause similarly operates on the alignment flag for a toy example consider parsing of with the operation state n ff assuming the token mode for that we must parse with n ff by clause since n n for that in turn we must parse ab with n tt by clause by clause we have a g n tt ff as n and b g ff ff as therefore by clause ab g n tt ff finally g n ff ff and g n ff ff by clauses and the set in the final state shows that only and are still candidates for the indentation baseline outside the parsed part of the input before parsing the candidate set was the whole n note that this definition involves circular dependencies for instance if x x for some x n then x g s o if x g s o by clause there is no result of parsing in such cases not even we call this behaviour divergence nestra properties of the semantics ford proves that parsing in peg is unambiguous whereby the consumed part of an input string always is its prefix theorem below is an analogous result for peg besides the uniqueness of the result of parsing it states that if we only consider relations in n then the whole operation state in our setting is in a certain sense decreasing during parsing denote by the suffix order of strings w iff w for some u t and by w the implication order of truth values tt a ff denote by the pointwise order on operation states w i b i iff w i i and b w i theorem let g n t s be a peg e eg n and s state then e g s o for at most one o whereby o implies s also if s w i b and i then s implies both w and ff and i implies i proof by induction on the shape of the derivation tree of the assertion e g s j theorem enables to observe a common pattern in the semantics of indentation and alignment denoting by p either p or both clauses and have the following form parametrized on two mappings state state for p eg p g s o holds in two cases if there exists a state such that p g s and o s if p g s and o the meanings of indentation and alignment constructs are distinguished solely by and for many properties proofs that rely on this abstract common definition can be carried out assuming that is monotone preserves the largest element and follows together with the axiom x y x y the class of all meet semilattices l with top element equipped with mappings satisfying these three conditions contains identities semilattices l with idl and is closed under compositions of different defined on the same semilattice l and under direct products if n then the conditions hold for n n with i i i i similarly in the case if b b with b tt b b now the direct product of the identities of t and b with on n gives the indentation case and the direct product of the identities of t and n and the boolean lattice b with gives the alignment case if satisfy the conditions then x x since x x x adding dual conditions monotone and x y x y would make a galois connection in our cases the dual axioms do not hold semantic equivalence i definition let g n t s be a peg and p q eg we say that p and q are semantically equivalent in g and denote p q iff p g s o q g s o for every n s state and o state for example one can easily prove that p p qr pq r p pq for all p q r eg we are particularly interested in equivalences involving the additional operators of peg in sect they will be useful in eliminating alignment 
and position operators the following theorem states distributivity laws of the three new operators of peg other constructs i theorem let g n t s be a peg then alignment elimination from adams grammars pq p q p p p p p p p for all n p p p p p for all n proof the equivalences in claim hold as the token mode steadily distributes to each case of the semantics definition those in claims and have straightforward proofs using the joint form of the semantics of indentation and alignment and the axioms of j note that indentation does not distribute with concatenation pq g p q this is because pq assumes one indentation block with a baseline common to p and q while p q tolerates different baselines for p and q for example take p a t q b t let the token mode be and the input state be n ff recall that ai means terminal a occurring in column i we have a g n ff ff and b g ff since therefore ab g n ff and ab g a b n ff on the other hand a g n ff ff implies a g n ff ff since n and analogously b g ff ff since n and consequently a b g n ff ff we can however prove the following facts i theorem let g n t s be a peg identity indentation law for all p eg p composition law of indentations for all p eg and n p p distributivity of indentation and alignment for all p eg and n idempotence of alignment for all p eg cancellation of outer token modes for all p eg and n p p terminal alignment property for all a t proof for claim note that an indentation with the identity relation corresponds to being identity mappings hence g s s o s s p g s o or p g s o g s s o s or p g s o p g s o where s can be replaced with because s by theorem concerning claims let be two constructs whose semantics follow the common pattern of indentation and alignment with mapping pairs and respectively then p g s o p g s s o s s or p g s o g s s o s s s or p g s o nestra by monotonicity of and the fact that s s we have s s s s s by the third axiom of and we also have s s whence s s s consequently s s can be replaced with s hence the semantics of the composition of and follows the pattern of semantics of indentation and alignment for mappings and to prove claim it now suffices to observe that the mappings in the semantics of equal the compositions of the corresponding mappings for the semantics of and for claim it suffices to observe that the mappings given for an indentation and for alignment modify different parts of the operation state whence their order of application is irrelevant claim holds because the mappings in the alignment semantics are both idempotent finally claim is trivial and claim follows from a straightforward case study j theorems and enact bringing alignments through all syntactic constructs except concatenation alignment does not distribute with concatenation because in parsing of an expression of the form the terminal to be aligned can be in the part of the input consumed by p or if parsing of p succeeds with consuming no input by alignment can nevertheless be moved through concatenation if any successful parsing of the first expression in the concatenation either never consumes any input or always consumes some input i theorem let g n t s be a peg and p q eg if p g s implies s for all n s state then if p g s implies s for all n s state then proof straightforward case study j theorem holds also for indentation instead of alignment the same proof in terms of is valid finally the following theorem states that position and indentation of terminals are equivalent if the alignment flag is false and the token mode is the identity i 
theorem let g n t s be a peg let a t n and w t i n o state then g w i ff o g w i ff o proof straightforward case study j differences of our approach from previous work our specification of peg differs from the definition used by adams and by three essential aspects listed below the last two discrepancies can be understood as bugs in the original description that have been corrected in the haskell indentation package by adams this package also provides means for locally changing the token mode all in all our modifications fully agree with the indentation package the position operator p is missing in the treatment there assumes just one default token mode applying to the whole grammar whence token positions deviating from the default must be specified using the indentation operator the benefits of the position operator were shortly discussed in subsect according to the grammar semantics provided in the alignment flag is never changed at the end of parsing of an expression of the form this is not appropriate if p succeeds without consuming any token as the alignment flag would unexpectedly remain true during parsing of the next token that is out of scope of the alignment operator the value the alignment flag had before starting parsing should be restored in this case this is the purpose of conjunction in the alignment semantics described in this paper alignment elimination from adams grammars in an alignment is interpreted the indentation baseline of the block that corresponds to the parsing expression to which the alignment operator is applied indentation operators occurring inside this expression and processed while the alignment flag is true are neglected in the semantics described in our paper raising the alignment flag does not suppress new indentations alignments are interpreted the indentation baseline in force at the aligned token site this seems more appropriate than the former approach where the indentations cancelled because of an alignment do not apply even to the subsequent tokens distributivity of indentation and alignment fails in the semantics of note that alignment of a block nevertheless suppresses the influence of position operators whose scope extend over the first token of the block our grammar semantics has also two purely formal deviations from the semantics used by adams and and ford we keep track of the rest of the input in the operation state while both expose the consumed part of the input instead this difference was introduced for simplicity and to achieve uniform decreasing of operation states in theorem we do not have explicit step counts they were used in to compose proofs by induction we provide analogous proofs by induction on the shape of derivation trees elimination of alignment and position operators adams describes alignment elimination in the context of cfgs in adams and claim that alignment elimination process for pegs is more difficult due to the lookahead construct to our knowledge no concrete process of alignment elimination is described for pegs before we provide one below for grammars we rely on the existence of position operators in the grammar this is not an issue since we also show that position operators can be eliminated from grammars approximation semantics and expressions for defining we first need to introduce approximation semantics that consists of assertions of the form e g n where e eg and n this semantics is a decidable extension of the predicate that tells whether parsing of e may succeed with consuming no input result succeed with 
consuming some input result or fail result no particular input strings indentation sets etc are involved whence the semantics is not deterministic the following set of clauses define the approximation semantics inductively g for every a t a g and a g for every x n x g n if x g for every p q eg pq g n holds in four cases p g q g and n there exist such that p g q g and n there exists such that p g q g and n p g and n for every p q eg g n holds in two cases p g n and n p g and q g for every p eg p g n holds in two cases nestra p g and n there exists such that p g and n for every p eg p g n holds in two cases p g and n p g p g and n for every p eg and n p g n if p g for every p eg and n p g n if p g for every p eg g n if p g on the peg constructs our definition basically copies that given by ford except for the case p g where our definition requires p g besides p g this is sound since if parsing of p never fails then parsing of p can not terminate the difference does not matter in the grammar transformations below as they assume grammars i theorem let g n t s be a peg assume that e g s o for some n and s state o state then if o s then e g if o for some s then e g if o then e g proof by induction on the shape of the derivation tree of the assertion e g s j is a decidable conservative approximation of the predicate that is true iff parsing in g never diverges it definitely excludes grammars with left recursion but can exclude also some safe grammars of pegs was introduced by ford the following set of clauses is an inductive definition of predicate wf g of expressions for peg wf g for every a t a wf g for every x n x wf g if x wf g for every p q eg pq wf g if p wf g and in addition p g implies q wf g for every p q eg wf g if p wf g and in addition p g implies q wf g for every p eg p wf g if p wf g for every p eg p wf g if p g and p wf g for every p eg and n p wf g if p wf g for every p eg and n p wf g if p wf g for every p eg wf g if p wf g this definition rejects with directly or indirectly left recursive rules since for a concatenation pq to be p must be leading to an infinite derivation in the case of any kind of left recursion on the other hand requiring both p wf g and q wf g in the clause for pq wf g would be too restrictive since this would reject with meaningful recursive productions like x again clauses for peg constructs mostly copy the definition given by ford this time the choice case is an exception in is considered only if both p and q are which needlessly rejects with safe recursive rules like x we require q wf g only if q could possibly be executed if p g a grammar g n t s is called if p wf g for every expression p occurring as a subexpression in some x or ford proves by induction on the length alignment elimination from adams grammars of the input string that in grammars parsing of every expression whose all subexpressions are terminates on every input string we can prove an analogous result in a similar way but we prefer to generalize the statement to a stricter semantics which enables to occasionally construct easier proofs later the new semantics which we call strict is defined by replacing the choice clause in the definition of subsect with the following for every p q eg g s o holds in two cases there exists a triple such that p g s o and in addition p g implies q g s for some state p g s and q g s o the new semantics is more restrictive since to finish parsing of an expression of the form after parsing p successfully also q must be parsed if p g happens to be the case in the 
standard semantics parsing of does not have to try q if parsing of p is successful so if parsing of an expression terminates in the strict semantics then it terminates with the same result in the standard semantics but not necessarily vice versa therefore proving that parsing always gives a result in the strict semantics will establish this also for the standard semantics in the rest we sign strict semantics with exclamation mark parsing assertions will be of the form e g s i theorem let g n t s be a peg and let e eg assume that all subexpressions of e are then for every n and s state there exists o state such that e g s o proof by induction on the length of the input string the first component of s the induction step uses induction on the shape of the derivation tree of the assertion e wf g j splitting as the repetition operator can always be eliminated by adding a new ap with ap pap for each subexpression p that occurs under the star operator we may assume that the input grammar g is the first stage of our process also assumes that g is all negations are applied to atomic expressions and all choices are disjoint a choice expression is called disjoint if parsing of p and q can not succeed in the same input state and token mode achieving the last two preconditions can be considered as a preparatory and previously studied in as stage of negation elimination step of the process issues concerning this are discussed briefly in subsect we use in principle the same splitting algorithm as in stage of the negation elimination process described by ford adding clauses for the extra operators in peg the approach defines two functions wf g eg and eg eg as follows f is a metavariable denoting any expression that always fails a x pq p p p f x p q if p g f otherwise p q if p g p otherwise p p p p p a x a pq p q p q p q p q if p g p otherwise p f p p p p p nestra correctness of the definition of follows by induction on the shape of the derivation tree of the assertion e wf g in the negation case we use that negations are applied to atomic expressions whence the reference to can be eliminated by a replacement from its definition the definition of is sound by induction on the shape of the expression a new grammar n t s is defined using by equations x x s s s the equivalence of the input and output grammars relies on the splitting invariant established by theorem below which allows instead of each parsing expression e with negations in front of atoms and disjoint choices in g to equivalently use parsing expression e e in the claim is analogous to the splitting invariant used by but we can provide a simpler proof using the strict semantics an analogous proof using the standard semantics would fail in the choice case i theorem let e g s o where e eg n and s state o state assuming that all choices in the rules of g and expression e are disjoint and the negations are applied to atoms the following holds if o s then e s s and e s if o where s then e s and e s if o then e s and e s proof we don t use the repetition operator whence all expressions in grammars are this fact follows from an easy induction on the expression structure by theorems and e g s o the desired result follows by induction of the shape of the derivation tree of e g s o using the disjointness assumption in the choice case j as the result of this transformation the sizes of the sides of the productions can grow exponentially though the number of productions stays unchanged preprocessing the grammar via introducing new in such a way that all 
concatenations were applied to atoms similarly to ford would hinder the growth but the size in the worst case remains exponential the subsequent transformations cause at most a linear growth of sides alignment elimination in a grammar g n t s obtained via splitting we can eliminate alignments using the following three steps introduce a copy x of each x and define x x in all sides of productions and the start expression apply distributivity laws theorem theorem theorem and idempotence theorem to bring all alignment operators down to terminals and replace alignment of terminals by position theorem in all sides of productions and the start expression replace all subexpressions of the form with the corresponding new x for establishing the equivalence of the original and the obtained grammar the following general theorem can be used i theorem let n t s and n t s be peg if for every x n x x then e s o always implies e s o proof easy induction on the shape of the derivation tree of e s j alignment elimination from adams grammars denote by the function defined on eg that performs transformations of steps distributes alignment operators to the and replaces aligned with corresponding new denote by the grammar obtained after step note that step does not change the semantics of expressions written in the original grammar steps and replace the sides of productions with expressions that are semantically equivalent with them in the grammar obtained after step by theorem this implies that whenever parsing of some e eg in the final grammar produces some result then the same result is obtained when parsing e with the same input state and token mode in the original grammar in order to be able to apply theorem with grammars interchanged we need the equivalence of the sides of productions also in grammar for this it is sufficient to show x for every x n which in turn would follow from the statement x x consequently the equivalence of the initial and final grammars is implied by the following theorem i theorem for every e eg e proof the claim is a direct consequence of the following two lemmas both holding for arbitrary s state o state if s o then e s o and s o if e s o or s o then s o both lemmas are proven by induction on the shape of derivation trees the assertion with two alignments both outside and inside is needed in the case where e itself is of the form j elimination of position operators in an peg g n t s we can get rid of position operations using a process largely analogous to the alignment elimination consisting of the following four steps introduce a new hx for each existing x and relation used by a position operator with hx x apply distributivity laws theorem and cancellation theorem to bring all position operators down to terminals and replace all subexpressions of the form with corresponding new hx replace all subexpressions of the form with again denote by the function defined on eg that performs transformations of steps distributes position operators to the terminals and and replaces under position operators with corresponding new denote by the grammar obtained after step theorem applies here as well whence the equivalence of the grammar obtained after step and the initial grammar is implied by the following theorem i theorem for every e eg and n e e proof the claim is a direct consequence of the following two lemmas both holding for arbitrary s state o state and n if e s o then e s o and e s o if e s o or e s o then e s o both lemmas are proven by induction on the shape of the derivation 
trees the claim with position operator both outside and inside e is needed in the case when e itself is an application of the position operator j nestra correctness of step can be proven by induction on the shape of the derivation trees using theorem note that here we must assume that parsing according to the final grammar is performed with the alignment flag false a natural assumption as the grammar is and the token mode discussion on the preconditions alignment elimination was correctly defined under the assumption that the input grammar is has negations only in front of atoms and disjoint choices all these conditions are needed at stage only the second assumption can be easily established by introducing a new for each expression p such that p occurs in the productions or in the start expression this can be done in the lines of the first stage of the negation elimination process described by ford this transformation preserves of the grammar achieving disjoint choices is a more subtle topic a straightforward way would be replacing choices of the form with disjoint choices pq which seems to work well as and pq are equivalent in the standard semantics alas and pq are not equivalent in the approximation semantics because if p g p g q g but q g then pq g while g due to this replacing with pq can break take x n such that x x then x wf g due to wf g alone no recursive call to x wf g arises as g however if x x in then whence of x now recursively requires of x thus x wf an argument similar to this shows that the first stage of the negation elimination process in ford also can break as the second stage is correctly defined only for grammars the whole process fails one solution would be changing the approximation semantics by adding to the inductive definition in subsect a general clause e g if e g or e g this forces e g to hold whenever an assertion of the form e g n holds and in particular becomes equivalent to pq then replacing with pq preserves although predicate becomes more restrictive and rejects more safe grammars the loss seems to be little and acceptable in practice expressions q such that q g or q g while q g seem to occur not very commonly in influenced productions such as x x but a further investigation is needed to clarify this related work pegs were first introduced and studied by ford who also showed them to be closely related with the ts system and tdpl as well as to their generalized forms adams and adams and provide an excellent overview of previous approaches to describing languages and attempts of building indentation features into parser libraries our work is a theoretical study of the approach proposed in while some details of the semantics used in our paper were corrected in the lines of adams indentation package for haskell this package enables specifying indentation sensitivity within the parsec and trifecta parser combinator libraries a process of alignment operator elimination is previously described for cfgs by adams matsumura and kuramitsu develop a very general extension of peg that also enables to specify indentation their framework is powerful but complicated the approach proposed alignment elimination from adams grammars in and followed by us is in contrast with by focusing on indentation and aiming to maximal simplicity and convenience of usage conclusion we studied the extension of peg proposed by adams and for indentationsensitive parsing this extension uses operators for marking indentation and alignment besides the classic ones having added one more operator 
position, for convenience. We found a number of useful semantic equivalences that are valid for expressions written in the extended grammars, and we applied these equivalences to define a process that algorithmically eliminates all alignment and position operators from grammars.

References

Michael Adams. The indentation package.
Michael Adams. Principled parsing for languages: revisiting Landin's offside rule. In Roberto Giacobazzi and Radhia Cousot, editors, the Annual ACM Symposium on Principles of Programming Languages (POPL), Rome, Italy, January. ACM.
Michael Adams et al. Parsing for Parsec. In Wouter Swierstra, editor, Proceedings of the ACM SIGPLAN Symposium on Haskell, Gothenburg, Sweden, September. ACM.
Alfred Aho and Jeffrey Ullman. The Theory of Parsing, Translation, and Compiling. Upper Saddle River, NJ, USA.
Alexander Birman and Jeffrey Ullman. Parsing algorithms with backtrack. Information and Control.
Bryan Ford. Parsing expression grammars: a syntactic foundation. In Neil Jones and Xavier Leroy, editors, Proceedings of the ACM Symposium on Principles of Programming Languages (POPL), Venice, Italy, January. ACM.
Tetsuro Matsumura and Kimio Kuramitsu. A declarative extension of parsing expression grammars for recognizing most programming languages. JIP.
Medeiros, Fabio Mascarenhas and Roberto Ierusalimschy. Left recursion in parsing expression grammars. In Francisco Heron de Carvalho Junior and Soares Barbosa, editors, Programming Languages: Brazilian Symposium (SBLP), Natal, Brazil, September, Proceedings, Lecture Notes in Computer Science. Springer.
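As an aside to the elimination process described above, the following minimal sketch shows the kind of rewrite performed in the step that pushes alignment operators down to atoms. It is illustrative only: the toy AST, the helper names, and the particular distribution rules used here (alignment distributes over ordered choice, attaches to the head of a sequence, and is idempotent) are simplifying assumptions made for the example, not the exact equivalences proved in the paper.

```python
# Illustrative only: a toy PEG AST and a rewrite that pushes an alignment
# operator down to atoms, in the spirit of the distribution step of the
# elimination process. The rules assumed here: align(p / q) -> align(p) / align(q),
# align(p q) -> align(p) q, align(align(p)) -> align(p).

from dataclasses import dataclass
from typing import Tuple

class Expr: ...

@dataclass(frozen=True)
class Term(Expr):      # terminal symbol
    sym: str

@dataclass(frozen=True)
class NonTerm(Expr):   # nonterminal reference
    name: str

@dataclass(frozen=True)
class Seq(Expr):       # sequential composition
    parts: Tuple[Expr, ...]

@dataclass(frozen=True)
class Choice(Expr):    # ordered choice
    alts: Tuple[Expr, ...]

@dataclass(frozen=True)
class Align(Expr):     # alignment operator wrapping a subexpression
    body: Expr

def push_align(e: Expr) -> Expr:
    """Distribute Align nodes down to atoms (terminals / nonterminals)."""
    if isinstance(e, Align):
        b = push_align(e.body)
        if isinstance(b, Choice):            # align(p / q) -> align(p) / align(q)
            return Choice(tuple(push_align(Align(a)) for a in b.alts))
        if isinstance(b, Seq) and b.parts:   # align(p q) -> align(p) q   (assumption)
            head, *rest = b.parts
            return Seq((push_align(Align(head)), *rest))
        if isinstance(b, Align):             # idempotence
            return push_align(b)
        return Align(b)                      # atom reached: stop here
    if isinstance(e, Seq):
        return Seq(tuple(push_align(p) for p in e.parts))
    if isinstance(e, Choice):
        return Choice(tuple(push_align(a) for a in e.alts))
    return e

# Example: align( A ("x" / "y") ) becomes align(A) ("x" / "y") under these rules.
print(push_align(Align(Seq((NonTerm("A"), Choice((Term("x"), Term("y"))))))))
```

In the actual transformation, an aligned nonterminal would additionally be replaced by its freshly introduced copy, and alignment of terminals would be replaced by position operators, as described in the elimination steps above.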
bridging static and dynamic program analysis using fuzzy logic jacob lidman josef svenningsson chalmers university of technology lidman josefs static program analysis is used to summarize properties over all dynamic executions in a unifying approach based on logic properties are either assigned a definite value or unknown but in summarizing a set of executions a property is more accurately represented as being biased towards true or towards false compilers use program analysis to determine benefit of an optimization since benefit performance is justified based on the common case understanding bias is essential in guiding the compiler furthermore successful optimization also relies on understanding the quality of the information the plausibility of the bias if the quality of the static information is too low to form a decision we would like a mechanism that improves dynamically we consider the problem of building such a reasoning framework and present the fuzzy analysis our approach generalize previous work that use logic we derive fuzzy extensions of analyses used by the lazy code motion optimization and unveil opportunities previous work would not detect due to limited expressiveness furthermore we show how the results of our analysis can be used in an adaptive classifier that improve as the application executes introduction how can one reconcile static and dynamic program analysis these two forms of analysis complement each other static analysis summarizes all possible runs of a program and thus provide soundness guarantees while dynamic analysis provides information about the particular runs of a program which actually happen in practice and can therefore provide more relevant information being able to combine these two paradigms has applications on many forms on analyses such as alias analysis and dependence analysis compilers use program analysis frameworks to prove legality as well as determining benefit of transformations specifications for legality are composed of safety and liveness assertions universal and existentially quantified properties while specifications for benefit use assertions that hold in the common case this reason for adopting the common case is that few transformations improve performance in general for every input environment similarly most transformations could potentially improve performance in a least one case as such compiler optimizations are instead motivated based on an approximation of the majority case the weighted mean while determining legality has improved due to advances in the verification community the progress in establishing benefit has been slow in this paper we introduce fuzzy analysis a framework for static program analysis based on fuzzy logic the salient feature of our framework is that it can naturally incorporate dynamic information while still being a static analysis this ability comes thanks to a shift from crisp sets where membership is binary as employed in conventional static analysis to fuzzy sets where membership is gradual de vink and wiklicky eds qapl eptcs pp lidman svenningsson this work is licensed under the creative commons attribution license bridging static and dynamic program analysis using fuzzy logic we make the following contributions section introduces our main contribution the fuzzy framework section demonstrates the benefit of our framework by presenting a generalization of a wellknown code motion algorithm and we show how this generalization provides new opportunities for optimizations previous approaches would not 
discover section shows how fuzzy logic can benefit program analysis by using fuzzy sets to separate uncertainty in and and hence improve an analysis and using fuzzy regulators to refine the results of our static analysis hence improving the precision dynamically preliminaries we introduce and define fuzzy sets and the operators that form fuzzy logic these concepts will be used in section to define the transfer functions of our analysis fuzzy set elements of a crisp are either members or to a universe of discourse a fuzzy set fs instead allow partial membership denoted by a number from the unit interval the membership degree typically denotes vagueness the process to convert crisp membership to fuzzy grades is called fuzzification and the inverse is called defuzzification following dubois et al let s be a crisp set and s a membership function mf then hs is a fuzzy set as a convention if s is understood from context we sometimes refer to as a fuzzy set the membership function formalizes the fuzzification fuzzy sets are ordered s s s s s we can accommodate some notion about uncertainty of vagueness by considering a fuzzy set where the membership degree itself is a fuzzy set fs membership functions are composed of a primary js and secondary membership h s u s u i s s u js here uncertainty is represented by the secondary membership that define the possibility of the primary membership when for each x and u it holds x u the is called an interval gehrke et al showed that this can equivalently be described as an interval valued fuzzy sets ivfs where s l u l u ivfs are a special case of lattice valued fuzzy sets sets where the membership domain forms a lattice over defuzzification of often proceeds in two phases the first phase applies type reduction to transform the to a fs the second phase then applies a defuzzification fuzzy logic fuzzy logic defines formal systems to reason about truth in the presence of vagueness contrary to classical logic the law of excluded middle p and the law of p does not in general hold for these systems fuzzy logic uses and norms to which generalize the logical operators and we compactly represent a fuzzy logic by is sometimes called a de morgan system because it satisfies a generalization of de morgans laws q q p q and p q in the context of fuzzy logic crisp or boolean set refer to a classical set to avoid confusion with fuzzy sets one would expect the definition of a fuzzy logic to include a fuzzy implication operator in this work we do not consider it although lidman svenningsson fuzzy logic algebraic lukasiewicz nilpotent min x y xy max x y min x y x y otherwise max x y x y xy min x y max x y x y otherwise table common instantiations of fuzzy logics definition let u be a binary function that is commutative associative and increasing and has an identity element e if e then u is a triangular norm and if e then u is a triangular conorm definition a is a unary function n that is decreasing involutory n n x x with boundary conditions n n standard examples of fuzzy logics are shown in table examples are special cases and limits of the frank family of fuzzy logics that are central to our work and formally defined in definition definition let s then the frank family of is defined by min x y t s x y max x y s sx sy otherwise s the set of intervals in forms a bounded partial order hi v where lx ux ly uy lx ly ux uy and as per gehrke et al we can lift a fuzzy logic and l to a ivfs fuzzy logic lx ux ly uy lx ly ux uy u u l fuzzy analysis static analyses deduce values of 
semantic properties that are satisfied the dynamics of the application the dynamics is formalized as a system of monotone transfer functions and collector functions transfer functions describe how blocks alter the semantic properties collectors functions merge results from different possibly mutual exclusive branches of the application the solution of the analysis is obtained through kleene iteration and is a unique of the system of equations in a classical framework the domain of the values is binary either true or false the interpretation of these values depends on the type of analysis the value true means that the property can possibly hold in a it is impossible that the value is always false while it means that the property always holds in a the value false could mean either the opposite of true or that the result is inconclusive our fuzzy analysis instead computes the partial truth of the property values are elements of a value closer to means that the property is biased towards false and vice versa furthermore the transfer functions are logical formulas from a frank family fuzzy logic and the collector the general concept allowing any e is called a uninorm and is either orlike u u or andlike u u our work does not require the full generality this should not be confused with the partial order used in the interval abstraction bridging static and dynamic program analysis using fuzzy logic functions are weighted average functions where the constant is determined prior to performing the analysis in contrast to the classical framework kleene iteration proceeds until the results differ by a constant which is the maximal error allowed by the solution the error can be made arbitrarily small this section introduces the fuzzy framework and we prove termination using continuity properties and banach s theorem section then presents an example analysis to demonstrate the benefit of the framework the analysis is performed on a weighted g hv e where v is a set of logical formulas denoting the transfer function of each block e v v is a set of edges denoting control transfers and denotes the normalized contribution for each edge as a running example we will use figure left which shows a with four nodes and their corresponding logical formula the flow graph has four control edges denoting contributions between nodes for instance block receives of its contribution from and from and out out in out in out out out min max out out out out figure example left and its corresponding equation system middle and the analysis result and error as a function of iteration right definition let p be a finite set of properties and v s p a valuation for each property we use v s to denote the interpretation of the fuzzy formula given a v given a g hv e with a unique start node vstart the map gs v v s describes the value of each property at each node and fuzzy analysis is a kleene iteration of f gs gs vstart v vstart f s vi v s w otherwise hw figure middle shows the equation system as implied by definition interpreted in a fuzzy logic for the example the red colored text corresponds to the collector function the weighted average and the normal text is the interpretation of the logical formula in order to prove termination of a fuzzy analysis we need to introduce a continuity property definition a function f n is iff h f f where is the absolute value of if k then f is called a contraction mapping and if k then f is called a mapping in a sequence of applications of a contraction mapping the difference between two consecutive 
applications will decrease and in the limit reach zero by imposing a bounded error we guarantee that this our definition restricts the domain and metric of both metric spaces for the domain and of f compared to the more general and common definition of a lipschitz continuous function other l can be used but only if we restrict the logic to the fuzzy logic p lidman svenningsson sequence terminates in a bounded amount of time the analysis error and result of as a function of iteration for the example is shown in figure right note that the error red line is decreasing and the value of blue line tends towards a final value we next proceed to prove that any fuzzy analysis iteratively computes more precise results and terminates in a bounded amount of time for a finite maximum error from some q n we let q denote the maximal congruence set of elements from that are at least apart q i the set of intervals on i are defined analogously for this we prove the property of fuzzy formulas theorem let x y c wi q for some i n fi nq q be and gi nq q be ki functions x y x y xy min x y and abs x are constants are x is if wi then wi f i the composition ga gb is ka kb finally formulas defined in a frank family fuzzy logic are if f inq iq satisfies inq y x f y f x then f is in summary as per theorem transfer functions in a frank family fuzzy logic are mappings s vstart is constant and hence a contraction mapping the composition of two functions is a function and a nonexpansive and a contraction function is a contraction function as the analysis is performed on the unit interval which together with the forms a complete metric space we can guarantee termination by banach s theorem theorem banach theorem let x d be a complete metric space and f x x a contraction then f has a unique in x this concludes our development of fuzzy analysis lazy code motion improving performance often means removing redundant computations computations are said to be fully redundant but not dead if the operands at all points remain the same for two such computations it is enough to keep one and store away the result for later we can eliminate this redundancy using global common elimination gcse furthermore a computation that does not change on some paths is said to be partially redundant loop invariant code motion licm finds partially redundant computations inside loops and move these to the entry block of the loop lazy code motion is a compiler optimization that eliminate both fully and partially redundant computations and hence subsumes both cse and licm krs algorithm performs lcm in production compilers such as gcc when optimizing for speed bridging static and dynamic program analysis using fuzzy logic v o i d diffpcm b a b f o r i i n i i f i n i b b a b s i n i b transform b a incrate i o u t i a b i n n n in i b p p b abs a i b transform b a incrate i out i a b v o i d diffpcm b a b f o r i i n i anfis decision of updating b anfis decision of leaving b i f i n i b b a b s i n i if update leave decision error else if update leave decision error b transform b a incrate i o u t i a b figure diffpcm function left the corresponding middle and the version used in section which is annotated with anfis classifier invocations right it consists of a series of analyses and can be summarized in these four steps solve a very busy and an available expression problem introduce a set that describes the earliest block where an expression must be evaluated determine the latest control flow edge where the expression must be computed introduce insert 
and delete sets which describe where expressions should be evaluated the target domain of the analysis is the set of static expressions in a program input to the analysis is three predicates determining properties about the expressions in different blocks an expression e is downward exposed if it produces the same result if evaluated at the end of the block where it is defined we use dee b e to denote if e is downward exposed in block b an expression e is upward exposed if it produces the same result if evaluated at the start of the block where it is defined we use uee b e to denote this an expression e is killed in block b if any variable appearing in e is updated in b we use kill b e to denote this very busy expression analysis is a analysis that depends on uee and kill and computes the set of expressions that is guaranteed to be computed at some time in the future similarly available expression analysis is a analysis that depends on dee and kill and deduces the set of previously computed expressions that may be reused the system of these two analyses are shown in figure it is beyond the scope of this paper to further elaborate on the details of these analyses the interested reader should consider nielson et al here the lcm algorithm and the analyses it depends on are applications we use to demonstrate the benefit of our framework as such a rudimentary understanding is sufficient consider the simplified differential modulation routine diffpcm in figure left we assume that n and the relative number of times block denoted p is statically in each iteration diffpcm invokes the pure functions transform to encode the differential output and incrate to get a quantification rate we use the algorithm to determine if these invocations can be made prior to entering the loop and contrast this to a situation where the analyses are performed knoop in et al refer to this as anticipatable expression problem this demonstration we let p and n but our conclusions hold as n increases and p approaches lidman svenningsson block lcm available expression v avin b b avout b avout b dee b avin b b very busy expression v anin b b anout b anout b uee b anin b b dee uee block anin j i kill i anout i i earliest i j anin j otherwise v laterin j j laterout j j laterout i j earliest i j laterin i i dee uee kill block insert i j laterout j j delete k uee k k k block edge insert block expression index delete abs a i transform b incrate i a b edge in i b kill insert block delete figure lcm formulation middle using classical left and fuzzy analysis bridging static and dynamic program analysis using fuzzy logic in the fuzzy framework as we will show the fuzzy allows us to uncover opportunites the classical would miss static analysis the problems of the krs algorithm use expressions as domain the mapping between expressions of diffpcm and indexes are listed in figure bottom together with the values of dee uee and kill for each block top right the classical krs algorithm conclude that both calls must be evaluated in bottom light gray box delete matrix column and for the fuzzy analyses we use the fuzzy logic the corresponding fuzzy sets of dee uee and kill are given in figure top dark gray box step of the fuzzy is hence the to below system of equations available expression analysis system avout avout avout dee avout n dee avout avout dee avout dee pavout p avout avout dee avout very busy expression analysis system uee anout anout anout uee n uee panout p anout anout uee anout anout uee anout anout steps and introduce constant 
predicates and are performed outside the analysis framework step is done similarly to step figure bottom dark gray box shows the result from step in contrast to the classical lcm the result implies that it is very plausible that we can delete the invocation of transform delete matrix column from block and instead add it at the end of and or start of and however result for the invocation of incrate remains this is because the invocation depends on the value of i which is updated at the end of static analysis to increase analysis precision a function call is sometimes inlined at the call site the improvement can however be reduced if the analysis is inaccurate and multiple targets are considered for a particular call site we show how the uncertainty in and can be quantified in two different dimensions using interval fuzzy sets as per section we can lift an arbitrary fuzzy predicate to intervals here we assume no knowledge about the relative number of calls to each target and treat the different calls we assume two different incrate functions as in figure left have been determined as targets their respective uee and kill entries are the same but since i is updated at the end of block their dee entry will differ the result of depends on the variable i and therefore dee in contrast the entry for is dee where and the new lidman svenningsson b transform b incrate i i n t incrate i n t i r e t u r n i i n t incrate i n t i return a out i a b block edge kill dee uee insert block delete figure implementations of incrate inlined in block left dee uee and kill vectors of block and delete insert analysis result for expression incrate i right dee the new entry for block is given by dee dee kill dee and uee sets are given in figure right applying the fuzzy but with fuzzy logic lifted to interval minmax fuzzy logic gives the values of delete and insert for expression incrate i in figure right the result for invoking incrate prior to the loop is as opposed to from the analysis in section the added dimension in the result of the fuzzy analysis allows us to differentiate uncertain results from pessimistic results in the given example we showed that the result of section is a pessimistic and that the two paths need to be considered seperately to increase precision hybrid analysis the result from a fuzzy analysis is a set of fuzzy membership degrees this section shows how the result can automatically be improved following the static analysis using a fuzzy if more specific information is provided at a later point the classifier a fuzzy inference system shown in figure is composed of five layers lookup fuzzy membership degree of the input value compute the firing strength of a rule conjunction of all membership degrees from each rule normalize the firing strengths wi j w j weight the normalized firing strength to the consequent output of the rule fi x combine all rule classifiers f fi this classifier uses a polynomial the consequent part of the adaptive rules to decide the output membership the order of the is the order of the polynomial the classification accuracy of the can be improved by fitting the polynomial to the input data for a this can be implemented as follows offline affine least square ls optimization is a convex optimization problem that finds an affine function y ai xi which minimizes x where x and y are the input and output vectors of the training set online least mean square lms is an adaptive filter that gradually in steps of a given constant minimizes e f x where hx yi is an sample bridging 
static and dynamic program analysis using fuzzy logic if is and is then f c c c if is and is then f c c c n n f figure anfis with two rules and two variables left and four example fuzzy sets right to exemplify the functionality of the we consider the classification using the two rule from figure left let and membership functions be given as in figure right the membership degrees are marked in the figure as for the first rule and for the second rule hence the weight of the first rule is and the second rule is the normalized weights are then and as the consequence functions output and we produce the prediction we return to the diffpcm function and again consider if we can invoke transform b prior to entering the loop we saw in section that the fuzzy membership degree was to improve classification accuracy we let the also use the i variable and the first input value in these variables were not part of the analysis and so we conservatively assume the fuzzy membership degree to be the same for any value of these variables in our experiments as shown in figure right we inserted calls to compute the anfis decision of updating and keeping the variable b constant in the diffpcm function if the incorrect decision was made the error was noted and an error rate computed after handling all input samples we consider invoking the diffpcm function on four different input sets each input set defined as periods with input values in each period the input sets in is given in figure left we use the lms after each incorrect classification and the ls algorithm if the error rate of a period was larger than or equal to note that the values of a period is not always perfectly representable by a linear classifier and sometimes varies between different periods although periods are similar hence we do not expect the classifier to be monotonically improving with increasing period as shown in the result in figure right the classification error decreases fast with both period and input sample in two cases a small residual error remains after the final period this show that the can improve the analysis result dynamically and hence increase the accuracy of when transform can be invoked prior to entering the loop the constant for the four different runs was set to and respectively error rate input value lidman svenningsson sample period error rate input value sample period error rate input value sample period error rate input value sample period figure input values left and the corresponding classification error rate right bridging static and dynamic program analysis using fuzzy logic related work most systems include elements input values environment state where information is limited but probabilistic uncertainty can be formulated for these systems a most likely or even quantitative analysis of properties is possible often this analysis relies on a probability theory for logical soundness cousot and monerau introduced a unifying framework for probabilistic abstract interpretation much work have since although perhaps implicitly relied on their formulation often probabilistic descriptions are known with imprecision that manifests as uncertainty adje et al introduced an abstraction based on the zonotope abstraction for structures and di pierro et al developed a probabilistic abstract interpretation framework and demonstrated an alias analysis algorithm that could guide the compiler in this decision they later formulated problems liveness analysis in the same framework an important distinction between their or similar 
probabilistic frameworks and classical frameworks is the definition of the confluence operator in contrast to a classical or must framework they use the weighted average this is similar to the work by ramalingam that showed that the mop solution exists for such confluence operator with a transfer function defined in terms of min max and negation the fuzzy logic our work extends this to allow other transfer functions and integrates the static analysis with a dynamic refinement mechanism through fuzzy control theory conclusion a major problem for static program analysis is the limited input information and hence the conservative results to alleviate the situation dynamic program analysis is sometimes used here accurate information is available but in contrast to its static the results only cover a single or few runs to bridge the gap and find a promising program analysis frameworks have been proposed these frameworks can be considered to intersect both by being a static program analysis that uses dynamic information we have introduced an framework based on fuzzy sets that supports such analyses we solved problems of use for speculative compilation and showed how our analysis unveils opportunities that previous approaches could not express and reason about we furthermore showed that our framework based on fuzzy sets admit mechanisms from fuzzy control theory to enhance the analysis result dynamically allowing for a hybrid analysis framework references assale adje olivier bouissou jean eric goubault sylvie putot static analysis of programs with imprecise probabilistic inputs in verified software theories tools experiments lecture notes in computer science springer berlin heidelberg pp patrick cousot radhia cousot abstract interpretation past present and future in proceedings of the joint meeting of the eacsl annual conference on computer science logic csl and the annual symposium on logic in computer science lics acm pp lower and upper bounds on a cumulative probability distribution functions lidman svenningsson patrick cousot monerau probabilistic abstract interpretation in european symposium on programming esop lecture notes in computer science pp pierro a di h wiklicky probabilistic data flow analysis a linear equational approach in proceedings of the fourth international symposium on games automata logics and formal verification pp alessandra di pierro chris hankin herbert wiklicky a systematic approach to probabilistic pointer analysis in programming languages and systems lecture notes in computer science springer berlin heidelberg pp drechsler manfred stadel a variation of knoop and steffen s lazy code motion sigplan not pp dubois prade fuzzy sets and systems theory and applications academic press new york dubois prade prade fundamentals of fuzzy sets the handbooks of fuzzy sets springer us mai gehrke carol walker elbert walker some comments on interval valued fuzzy sets international journal of intelligent systems pp sici jang anfis fuzzy inference system systems man and cybernetics ieee transactions on pp roger jang sun and soft computing a computational approach to learning and machine intelligence upper saddle river nj usa jens knoop oliver bernhard steffen lazy code motion in proceedings of the acm sigplan conference on programming language design and implementation pldi acm pp maleki yaoqing gao garzaran wong padua an evaluation of vectorizing compilers in parallel architectures and compilation techniques pact international conference on pp mesiarov international journal of 
approximate reasoning pp special section aggregation operators markus mock manuvir das craig chambers susan eggers dynamic sets a comparison with static analyses and potential applications in program understanding and optimization in proceedings of the acm workshop on program analysis for software tools and engineering paste acm pp flemming nielson hanne nielson chris hankin principles of program analysis springerverlag new york petersen padua static and dynamic evaluation of data dependence analysis techniques parallel and distributed systems ieee transactions on pp ramalingam data flow frequency analysis in proceedings of the acm sigplan conference on programming language design and implementation pldi acm pp constantinog ribeiro marcelo cintra quantifying uncertainty in relations in languages and compilers for parallel computing lecture notes in computer science springer berlin heidelberg pp bridging static and dynamic program analysis using fuzzy logic appendix a omitted proofs of theorem let x y c wi for some i n fi nq q be and gi nq q be ki functions x y x y xy min x y and abs x are constants are let b x x h for some x x h a g x abs x x h g x x h abs x h by definition x h b g x y x y x y g x y x y x y by definition triangle inequality distributivity x y g x y x y x y by definition c g x y x y triangle inequality distributivity d g x y xy x y g x y g x g x x y x y substitution y by definition y x triangle inequality distributivity e g x y min x y x y g x y x y min x y x y y x y by definition min x y x y min triangle inequality lidman svenningsson f g x y c x h g x x is if wi then wi f i g g wi f x h wi wi f by definition f x h f associativitiy and commutativity wi ki triangle inequality distributivity and wi ki and wi i the composition ga gb is ka kb g fa fa x h g x fa fb x h fa fb by definition ka fb x h fb definition ka kb definition formulas defined in a frank family fuzzy logic are this follows from structural induction on the height of parse tree of the predicate p x by de morgan s laws it is enough to show the induction step for and base case v g x x x h g x h or g x c by induction step from the base case for constants and and cases and and assumption that is min x y from theorem case from theorem case xy max x y equal to min x y using de morgans law this expression is from theorem case and if f inq iq satisfy inq y x f y f x then f is f inq iq can be decomposed into two functions fl inq q and fu inq q such that f i fl i fu i fl i gives the infimum of f i and fu i gives the supremum of i we show that both fl and fu are continuous fl i inf f i assume i l u since i is finite we can rewrite the operation as wise applications of min min f l min f l min f l as per above case min is similarly the composition of two functions is also or in extension a finite number of compositions bridging static and dynamic program analysis using fuzzy logic fu i sup f i max x y is equivalent to min x y which is by above case and so proof follows in the same way as the fl i case
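To make the fixed-point computation described above concrete, here is a minimal sketch of a fuzzy analysis as Kleene iteration. It does not reproduce the paper's running example: the flow graph, edge weights, and transfer formulas below are invented placeholders, and the algebraic (product) member of the Frank family is used for conjunction and disjunction.

```python
# Illustrative sketch: fuzzy analysis as Kleene iteration over a small flow
# graph. Transfer functions use the algebraic (product) fuzzy logic; collector
# functions are weighted averages; iteration stops when the largest change
# between two consecutive valuations drops below a chosen error bound eps.

# Product fuzzy logic on [0, 1]
def t_and(x, y): return x * y            # t-norm
def t_or(x, y):  return x + y - x * y    # t-conorm
def t_not(x):    return 1.0 - x

# Hypothetical flow graph: preds[v] = [(u, weight)], weights sum to 1.
# Node 1 is the start node and keeps a fixed valuation.
preds = {
    2: [(1, 0.6), (4, 0.4)],
    3: [(1, 1.0)],
    4: [(2, 0.7), (3, 0.3)],
}

# Hypothetical transfer formula per node: out[v] = f_v(in[v]).
transfer = {
    2: lambda x: t_and(x, 0.8),
    3: lambda x: t_or(x, 0.3),
    4: lambda x: t_and(t_or(x, 0.2), 0.9),
}

def analyse(start_value=0.9, eps=1e-6, max_iter=10_000):
    out = {1: start_value, 2: 0.5, 3: 0.5, 4: 0.5}   # arbitrary initial valuation
    for _ in range(max_iter):
        new = {1: start_value}                       # start node stays fixed
        for v in preds:
            # weighted-average collector over predecessor outputs
            incoming = sum(w * out[u] for (u, w) in preds[v])
            new[v] = transfer[v](incoming)
        err = max(abs(new[v] - out[v]) for v in out)
        out = new
        if err < eps:
            break
    return out

print(analyse())
```

As in the termination argument above, the loop stops as soon as two successive valuations differ by less than the chosen error bound, which can be made arbitrarily small.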
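The hybrid analysis above refines the static fuzzy result with an ANFIS-style classifier whose consequent parameters are fitted online. The sketch below is a minimal first-order Takagi-Sugeno classifier with two rules over two inputs and an LMS step on the consequent parameters only; the Gaussian membership functions, learning rate, and training values are invented for illustration and do not reproduce the paper's experimental setup.

```python
# Illustrative only: a two-rule, two-input first-order Takagi-Sugeno (ANFIS-style)
# classifier with an online LMS update of the consequent parameters.

import math

def gauss(x, c, s):
    """Gaussian membership function with centre c and width s."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

class TwoRuleAnfis:
    def __init__(self, lr=0.05):
        # Premise parameters: ((centre, width) for A_i, (centre, width) for B_i).
        self.mf = [((0.0, 1.0), (0.0, 1.0)),
                   ((1.0, 1.0), (1.0, 1.0))]
        # Consequent parameters: f_i = c_i0 + c_i1 * x1 + c_i2 * x2.
        self.c = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
        self.lr = lr

    def predict(self, x1, x2):
        # Layers 1-2: membership degrees and firing strengths (product AND).
        w = [gauss(x1, *a) * gauss(x2, *b) for (a, b) in self.mf]
        # Layer 3: normalised firing strengths.
        total = sum(w) or 1e-12
        wn = [wi / total for wi in w]
        # Layers 4-5: weighted first-order consequents, combined output.
        f = [c0 + c1 * x1 + c2 * x2 for (c0, c1, c2) in self.c]
        return sum(wi * fi for wi, fi in zip(wn, f)), wn

    def lms_update(self, x1, x2, target):
        """One least-mean-squares step on the consequent parameters only."""
        y, wn = self.predict(x1, x2)
        err = y - target
        for i, wi in enumerate(wn):
            grad = [wi, wi * x1, wi * x2]           # dy / dc_i*
            self.c[i] = [c - self.lr * err * g for c, g in zip(self.c[i], grad)]

clf = TwoRuleAnfis()
for _ in range(200):
    clf.lms_update(0.2, 0.1, 0.0)   # hypothetical "leave" sample
    clf.lms_update(0.9, 0.8, 1.0)   # hypothetical "update" sample
print(round(clf.predict(0.9, 0.8)[0], 3))
```

A fuller implementation would also expose the offline least-squares fit of the consequent parameters mentioned above; only the incremental LMS step is shown here.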
complexity of manipulation with partial information in voting jul palash dey tata institute of fundamental research mumbai neeldhara misra indian institute of technology gandhinagar mail narahari indian institute of science bangalore hari january abstract the coalitional manipulation problem has been studied extensively in the literature for many voting rules however most studies have focused on the complete information setting wherein the manipulators know the votes of the while this assumption is reasonable for purposes of showing intractability it is unrealistic for algorithmic considerations in most scenarios it is impractical to assume that the manipulators to have accurate knowledge of all the other votes in this work we investigate manipulation with incomplete information in our framework the manipulators know a partial order for each voter that is consistent with the true preference of that voter in this setting we formulate three natural computational notions of manipulation namely weak opportunistic and strong manipulation we say that an extension of a partial order is viable if there exists a manipulative vote for that extension we propose the following notions of manipulation when manipulators have incomplete information about the votes of other voters w eak m anipulation the manipulators seek to vote in a way that makes their preferred candidate win in at least one extension of the partial votes of the o pportunistic m anipulation the manipulators seek to vote in a way that makes their preferred candidate win in every viable extension of the partial votes of the nonmanipulators s trong m anipulation the manipulators seek to vote in a way that makes their preferred candidate win in every extension of the partial votes of the we consider several scenarios for which the traditional manipulation problems are easy for instance borda with a single manipulator for many of them the corresponding manipulative questions that we propose turn out to be computationally intractable our hardness results often hold even when very little information is missing or in other words even when the instances are very close to the complete information setting our results show that the impact of paucity of information on the computational complexity of manipulation crucially depends on the notion of manipulation under consideration our overall conclusion is that computational hardness continues to be a valid obstruction to manipulation in the context of a more realistic model keywords voting manipulation incomplete information algorithm computational complexity introduction in many real life and ai related applications agents often need to agree upon a common decision although they have different preferences over the available alternatives a natural tool used in these situations is voting some classic examples of the use of voting rules in the context of multiagent systems include clarke tax collaborative filtering and similarity search etc in a typical voting scenario we have a set of candidates and a set of voters reporting their rankings of the candidates called their preferences or votes a voting rule selects one candidate as the winner once all voters provide their votes a set of votes over a set of candidates along with a voting rule is called an election a central issue in voting is the possibility of manipulation for many voting rules it turns out that even a single vote if cast differently can alter the outcome in particular a voter manipulates an election if by misrepresenting her preference 
she obtains an outcome that she prefers over the honest outcome in a cornerstone impossibility result gibbard and satterthwaite show that every unanimous and voting rule with three candidates or more is manipulable we refer to for an excellent introduction to various strategic issues in computational social choice theory considering that voting rules are indeed susceptible to manipulation it is natural to seek ways by which elections can be protected from manipulations the works of bartholdi et al approach the problem from the perspective of computational intractability they exploit the possibility that voting rules despite being vulnerable to manipulation in theory may be hard to manipulate in practice indeed a manipulator is faced with the following decision problem given a collection of votes p and a distinguished candidate c does there exist a vote v that when tallied with p makes c win for a fixed voting rule r the manipulation problem has subsequently been generalized to the problem of c oalitional manipulation by conitzer et al where one or more manipulators collude together and try to make a distinguished candidate win the election the manipulation problem fortunately turns out to be in several settings this established the success of the approach of demonstrating a computational barrier to manipulation however despite having set out to demonstrate the hardness of manipulation the initial results in were to the contrary indicating that many voting rules are in fact easy to manipulate moreover even with multiple manipulators involved popular voting rules like plurality veto kapproval bucklin and fallback continue to be easy to manipulate while we know that the computational intractability may not provide a strong barrier even for rules for which the coalitional manipulation problem turns out to be in all other cases the possibility of manipulation is a much more serious concern motivation and problem formulation in our work we propose to extend the argument of computational intractability to address the cases where the approach appears to fail we note that most incarnations of the manipulation problem studied so far are in the complete information setting where the manipulators have complete knowledge of the preferences of the truthful voters while these assumptions are indeed the best possible for the computationally negative results we note that they are not reflective of typical scenarios indeed concerns regarding privacy of information and in other cases the sheer volume of information would be significant hurdles for manipulators to obtain complete information motivated by this we consider the manipulation problem in a natural partial mation setting in particular we model the partial information of the manipulators about the votes of the as partial orders over the set of candidates a partial order over the set of candidates will be called a partial vote our results show that several of the voting rules that are easy to manipulate in the complete information setting become intractable when the manipulators know only partial votes indeed for many voting rules we show that even if the ordering of a small number of pairs of candidates is missing from the profile manipulation becomes an intractable problem our results therefore strengthen the view that manipulation may not be practical if we limit the information the manipulators have at their disposal about the votes of other voters we introduce three new computational problems that in a natural way extend the question of 
manipulation to the partial information setting in these problems the input is a set of partial votes p corresponding to the votes of the a set of manipulators m and a preferred candidate the task in the w eak m anipulation wm problem is to determine if there is a way to cast the manipulators votes such that c wins the election for at least one extension of the partial votes in on the other hand in the s trong m anipulation sm problem we would like to know if there is a way of casting the manipulators votes such that c wins the election in every extension of the partial votes in we also introduce the problem of o pportunistic m anipulation om which is an intermediate notion of manipulation let us call an extension of a partial profile viable if it is possible for the manipulators to vote in such a way that the manipulators desired candidate wins in that extension in other words a viable extension is a y of the standard c oalitional m anipulation problem we have an opportunistic manipulation when it is possible for the manipulators to cast a vote which makes c win the election in all viable extensions note that any y of s trong m anipulation is also an y of o pportunistic m anipulation but this may not be true in the reverse direction as a particularly extreme example consider a partial profile where there are no viable extensions this would be a n for s trong m anipulation but a vacuous y of o pportunistic m anipulation the o pportunistic m anipulation problem allows us to explore a more relaxed notion of manipulation one where the manipulators are obliged to be successful only in extensions where it is possible to be successful note that the goal with s trong m anipulation is to be successful in all extensions and therefore the only interesting instances are the ones where all extensions are viable it is easy to see that y es instance of s trong m anipulation is also a y es instance of o ppor tunistic m anipulation and w eak m anipulation beyond this we remark that all the three problems are questions with different goals and neither of them render the other redundant we refer the reader to figure for a simple example distinguishing these scenarios all the problems above generalize c oalitional m anipulation and hence any computational intractability result for c oalitional m anipulation immediately yields a corresponding intractability result for w eak m anipulation s trong m anipulation and o pportunistic m a nipulation under the same setting for example it is known that the c oalitional m anipula tion problem is intractable for the maximin voting rule when we have at least two manipulators hence the w eak m anipulation s trong m anipulation and o pportunistic m anipulation problems are intractable for the maximin voting rule when we have at least two manipulators figure an example of a partial profile consider the plurality voting rule with one manipulator if the favorite candidate is a then the manipulator simply has to place a on the top of his vote to make a win in any extension if the favorite candidate is b there is no vote that makes b win in any extension finally if the favorite candidate is c then with a vote that places c on top the manipulator can make c win in the only viable extension extension related work a notion of manipulation under partial information has been considered by conitzer et al they focus on whether or not there exists a dominating manipulation and show that this problem is for many common voting rules given some partial votes a dominating manipulation is a 
vote that the manipulator can cast which makes the winner at least as preferable and sometimes more preferable as the winner when the manipulator votes truthfully the dominating manipulation problem and the w eak m anipulation o ppor tunistic m anipulation and s trong m anipulation problems do not seem to have any apparent connection for example the dominating manipulation problem is for all the common voting rules except plurality and veto whereas the s trong m anipulation problem is easy for most of the cases see table however the results in establish the fact that it is indeed possible to make manipulation intractable by restricting the amount of information the manipulators possess about the votes of the other voters elkind and study manipulation under voting rule uncertainty however in our work the voting rule is fixed and known to the manipulators two closely related problems that have been extensively studied in the context of incomplete votes are p ossible w inner and n ecessary w inner in the p ossible w inner problem we are given a set of partial votes p and a candidate c and the question is whether there exists an extension of p where c wins while in the n ecessary w inner problem the question is whether c is a winner in every extension of following the work in a number of special cases and variants of the p ossible w inner problem have been studied in the literature the flavor of the w eak m anipulation problem is clearly similar to p ossible w inner however we emphasize that there are subtle distinctions between the two problems a more elaborate comparison is made in the next section our contribution our primary contribution in this work is to propose and study three natural and realistic generalizations of the computational problem of manipulation in the incomplete information setting we summarize the complexity results in this work in table our results provide the following interesting insights on the impact of lack of information on the computational difficulty of manipulation we note that the number of undetermined pairs of candidates per vote are small constants in all our hardness results b we observe that the computational problem of manipulation for the plurality and veto voting rules remains polynomial time solvable even with lack of information irrespective of the notion of manipulation under consideration proposition theorem and and observation we note that the plurality and veto voting rule also remain vulnerable under the notion of dominating manipulation b the impact of absence of information on the computational complexity of manipulation is more dynamic for the bucklin borda and maximin voting rules only the w eak m anipulation and o pportunistic m anipulation problems are computationally intractable for the theorem and theorem and bucklin theorem and borda observation and theorem and maximin observation and theorem voting rules whereas the s trong m anipulation problem remains computationally tractable theorem to b table shows an interesting behavior of the fallback voting rule the fallback voting rule is the only voting rule among the voting rules we study here for which the w eak m anipulation problem is theorem but both the o pportunistic m anipulation and s trong m anipulation problems are polynomial time solvable theorem and observation this is because the o pportunistic m anipulation problem can be solved for the fallback voting rule by simply making manipulators vote for their desired candidate b our results show that absence of information makes all 
the three notions of manipulations intractable for the voting rule for every rational for the w eak m anipulation problem observation and for every for the o pportunistic m anipulation and s trong m anipulation problems theorem and our results see table show that whether lack of information makes the manipulation problems harder crucially depends on the notion of manipulation applicable to the situation under consideration all the three notions of manipulations are in our view natural extension of manipulation to the incomplete information setting and tries to capture different behaviors of manipulators for example the w eak m anipulation problem maybe applicable to an optimistic manipulator whereas for an pessimistic manipulator the s trong m anipulation problem may make more sense organization of the paper we define the problems and introduce the basic terminology in the next section we present our hardness results in section in section we present our polynomially solvable algorithms finally we conclude with future directions of research in section wm plurality veto bucklin fallback borda maximin wm om p om sm sm p p p p table summary of results denotes the number of manipulators the results in white follow immediately from the literature observation to our results for the voting rule hold for every rational for the w eak m anipulation problem and for every for the o pportunistic m anipulation and s trong m anipulation problems preliminaries in this section we begin by providing the technical definitions and notations that we will need in the subsequent sections we then formulate the problems that capture our notions of manipulation when the votes are given as partial orders and finally draw comparisons with related problems that are already studied in the literature of computational social choice theory notations and definitions let v vn be the set of all voters and c cm the set of all candidates if not specified explicitly n and m denote the total number of voters and the total number of candidates respectively each voter vi s vote is a preference i over the candidates which is a linear order over for example for two candidates a and b a i b means that the voter vi prefers a to b we denote the set of all linear orders over c by l c hence l c n denotes the set of all preference profile n a map r l c n is called a voting rule for some preference profile l c n if r w then we say w wins uniquely and we write r from here on whenever we say some candidate w wins we mean that the candidate w wins uniquely for simplicity we restrict ourselves to the unique winner case in this paper all our proofs can be easily extended in the case a more general setting is an election where the votes are only partial orders over candidates a partial order is a relation that is reflexive antisymmetric and transitive a partial vote can be extended to possibly more than one linear vote depending on how we fix the order of the unspecified pairs of candidates for example in an election with the set of candidates c a b c a valid partial vote can be a b this partial vote can be extended to three linear votes namely a b c a c b c a b in this paper we often define a partial vote like where l c and a c c by which we mean the partial vote obtained by removing the order among the pair of candidates in a from also whenever we do not specify the order among a set of candidates while describing a complete vote the is correct in whichever way we fix the order among them we now give examples of some common voting rules b 
positional scoring rules an vector rm with and naturally defines a voting rule a candidate gets score from a vote if it is placed at the ith position and the score of a candidate is the sum of the scores it receives from all the votes the winners are the candidates with maximum score scoring rules remain unchanged if we multiply every by any constant add any constant hence we assume without loss of generality that for any score vector there exists a j such that and for all k j we call such a a normalized score vector for m m we get the borda voting rule with k and else the voting rule we get is known as for the voting rule we have m k and else plurality is and veto is b bucklin and simplified bucklin let be the minimum integer such that at least one candidate gets majority within top positions of the votes the winners under the simplified bucklin voting rule are the candidates having more than votes within top positions the winners under the bucklin voting rule are the candidates appearing within top positions of the votes highest number of times b fallback and simplified fallback for these voting rules each voter v ranks a subset xv c of candidates and disapproves the rest of the candidates now for the fallback and simplified fallback voting rules we apply the bucklin and simplified bucklin voting rules respectively to define winners if there is no integer for which at least one candidate gets more than votes both the fallback and simplified fallback voting rules output the candidates with most approvals as winners we assume for simplicity that the number of candidates each partial vote approves is known b maximin for any two candidates x and y let d x y be n x y n y x where n x y respectively n y x is the number of voters who prefer x to y respectively y to x the election we get by restricting all the votes to x and y only is called the pairwise election between x and y the maximin score of a candidate x is d x y the winners are the candidates with maximum maximin score b the score of a candidate x is y x de x y y x de x y where that is the of a candidate x is the number of other candidates it defeats in pairwise election plus times the number of other candidates it ties with in pairwise elections the winners are the candidates with the maximum score problem definitions we now formally define the three problems that we consider in this work namely w eak m anipu lation o pportunistic m anipulation and s trong m anipulation let r be a fixed voting rule we first introduce the w eak m anipulation problem definition eak m anipulation given a set of partial votes p over a set of candidates c a positive integer denoting the number of manipulators and a candidate c do there exist votes l c such that there exists an extension l c of p with r c to define the o pportunistic m anipulation problem we first introduce the notion of an r c opportunistic voting profile where r is a voting rule and c is any particular candidate definition r c voting profile let be the number of manipulators and p a set of partial votes an profile i l c is called an r c voting if each extension p of p for which there exists an profile l c with r p c we have r p i in other words an profile is r c with respect to a partial profile if when put together with the truthful votes of any extension c wins if the extension is viable to begin with we are now ready to define the o pportunistic m anipulation problem definition pportunistic m anipulation given a set of partial votes p over a set of candidates c a positive integer 
denoting the number of manipulators and a candidate c does there exist an r c profile we finally define the s trong m anipulation problem definition trong m anipulation given a set of partial votes p over a set of candidates c a positive integer denoting the number of manipulators and a candidate c do there exist votes i l c such that for every extension l c of p we have r i c we use p c to denote instances of w eak m anipulation o pportunistic m anipulation and s trong m anipulation where p denotes a profile of partial votes denotes the number of manipulators and c denotes the desired winner for the sake of completeness we provide the definitions of the c oalitional m anipulation and p ossible w inner problems below definition oalitional m anipulation given a set of complete votes over a set of candidates c a positive integer denoting the number of manipulators and a candidate c do there exist votes i l c such that r i c definition ossible w inner given a set of partial votes p and a candidate c does there exist an extension of the partial votes in p to linear votes such that r c comparison with possible winner and coalitional manipulation for any fixed voting rule the w eak m anipulation problem with manipulators reduces to the p ossible w inner problem this is achieved by simply using the same set as truthful votes and introducing empty votes we summarize this in the observation below observation the w eak m anipulation problem reduces to the p ossible w inner problem for every voting rule proof let p c be an instance of w eak m anipulation let q be the set consisting of many copies of partial votes clearly the w eak m anipulation instance p c is equivalent to the p ossible w inner instance p q c however whether the p ossible w inner problem reduces to the w eak m anipulation problem or not is not clear since in any w eak m anipulation problem instance there must exist at least one manipulator and a p ossible w inner instance may have no empty vote from a technical point of view the difference between the w eak m anipulation and p ossible w inner problems may look marginal however we believe that the w eak m anipulation problem is a very natural generalization of the c oalitional m anipulation problem in the partial information setting and thus worth studying similarly it is easy to show that the c oalitional m anipulation problem with manipulators reduces to w eak m anipulation o pportunistic m anipulation and s trong m anipulation problems with manipulators since the former is a special case of the latter ones observation the c oalitional m anipulation problem with manipulators reduces to w eak m anipulation o pportunistic m anipulation and s trong m anipulation problems with manipulators for all voting rules and for all positive integers proof follows from the fact that every instance of the c oalitional m anipulation problem is also an equivalent instance of the w eak m anipulation o pportunistic m anipulation and s trong m anipulation problems finally we note that the c oalitional m anipulation problem with manipulators can be reduced to the w eak m anipulation problem with just one manipulator by introducing empty votes these votes can be used to witness a good extension in the forward direction in the reverse direction given an extension where the manipulator is successful the extension can be used as the manipulator s votes this argument leads to the following observation observation the c oalitional m anipulation problem with manipulators reduces to the w eak m anipulation 
problem with one manipulator for every voting rule and for every positive integer proof let p c be an instance of c oalitional m anipulation let q be the set of consisting of many copies of partial vote c others clearly the w eak m anipulation instance c is equivalent to the c oalitional m anipulation instance p this observation can be used to derive the hardness of w eak m anipulation even for one manipulator whenever the hardness for c oalitional m anipulation is known for any fixed number of manipulators for instance this is the case for the voting rules such as borda maximin and copeland however determining the complexity of w eak m anipulation with one manipulator requires further work for voting rules where c oalitional m anipulation is polynomially solvable for any number of manipulators such as plurality bucklin and so on hardness results in this section we present our hardness results while some of our reductions are from the p os sible w inner problem the other reductions in this section are from the e xact c over by ets problem also referred to as this is a problem and is defined as follows definition exact cover by given a set u and a collection s st of t subsets of u with t does there exist a t s with such that x u we use to refer to the complement of which is to say that an instance of is a y es instance if and only if it is a n o instance of the rest of this section is organized according to the problems being addressed weak manipulation to begin with recall that the c oalitional m anipulation problem is for the borda maximin and voting rules for every rational when we have two manipulators therefore it follows from observation that the w eak m anipulation problem is for the borda maximin and voting rules for every rational even with one manipulator for the and voting rules we reduce from the corresponding p ossible w in ner problems while it is natural to start from the same voting profile the main challenge is in undoing the advantage that the favorite candidate receives from the manipulator s vote in the reverse direction we begin with proving that the w eak m anipulation problem is for the voting rule even with one manipulator and at most undetermined pairs per vote theorem the w eak m anipulation problem is for the voting rule even with one manipulator for any constant k even when the number of undetermined pairs in each vote is no more than proof for simplicity of presentation we prove the theorem for we reduce from the p ossible w inner problem for which is even when the number of undetermined pairs in each vote is no more than let p be the set of partial votes in a p ossible w inner instance and let c cm c be the set of candidates where the goal is to check if there is an extension of p that makes c win for developing the instance of w eak m anipulation we need to reverse any advantage that the candidate c obtains from the vote of the manipulator notice that the most that the manipulator can do is to increase the score of c by one therefore in our construction we artificially increase the score of all the other candidates by one so that despite of the manipulator s vote c will win the new election if and only if it was a possible winner in the p ossible w inner instance to this end we introduce m many dummy candidates and the complete votes wi ci di others for every i m further we extend the given partial votes of the p ossible w inner instance to force the dummy candidates to be preferred least over the rest by defining for every vi p the corresponding partial vote 
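Since Exact Cover by 3-Sets drives most of the reductions that follow, it may help to fix a concrete (if naive) checker; the representation below and the brute-force search are our own and are only meant for sanity-testing small reduced instances.

```python
from itertools import combinations

def has_exact_cover_by_3_sets(universe, family):
    """X3C: |universe| = 3q and family is a collection of 3-element subsets.
    Return True iff q pairwise-disjoint sets from the family cover the universe."""
    universe = frozenset(universe)
    if len(universe) % 3 != 0:
        return False
    q = len(universe) // 3
    for choice in combinations(family, q):
        covered = set().union(*choice) if choice else set()
        if covered == universe and sum(len(s) for s in choice) == len(universe):
            return True
    return False

# Tiny YES instance: {1,...,6} is covered exactly by the first and last set.
U = range(1, 7)
S = [frozenset({1, 2, 3}), frozenset({2, 3, 4}), frozenset({4, 5, 6})]
print(has_exact_cover_by_3_sets(U, S))   # True
```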
as follows vi c this ensures that all the dummy candidates do not receive any score from the modified partial votes corresponding to the partial votes of the p ossible w inner instance notice that since the number of undetermined pairs in vi is no more than the number of undetermined pairs in is also no more than let q c denote this constructed w eak m anipulation instance we claim that the two instances are equivalent in the forward direction suppose c is a possible winner with respect to p and let p be an extension where c wins then it is easy to see that the manipulator can make c win in some extension by placing c and in the first two positions of her vote note that the partial score of is zero in q indeed consider the extension of q obtained by mimicking the extension p on the common partial votes vi p notice that this is since vi and have exactly the same set of incomparable pairs in this extension the score of c is strictly greater than the scores of all the other candidates since the scores of all candidates in c is exactly one more than their scores in p and all the dummy candidates have a score of at most one in the reverse direction notice that the manipulator puts the candidates c and in the top two positions without loss of generality now suppose the manipulator s vote c others makes c win the election for an extension q of q then consider the extension p obtained by restricting q to notice that the score of each candidate in c in this extension is one less than their scores in q therefore the candidate c wins this election as well concluding the proof the above proof can be imitated for any other constant values of k by reducing it from the p ossible w inner problem for and introducing m k dummy candidates we will use lemma in subsequent proofs which has been used before lemma let c cm d be a set of candidates and a normalized score vector of length then for any given x xm zm there exists r and a voting profile such that the of ci is xi for all i m and the score of candidates d d is less than pm moreover the number of votes is o poly note that the number of votes used in lemma is polynomial in m if and is polynomial in m for every i m which indeed is the case in all the proofs that use lemma we next show that the wm problem is for the voting rule theorem the w eak m anipulation problem for the voting rule is even with one manipulator for any constant k proof we reduce from the p ossible w inner problem for the voting rule which is known to be let p be the set of partial votes in a p ossible w inner problem instance and let c cm c be the set of candidates where the goal is to check if there is an extension that makes c win with respect to we assume without loss of generality that c s position is fixed in all the partial votes if not then we fix the position of c as high as possible in every vote we introduce k many dummy candidates dk the role of the first k dummy candidates is to ensure that the manipulator is forced to place them at the bottom k positions of her vote so that all the original candidates get the same score from the additional vote of the manipulator the most natural way of achieving this is to ensure that the dummy candidates have the same score as c in any extension note that we know the score of c since c s position is fixed in all the partial votes this would force the manipulator to place these k candidates in the last k positions indeed doing anything else will cause these candidates to tie with c even when there is an extension of p that makes c win to 
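When checking constructions like the one above on toy examples, an exponential-time reference implementation of Weak Manipulation for k-approval is useful; the following sketch (our own helper names; ties are resolved by treating c as a winner whenever it attains the maximum score) simply enumerates the manipulator's vote and all extensions.

```python
from itertools import permutations

def k_approval_scores(profile, k):
    scores = {}
    for vote in profile:
        for cand in vote[:k]:
            scores[cand] = scores.get(cand, 0) + 1
        for cand in vote[k:]:
            scores.setdefault(cand, 0)
    return scores

def completions(partial_vote, candidates):
    """All linear orders consistent with a partial vote given as a set of
    pairs (a, b) meaning 'a is preferred to b'."""
    for order in permutations(candidates):
        pos = {c: i for i, c in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in partial_vote):
            yield order

def weak_manipulation_k_approval(partial_votes, candidates, c, k):
    """Is there one manipulative vote and one extension of the partial votes
    under which c attains the maximum k-approval score?"""
    def all_extensions(votes):
        if not votes:
            yield []
            return
        for head in completions(votes[0], candidates):
            for rest in all_extensions(votes[1:]):
                yield [head] + rest
    for manip in permutations(candidates):
        for extension in all_extensions(partial_votes):
            scores = k_approval_scores(extension + [manip], k)
            if scores[c] == max(scores.values()):
                return True
    return False

cands = ("a", "b", "c")
P = [frozenset({("a", "b")}), frozenset({("b", "c")})]
print(weak_manipulation_k_approval(P, cands, "c", k=1))   # True
```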
this end we begin by placing the dummy candidates in the top k positions in all the partial votes formally we modify every partial vote as follows w di others for every i k at this point we know the scores of c and di for every i k using lemma we add complete votes such that the final score of c is the same with the score of every di and the score of c is strictly more than the score of the relative score of every other candidate remains the same this completes the description of the construction we denote the augmented set of partial votes by we now argue the correctness in the forward direction if there is an extension of the votes that makes c win then we repeat this extension and the vote of the manipulator puts the candidate di at the position m i and all the other candidates in an arbitrary fashion formally we let the manipulator s vote be v c cm d d d k by construction c wins the election in this particular setup in the reverse direction consider a vote of the manipulator and an extension q of p in which c wins note that the manipulator s vote necessarily places the candidates di in the bottom k positions indeed if not then c can not win the election by construction we extend a partial vote w p by mimicking the extension of the corresponding partial vote p that is we simply project the extension of on the original set of candidates let q denote this proposed extension of we claim that c wins the election given by q indeed suppose not let ci be a candidate whose score is at least the score of c in the extension q note that the scores of ci and c in the extension q are exactly the same as their scores in q except for a constant offset importantly their scores are offset by the same amount this implies that the score of ci is at least the score of c in q as well which is a contradiction hence the two instances are equivalent we next prove by a reduction from that the w eak m anipulation problem for the bucklin and simplified bucklin voting rules is even with one manipulator and at most undetermined pairs per vote theorem the w eak m anipulation problem is for bucklin simplified bucklin fallback and simplified fallback voting rules even when we have only one manipulator and the number of undetermined pairs in each vote is no more than proof we reduce the problem to w eak m anipulation for simplified bucklin let u um s st be an instance of where each si is a subset of u of size three we construct a w eak m anipulation instance based on u s as follows candidate set c w x d u c w a b where m m we first introduce the following partial votes p in correspondence with the sets in the family as follows w x si c u si d x c si t notice that the number of undetermined pairs in every vote in p is we introduce the following additional complete votes q b t copies of u c others b copies of u a c others b copies of d b others the total number of voters including the manipulator is now we show equivalence of the two instances in the forward direction suppose we have an exact set cover t let the vote of the manipulator v be c d others we consider the following extension p of w si c x u si d on the other hand if si s t then we have w x si c u si d we claim that c is the unique simplified bucklin winner in the profile p w v notice that the simplified bucklin score of c is m in this extension since it appears in the top m positions in the votes corresponding to the set cover t votes from the complete profile q and one vote v of the manipulator for any other candidate ui u ui appears in the top m positions 
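The majority-within-top-ℓ bookkeeping in these arguments is easy to slip on; a direct implementation of (simplified) Bucklin winner determination, following the definitions from the preliminaries (tie conventions as stated there; the helper names are ours), can be used to replay small cases of the reduction.

```python
def simplified_bucklin_winners(profile):
    """Find the smallest ell at which some candidate is ranked in the top ell
    positions by a strict majority; winners are all candidates with such a
    majority at that ell."""
    n, m = len(profile), len(profile[0])
    for ell in range(1, m + 1):
        counts = {}
        for vote in profile:
            for cand in vote[:ell]:
                counts[cand] = counts.get(cand, 0) + 1
        winners = [c for c, cnt in counts.items() if 2 * cnt > n]
        if winners:
            return ell, winners
    return m, list(profile[0])

def bucklin_winners(profile):
    """Same ell, but winners are the candidates appearing most often in the
    top ell positions."""
    ell, _ = simplified_bucklin_winners(profile)
    counts = {}
    for vote in profile:
        for cand in vote[:ell]:
            counts[cand] = counts.get(cand, 0) + 1
    best = max(counts.values())
    return ell, [c for c, cnt in counts.items() if cnt == best]

profile = [("a", "b", "c", "d"), ("b", "a", "c", "d"), ("c", "a", "b", "d")]
print(simplified_bucklin_winners(profile))   # (2, ['a', 'b'])
print(bucklin_winners(profile))              # (2, ['a'])
```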
once in p and t m times in q thus ui does not get majority in top m positions making its simplified bucklin score at least m hence c is the unique simplified bucklin winner in the profile p w v similarly the candidate appears only t times in the top m positions the same can be argued for the remaining candidates in d w and in the reverse direction suppose the w eak m anipulation is a y es instance we may assume without loss of generality that the manipulator s vote v is c d others since the simplified bucklin score of the candidates in d is at least let p be the extension of p such that c is the unique winner in the profile p q v as every candidate in w is ranked within top m positions m in t m votes in q for c to win c x must hold in at least votes in in those votes all the candidates in si are also within top m positions now if any candidate in u is within top m positions in p more than once then c will not be the unique winner hence the si s corresponding to the votes where c x in p form an exact set cover the reduction above also works for the bucklin voting rule specifically the argument for the forward direction is exactly the same as the simplified bucklin above and the argument for the reverse direction is as follows every candidate in w is ranked within top m positions in m votes in q and c is never placed within top m positions in any vote in q hence for c to win c x must hold in at least m votes in in those votes all the candidates in si are also within top m positions notice that c never gets placed within top m positions in any vote in p q now if any candidate x u is within top m positions in p more than once then x gets majority within top m positions and thus c can not win the result for the fallback and simplified fallback voting rules follow from the corresponding results for the bucklin and simplified bucklin voting rules respectively since every bucklin and simplified bucklin election is also a fallback and simplified fallback election respectively strong manipulation we know that the c oalitional m anipulation problem is for the borda maximin and voting rules for every rational when we have two manipulators thus it follows from observation that s trong m anipulation is for borda maximin and voting rules for every rational for at least two manipulators for the case of one manipulator s trong m anipulation turns out to be solvable for most other voting rules for however we show that the problem is for every for a single manipulator even when the number of undetermined pairs in each vote is bounded by a constant this is achieved by a careful reduction from the following lemma has been used before lemma for any function f c c z such that b c f a b b a b c d c f a b f c d is even there exists a profile such that for all a b c a defeats b with a margin of f a b moreover x n is even and n o a b a b we have following intractability result for the s trong m anipulation problem for the rule with one manipulator and at most undetermined pairs per vote theorem s trong m anipulation is for voting rule for every even when we have only one manipulator and the number of undetermined pairs in each vote is no more than proof we reduce to s trong m anipulation for rule let u um s st is an instance we assume without loss of generality t to be an even integer if not replicate any set from s we construct a corresponding w eak m anipulation instance for as follows candidate set c u c w z d partial votes p t u si c z d si w z c si d w notice that the number of undetermined pairs in every vote 
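The margin-realization lemma invoked in this section guarantees a profile achieving prescribed pairwise margins with polynomially many votes. A standard way to obtain this kind of guarantee, sketched below for even margins, is to add votes in pairs so that each pair shifts exactly one pairwise margin by two and leaves every other margin untouched; this is our illustration of the idea, not necessarily the exact construction behind the cited lemma.

```python
def profile_with_margins(candidates, margins):
    """Build complete votes whose pairwise margins D(a, b) equal margins[(a, b)],
    assuming the margin table is antisymmetric and every entry is even."""
    votes = []
    for (a, b), w in margins.items():
        if w <= 0:
            continue                       # the pair (b, a) handles negative entries
        assert w % 2 == 0, "this sketch only handles even margins"
        rest = [c for c in candidates if c not in (a, b)]
        for _ in range(w // 2):
            votes.append([a, b] + rest)          # a > b > rest
            votes.append(rest[::-1] + [a, b])    # reversed rest > a > b
    return votes

def margin(profile, a, b):
    pref_a = sum(1 for v in profile if v.index(a) < v.index(b))
    return pref_a - (len(profile) - pref_a)

cands = ["a", "b", "c"]
wanted = {("a", "b"): 2, ("b", "a"): -2, ("b", "c"): 4, ("c", "b"): -4,
          ("a", "c"): 0, ("c", "a"): 0}
prof = profile_with_margins(cands, wanted)
print({(x, y): margin(prof, x, y) for x in cands for y in cands if x != y})
```

Each added pair of votes contributes +2 to D(a, b) and 0 to every other margin, so the requested margins simply add up, and the number of votes used is proportional to the sum of the requested margins.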
in p is now we add a set q of complete votes with even and poly m t using lemma to achieve the following margin of victories in pairwise elections figure shows the weighted majority graph of the resulting election b dq d z dq z c dq c d dq w z b dq ui d dq c ui u b dq z ui t u b dq c w t b dq ui mod m u b dq a b for every a b c a b not mentioned above d uj w t c z figure weighted majority graph of the reduced instance in theorem the weight of all the edges not shown in the figure are for simplicity we do not show edges among um we have only one manipulator who tries to make c winner notice that the number of votes in the s trong m anipulation instance p q c including the manipulator s vote is odd since and are even integers therefore a b is never zero for every a b c a b in every extension of p and manipulators vote and consequently the particular value of does not play any role in this reduction hence we assume without loss of generality to be zero from here on and simply use the term copeland instead of now we show that the instance u s is a y es instance if and only if the s trong m anipu lation instance p q c is a n o instance a s trong m anipulation instance is a n o instance if there does not exist a vote of the manipulator which makes c the unique winner in every extension of the partial votes we can assume without loss of generality that manipulator puts c at first position and z at last position in her vote assume that the instance is a y es instance suppose by renaming that s forms an exact set cover we claim that the following extension p of p makes both z and c copeland extension p of p m i u si c z d si w m u si d si w c z i we have summarize the pairwise margins between z and c and the rest of the candidates from the profile p q v in table the candidates z and c are the with copeland score m c z z c c c c z ui u w d w ui u d table z and c for the other direction notice that copeland score of c is at least m since c defeats d and every candidate in u in every extension of also notice that the copeland score of z can be at most m since z loses to w and d in every extension of hence the only way c can not be the unique winner is that z defeats all candidates in u and w defeats this requires w c in at least m extensions of we claim that the sets si in the remaining of the extensions where c w forms an exact set cover for u s indeed otherwise some candidate ui u is not covered then notice that ui z in all t votes making d z ui opportunistic manipulation all our reductions for the for o pportunistic m anipulation start from we note that all our hardness results hold even when there is only one manipulator our overall approach is the following we engineer a set of partial votes in such a way that the manipulator is forced to vote in a limited number of ways to have any hope of making her favorite candidate win for each such vote we demonstrate a viable extension where the vote fails to make the candidate a winner leading to a n o instance of o pportunistic m anipulation these extensions rely on the existence of an exact cover on the other hand we show that if there is no exact set cover then there is no viable extension thereby leading to an instance that is vacuously a y es instance of o pportunistic m anipulation our first result on o pportunistic m anipulation shows that the o pportunistic m anipulation problem is for the voting rule for constant k even when the number of manipulators is one and the number of undetermined pairs in each vote is no more than theorem the o 
pportunistic m anipulation problem is for the voting rule for constant k even when the number of manipulators is one and the number of undetermined pairs in each vote is no more than proof we reduce to o pportunistic m anipulation for rule let u um s st is an instance we construct a corresponding o pportunistic m anipulation instance for voting rule as follows we begin by introducing a candidate for every element of the universe along with k dummy candidates denoted by w and special candidates c d x y formally we have candidate set c u c d x y now for every set si in the universe we define the following total order on the candidate set which we denote by w si y x u si c d using we define the partial vote pi as follows pi y x si x x we denote the set of partial votes pi i t by p and i t by we remark that the number of undetermined pairs in each partial vote pi is we now invoke lemma from which allows to achieve any scores on the candidates using only polynomially many additional votes using this we add a set q of complete votes with poly m t to ensure the following scores where we denote the score of a candidate from a set of votes v by sv sq sq sq y sq c sq d sq w sq c w sq x sq c uj sq c m our reduced instance is p q c the reasoning for this score configuration will be apparent as we argue the equivalence we first argue that if we had a y es instance of in other words there is no exact cover then we have a y es instance of o pportunistic m anipulation it turns out that this will follow from the fact that there are no viable extensions because as we will show next a viable extension implies the existence of an exact set cover to this end first observe that the partial votes are constructed in such a way that c gets no additional score from any extension assuming that the manipulator approves c without loss of generality the final score of c in any extension is going to be sq c now in any viable extension every candidate uj has to be pushed out of the top k positions at least once observe that whenever this happens y is forced into the top k positions since y is behind the score of c by only votes si s can be pushed out of place in only votes for every uj to lose one point these votes must correspond to an exact cover therefore if there is no exact cover then there is no viable extension showing one direction of the reduction on the other hand suppose we have a n o instance of that is there is an exact cover let si i forms an exact cover of u we will now use the exact cover to come up with two viable extensions both of which require the manipulator to vote in different ways to make c win therefore there is no single manipulative vote that accounts for both extensions leading us to a n o instance of o pportunistic m anipulation first consider this completion of the partial votes i w y x si u si c d i w y x si u si c d i t w si y x u si c d notice that in this completion once accounted for along with the votes in q the score of c is tied with the scores of all uj s x and y while the score of is one less than the score of therefore the only k candidates that the manipulator can afford to approve are w the candidates c d and however consider the extension that is identical to the above except with the first vote changed to w y x si u si c d here on the other hand the only way for c to be an unique winner is if the manipulator approves w c d and therefore it is clear that there is no way for the manipulator to provide a consolidated vote for both these profiles therefore we have a n o instance 
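For intuition about what an (r, c)-opportunistic profile demands, the condition can be checked directly from its definition on tiny instances. The sketch below does this for k-approval with a single manipulator; it runs in exponential time, reads "r(...) = c" as "c attains the maximum score", and uses names of our own choosing.

```python
from itertools import permutations

def k_approval_winner_set(profile, k):
    scores = {}
    for vote in profile:
        for cand in vote[:k]:
            scores[cand] = scores.get(cand, 0) + 1
        for cand in vote[k:]:
            scores.setdefault(cand, 0)
    best = max(scores.values())
    return {cand for cand, s in scores.items() if s == best}

def completions(partial_vote, candidates):
    for order in permutations(candidates):
        pos = {c: i for i, c in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in partial_vote):
            yield order

def is_opportunistic(partial_votes, candidates, c, k, manip_vote):
    """A single manipulative vote is opportunistic iff every viable extension
    (one that SOME manipulative vote turns into a win for c) is turned into a
    win for c by THIS vote."""
    def extensions(votes):
        if not votes:
            yield []
            return
        for head in completions(votes[0], candidates):
            for rest in extensions(votes[1:]):
                yield [head] + rest
    for ext in extensions(partial_votes):
        viable = any(c in k_approval_winner_set(ext + [w], k)
                     for w in permutations(candidates))
        if viable and c not in k_approval_winner_set(ext + [manip_vote], k):
            return False
    return True

cands = ("a", "b", "c")
P = [frozenset({("a", "b")})]
print(is_opportunistic(P, cands, "c", 1, ("c", "a", "b")))   # True
```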
of o pportunistic m anipulation we next move on to the voting rule and show that the o pportunistic m anipulation problem for the is for every constant k even when the number of manipulators is one and the number of undetermined pairs in each vote is no more than theorem the o pportunistic m anipulation problem is for the voting rule for every constant k even when the number of manipulators is one and the number of undetermined pairs in each vote is no more than proof we reduce to o pportunistic m anipulation for rule let u um s st is an instance we construct a corresponding o pportunistic m anipulation instance for voting rule as follows candidate set c u c d x y a w where a k for every i t we define as follows t c a u si d si y x w using we define partial vote pi y x si x x for every i t we denote the set of partial votes pi i t by p and i t by we note that the number of undetermined pairs in each partial vote pi is using lemma we add a set q of complete votes with poly m t to ensure the following we denote the score of a candidate from a set of votes w by sw b c b ai uj w c a uj u w w b y c b x c we have only one manipulator who tries to make c winner now we show that the instance u s is a y es instance if and only if the o pportunistic m anipulation instance p q c is a n o instance in the forward direction let us now assume that the instance is a y es instance suppose by renaming that forms an exact set cover let us assume that the manipulator s vote v disapproves every candidate in w a since otherwise c can never win uniquely we now show that if v does not disapprove then v is not a vote suppose v does not disapprove then we consider the following extension p of i c a u si d y x si w i c a u si d y x si w i t c a u si d si y x w we have the following scores c x uj u hence both c and win for the votes v however the vote which disapproves makes c a unique winner for the votes p q hence v is not a vote similarly we can show that if the manipulator s vote does not disapprove then the vote is not hence there does not exist any vote and the o pportunistic m anipulation instance is a n o instance in the reverse direction we show that if the instance is a n o instance then there does not exist a vote v of the manipulator and an extension p of p such that c is the unique winner for the votes p q thereby proving that the o pportunistic m anipulation instance is vacuously y es and thus every vote is notice that there must be at least votes in p where the corresponding si gets pushed in bottom k positions since uj c a uj u however in each vote in y is placed within top m k many position and thus we have is exactly since y c now notice that there must be at least one candidate u u which is not covered by the sets si s corresponding to the votes because the instance is a n o instance hence c can not win the election uniquely irrespective of the manipulator s vote thus every vote is and the o pportunistic m anipulation instance is a y es instance we show next similar intractability result for the borda voting rule too with only at most undetermined pairs per vote theorem the o pportunistic m anipulation problem is for the borda voting rule even when the number of manipulators is one and the number of undetermined pairs in every vote is no more than proof we reduce to o pportunistic m anipulation for the borda rule let u um s st is an instance without loss of generality we assume that m is not divisible by if not then we add three new elements to u and a set to s we construct a corresponding o 
pportunistic m anipulation instance for the borda voting rule as follows candidate set c u c d y for every i t we define as follows t y si u si d c using we define partial vote pi y si for every i t we denote the set of partial votes pi i t by p and i t by we note that the number of undetermined pairs in each partial vote pi is using lemma we add a set q of complete votes with poly m t to ensure the following we denote the borda score of a candidate from a set of votes w by sw b y c m b c b c b ui c m i m b d c we have only one manipulator who tries to make c winner now we show that the instance u s is a y es instance if and only if the o pportunistic m anipulation instance c is a n o instance notice that we can assume without loss of generality that the manipulator places c at the first position d at the second position the candidate ui at m i th position for every i m and y at the last position since otherwise c can never win uniquely irrespective of the extension of p that it the manipulator s vote looks like c d um y in the forward direction let us now assume that the instance is a y es instance suppose by renaming that forms an exact set cover let the manipulator s vote v be c d um y we now argue that v is not a vote the other case where the manipulator s vote be c d um y can be argued similarly we consider the following extension p of i y si u si d c i y si u si d c i t y si u si d c we have the following borda scores v c v y v v v ui m hence c does not win uniquely for the votes p q v however c is the unique winner for the votes p q hence there does not exist any vote and the o pportunistic m anipulation instance is a n o instance in the reverse direction we show that if the instance is a n o instance then there does not exist a vote v of the manipulator and an extension p of p such that c is the unique winner for the votes p q thereby proving that the o pportunistic m anipulation instance is vacuously y es and thus every vote is notice that the score of y must decrease by at least for c to win uniquely however in every vote v where the score of y decreases by at least one in any extension p of p at least one of or must be placed at top position of the vote however the candidates and can be placed at top positions of the votes in p at most many times while ensuring c does not lose the election also even after manipulator places the candidate ui at m i th position for every i m for c to win uniquely the score of every ui must decrease by at least one hence altogether there will be exactly votes denoted by the set in any extension of p where y is placed at the second position however since the instance is a n o instance the si s corresponding to the votes in does not form a set cover let u u be an element not covered by the si s corresponding to the votes in notice that the score of u does not decrease in the extension p and thus c does not win uniquely irrespective of the manipulator s vote thus every vote is and thus the o pportunistic m anipulation instance is a y es instance thus every vote is and the o pportunistic m anipulation instance is a y es instance for the maximin voting rule we show intractability of o pportunistic m anipulation with one manipulator even when the number of undetermined pairs in every vote is at most theorem the o pportunistic m anipulation problem is for the maximin voting rule even when the number of manipulators is one and the number of undetermined pairs in every vote is no more than proof we reduce to o pportunistic m anipulation for the maximin rule 
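The maximin analysis below is carried out entirely in terms of pairwise margins, so it is convenient to evaluate maximin scores directly from a margin table rather than from a full profile; a small sketch follows (the table convention margins[(x, y)] = D(x, y) is ours).

```python
def maximin_from_margins(margins, candidates):
    """Maximin score of x is the minimum of D(x, y) over all y != x."""
    return {x: min(margins[(x, y)] for y in candidates if y != x)
            for x in candidates}

def unique_maximin_winner(margins, candidates):
    scores = maximin_from_margins(margins, candidates)
    best = max(scores.values())
    winners = [x for x, s in scores.items() if s == best]
    return winners[0] if len(winners) == 1 else None

# Toy antisymmetric margin table over three candidates.
cands = ["c", "d", "x"]
margins = {}
for a, b, w in [("c", "d", 3), ("c", "x", 1), ("d", "x", -5)]:
    margins[(a, b)], margins[(b, a)] = w, -w
print(maximin_from_margins(margins, cands))    # {'c': 1, 'd': -5, 'x': -1}
print(unique_maximin_winner(margins, cands))   # c
```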
let u um s st is an instance we construct a corresponding o pportunistic m anipulation instance for the maximin voting rule as follows candidate set c u c d x y for every i t we define as follows t si x d y u si using we define partial vote pi x si d y for every i t we denote the set of partial votes pi i t by p and i t by we note that the number of undetermined pairs in each partial vote pi is we define another partial vote p as follows p others using lemma we add a set q of complete votes with poly m t to ensure the following pairwise margins notice that the pairwise margins among and does not include the partial vote p figure shows the weighted majority graph of the resulting election b p d c b p x d b p y x b p d uj u b b p a b for every a b c not defined above we have only one manipulator who tries to make c winner now we show that the instance u s is a y es instance if and only if the o pportunistic m anipulation instance p q p c is a n o instance notice that we can assume without loss of generality that the manipulator s vote prefers c to every other candidate y to x x to d and d to uj for every uj u in the forward direction let us now assume that the instance is a y es instance suppose by renaming that forms an exact set cover notice that the manipulator s vote must prefer either to or to or to we show that if the manipulator s vote v prefers to then v is not a vote the other two cases are symmetrical consider the following extension p of p and p of i d y si x u si i t si x d y u si p others from the votes in p q v p the maximin score of c is of d x uj u are of are at most than and of is hence c is not the unique maximn winner d uj c y x figure weighted majority graph of the reduced instance in theorem solid line and dashed line represent pairwise margins in p and respectively the weight of all the edges not shown in the figure are within to for simplicity we do not show edges among um however the manipulator s vote c other makes c the unique maximin winner hence v is not a vote for the reverse direction we show that if the instance is a n o instance then there does not exist a vote v of the manipulator and an extension p of p such that c is the unique winner for the votes p q thereby proving that the o pportunistic m anipulation instance is vacuously y es and thus every vote is consider any extension p of notice that for c to win uniquely y x must be at least of the votes in p call these set of votes however d x in every vote in and d x can be in at most votes in p for c to win uniquely hence we have also for c to win each d uj must be at least one vote of p and d uj is possible only in the votes in however the sets si s corresponding to the votes in does not form a set cover since the instance is a n o instance hence there must exist a uj u for which uj d in every vote in p and thus c can not win uniquely irrespective of the vote of the manipulator thus every vote is and the o pportunistic m anipulation instance is a y es instance our next result proves that the o pportunistic m anipulation problem is for the voting rule too for every even with one manipulator and at most undetermined pairs per vote theorem the o pportunistic m anipulation problem is for the voting rule for every even when the number of manipulators is one and the number of undetermined pairs in each vote is no more than proof we reduce to o pportunistic m anipulation for the voting rule let u um s st is an instance we construct a corresponding o pportunistic m anipulation instance for the voting rule as follows 
candidate set c u c x y for every i t we define as follows t si x y c others using we define partial vote pi x c y for every i t we denote the set of partial votes pi i t by p and i t by we note that the number of undetermined pairs in each partial vote pi is we define another partial vote p as follows p others using lemma we add a set q of complete votes with poly m t to ensure the following pairwise margins notice that the pairwise margins among and does not include the partial vote p figure shows the weighted majority graph of the resulting election b p uj c u b p x y b p c y p x c p di c p zk c p uj x p x zk p di x p y uj p di y p y zk p zk uj p uj di p zk p zk p zk k j m b p uj u for at least many u u b b p a b for every a b c not defined above we have only one manipulator who tries to make c winner now we show that the instance u s is a y es instance if and only if the o pportunistic m anipulation instance p q p c is a n o instance since the number of voters is odd does not play any role in the reduction and thus from here on we simply omit notice that we can assume without loss of generality that the manipulator s vote prefers c to every other candidate and x to y in the forward direction let us now assume that the instance is a y es instance suppose by renaming that forms an exact set cover suppose the manipulator s vote v order and as we will show that v is not a vote symmetrically we can show that the manipulator s vote ordering and in any other order is not consider the following extension p of p and p of i y c si x others i t si x y c others p others from the votes in p q v p the copeland score of c is m defeating y zk uj j m of y is defeating zk uj j m of uj is at most defeating x di and at most many u u of x is defeating c y zk of is defeating y and c of is defeating y c zk of is m defeating di uj j m for every k of is m defeating uj i j m is m defeating uj y c x figure weighted majority graph of the reduced instance in theorem solid line and dashed line represent pairwise margins in q p and q respectively the weight of all the edges not shown in the figure are within to the weight of all unlabeled edges are for simplicity we do not show edges among um uj i j m is m defeating uj i j m hence c with with copeland score m however the manipulator s vote c makes c win uniquely hence v is not a vote and thus the o pportunistic m anipulation instance is a n o instance for the reverse direction we show that if the instance is a n o instance then there does not exist a vote v of the manipulator and an extension p of p such that c is the unique winner for the votes p q thereby proving that the o pportunistic m anipulation instance is vacuously y es and thus every vote is consider any extension p of notice that for c to win uniquely c must defeat each uj u and thus c is preferred over uj in at least one vote in p we call these votes however in every vote in y is preferred over x and thus because x must defeat y for c to win uniquely since the instance is a n o instance there must be a candidate u u which is not covered by the sets corresponding to the votes in and thus u is preferred over c in every vote in hence c can not win uniquely irrespective of the vote of the manipulator thus every vote is and the o pportunistic m anipulation instance is a y es instance for the bucklin and simplified bucklin voting rules we show intractability of the o pportunis tic m anipulation problem with at most undetermined pairs per vote and only one manipulator theorem the o pportunistic m anipulation 
problem is for the bucklin and simplified bucklin voting rules even when the number of manipulators is one and the number of undetermined pairs in each vote is no more than proof we reduce to o pportunistic m anipulation for the bucklin and simplified bucklin voting rules let u um s st is an instance we assume without loss of generality that m is not divisible by if not we introduce three elements in u and a set containing them in s and t is an even integer if not we duplicate any set in s we construct a corresponding o pportunistic m anipulation instance for the bucklin and simplified bucklin voting rules as follows candidate set c u c d w where m for every i t we define as follows t u si si d others using we define partial vote pi d si for every i t we denote the set of partial votes pi i t by p and i t by we note that the number of undetermined pairs in each partial vote pi is we introduce the following additional complete votes q b copies of w c others b copies of w c others b copies of w d c others b copies of w d c others b copies of w d c others b copies of u others b one u c others we have only one manipulator who tries to make c winner now we show that the instance u s is a y es instance if and only if the o pportunistic m anipulation instance c is a n o instance the total number of voters in the o pportunistic m anipulation instance is we notice that within top m positions of the votes in q c appears t times and appear t times appears times appears times every candidate in w appears t times every candidate in u appears t times also every candidate in u appears t times within top m positions of the votes in p q hence for both bucklin and simplified bucklin voting rules we can assume without loss of generality that the manipulator puts c every candidate in w and exactly one of and in the forward direction let us now assume that the instance is a y es instance suppose by renaming that forms an exact set cover suppose the manipulator s vote v puts c every candidate in w and within top m positions we will show that v is not the other case where the manipulator s vote puts c every candidate in w and within top m positions is symmetrical consider the following extension p of p i u si d si others i u si d si others i t u si si d others for both bucklin and simplified bucklin voting rules c with for the votes in p q v however c wins uniquely for the votes in p q hence v is not a vote and thus the o pportunistic m anipulation instance is a n o instance for the reverse direction we show that if the instance is a n o instance then there does not exist a vote v of the manipulator and an extension p of p such that c is the unique winner for the votes p q thereby proving that the o pportunistic m anipulation instance is vacuously y es and thus every vote is consider any extension p of notice that for c to win uniquely every candidate must be pushed out of top m positions in at least one vote in p we call these set of votes notice that however in every vote in at least one of and appears within top m many positions since the manipulator has to put at least one of and within its top m positions and and appear t times in the votes in q we must have and thus for c to win uniquely however there exists a candidate u u not covered by the si s corresponding to the votes in notice that u gets majority within top m positions of the votes and c can never get majority within top m positions of the votes hence c can not win uniquely irrespective of the vote of the manipulator thus every vote is and the o 
pportunistic m anipulation instance is a y es instance polynomial time algorithms we now turn to the polynomial time cases depicted in table this section is organized in three parts one for each problem considered weak manipulation since the p ossible w inner problem is in p for the plurality and the veto voting rules it follows from observation that the w eak m anipulation problem is in p for the plurality and veto voting rules for any number of manipulators proposition the w eak m anipulation problem is in p for the plurality and veto voting rules for any number of manipulators proof the p ossible w inner problem is in p for the plurality and the veto voting rules hence the result follows from observation strong manipulation we now discuss our algorithms for the s trong m anipulation problem the common flavor in all our algorithms is the following we try to devise an extension that is as adversarial as possible for the favorite candidate c and if we can make c win in such an extension then roughly speaking such a strategy should work for other extensions as well where the situation only improves for c however it is challenging to come up with an extension that is globally dominant over all the others in the sense that we just described so what we do instead is we consider every potential nemesis w who might win instead of c and we build profiles that are as good as possible for w and as bad as possible for each such profile leads us to constraints on how much the manipulators can afford to favor w in terms of which positions among the manipulative votes are safe for w we then typically show that we can determine whether there exists a set of votes that respects these constraints either by using a greedy strategy or by an appropriate reduction to a flow problem we note that the overall spirit here is similar to the approaches commonly used for solving the n ecessary w inner problem but as we will see there are differences in the details we begin with the and voting rules theorem the s trong m anipulation problem is in p for the and voting rules for any k and any number of manipulators proof for the time being we just concentrate on votes for each candidate c c calculate the maximum possible value of smax nm c c snm c snm c from nonmanipulators votes where snm a is the score that candidate a receives from the votes of the this can be done by checking all possible score combinations that c and can get in each vote v and choosing the one which maximizes sv sv c from that vote we now fix the position of c at the top position for the manipulators votes and we check if it is possible to place other candidates in the manipulators votes such that the final value of smax nm c c c c is negative which can be solved easily by reducing it to the max flow problem which is polynomial time solvable we now prove that the s trong m anipulation problem for scoring rules is in p for one manipulator theorem the s trong m anipulation problem is in p for any scoring rule when we have only one manipulator proof for each candidate c c calculate smax nm c c using same technique described in the proof of theorem we now put c at the top position of the manipulator s vote for each candidate c c can be placed at positions i m in the manipulator s vote which makes smax nm c c negative using this construct a bipartite graph with c c on left and m on right and there is an edge between and i iff the candidate can be placed at i in the manipulator s vote according to the above criteria now solve the problem by finding 
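The matching step in this argument is a textbook bipartite matching; the following self-contained sketch uses Kuhn's augmenting-path algorithm, where allowed[cand] is the set of positions at which that candidate can be placed in the manipulator's vote without overtaking c. How these sets are computed from the smax values is instance-specific and omitted here, so the example data below is purely illustrative.

```python
def complete_manipulator_vote(candidates, allowed):
    """Assign each candidate other than c a distinct 'safe' position via
    augmenting paths.  Returns {candidate: position} or None if impossible."""
    match_pos = {}                       # position -> candidate currently placed there

    def try_place(cand, seen):
        for p in allowed.get(cand, ()):
            if p in seen:
                continue
            seen.add(p)
            if p not in match_pos or try_place(match_pos[p], seen):
                match_pos[p] = cand
                return True
        return False

    for cand in candidates:
        if not try_place(cand, set()):
            return None                  # no perfect matching: manipulation fails
    return {cand: p for p, cand in match_pos.items()}

# Three candidates to distribute over positions 2..4 of the manipulator's vote.
allowed = {"a": [2, 3], "b": [3], "d": [2, 4]}
print(complete_manipulator_vote(["a", "b", "d"], allowed))
# {'a': 2, 'b': 3, 'd': 4}
```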
existence of perfect matching in this graph our next result proves that the s trong m anipulation problem for the bucklin simplified bucklin fallback and simplified fallback voting rules are in theorem the s trong m anipulation problem is in p for the bucklin simplified bucklin fallback and simplified fallback voting rules for any number of manipulators proof let c p m c be an instance of s trong m anipulation for simplified bucklin and let m denote the total number of candidates in this instance recall that the manipulators have to cast their votes so as to ensure that the candidate c wins in every possible extension of we use q to denote the set of manipulating votes that we will construct to begin with without loss of generality the manipulators place c in the top position of all their votes we now have to organize the positioning of the remaining candidates across the votes of the manipulators to ensure that c is a necessary winner of the profile p q to this end we would like to develop a system of constraints indicating the overall number of times that we are free to place a candidate w c c among the top positions in the profile q in particular let us fix w c c and let be the maximum number of votes of q in which w can appear in the top positions our first step is to compute necessary conditions for we use pw to denote a set of complete votes that we will construct based on the given partial votes intuitively these votes will represent the worst possible extensions from the point of view of c when pitted against these votes are engineered to ensure that the manipulators can make c win the elections pw for all w c c and m if and only if they can strongly manipulate in favor of more formally there exists a voting profile q of the manipulators so that c wins the election pw q for all w c c and m if and only if c wins in every extension of the profile p q we now describe the profile pw the construction is based on the following case analysis where our goal is to ensure that to the extent possible we position c out of the top positions and incorporate w among the top positions b let v p be such that either c and w are incomparable or w we add the complete vote to pw where is obtained from v by placing w at the highest possible position and c at the lowest possible position and extending the remaining vote arbitrarily b let v p be such that c w but there are at least candidates that are preferred over w in we add the complete vote to pw where is obtained from v by placing c at the lowest possible position and extending the remaining vote arbitrarily b let v p be such that c is forced to be within the top positions then we add the complete vote to pw where is obtained from v by first placing w at the highest possible position followed by placing c at the lowest possible position and extending the remaining vote arbitrarily b in the remaining votes notice that whenever w is in the top positions c is also in the top positions let denote this set of votes and let t be the number of votes in we now consider two cases let d c be the number of times c is placed in the top positions in the profile pw q and let d w be the number of times w is placed in the top positions in the profile pw let us now formulate the requirement that in pw q the candidate c does not have a majority in the top positions and w does have a majority in the top positions note that if this requirement holds for any w and then strong manipulation is not possible therefore to strongly manipulate in favor of c we must ensure that 
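The adversarial profiles in this argument are built by repeatedly completing a partial vote so that the potential nemesis w sits as high as possible and c as low as possible. For very small votes this most adversarial completion can simply be found by enumeration, as sketched below; the paper of course performs the same placement in polynomial time, so the brute force here is only meant to make the intended completion unambiguous.

```python
from itertools import permutations

def adversarial_completion(partial_vote, candidates, w, c):
    """Among all linear extensions of the partial vote, pick one placing w as
    high as possible and, subject to that, c as low as possible."""
    best_key, best_order = None, None
    for order in permutations(candidates):
        pos = {x: i for i, x in enumerate(order)}
        if any(pos[a] >= pos[b] for a, b in partial_vote):
            continue                                  # violates a known pair
        key = (pos[w], -pos[c])          # minimize w's position, then maximize c's
        if best_key is None or key < best_key:
            best_key, best_order = key, order
    return best_order

cands = ("c", "w", "u1", "u2")
partial = frozenset({("u1", "w"), ("c", "u2")})   # only u1 > w and c > u2 are fixed
print(adversarial_completion(partial, cands, "w", "c"))
# ('u1', 'w', 'c', 'u2')
```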
for every choice of w and we are able to negate the conditions that we derive the first condition from above simply translates to d c the second condition amounts to requiring first that there are at least votes where w appears in the top positions that is d w t further note that the gap between d w and majority will be filled by using votes from to push w forward however these votes contribute equally to w and c being in the top and positions respectively therefore the difference between d w and must be less than the difference between d c and summarizing the following conditions which we collectively denote by are sufficient to defeat c in some extension d c d w t d w d c from the manipulator s point of view the above provides a set of constraints to be satisfied as they place the remaining candidates across their votes whenever d c the manipulators place any of the other candidates among the top positions freely because c already has majority on the other hand if d c then the manipulators must respect at least one of the following constraints t d w and d c d w extending the votes of the manipulator while respecting these constraints or concluding that this is impossible to do can be achieved by a natural greedy strategy construct the manipulators votes by moving positionally from left to right for each position consider each manipulator and populate her vote for that position with any available candidate we output the profile if the process terminates by completing all the votes otherwise we say n o we now argue the proof of correctness suppose the algorithm returns n o this implies that there exists a choice of w c c and m such that for any voting profile q of the manipulators the conditions in are satisfied indeed if there exists a voting profile that violated at least one of these conditions then the greedy algorithm would have discovered it therefore no matter how the manipulators cast their vote there exists an extension where c is defeated in particular for the votes in p this extension is given by pw further we choose d w votes among the votes in and extend them by placing w in the top positions and extending the rest of the profile arbitrary we extend the remaining votes in by positioning w outside the top positions clearly in this extension c fails to achieve majority in the top positions while w does achieve majority in the top positions on the other hand if the algorithm returns y es then consider the voting profile of the manipulators we claim that c wins in every extension of p q suppose to the contrary that there exists an extension r and a candidate w such that the simplified bucklin score of c is no more than the simplified bucklin score of w in in this extension therefore there exists m for which w attains majority in the top positions and c fails to attain majority in the top positions however note that this is already impossible in any extension of the profile pw l because of the design of the constraints by construction the number of votes in which c appears in the top positions in r is only greater than the number of times c appears in the top positions in any extension of pw l and similarly for w this leads us to the desired contradiction for the bucklin voting rule we do the following modifications to the algorithm if d c d w for some w c c and m then we make the proof of correctness for the bucklin voting rule is similar to the proof of correctness for the simplified bucklin voting rule above for fallback and simplified fallback voting rules we consider the 
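The greedy completion described here can be sketched as follows; for readability the sketch enforces a single cap per candidate on appearances within the top ℓ positions of the manipulators' votes, whereas the algorithm in the text tracks one constraint per choice of nemesis and threshold, so this is a simplified illustration rather than the algorithm itself.

```python
def greedy_manipulator_votes(candidates, c, num_manip, cap, ell):
    """Fill the manipulators' votes position by position, never letting a
    candidate w appear in the top `ell` positions more than cap[w] times.
    Returns the votes, or None if some position cannot be filled."""
    m = len(candidates)
    votes = [[c] for _ in range(num_manip)]          # c always takes the top spot
    used_in_top = {w: 0 for w in candidates if w != c}
    for position in range(1, m):
        for vote in votes:
            placed = False
            for w in candidates:
                if w == c or w in vote:
                    continue
                if position < ell and used_in_top[w] >= cap[w]:
                    continue                         # would exceed w's budget
                vote.append(w)
                if position < ell:
                    used_in_top[w] += 1
                placed = True
                break
            if not placed:
                return None                          # greedy gets stuck: answer NO
    return votes

cands = ["c", "w1", "w2", "w3"]
caps = {"w1": 1, "w2": 2, "w3": 2}
print(greedy_manipulator_votes(cands, "c", num_manip=2, cap=caps, ell=2))
```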
number of candidates each voter approves while computing we output y es if and only if for every w c c and every m since we can assume without loss of generality that the manipulator approves the candidate c only again the proof of correctness is along similar lines to the proof of correctness for the simplified bucklin voting rule we next show that the s trong m anipulation problem for the maximin voting rule is solvable when we have only one manipulator theorem the s trong m anipulation problem for the maximin voting rules are in p when we have only one manipulator proof for the time being just concentrate on votes using the algorithm for nw for maximin in we compute for all pairs w c n w w d and n w c for all d c c this can be computed in polynomial time now we place c at the top position in the manipulator s vote and increase all n w c by one now we place a candidate w at the second position if for all c w w d n w c for all d c c where w w d n w w d of the candidate d has already been assigned some position in the manipulator s vote and w w d n w w d else the correctness argument is in the similar lines of the classical greedy manipulation algorithm of opportunistic manipulation for the plurality fallback and simplified fallback voting rules it turns out that the voting profile where all the manipulators approve only c is a voting profile and therefore it is easy to devise a manipulative vote observation the o pportunistic m anipulation problem is in p for the plurality and fallback voting rules for a any number of manipulators for the veto voting rule however a more intricate argument is needed that requires building a system of constraints and a reduction to a suitable instance of the maximum flow problem in a network to show polynomial time tractability of o pportunistic m anipulation theorem the o pportunistic m anipulation problem is in p for the veto voting rule for a constant number of manipulators proof let p c be an input instance of o pportunistic m anipulation we may assume without loss of generality that the manipulators approve p we view the voting profile of the manipulators as a tuple na c n with c na where the na many manipulators disapprove a we denote the set of such tuples as t and we have o which is polynomial in m since is a constant a tuple na c t is not if there exists another tuple c t and an extension p of p with the following properties we denote the veto score of a candidate from p by s for every candidate a c c we define two quantities w a and d a as follows b s c s a for every a c c with na and we define w a s c d a b s c s a for every a c with na and we define w a s c d a b s a na s c s a for every a c c with na and we define w a s c d a s a na we guess the value of s c given a value of s c we check the above two conditions by reducing this to a max flow problem instance as follows we have a source vertex s and a sink we have a vertex for every a c call this set of vertices y and a vertex for every vote v p call this set of vertices x we add an edge from s to each in x of capacity one we add an edge of capacity one from a vertex x x to a vertex y y if the candidate corresponding to the vertex y can be placed at the last position in an extension of the partial vote corresponding to the vertex x we add an edge from a vertex y to t of capacity w a where a is the voter corresponding to the vertex y we also set the demand of every vertex y d a that is the total amount of flow coming into vertex y must be at least d a where a is the voter corresponding to the 
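The feasibility question behind this flow construction, namely whether the partial votes can be completed so that each candidate is placed last a number of times falling inside a prescribed window, can also be stated directly. The text answers it with a max-flow computation with demands; the sketch below brute-forces the same question on tiny instances, with the windows lower and upper standing in for the quantities w(a) and d(a) derived from the score conditions.

```python
from itertools import product

def feasible_last_place_counts(partial_votes, candidates, lower, upper):
    """Is there an extension in which each candidate a ends up in the last
    position between lower[a] and upper[a] times?  Partial votes are sets of
    pairs (x, y) meaning 'x is preferred to y'."""
    def may_be_last(pv, a):
        # a can occupy the last position iff no pair forces a above someone.
        return all(x != a for (x, _) in pv)

    options = []
    for pv in partial_votes:
        opts = [a for a in candidates if may_be_last(pv, a)]
        if not opts:
            return False
        options.append(opts)

    n = len(partial_votes)
    for choice in product(*options):                 # one vetoed candidate per vote
        counts = {a: choice.count(a) for a in candidates}
        if all(lower.get(a, 0) <= counts[a] <= upper.get(a, n) for a in candidates):
            return True
    return False

cands = ("a", "b", "c")
P = [frozenset({("a", "b")}), frozenset({("c", "b")}), frozenset()]
print(feasible_last_place_counts(P, cands, lower={"b": 1}, upper={"a": 0}))   # True
```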
vertex y clearly the above three conditions are met if and only if there is a feasible amount of flow in the above flow graph since s c can have only possible values from to p and o we can iterate over all possible pairs of tuples in t and all possible values of s c and find a voting profile if there exists a one conclusion we revisited many settings where the complexity barrier for manipulation was and studied the problem under an incomplete information setting our results present a fresh perspective on the use of computational complexity as a barrier to manipulation particularly in cases that were thought to be because the traditional manipulation problem was polynomially solvable to resurrect the argument of computational hardness we have to relax the model of complete information but we propose that the incomplete information setting is more realistic and many of our hardness results work even with very limited incompleteness of information our work is likely to be the starting point for further explorations to begin with we leave open the problem of completely establishing the complexity of strong opportunistic and weak manipulations for all the scoring rules other fundamental forms of manipulation and control do exist in voting such as destructive manipulation and control by adding candidates it would be interesting to investigate the complexity of these problems in a partial information setting another exciting direction is the study of average case complexity as opposed to the worst case results that we have pursued these studies have already been carried out in the setting of complete information studying the problems that we propose in the averagecase model would reveal further insights on the robustness of the incomplete information setting as captured by our model involving partial orders our results showed that the impact of paucity of information on the computational complexity of manipulation crucially depends on the notion of manipulation under consideration we also argued that different notions of manipulation may be applicable to different situations maybe based of how optimistic or pessimistic the manipulators are one important direction of future research is to run extensive experimentations on real and synthetic data to know how people manipulate in the absence of complete information acknowledgement palash dey wishes to gratefully acknowledge support from google india for providing him with a special fellowship for carrying out his doctoral work neeldhara misra acknowledges support by the inspire faculty scheme dst india project references yoram bachrach nadja betzler and piotr faliszewski probabilistic possible winner determination in international conference on artificial intelligence aaai volume pages felix brandt vincent conitzer ulle endriss lang and ariel d procaccia handbook of computational social choice nadja betzler and britta dorn towards a dichotomy of finding possible winners in elections based on scoring rules in mathematical foundations of computer science mfcs pages springer dorothea baumeister piotr faliszewski lang and rothe campaigns for lazy voters truncated ballots in international conference on autonomous agents and multiagent systems aamas valencia spain june volumes pages john bartholdi iii and james orlin single transferable vote resists strategic voting soc choice john bartholdi iii tovey and trick the computational difficulty of manipulating an election soc choice nadja betzler rolf niedermeier and gerhard j woeginger unweighted coalitional 
manipulation under the borda rule is in ijcai volume pages dorothea baumeister magnus roos and rothe computational complexity of two variants of the possible winner problem in the international conference on autonomous agents and multiagent systems aamas pages dorothea baumeister magnus roos rothe lena schend and lirong xia the possible winner problem with uncertain weights in ecai pages steven j brams and m remzi sanver voting systems that combine approval and preference in the mathematics of preference choice and order pages springer yann chevaleyre lang nicolas maudet and monnot possible winners when new candidates are added the case of scoring rules in proc international conference on artificial intelligence aaai vincent conitzer tuomas sandholm and lang when are elections with few candidates hard to manipulate acm vincent conitzer toby walsh and lirong xia dominating manipulations in voting with partial information in international conference on artificial intelligence aaai volume pages palash dey computational complexity of fundamental problems in social choice theory in proc international conference on autonomous agents and multiagent systems pages international foundation for autonomous agents and multiagent systems jessica davies george katsirelos nina narodytska and toby walsh complexity of and algorithms for borda manipulation in proc international conference on artificial intelligence aaai pages ning ding and fangzhen lin voting with partial information what questions to ask in proc international conference on autonomous agents and systems aamas pages international foundation for autonomous agents and multiagent systems palash dey neeldhara misra and narahari detecting possible manipulators in elections in proc international conference on autonomous agents and multiagent systems aamas istanbul turkey may pages palash dey neeldhara misra and narahari kernelization complexity of possible winner and coalitional manipulation problems in voting in proc international conference on autonomous agents and multiagent systems aamas istanbul turkey may pages palash dey neeldhara misra and narahari kernelization complexity of possible winner and coalitional manipulation problems in voting theor comput palash dey and y narahari asymptotic of voting rules the case of large number of candidates in proc international conference on autonomous agents and multiagent systems aamas pages international foundation for autonomous agents and multiagent systems palash dey and y narahari asymptotic of voting rules the case of large number of candidates studies in microeconomics edith elkind and manipulation under voting rule uncertainty in proc international conference on autonomous agents and multiagent systems aamas pages international foundation for autonomous agents and multiagent systems eithan ephrati and jeffrey s rosenschein the clarke tax as a consensus mechanism among automated agents in proc ninth international conference on artificial intelligence aaai pages piotr faliszewski edith hemaspaandra and lane a hemaspaandra using complexity to protect elections commun acm piotr faliszewski edith hemaspaandra lane hemaspaandra and rothe llull and copeland voting computationally resist bribery and constructive control artif intell piotr faliszewski edith hemaspaandra and henning schnoor copeland voting ties matter in proc international conference on autonomous agents and multiagent systems aamas pages international foundation for autonomous agents and multiagent systems piotr faliszewski edith 
hemaspaandra and henning schnoor manipulation of copeland elections in proc international conference on autonomous agents and multiagent systems aamas pages international foundation for autonomous agents and multiagent systems ehud friedgut gil kalai and noam nisan elections can be manipulated often in ieee annual ieee symposium on foundations of computer science focs pages ieee ronald fagin ravi kumar and sivakumar efficient similarity search and classification via rank aggregation in proc acm sigmod international conference on management of data sigmod pages new york ny usa acm piotr faliszewski and ariel d procaccia ai s war on manipulation are we winning ai magazine allan gibbard manipulation of voting schemes a general result econometrica pages michael r garey and david s johnson computers and intractability volume freeman new york serge gaspers victor naroditskiy nina narodytska and toby walsh possible and necessary winner problem in social polls in proc international conference on autonomous agents and multiagent systems aamas pages international foundation for autonomous agents and multiagent systems isaksson kindler and mossel the geometry of manipulation a quantitative proof of the theorem combinatorica kathrin konczak and lang voting procedures with incomplete preferences in proc international joint conference on artificial multidisciplinary workshop on advances in preference handling volume david c mcgarvey a theorem on the construction of voting paradoxes econometrica pages vijay menon and kate larson complexity of manipulation in elections with partial votes corr nina narodytska and toby walsh the computational impact of partial votes on strategic voting in proc european conference on artificial intelligence august prague czech republic including prestigious applications of intelligent systems pais pages david pennock eric horvitz and lee giles social choice theory and recommender systems analysis of the axiomatic foundations of collaborative filtering in proc seventeenth national conference on artificial intelligence and twelfth conference on on innovative applications of artificial intelligence july august austin texas pages ariel d procaccia and jeffrey s rosenschein junta distributions and the averagecase complexity of manipulating elections in proc fifth international conference on autonomous agents and multiagent systems aamas pages acm ariel procaccia and jeffrey rosenschein tractability of manipulation in voting via the fraction of manipulators in proc international joint conference on autonomous agents and multiagent systems aamas honolulu hawaii usa may page mark allen satterthwaite and arrow s conditions existence and correspondence theorems for voting procedures and social welfare functions econ theory toby walsh an empirical study of the manipulability of single transferable voting in proc european conference on artificial intelligence ecai pages toby walsh where are the hard manipulation problems artif intell pages lirong xia and vincent conitzer generalized scoring rules and the frequency of coalitional manipulability in proc acm conference on electronic commerce ec pages acm lirong xia and vincent conitzer a sufficient condition for voting rules to be frequently manipulable in proc acm conference on electronic commerce ec pages acm lirong xia and vincent conitzer determining possible and necessary winners under common voting rules given partial orders volume pages ai access foundation lirong xia michael zuckerman ariel d procaccia vincent conitzer and jeffrey 
s rosenschein complexity of unweighted coalitional manipulation under some common voting rules in proc international joint conference on artificial intelligence ijcai volume pages
8
achieving the time of but the accuracy of dec lirong xue princeton university abstract we propose a simple approach which given distributed computing resources can nearly achieve the accuracy of prediction while matching or improving the faster prediction time of the approach consists of aggregating denoised predictors over a small number of distributed subsamples we show both theoretically and experimentally that small subsample sizes suffice to attain similar performance as without sacrificing the computational efficiency of introduction while neighbor classification or regression can achieve significantly better prediction accuracy that k practitioners often default to as it can achieve much faster prediction that scales better with large sample size in fact much of the commercial tools for nearest neighbor search remain optimized for rather than for further biasing practice towards unfortunately is statistically inconsistent its prediction accuracy plateaus early as sample size n increases while keeps improving longer for choices of k in this work we consider having access to a small number of distributed computing units and ask whether better tradeoffs between and can be achieved by harnessing parallelism at prediction time a simple idea is bagging multiple predictors computed over distributed subsamples however this tends to require large number of subsamples while the number of computing units is often constrained in practice in fact an infinite number of subsamples is assumed in all known consistency guarantees for the bagging approach biau et samworth et here we are samory kpotufe princeton university particularly interested in small numbers of distributed subsamples say to as a practical matter hence we consider a simple variant of the above idea consisting of aggregating a few denoised predictors with this simple change we obtain the same theoretical guarantees as for using few subsamples while individual processing times are of the same order or better than s computation time the main intuition behind denoising is as follows the increase in variance due to subsampling is hard to counter if too few predictors are aggregated we show that this problem is suitably addressed by denoising each subsample as a preprocessing step replacing the subsample labels with estimates based on the original data prediction then consists of aggregating by averaging or by majority the predictions from a few denoised subsamples of small size m n interestingly as shown both theoretically and experimentally we can let the subsampling ratio while achieving a prediction accuracy of the same order as that of such improved accuracy over vanilla is verified experimentally even for relatively small number of distributed predictors note that in practice we aim to minimize the number of distributed predictors or equivalently the number of computing units which is usually costly in its own right this is therefore a main focus in our experiments in particular we will see that even with a single denoised predictor one computer we can observe a significant improvement in accuracy over vanilla while maintaining the prediction speed of our main focus in this work is classification perhaps the most common form of nn prediction but our results readily extend to regression detailed results and related work while nearest neighbor prediction methods are among the oldest and most enduring in data analysis fix and hodges jr cover and hart kulkarni and posner their theoretical performance in practical settings is still 
being elucidated for statistical consistency it is well known that one needs a number k of neighbors the vanilla method manuscript under review by aistats is inconsistent for either regression or classification devroye et in the case of regression kpotufe shows that convergence rates excess error over bayes behave as o for lipschitz regression functions over data with intrinsic dimension d this then implies a rate of o for binary classification via known relations between regression and classification rates see devroye et al similar rates are recovered in cannings et under much refined parametrization of the marginal input distribution while a recent paper of moscovich et al recovers similar rates in semisupervised settings such classification rates can be sharpened by taking into account the noise margin the mass of data away from the decision boundary this is done in the recent work of chaudhuri and dasgupta which obtain faster rates of the form o where the regression function is assumed which can be much faster for large characterizing the noise margin however such rates require large number of neighbors k o growing as a root of sample size n such large k implies much slower prediction time in practice which is exacerbated by the scarcity of optimized tools for k nearest neighbor search in contrast fast commercial tools for search are readily available building on various space partitioning data structures krauthgamer and lee clarkson beygelzimer et gionis et in this work we show that the classification error of the proposed approach namely aggregated denoised s is of the same optimal order plus a term where m n is the subsample size used for each denoised this additional term due to subsampling is of lower order provided m in other words we can let the sampling ratio while achieving the same rate as we emphasize that the smaller the subsampling ratio the faster the prediction time rather than just maintaining the prediction time of vanilla we can actually get considerably better prediction time using smaller subsamples while at the same time considerably improving prediction accuracy towards that of finally notice that the theoretical subsampling ratio of is best with smaller d the intrinsic dimension of the data which is not assumed to be known a priori such intrinsic dimension d is smallest for structured data in ird data on an unknown manifold or sparse data and therefore suggests that much smaller subsamples hence faster prediction times are possible with structured data while achieving good prediction accuracy as mentioned earlier the even simpler approach of bagging predictors is known to be consistent biau and devroye biau et samworth et however only in the case of an infinite bag size corresponding to an infinite number of computing units in our setting we assume one subsample per computing unit so as to maintain or beat the prediction time of interestingly as first shown in biau and devroye biau et the subsampling ratio can also tend to as n while achieving optimal prediction rates for fixed albeit assuming an infinite number of subsamples in contrast we show optimal rates on par with those of for even one denoised subsample this suggests as verified experimentally that few such denoised subsamples are required for good prediction accuracy the recent work of kontorovich and weiss of a more theoretical nature considers a similar question as ours and derives a penalized approach shown to be statistically consistent unlike vanilla the approach of kontorovich and weiss roughly 
consists of finding a subsample of the data whose induced achieves a significant margin between classes two classes in that work unfortunately finding such subsample can be prohibitive computable in time o in the large data regimes of interest here in contrast our training phase only involves random subsamples and over a denoising parameter k our training time is akin to the usual training time finally unlike in the above cited works our rates are established for multiclass classification for the sake of completion and depend logarithmically on the number of classes furthermore as stated earlier our results extend beyond classification to regression and in fact are established by first obtaining regression rates for estimating the regression function e y paper outline section presents our theoretical setup and the prediction approach theoretical results are discussed in section and the analysis in section experimental evaluations on datasets are presented in section preliminaries distributional assumptions our main focus is classification although our results extend to regression henceforth we assume we are n given an sample x y xi yi where x d x ir and y l l the conditional distribution py is fully captured by the regression function defined as x manuscript under review by aistats l where x p y x we assume the following on px y assumption intrinsic dimension and regularity of px first for any x x and r define the ball b x r x kx k r we assume there exists an integer d and a constant cd such that for all x x r we have px b x r cd rd in this work d is unknown to the procedure however as is now understood from previous work see kpotufe the performance of nn methods depends on such intrinsic we will see that the performance of the approach of interest here would also depends on such unknown in particular as is argued in kpotufe d is low for manifolds or sparse data so we would think of d d for structured data note that the above assumption also imposes regularity on px namely by ensuring sufficient mass locally on x so that nns of a point x are not arbitrarily far from it assumption smoothness of the function is for some x x x x kx x k we will use the following version of tsybakov s noise condition audibert and tsybakov adapted to the multiclass setting assumption tsybakov noise condition for any x x let l x denote the l th largest element in x l there exists and such that p x x t classification procedure for any classifier h x l we are interested in the classification error err h px y h x y it is well known that the above error is mimimized by the bayes classifier x argmaxl x therefore for any estimated classifier we are interested in the excess error err err we first recall the following basic nearest neighbor estimators definition prediction given k n let x denote the indices of the k nearest neighbors of x in the sample x assume for simplicity that ties are resolved so that x the classifier can be defined via the regression estimate x l where x k x x yi l the classifier is then obtained as x argmax x l finally we let rk x denote the distance from x to its nearest neighbor we can now formally describe the approach considered in this work definition denoised consider a random subsample without replacement of x of size m for any x x let nn x denote the nearest neighbor of x in the denoised estimate at x is given as x nn x where is as defined above for some fixed this estimator corresponds to over a sample where each is prelabeled as the resulting estimator which we denote subnn for 
simplicity is defined as follows definition subnn let denote denoised estimators defined over i independent subsamples of size m the i sets of indices corresponding to each subsample are picked independently although the indices in each set is picked with replacement in n at any x x the subnn estimate x is the majority label in x it is clear that the subnn estimate can be computed in parallel over i machines while the final step namely computation of the majority vote takes negligible time thus we will view the prediction time complexity at any query x as the average time over i machines it takes to compute the of x on each subsample this time complexity gets better as furthermore we will show that even with relatively small i increasing variability we can let get small while attaining an excess error on par with that of here this is verified experimentally overview of results our main theoretical result theorem below concerns the statistical performance of subnn the main technicality involves characterizing the effect of subsampling and denoising on performance interestingly the rate below does not depend on the number i of subsamples this is due to the averaging effect of taking majority vote accross the i submodels and is discussed in detail in section see proof and discussion of lemma in particular the rate is bounded in terms of a bad event that is unlikely for a random submodel and therefore unlikely to happen for a majority manuscript under review by aistats theorem let let v denote the vc dimension of balls on x with probability at least there exists a choice of k n such that the estimate satisfies v ln err err h n v ln m for constants depending on px y the first term above is a function of the size n of the original sample and recovers the recent optimal bounds for classification of chaudhuri and dasgupta we note however that the result of chaudhuri and dasgupta concerns binary classification while here we consider the more general setting of multiclass matching lower bounds were established earlier in audibert and tsybakov the second term a function of the subsample size m characterizes the additional error over vanilla due to subsampling and due to using s at prediction time as discussed earlier in the introduction the first term dominates we recover the same rates as for whenever the subsampling ratio which goes to as n this is remarkable in that it suggests smaller subsample sizes are sufficient for good accuracy in the large sample regimes motivating the present work we will see later that this is supported by experiments as mentioned earlier similar vanishing subsampling ratios were shown for bagged in biau and devroye biau et samworth et but assuming a infinite number of subsamples in contrast the above result holds for any number of subsamples and the improvements over are supported in experiments over varying number of subsamples along with varying subsampling ratios the main technicalities and insights in establishing theorem are discussed in section below with some proof details relegated to the appendix analysis overview the proof of theorem combines the statements of propositions and below the main technicality involved is in establishing proposition which brings together the effect of noise margin smoothness and the overall error due to denoising over a subsample we overview these supporting results in the next subsection followed by the proof of theorem supporting results theorem relies on first establishing a rate of convergence for the regression estimate used in 
denoising the subsamples while such rates exist in the literature under various assumptions see kpotufe we require a rate that holds uniformly over all x x this is given in proposition below and is established for our particular setting where y takes discrete multiclass values and are both multivariate functions its proof follows standard techniques adapted to our particular aim and is given in the appendix supplementary material proposition uniform regression error let let denote the regression estimate of definition the for a choice of d nl ncd with probability at k o ln least over x y we have simultaneously for all x x v ln x x c ncd where c is a function of the above statement is obtained by first remarking that under structural assumptions on px namely that there is sizable mass everywhere locally on x nearest neighbor distances can be uniformly bounded with such nearest neighbor distances control the bias of the estimator while its variance behaves like o such a uniform bound on nn distances is given in lemma below and follows standard insights lemma uniform bound on nn distances rk as in definition let rk x denote the distance from x x n to its k th nearest neighbor in a sample x px then with probability at least over x the following holds for all k n sup rk x cd max k v ln ln n n subnn convergence we are ultimately interested in the particular regression estimates induced by subsampling the denoised estimates over a subsample can be viewed as l for a regression estimate x nn x x evaluated at the nearest neighbor nn x of x in our first step is to relate the error of to that of here again the bound on nn distances of lemma above comes in handy since can be viewed as introducing additional bias to a bias which is in turn controlled by the distance from a manuscript under review by aistats query x to its nn in the subsample by the above lemma this distance is of order introducing a bias of order given the smoothness of thus combining the above two results yields the following regression error on denoised estimates proposition uniform convergence of denoised regression let let denote the regression estimate of definition let denote a subsample without replacement of x define the denoised estimate x nn x the following holds for a choice of d nc k o v ln nl with probability d at least over x y and we have simultaneously for all x x x x v ln ncd v ln d mcd where c is a function of proof define nn x so that x we then have the two parts decomposition x x x lemma uniform convergence of aggregate regresi sion given independent subsamples from x y define x nn x the regression estimate evaluated at the nearest neighbor nn x of x in suppose there exists n m such that max px y x x x i for some then let denote the subnn i estimate using subsamples with probability at i least over the randomness in x y and the following holds simultaneously for all x x x x x x remark notice that in the above statement the probability of error goes from to but does not depend on the number i of submodels this is because of the averaging effect of the majority vote for intuition suppose b is a bad event and is whether b happens for submodel i suppose further that e for all i then the likelihood of b happening for a majority of more than of models is x x e i i i by a markov inequality we use this type of intuition in the proof however over a sequence of related bad events and using the fact that the submodels estimates are independent conditioned on x where the last inequality follows with probability from proposition 
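The remark above argues that the failure probability does not grow with the number I of submodels, because a majority vote averages out rare bad events. A worked version of that Markov-inequality step, under the remark's assumption that each indicator 1_{B,i} of the bad event B for submodel i has expectation at most delta, and reading "majority" as more than half of the I submodels (the explicit constants are restored here as an assumption consistent with the remark; the source omits them):

```latex
\Pr\!\Big[\sum_{i=1}^{I}\mathbf{1}_{B,i}\;\ge\;\tfrac{I}{2}\Big]
\;\le\;\frac{\mathbb{E}\big[\sum_{i=1}^{I}\mathbf{1}_{B,i}\big]}{I/2}
\;\le\;\frac{I\,\delta}{I/2}\;=\;2\delta .
```

The bound does not depend on I, which is why the aggregate estimate only loses a constant factor in the confidence parameter rather than degrading as more subsamples are aggregated.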
proof the result isnobtained by appropriately boundo ing the indicator x x x x to bound the second term in inequality notice that can be viewed as m samples from px therefore xk can be bounded using lemma therefore by smoothness condition on in assumption we have with probability at least simultaneously for all x x let denote the denoised classifier on sample or for short the i th submodel first notice that if the majority vote x l for some label l l then at least submodels x predict l at x in other words we have x v ln x ncd i x ln ln d v ln m mcd mcd combining and yields the statement next we consider aggregate regression error the discrepancy x x x x between the coordinates of given by the labels x and x this will be bounded in terms of the error n m attainable by the individual denoised regression estimates as bounded in the above proposition o lx n x l i therefore fix x x and let x l we then have x x x x x x x i o lx n x l x x x i i o lx n x x x x i manuscript under review by aistats we bound the above as follows suppose x li and h x l for some labels li and l in l now if x x then x l x and x l x also by definition we know that i x is the maximum entry of x so l x l x therefore i x x x x l x l x i l x l x i in other words x x x x only when x x thus bound to obtain n o x x x x i o lx n x x i finally we use the fact that for an event a x we have x a x a x combine this fact with the above inequality to get n o x x x x x n o sup x x x x sup i o lx n x x i i n o lx sup x x i i o lx n x x x i proof since for any classifier h x x p y h x the excess error h of the classii fier can be written as e x x x x x thus the assumption in the proposition statement that for all x x x x x x with probability at least yields a trivial bound of on the excess error we want to refine this bound let l x be the largest entry in the vector x l and define x x x then at any fixed point x we can refine the bound on excess error at x namely x x x x by separately considering the following exhaustive conditions a x and b x on x a x in which case the excess error is this follows from x x x and that x x x x x x x in other words x x is larger than so equals x x x b x in which case the excess error can not be refined at x however the total mass of such x s is at most by tsybakov s noise condition assumption combining these conditions we have with probability at least that the excess error satisfies h i err err e x x x x x e x a x b x e b x p x the final statement is obtained by integrating both sides of the inequality over the randomness in x y and i the conditionally independent subsamples next proposition below states that the excess error of the subnn estimate can be bounded in terms of the aggregate regression error considered in lemma in particular the proposition serves to account for the effect of the noise margin parameter towards obtaining faster rates than those in terms of smoothness only combining the results of this section yield the main theorem whose proof is given next proof of theorem our main result follows easily from propositions and this is given below proposition suppose there exists n m such that with probability at least over the randomi ness in x y and the subsamples we have simultaneously for all x x x x x x proof fix any note that the conditions of lemma are verified in proposition namely that with probability all regression errors of submodels are bounded by then with probability at least the excess classification error of the estimate satisfies c err err v ln n v ln m manuscript under review 
by aistats table datasets used in evaluating subnn name miniboone twitterbuzz letterbng yearpredmsd winequality train test dimension classes regression regression where c is a constant depending on and next the conditions of proposition are obtained in lemma with the same setting of it follows that with probability at least we have err err with as in given that g x is a convex function we conclude by applying jensen s inequality viewing as an average of two terms experiments experimental setup data is standardized along each coordinate of x fitting subnn we view the subsample size m and the number i of subsamples as exogenous parameters determined by the practical constraints of a given application domain namely smaller m yields faster prediction and is driven by prediction time requirement while larger i improves prediction error but is constrained by available computing units however much of our experiments concern the sensitivity of subnn to m and i and yield clear insights into tradeoffs on these choices thus for any fixed choice of m and i we choose k by the search for k is done in two stages first the best value k minimizdlog ne ing validation error is picked on a then a final choice for k is made on a refined linear range dk fitting k is also chosen in two stages as above description particle identification roe buzz in social media kawala english alphabet ml document classification mitchell release year of songs quality of wine cortez that of for versions of subnn i where is the subsampling ratio used and i is the number of subsamples for regression datasets the error is the mse while for classification we use error the prediction time reported for the subnn methods is the maximum time over the i subsamples plus aggregation time reflecting the effective prediction time for the settings motivating this work the results support our theoretical insights namely that subnn can achieve accuracy close or matching that of while at the same time achieving fast prediction time on par or better than those of sensitivity to m and i as expected better times are achievable with smaller subsample sizes while better prediction accuracy is achievable as more subsamples tend to reduce variability this is further illustrated for instance in figure where we vary the number of subsamples interestingly in this figure for the miniboone dataset the larger subsampling ratio yields the best accuracy over any number of subsamples but the gap essentially disappears when enough subsamples are used we thus have the following prescription in choosing m and i while small values of i work generally well large values of i can only improve accuracy on the other hand subsampling ratios of yield good tradeoffs accross datasets table describes the datasets used in the experiments we use for fast nn search from python for all datasets but for which we perform a direct search due to highdimensionality and sparsity as explained earlier our main focus is on classification however theoretical insights from previous sections extend to regression as substantiated in this section the code in python can be found at https benefits of denoising in figure we compare subnn with pure bagging of models as suggested by theory we see that the bagging approach does indeed require considerably more subsamples to significantly improve over the error of vanilla in contrast the accuracy of subnn quickly tends to that of in particular for twitterbuzz where or subsamples are sufficient to statistically close the gap even for small subsampling 
ratio this could be due to hidden but beneficial structural aspects of this data in all cases the experiments further highlights the benefits of our simple denoising step as a variance reduction technique this is further supported by the std over repetitions as shown in figure results our main experimental results are described in table showing the relative errors error of the method divided by that of vanilla and relative prediction time prediction time divided by conclusion we propose a procedure with theoretical guarantees which is easy to implement over distributed computing resources and which achieves good tradeoffs for nearest neighbor methods manuscript under review by aistats table ratios of error rates and prediction times over corresponding errors and times of data miniboone twitterbuzz letterbng yearpredmsd winequality relative error subnn subnn subnn subnn subnn knn time subnn subnn subnn knn error rate miniboone prediction time miniboone prediction error relative time subnn subnn number of subsamples number of subsamples figure comparing the effect of subsampling ratios on prediction and time performance of subnn shown are subnn estimates using subsampling ratios and error rate subnn knn subnn knn error rate miniboone subnn miniboone subnn number of subsamples number of subsamples subnn knn error rate twitterbuzz subnn subnn knn number of subsamples twitterbuzz subnn error rate number of subsamples figure bagged compared with subnn using subsampling ratios left and right manuscript under review by aistats references audibert and alexandre tsybakov fast learning rates for classifiers the annals of statistics yearpredictionmsd data https yearpredictionmsd set beygelzimer kakade and langford cover trees for nearest neighbors international conference on machine learning icml biau and luc devroye on the layered nearest neighbour estimate the bagged nearest neighbour estimate and the random forest method in regression and classification journal of multivariate analysis biau and arnaud guyader on the rate of convergence of the bagged nearest neighbor estimate journal of machine learning research feb timothy i cannings thomas b berrett and richard j samworth local nearest neighbour classification with applications to learning arxiv preprint kamalika chaudhuri and sanjoy dasgupta rates of convergence for nearest neighbor classification in advances in neural information processing systems pages kenneth clarkson searching and metric space dimensions methods for learning and vision theory and practice paulo cortez wine quality data set https thomas cover and peter hart nearest neighbor pattern classification ieee transactions on information theory devroye gyorfi and lugosi a probabilistic theory of pattern recognition springer luc devroye laszlo gyorfi adam krzyzak and lugosi on the strong universal consistency of nearest neighbor regression function estimates the annals of statistics pages evelyn fix and joseph l hodges jr discriminatory discrimination consistency properties technical report dtic document et al kawala buzz in social media data set https aristides gionis piotr indyk rajeev motwani et al similarity search in high dimensions via hashing in vldb volume pages aryeh kontorovich and roi weiss a bayes consistent classifier in aistats kpotufe regression adapts to local intrinsic dimension advances in neural information processing systems nips krauthgamer and lee navigating nets simple algorithms for proximity search symposium on discrete algorithms soda sanjeev r kulkarni and 
steven e posner rates of convergence of nearest neighbor estimation under arbitrary sampling ieee transactions on information theory tom mitchell twenty newsgroups data set https newsgroups open ml open ml machine learning platform https amit moscovich ariel jaffe and boaz nadler minimaxoptimal regression on unknown manifolds arxiv preprint byron roe miniboone particle identification data set https richard j samworth et al optimal weighted nearest neighbour classifiers the annals of statistics manuscript under review by aistats a proof of main results in this section we show the proof of proposition uniform bound on knn regression error the proof is done by decomposing the regression error into bias and variance lemma and bound each of them separately in lemma and lemma will be proved first as a the decomposition is the following define x as x x k x xi x p bx k p b x rk x cd d rkd x therefore with probability at least the following holds simultaneously for all x x rk x p bx k d cd k v ln ln d max cd n n viewing as the expectation of conditioning sample x we can have the following variance and bias decomposition of error of let in the above inequality and conclude the proof x x x x x x using lemma uniform rk x bound we can get a uniform bound on the bias of we start by introducing a known result on relative vc bound in lemma then in lemma we use it to give a uniform bound on rk x the distance between query point x and its nearest neighborhood in data after that we use lemma to bound bias and variance separately in lemma and proposition is concluded by combing the two bounds lemma bias of let v be the vc dimension of the class of balls on x for and k v ln ln with probability at least over the randomness in the choice of x the following inequality holds simultaneously for all x x k d x x cd n lemma relative vc bound vapnik let b be a set of subsets of x b has finite vc dimension for n drawn sample xn the empirical probpn ability measure is defined as pn b define v ln ln let with probability at least over the randomness in xn the following holds simultaneously for all b b p p b pn b pn b proof first for a fixed sample x by assumption smoothness of we have x x x xi x k x using the above result we can prove lemma bound on rk proof of lemma by lemma let v ln ln with probability at least over the randomness in x and for any x x and closed ball bx k b x rk x we have p p bx k pn bx k pn bx k max pn bx k k v ln ln max n n by assumption intrinsic dimension for any x x and any p bx k is as below k k k x xi x x x x x x x x it follows by lemma that with high probability at least over x the following holds simultaneously for all x x k d x x x cd n lemma variance of let v be the vc dimension of balls on x for with probability at least over the randomness in x y the following inequality holds simultaneously for all x x manuscript under review by aistats x x r v nl ln k proof consider the value of x x x x l k x yi l xi x fix x x and consider the randomness in y conditioned on x use hoeffding s inequality there are k independent terms in the above summation and e yi l xi so the following holds with probability at least over the randomness in y r x x l ln apply the above analysis to l l and combine them by union bound so the following inequality holds with probability at least l over the randomness in y r x x ln then consider variations in x given x fixed the left hand side of the above inequality can be seen as a function of x this is a subset of x covered by ball b x rk x by sauer s lemma the number of such 
subsets of x covered by a ball is bounded by en v en v so there are at most v many different variations of the above inequality when x varies we combine the variations by union bound let l en the v following happens with probability at least over the randomness in y r ln sup x x r v en ln ln v r nl v ln k the above inequality holds for any fixed sample x and the right hand side do not depend on x so it continue to hold for any drawn x thus we conclude the proof combining the bias and variance of we have the bound on uniform knn regression error proof of proposition apply lemma bias and variance to inequality with probability at least x x cd r k nl ln n k the aboved is minimized at k nc where c depend only on c v ln nl d and plug in this value of k into the statement above we obtain x x c v ln ncd where c depend solely on and b additional tables and plots in this section we present supplemental plots and tables table shows the same experiment as in table but reports average prediction time over the i subsamples rather than the maximum prediction times plus the aggregation time comparing the two tables one can see that the differences between average and maximum times are small in other words prediction time is rather stable over the subsamples which is to be expected as these times are mostly controlled by subsample size and computing resource figure presents the same experiments as in figure for an additional dataset twitterbuzz it compares the error and prediction time of subnn models as a function of the number i of subsamples used again as we can see subnn yields error rates similar to those of knn even for a small number of subsamples as expected the best prediction times are achieved with the smaller subsample ratio of manuscript under review by aistats table ratios of error rates and average prediction times over corresponding errors and times of data miniboone twitterbuzz letterbng yearpredmsd winequality relative error subnn subnn twitterbuzz prediction time subnn subnn subnn knn time subnn subnn subnn knn error rate twitterbuzz prediction error relative average time subnn subnn number of subsamples number of subsamples figure comparing the effect of subsampling ratio and number of models twitterbuzz we find that all the subnn predictors reach error similar to that of even when using only a few subsamples or as expected subnn with a subsampling ratio of results in the best prediction times
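The subNN procedure studied above is straightforward to implement on top of an off-the-shelf nearest-neighbour library: denoise each random subsample by relabelling its points with the k-NN prediction computed on the full training set, fit a 1-NN predictor per subsample, and aggregate by majority vote at query time. The following is a minimal sketch, not the authors' released code; it uses scikit-learn's KNeighborsClassifier, and the function names, the majority-vote helper, and the choice of sampling without replacement are illustrative assumptions:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def fit_subnn(X, y, m, n_subsamples, k, seed=None):
    """Fit subNN: n_subsamples denoised 1-NN predictors over subsamples of size m.

    Denoising step: every subsampled point is relabelled with the k-NN
    estimate computed on the full sample (X, y), as in the definition above.
    """
    rng = np.random.default_rng(seed)
    denoiser = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    predictors = []
    for _ in range(n_subsamples):
        idx = rng.choice(len(X), size=m, replace=False)   # random subsample of size m
        X_sub = X[idx]
        y_denoised = denoiser.predict(X_sub)              # replace labels by k-NN estimates
        predictors.append(KNeighborsClassifier(n_neighbors=1).fit(X_sub, y_denoised))
    return predictors


def predict_subnn(predictors, X_query):
    """Aggregate the 1-NN predictions of the denoised submodels by majority vote."""
    votes = np.stack([p.predict(X_query) for p in predictors])   # shape (I, n_query)

    def majority(column):
        values, counts = np.unique(column, return_counts=True)
        return values[np.argmax(counts)]

    return np.array([majority(col) for col in votes.T])
```

At prediction time each of the I submodels can be evaluated on its own machine, so the effective prediction time is that of a single 1-NN search over m points (with m much smaller than n) plus a negligible vote-aggregation step, which is the distributed setting the paper targets.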
10
complete intersections of quadrics and the weak lefschetz property feb alzati and re abstract we consider graded artinian complete intersection algebras a c xm with i generated by homogeneous forms of degree d we show that the general multiplication by a linear form ad is injective we prove that the weak lefschetz property for holds for any algebra a as above with d and m previously known for m introduction the weak lefschetz property in short wlp of a graded algebra a presented as the quotient a k xmp with i an homogeneous ideal asserts that for any general linear form l ai xi and any k the multiplication map ak has maximal rank it has been conjectured that if char k then any complete intersection an algebra as above with i generated by a regular sequence has the wlp it is also conjectured that such an algebra should have the strong lefschetz property the analogous maximal rank property with l replaced by ld for any the most significant case of these conjectures is the case when a is artinian these are considered to be very challenging problems despite affirmative answers for m see and in the monomial case see and in this article we examine the case of a artinian algebra a with i generated in degree d the first main result of the present article is an easy and geometrical proof of a generalization of the so called injectivity lemma of proposition of more precisely we show the injectivity of the general multiplication map ad this result is stated in corollary that result is also used in remark to obtain a new and simple proof of the wlp for d and m already covered in then we examine the case d and m for which we prove the wlp by introducing some geometrical methods that seem to be new in the existing literature this is the content of theorem of section which is the other main result of this paper despite we are proposing only limited progress toward the wlp conjecture we hope that the methods of the present article can shed some light on the geometrical aspects of the general problem and inspire further investigations the plan of the article is the following in section we state the general results on artinian gorenstein and algebras that are common knowledge for commutative algebraists and algebraic geometers these will be the only prerequisites of this paper since we have chosen to provide an essentially exposition of our results then in section we study some interesting stratifications of the date february mathematics subject classification primary secondary key words and phrases weak lefschetz property artinian algebra complete intersection this work has been done within the framework of the national project geometry on algebraic varieties prin cofin of miur alzati and re projective space p id associated to generating piece of the ideal i these results are immediately used to give a prof of the injectivity lemma mentioned above in section we introduce our general geometrical method for approaching the wlp conjecture that is we study the projectivization of variety of pairs z q as such zq we introduce some differential methods to this purpose in the final section and we consider the case d of the wlp conjecture for complete intersections in equal degrees and give its solution for m as already mentioned above general setup notations let fm be homogeneous polynomials in c xm with no common zeros and of equal degree d a regular sequence we set i fm c xm and we denote a c xm the artinian quotient ring we set u xm i so that c xm s u s i u observe that u since we have d given a space v we will 
denote p v the projective space of the vector subspaces of v it v v is a element we will denote v p v its associated point definition a is said to have the weak lefschetz property wlp if for any k n and for any fixed general linear form l u the multiplication map ak has maximal rank note that the property holds trivially if k d and if k since in these cases either ik or definition we say that a fails the wlp because of surjectivity in degree k if one has dim ak dim but the multiplication ak is not surjective for any linear form in analogous way one defines the failure of wlp because of injectivity in general we say that a has the wlp in degree k if for a general linear form l the multiplication map ak has maximal rank we collect a few well known facts in the following proposition proposition the following facts hold if ak is injective then aj is injective for any j a is as it is gorenstein as a consequence it is enough to check the wlp in the case of injectivity resp in the case of surjectivity if a is a algebra of type d d then socle a a and wlp holds for a if for a general linear form l the multiplication map as is injective with s m d the macaulay inverse system i it is appropriate also to mention at this point that the dual is a sub s u module of s u with respect the derivation of action of s u on s u indeed the action of a linear form l u defined as the traspose of the multiplication map turns out to be a derivation and indeed the duality more generally identifies s k u with the space of homogeneous linear differential operators of degree k with constant coefficients acting on s u and conversely by exchanging the roles of u and u it is then clear that ann i and it is usually indicated with i which is known as the macaulay inverse system associated to i the inverse system of any gorenstein algebra a is generated by a single element g s m u as a s u with m equal to the socle degree of a that is i is generated by derivatives of g of any order in our case we have m m d a basic and certainly well known result for i that will be useful for us is the following complete intersections of quadrics and the weak lefschetz property proposition the ideal i fm c xm with fi independent and of degree d is a ideal if and only if does not contain any power of a linear form pd s d u with p u and in this case has dimension m d proof i is if and only if v fm if and only if there is no p p u such that p fm p but it is well known that d fi p hfi pd i which shows that i is if and only if there is no pd p moreover ann id with respect to the duality pairing h i s d u s d u c and this fact explains the dimension formula remark observe that the homogeneous polynomials q s u define hypersurfaces v q p u to any l u we associate a derivative q l q defined by the action of s u on s u and therefore l q means that the hypersurface v q is a cone with a vertex point in l p u remark the study of the inverse system i in association to the study of the wlp of the algebra a s u is the main theme of the article it would be interesting to fully understand the connections between the result above and the approach in the cited article stratifications of id in this section we collect some easy results on the homogeneous part of the ideal i in degree we will denote p u z d s d u z p u s q p s d u u s u q zp more generally for any r we denote s zr pr zi u pi s u proposition if i qm is a a ideal generated by forms of degree d then p id p u is a finite set and dim p id s more generally one has dim p id s min m proof assume dim p p u 
consider a irreducible curve c contained in that set and consider a local analytic parametrization of this curve at a general point of the form o d o then id impossible because otherwise they could be completed to a basis of id so they should be a but they vanish on v a contradiction similarly assume that dim p id s then we would have a local analytic parametrization of a surface at a general point of the form o contained in p id which would give id this produces a contradiction by similar reasons as above since the three elements above are not a as they vanish on v in the general case let us assume m dim p id s and let us write a bijective local analytic parametrization of such a family at a general point r x zi pi id alzati and re with then one may expand p zi zi j zij o p pi pi j pij o and find the relations p pi zi pi id i zi pij zij pi id j now observe that the forms above are independent by construction and they all vanish on v zr pr which has codimension in pm a contradiction since they should be a complete intersection an immediate consequence of the proposition above significantly extends the injectivity lemma of proposition of proved in that paper in the case d corollary injectivity lemma let a c xm an artinian algebra with i generated by a regular sequence of forms of degree then for general z the multiplication map ad is injective proof otherwise for any general z p u there exists some p s u such that zp id note that such factorization is unique hence dim p id s m dim p u but this contradicts the second dimension statement of proposition remark in it was observed that as a consequence of the result above in the case d the wlp holds for d and m indeed in those cases one has s m d remark proposition actually gives more precise information than corollary on the dimension non lefschetz locus that is the locus of z p such that the moltiplication map ad is not injective by showing that this locus has dimension at most we refer the reader to the recent article for many other deep results in this context in the case of d since quadrics are classified by their rank up to projective transformations we can give a more precise and alternative form of proposition which will be useful later to cover the case d and m proposition set rr q p s u rk q r for any r m that is rr is the projectivized set of quadrics of rank at most let i a ideal generated by m quadrics then dim p rr r proof we can use induction on the case r is covered already in proposition assume that dim p rr r and let q be a general point in a r dimensional irreducible subvariety of that closed set hence by the inductive hypothesis it must be rk q r and therefore we can write q since the set of quadratic forms of rank r is a single orbit under the action of gl m we can find a local analytical parametrization of the given component of dim p rr with parameters such that r x li lir p rr moreover by construction we can assume that q fr are independent then computing the derivatives above we find that x x x li li r li fr i complete intersections of quadrics and the weak lefschetz property are r independent elements in p that vanish on v lr since r m and any base of must be a we obtain a contradiction remark the results of this section although very simple strongly depend on the complete intersection hypothesis they can not in general be extended to the case of gorenstein algebras a even when these algebras are presented by quadrics indeed infinite series of counterexamples to the result in corollary in the gorenstein case have been 
provided in see for example their corollary differential lemmas set s m d as above assuming that wlp does not hold for the algebra a that is the general multiplication map as is not injective we consider a subvariety of p p as w z q p p as zq such that w is the unique component of with w p the uniqueness of w comes from the fact that the fibers of the projections of the variety are linear spaces the variety w is endowed with the two projections and to p and p as respectively then for general z p and q w we define the vector spaces q z q as zq z q z zq this means that z pq z and q pz q we set the following notations for general z p and general q p such that zq dim q dim z q dim z dim q z we also set n dim w m dim w now consider a general point w and a system of parameters for w centered at we denote z q a point in a neighborhood of with the understanding that z z and q q are rational functions defined at finally we denote zi z qi q i lemma under the notations above one has zn i n m and qn i n in particular one has hz zn i u moreover the following relations hold for any i j n i if q z then ii q iii z iv for any h and for any ih n one has z in particular for any i one has z i alzati and re proof the first assertions of the lemma are clear by the fact that w p u and by the surjectivity of the tangent maps for and at a general point z q p by applying to the relation zq cj fi x with x xm i a fixed base of u and fj x a generating set for s u we immediately see that zi q z multiplying the relation above by and using i we obtain zi and since hz zm i we see that by our choice of s we have socle a therefore we find this proves i next setting q by derivations of the last relation we get q j we compute z q using this completes the proof of i ii i j and iii we prove iv by induction on h we assume that it is true for h and we prove it for h the base case being the starting relation zq let us assume by inductive hypothesis z f f hq by derivation with respect to we obtain then the inductive step is immediately proved by multiplying this last relation by z and using that z f h z h zi f z complete intersection algebras presented by quadrics we recall the following simple formula for hilbert function of an artinian algebra presented by quadrics fact let a be the artinian algebra obtained from a complete intersection of quadrics in pm then one has hf a k dim ak for any k k indeed one can compute dim ak from the koszul resolution u s a and the fact that this resolution gives as result dim ak can be seen directly k considering the special case i in which case ak has basis given by the classes mod i of all squarefree monomials with exponents whose number is exactly k s m u s the following technical lemma will be useful later lemma let a be a artinian algebra as above with m if z is general in and z w are linearly independent then dimhz moreover one has dimhz unless the following properties hold m and dim p the incidence variety f gr m p u f p v f has dim and there exist irreducible components such that denoting f and f f the two projections complete intersections of quadrics and the weak lefschetz property one has dim dim any v z w with z general and such that dimhz is such that for one of these components proof one has hz hz wiu hz wiu hz wiu note that dimhz wiu as it degree piece of the ideal generated by z therefore to get the general bound dimhz it is sufficient to show that dim hz wiu but this is clear because is generated by a complete intersection and therefore at most independent element of can contain 
the linear space v z w the only possibility for obtaining dimhz is dim hz wiu note that setting v z w then p hz wiu with the projections from introduced above note also that for f one has f zl wm hence f that is p if for some it were then one has dim p hz wiu dim if and only if v z w is contained in a pencil of rank and hence reducible quadrics of the form either l or in any case the given pencil has a fixed hyperplane component and therefore it can not be contained in since this latter is generated by a complete intersection so this case is impossible assume now that for v z w with z general one has dim and but not in the fibres have dimension at least and their union form some irreducible subvariety of such that a general element of is f with f of rank it is well known that such a quadric f has spaces contained in it moreover by proposition we know that dim p hence dim dim f and therefore dim dim since there are hyperplanes v z pm containing a fixed we see that if m a general v z can not contain any a contradiction we are left with the case when for v z w with z general one has dim and but not in as above the fibres have dimension at least and their union form some irreducible subvariety of with the following properties and as by proposition we know dim p we have dim as a consequence since a general f has rank and it contains spaces one has dim on the other hand as any is contained in hyperplanes one must have m dim dim therefore m and all the preceding inequalities are actually equalities in particular dim dim and by applying the same arguments as above to every irreducible component of one has dim then is a irreducible component of moreover dim and therefore it is a irreducible component of p which also has the maximal possible dimension dim p application the case d and m in this last section we apply the results obtained so far to prove that the wlp holds for algebras presented by quadrics in that is we assume d and m even this simple case appears not to be covered in the existing literature for d and m the dimensions of ai for i are alzati and re by the results stated in proposition we only need to examine the general multiplication map as in the present case one has s m d note that by letting z vary in p we can build an exact sequence of sheaves k op op n where is defined fiberwise by z assuming that the wlp does not hold let z q w be a general element we consider the multiplication map under the notations of section we have the following result lemma dim coker dim z q q q proof the map is dual to the map by the perfect pairings and defined by the multiplication hence coker z q which proves the statement we also have the following formula relating z q with the vector space spanned by the derivatives of q with respect to the parameters with n dim w introduced in the previous section lemma for arbitrary m one has dimhq qn i m in particular for m we have dimhq qn i proof the embedded tangent space to the image of the map w p defined by z q q p is given by hq qn hence it has dimension equal to dimhq qn i by lemma the same tangent space has dimension n m from which the statement follows we introduce one last preliminary result about the dimension of z q this will be the only place in this paper where it turns out very useful to consider the inverse system i mentioned in section lemma if m one has that is dim z q proof assume that dim z q then consider g s u such that is generated by g as a s u since zq for any z z q then by remark the cubic q g s u is a cone with vertex space pz q and we 
are assuming dim pz q but then in a suitable coordinate system q g is defined by a degree three homogeneous polynomial in two variables and it is easy to see that the vector space generated by its partial derivatives always contain the square of a linear form obtaining a contradiction by proposition the spaces z q and hq qn i are connected with in the following way lemma one has z q hq qn i ker proof this is clear by definition of z q and by lemma i and ii we can finally prove the following theorem the wlp holds for m and d proof assume that wlp does not hold in view of lemma we have two cases complete intersections of quadrics and the weak lefschetz property case then by lemma we have dim ker dim dim dim coker since z q ker and dim z q by lemma we have dim z q note that also the second part of lemma applies to this case the planes v z w with hz wi z q with q varying in w form a irreducible family of dimension dim w but by the second part of lemma we know that such a family can be at most hence in this case we have dim w let us then consider the varieties y z p dim z y note that one necessarily has codim y indeed if dim y then dim dim w hence w impossible as we have shown hence a general line l p does not intersect y and the restriction of the exact sequence to l gives an exact sequence of vector bundles k n with k and n line that symmetric that is identifying by means of the multiplication pairing c then one can write as a sheaf map and then by dualizing the exact sequence above one immediately sees that n and a degree calculation gives deg k impossible case then z q hzi and by lemma we have hq qn i ker since z is general and hence we can apply the result of corollary we have dim moreover by lemma we have dim ker dim dim and finally by lemma we have dimhq qn i then it is easy to conclude that dim hq qn i now we consider a subspace v z qn i with dim v claim v proof of the claim recalling that u u the claim is equivalent to assert that v u s u first of all using the fact that v is the space of linear forms vanishing on one point p pm p u that is v h ip we see that v u h ip and it has dimension dim s u then v u s u if and only if dim v u dim dim v u dim s u but this means that in v u h ip which is impossible then we have zv hq qn which by lemma iv implies z z hq qn hence since the socle of a is generated in degree one finds z but this is impossible for general z since the z s with z general generate alzati and re references boji migliore nagel the locus gondim lefschetz properties for artinian gorenstein algebras presented by quadrics proc amer math soc harima migliore nagel and watanabe the weak and strong lefschetz properties for artinian algebra mezzetti ottaviani laplace equations and the weak lefschetz property canadian journal of mathematics migliore nagel gorenstein algebras presented by quadrics collect math stanley weyl groups the hard lefschetz theorem and the sperner property siam algebraic discrete methods watanabe the dilworth number of artinian rings and finite posets with rank function commutative algebra and combinatorics advanced studies in pure math vol kinokuniya north holland amsterdam alberto alzati dipartimento di matematica di milano via saldini milano italy address riccardo re dipartimento di scienza e alta tecnologia dell insubria via valleggio como italy address
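For reference, the two standard facts that the paper above relies on throughout can be restated with the indices and exponents written out. This is a cleaned-up restatement under the paper's own conventions, namely A = C[x_1, ..., x_m]/(f_1, ..., f_m) with f_1, ..., f_m a regular sequence of quadrics, and not an addition to its results:

```latex
% Weak Lefschetz Property: for a general linear form \ell \in A_1,
% every multiplication map has maximal rank, i.e. for all k
\times\ell \colon A_k \longrightarrow A_{k+1}
\quad\text{is either injective or surjective.}

% Hilbert function of a complete intersection of m quadrics in m variables:
% the Koszul resolution gives Hilbert series (1+t)^m, hence
\dim_{\mathbb{C}} A_k \;=\; \binom{m}{k}, \qquad 0 \le k \le m,
% as is also seen from the monomial case I = (x_1^2, \dots, x_m^2),
% where the square-free monomials of degree k give a basis of A_k.
```

In particular, for d = 2 the socle of A sits in degree m, consistent with the Hilbert function above.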
0
recurrent neural network training with dark knowledge transfer may zhiyuan dong zhiyong center for speech and language technologies cslt riit tsinghua university tsinghua national laboratory for information science and technology chengdu institute of computer applications chinese academy of sciences tangzy zhangzy corresponding author abstract recurrent neural networks rnns particularly long memory lstm have gained much attention in automatic speech recognition asr although some successful stories have been reported training rnns remains highly challenging especially with limited training data recent research found that a model can be used as a teacher to train other child models by using the predictions generated by the teacher model as supervision this knowledge transfer learning has been employed to train simple neural nets with a complex one so that the final performance can reach a level that is infeasible to obtain by regular training in this paper we employ the knowledge transfer learning approach to train rnns precisely lstm using a deep neural network dnn model as the teacher this is different from most of the existing research on knowledge transfer learning since the teacher dnn is assumed to be weaker than the child rnn however our experiments on an asr task showed that it works fairly well without applying any tricks on the learning scheme this approach can train rnns successfully even with limited training data index recurrent neural network long shortterm memory knowledge transfer learning automatic speech recognition introduction deep learning has gained significant success in a wide range of applications for example automatic speech recognition asr a powerful deep learning model that has been reported effective in asr is the recurrent neural network rnn an obvious advantage of rnns compared to conventional deep neural networks dnns is that rnns can model temporal properties and thus are suitable for modeling speech signals a simple training method for rnns is the backpropagation through time algorithm this approach this work was supported by the national natural science foundation of china under grant no and the mestdc phd foundation project no this paper was also supported by huilan and sinovoice however is rather inefficient due to two main reasons the twists of the objective function caused by the high nonlinearity the vanishing and explosion of gradients in backpropagation in order to address these difficulties mainly the second a modified architecture called the long memory lstm was proposed in and has been successfully applied to asr in the echo state network esn architecture proposed by the weights are not learned in the training so the problem of odd gradients does not exist recently a special variant of the hf optimization approach was successfully applied to learn rnns from random initialization a particular problem of the hf approach is that the computation is demanding another recent study shows that a carefully designed momentum setting can significantly improve rnn training with limited computation and can reach the performance of the hf method although these methods can address the difficulties of rnn training to some extent they are either too tricky the momentum method or less optimal the esn method particularly with limited data rnn training remains difficult this paper focuses on the lstm structure and presents a simple yet powerful training algorithm based on knowledge transfer this algorithm is largely motivated by the recently proposed logit matching 
and dark knowledge distiller the basic idea of the knowledge transfer approach is that a model involves rich knowledge of the target task and can be used to guide the training of other models current research focuses on learning simple models in terms of structure from a powerful yet complex model or an ensemble of models based on the idea of model compression in asr this idea has been employed to train small dnn models from a large and complex one in this paper we conduct an opposite study which employs a simple dnn model to train a more complex rnn different from the existing research that tries to distill knowledge from the teacher model we treat the teacher model as a regularization so that the training process of the child model is smoothed or a step so that the supervised training can be located at a good starting point this in fact leads to a new training approach that is easy to perform and can be extended to any model architecture we employ this idea to address the difficulties in rnn training the experiments on an asr task with the database verified that the proposed method can significantly improve rnn training the reset of the paper is organized as follows section briefly discusses some related works and section presents the method section presents the experiments and the paper is concluded by section related to prior work this study is directly motivated by the work of dark knowledge distillation the important aspect that distinguishes our work from others is that the existing methods focus on distilling knowledge of complex model and use it to improve simple models whereas our study uses simple models to teach complex models the teacher model in our work in fact knows not so much but it is sufficient to provide a rough guide that is important to train complex models such as rnns in the present study another related work is the knowledge transfer between dnns and rnns as proposed in however it employs knowledge transfer to train dnns with rnns this still follows the conventional idea described above and so is different from ours rnn training with knowledge transfer dark knowledge distiller the idea that a dnn model can be used as a teacher to guide the training of other models was proposed by several authors almost at the same time the basic assumption is that the teacher model encodes rich knowledge for the task in hand and this knowledge can be distilled to boost the child model which is often simpler and can not learn many details without the teacher s guide there are a few ways to distill the knowledge the logit matching approach proposed by teaches a child model by encouraging its logits activations before softmax close to those of the teacher model in terms of the norm and the dark knowledge distiller model proposed by encourages the posterior probabilities softmax output of the child model close to those of the teacher model in terms of cross entropy this transfer learning has been applied to learn simple models to approach the performance of a complex model or a large model ensemble for example learning a small dnn from a large dnn or a dnn from a more complex rnn we focus on the dark knowledge distiller approach as it showed better performance in our experiments basically a dnn model plays the role of a teacher and generates posterior probabilities of the training samples as new targets for training other models these posterior probabilities are called soft targets since the class identities are not as deterministic as the original hard targets to make the targets 
softer a temperature t can be applied to scale the logits in the softmax formulated as pi ezi p zj j e where i j index the output units the introduction of t allows more information of to be distilled for example a training sample with the hard target does not involve any rank information for the second and third class with the soft targets the rank information of the second and third class is reflected additionally with a large t applied the target is even softer which allows the classes to be more prominent in the training note that the additional rank information on the classes is not available in the original target but is distilled from the teacher model additionally a larger t boosts information of classes but at the same time reduces information of target classes if t is very large the soft target falls back to a uniform distribution and is not informative any therefore t controls how the knowledge is distilled from the teacher model and hence needs to be set appropriately according to the task in hand dark knowledge for complex model training dark knowledge in the form of soft targets can be used not only for boosting simple models but also for training complex models we argue that training with soft targets offers at least two advantages it provides more information for model training and it makes the training more reliable these two advantages are particularly important for training complex models especially when the training data is limited firstly soft targets offer probabilistic class labels which are not so definite as hard targets on one hand this matches the real situation where uncertainty always exists in classification tasks for example in speech recognition it is often difficult to identify the phone class of a frame due to the effect of on the other hand this uncertainty involves rich but less discriminative information within a single example for example the uncertainty in phone classes indicates phones are similar to each other and easy to get confused making use of this information in the form of soft targets posterior probabilities helps improve statistical strength of all phones in a collaborative way and therefore is particularly helpful for phones with little training data secondly soft targets blur the decision boundary of classes which offers a smooth training the smoothness associated with soft targets has been noticed in which states that soft targets result in less variance in the gradient between training samples this can be easily verified by looking at the gradients backpropagated to the logit layer which is ti yi for the logit where ti is the target and yi is the output of the child model in training the accumulated this argument should be not confused with the conclusion in where it was found that when t is also applied to the child net a large t is equal to logit matching the assumption of this equivalence is that t is large compared to the magnitude of the logit values but not infinitely large in fact if t is very large the gradient will approach zero so no knowledge is distilled from the teacher model variance is given by x ex ti yi ex ti ex yi v ar t i where the expectation ex is conducted on the training data x if we assume that ex ti is identical for soft and hard targets which is reasonable if the teacher model is well trained on the same data then the variance is given by x ex ti yi const v ar t i where const is a constant term if we assume that the child model can well learn the teacher model the gradient variance approaches to zero with soft 
targets which is impossible with hard targets even if when the training has converged the reduced gradient variance is highly desirable when training deep and complex models such as rnns we argue that it can mitigate the risk of gradient vanishing and explosion that is well known to hinder rnn training leading to a more reliable training regularization view it has been known that including both soft and hard targets improves performance with appropriate setting of a weight factor to balance their relative contributions this can be formulated as a regularized training problem with the objective function given by l ls xx pij ln yij i j where represents the parameters of the model lh and ls are the cost associated with the hard and soft targets respectively and is the weight factor additionally tij and pij are the hard and soft targets for the sample on the class respectively note that lh is the objective function of the conventional supervised training and so ls plays a role of regularization the effect of the regularization term is to force the model under training child model to mimic the teacher model a way of knowledge transfer in this study a dnn model is used as the teacher model to regularize the training of an rnn with this regularization the rnn training looks for optima which produce similar targets as the dnn does so the risk of and can be largely reduced view instead of training the model with soft and hard targets altogether we can first train a reasonable model with soft targets and then refine the model with hard targets by this way the transfer learning plays the role of and the conventional supervised training plays the role of the rationale is that the soft targets results in a reliable training so can be used to conduct model initialization however since the information involved in soft targets is less discriminative refinement with hard targets tends to be helpful this can be informally interpreted as teaching the model with less but important discriminative information firstly and once the model is strong enough more discriminative information can be learned this leads to a new strategy based on dark knowledge transfer in the conventional approaches based on either restricted boltzmann machine rbm or ae simple models are trained and stacked to construct complex models the dark knowledge functions in a different way it makes a complex model trainable by using less discriminative information soft targets while the model structure does not change this approach possesses several advantages it is totally supervised and so more it the model as a whole instead of layer by layer so tends to be fast it can be used to any complex models for which the layer structure is not clear such as the rnn model that we focus on in this paper the view is related to the curriculum training method discussed in where training samples that are easy to learn are firstly selected to train the model while more difficult ones are selected later when the model has been fairly strong in the dark knowledge the soft targets can be regarded as easy samples for and hard targets as difficult samples for interestingly the regularization view and the view are closely related the is essentially a regularization that places the model to some location in the parameter space where good local minima can be easily reached this relationship between regularization and has been discussed in the context of dnn training experiments to verify the proposed method we use it to train rnn acoustic models for an asr task 
which is known to be difficult note that all the rnns we mention in this section are indeed lstms the experiments are conducted on the database in noisy conditions and the data profile is largely standard utterances for model training utterances for development and utterances for testing the kaldi toolkit is used to conduct the model training and performance evaluation and the process largely follows the recipe for dnn training specifically the training starts from constructing a system based on gaussian mixture models gmm with the standard mfcc features plus the first and second order derivatives a dnn system is then trained with the alignment provided by the gmm system the feature used for the dnn system is the fbanks a symmetric window is applied to concatenate neighboring frames and an lda transform is used to reduce the feature dimension to which forms the dnn input the dnn architecture involves hidden layers and each layer consists of units the output layer is composed of units equal to the total number of gaussian mixtures in the gmm system the cross entropy is used as the training criterion and the stochastic gradient descendent sgd algorithm is employed to perform the training in the dark knowledge transfer learning the trained dnn model is used as the teacher model to generate soft targets for the rnn training the rnn architecture involves layers of lstms with cells per layer the unidirectional lstm has a recurrent projection layer as in while the one is discarded the input features are the fbanks and the output units correspond to the gaussian mixtures as in the dnn the rnn is trained with streams and each stream contains continuous frames the momentum is empirically set to and the starting learning rate is set to by default the experimental results are reported in table the performance is evaluated in terms of two criteria the frame accuracy fa and the word error rate wer while fa is more related to the training criterion cross entropy wer is more important for speech recognition in table the fas are reported on both the training set tr fa and the cross validation set cv fa and the wer is reported on the test set in table is the rnn baseline trained with hard targets and are trained with dark knowledge transfer where the temperature t is set to and respectively for each dark knowledge transfer model the soft targets are employed in three ways in the soft way only soft targets are used in rnn training in the way the soft and hard targets are used together and the soft targets play the role of regularization where the gradients of the soft s are scaled up with t in the pretrain way the soft targets and the hard targets are used sequentially and the soft targets play the role of the weight factor in the regularization approach is empirically set to targets dnn soft reg pretrain soft reg pretrain hard hard soft soft hard soft hard soft soft hard soft hard fa tr fa cv wer table results with different models and training methods it can be observed that the rnn baseline can not beat the dnn baseline in terms of wer although much effort has been devoted to calibrate the training process including various trials on different learning rates and momentum values this is consistent with the results published with the kaldi recipe note that this does not mean rnns are inferior to dnns from the fa results it is clear that the rnn model leads to better quality in terms of the training objective unfortunately this advantage is not propagated to wer on the test set additionally the results shown 
here can not be interpreted as that rnns are not suitable for asr in terms of wer in fact several researchers have reported better wers with rnns our results just say that with the database the rnn with the basic training method does not generalize well in terms of wer although it works well in terms of the training criterion this problem can be largely solved by the dark knowledge transfer learning as demonstrated by the results of the and systems it can be seen that with the soft targets only the rnn system obtains equal or even better performance in comparison with the dnn baseline which means that the knowledge embedded in the dnn model has been transferred to the rnn model and the knowledge can be arranged in a better form within the rnn structure paying attention to the fa results it can be seen that the knowledge transfer learning does not improve accuracy on the training set but leads to better or close fas on the cv set compared to the dnn and rnn baseline this indicates that transfer learning with soft targets sacrifices the fa performance on the training set a little but leads to better generalization on the cv set additionally the advantage on wer indicates that the generalization is improved not only in the sense of data sets but also in the sense of evaluation metrics when combining soft and hard targets either in the way of regularization or the performance in terms of both fa and wer is improved this confirms the hypothesis that the knowledge transfer learning does play roles of regularization and note that in all these cases the fa results on the training set are lower than that of the rnn baseline which confirms that the advantage of the knowledge transform learning resides in improving generalizability of the resultant model when comparing the two dark knowledge rnn systems with different temperatures t we see leads to little worse fas on the training and cv set but slightly better wers this confirms that a higher temperature generates a smoother direction and leads to better generalization conclusion we proposed a novel rnn training method based on dark knowledge transfer learning the experimental results on the asr task demonstrated that knowledge learned by simple models can be effectively used to guide the training of complex models this knowledge can be used either as a regularization or for and both approaches can lead to models that are more generalizable a desired property for complex models the future work involves applying this technique to more complex models that are difficult to train with conventional approaches for example deep rnns knowledge transfer between heterogeneous models is under investigation as well between probabilistic models and neural models references deng and yu deep learning methods and applications foundations and trends in signal processing vol no pp online available http graves mohamed and hinton speech recognition with deep recurrent neural networks in proceedings of ieee international conference on acoustics speech and signal processing icassp ieee pp sutskever martens dahl and hinton on the importance of initialization and momentum in deep learning in proceedings of the international conference on machine learning pp ba and caruana do deep nets really need to be deep in advances in neural information processing systems pp hinton vinyals and j dean distilling the knowledge in a neural network in nips deep learning workshop graves and jaitly towards speech recognition with recurrent neural networks in proceedings of the international 
conference on machine learning pp bucilu caruana and model compression in proceedings of the acm sigkdd international conference on knowledge discovery and data mining acm pp sak a senior and beaufays long memory recurrent neural network architectures for large scale acoustic modeling in proceedings of the annual conference of international speech communication association interspeech li zhao huang and gong learning dnn with criteria in proceedings of the annual conference of international speech communication association interspeech rumelhart hinton and williams learning representations by errors nature vol no pp online available http chan ke and i lane transferring knowledge from a rnn to a dnn arxiv preprint bengio simard and frasconi learning longterm dependencies with gradient descent is difficult neural networks ieee transactions on vol no pp hochreiter and schmidhuber long memory neural computation vol no pp graves and schmidhuber framewise phoneme classification with bidirectional lstm and other neural network architectures neural networks vol no pp jaeger and haas harnessing nonlinearity predicting chaotic systems and saving energy in wireless communication science vol no pp martens deep learning via optimization in proceedings of the international conference on machine learning pp martens and sutskever learning recurrent neural networks with optimization in proceedings of the international conference on machine learning pp hinton and salakhutdinov reducing the dimensionality of data with neural networks science vol no pp bengio lamblin popovici larochelle et greedy training of deep networks advances in neural information processing systems vol romero ballas kahou chassang gatta and bengio fitnets hints for thin deep nets arxiv preprint erhan bengio courville manzagol vincent and bengio why does unsupervised pretraining help deep learning the journal of machine learning research vol pp povey ghoshal boulianne burget glembek goel hannemann motlicek qian schwarz silovsky stemmer and vesely the kaldi speech recognition toolkit in ieee workshop on automatic speech recognition and understanding ieee signal processing society ieee catalog no
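To make the training recipe described above concrete, here is a minimal numpy sketch of the two ingredients it combines: the temperature-scaled softmax that turns the teacher DNN's logits into soft targets, and the interpolated objective (1 - lambda) * L_hard + lambda * L_soft used in the regularization configuration. The function names, the particular T and lambda values, and the T^2 rescaling of the soft term (a standard way of keeping the soft-target gradients on a scale comparable to the hard ones, in the spirit of the scaling mentioned in the text) are assumptions for illustration, not code from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # temperature-scaled softmax: p_i = exp(z_i / T) / sum_j exp(z_j / T)
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels, T=2.0, lam=0.5):
    # hard-target cross entropy (hard_labels are integer class ids)
    p_hard = softmax(student_logits)
    rows = np.arange(len(hard_labels))
    ce_hard = -np.log(p_hard[rows, hard_labels] + 1e-12).mean()

    # soft-target cross entropy against the teacher's tempered posteriors
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    ce_soft = -(p_teacher * log_p_student).sum(axis=-1).mean()

    # T**2 keeps the soft-gradient magnitude comparable to the hard one
    return (1.0 - lam) * ce_hard + lam * (T ** 2) * ce_soft

# toy usage with random logits
rng = np.random.default_rng(0)
s_logits = rng.normal(size=(4, 10))
t_logits = rng.normal(size=(4, 10))
labels = rng.integers(0, 10, size=4)
print(distillation_loss(s_logits, t_logits, labels))
```

The pretraining configuration in the experiments corresponds to first training with the soft term only (lambda = 1) and then refining with the hard term only (lambda = 0).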
9
implementation of a distributed coherent quantum observer feb ian petersen and elanor huntington this paper considers the problem of implementing a previously proposed distributed direct coupling quantum observer for a closed linear quantum system by modifying the form of the previously proposed observer the paper proposes a possible experimental implementation of the observer plant system using a parametric amplifier and a chain of optical cavities which are coupled together via optical interconnections it is shown that the distributed observer converges to a consensus in a time averaged sense in which an output of each element of the observer estimates the specified output of the quantum plant i ntroduction in this paper we build on the results of by providing a possible experimental implementation of a direct coupled distributed quantum observer a number of papers have recently considered the problem of constructing a coherent quantum observer for a quantum system see in the coherent quantum observer problem a quantum plant is coupled to a quantum observer which is also a quantum system the quantum observer is constructed to be a physically realizable quantum system so that the system variables of the quantum observer converge in some suitable sense to the system variables of the quantum plant the papers considered the problem of constructing a direct coupling quantum observer for a given quantum system in the papers the quantum plant under consideration is a linear quantum system in recent years there has been considerable interest in the modeling and feedback control of linear quantum systems see such linear quantum systems commonly arise in the area of quantum optics see in addition the papers have considered the problem of providing a possible experimental implementation of the direct coupled observer described in for the case in which the quantum plant is a single quantum harmonic oscillator and the quantum observer is a single quantum harmonic oscillator for this case show that a possible experimental implementation of the augmented quantum plant and this work was supported by the air force office of scientific research afosr this material is based on research sponsored by the air force research laboratory under agreement number the government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright notation thereon the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements either expressed or implied of the air force research laboratory or the government this work was also supported by the australian research council arc ian petersen and elanor huntington are with the research school of engineering the australian national university canberra act australia email quantum observer system may be constructed using a nondegenerate parametric amplifier ndpa which is coupled to a beamsplitter by suitable choice of the amplifier and beamsplitter parameters see for a description of an ndpa in this paper we consider the issue of whether a similar experimental implementation may be provided for the distributed direct coupled quantum observer proposed in the paper proposes a direct coupled distributed quantum observer which is constructed via the direct connection of many quantum harmonic oscillators in a chain as illustrated in figure it is shown that this quantum network can be constructed so that each output of the direct coupled 
distributed quantum observer converges to the plant output of interest in a time averaged sense this is a form of time averaged quantum consensus for the quantum networks under consideration however the experimental implementation approach of can not be extended in a straightforward way to the direct coupled distributed quantum observer this is because it is not feasible to extend the ndpa used in to allow for the multiple direct couplings to the multiple observer elements required in the theory of hence in this paper we modify the theory of to develop a new direct coupled distributed observer in which there is direct coupling only between the plant and the first element of the observer all of the other couplings between the different elements of the observer are via optical field couplings this is illustrated in figure also all of the elements of the observer except for the first one are implemented as passive optical cavities the only active element in the augmented plant observer system is a single ndpa used to implement the plant and first observer element these features mean that the proposed direct coupling observer is much easier to implement experimentally that the observer which was proposed in distributed quantum observer quantum plant zp fig zon distributed quantum observer of we establish that the distributed quantum observer proposed in this paper has very similar properties to the distributed quantum observer proposed in in that each output of the distributed observer converges to the plant output of quantum plant observer wna y a observer n observer fig ii q uantum l inear s ystems in the distributed quantum observer problem under consideration both the quantum plant and the distributed quantum observer are linear quantum systems see also the quantum mechanical behavior of a linear quantum system is described in terms of the system observables which are operators on an underlying infinite dimensional complex hilbert space the commutator of two scalar operators x and y on h is defined as x y xy yx also for a vector of operators x on h the commutator of x and a scalar operator y on h is the vector of operators x y xy yx and the commutator of x and its adjoint is the matrix of operators x x xt t where x t and denotes the operator adjoint the dynamics of the closed linear quantum systems under consideration are described by differential equations of the form ax t w b ynb zon distributed quantum observer proposed in this paper interest in a time averaged sense however an important difference between the observer proposed in and the observer proposed in this paper is that in the output for each observer element corresponded to the same quadrature whereas in this paper different quadratures are used to define the outputs with a phase rotation as we move from observer element to element along the chain of observers t observer zp x where a is a real matrix in and x t t xn t t is a vector of system observables see here n is assumed to be an even number and n is the number of modes in the quantum system the initial system variables x are assumed to satisfy the commutation relations xj xk j k n where is a real matrix with components in the case of a single quantum harmonic oscillator we will choose x t where q is the position operator and p is the momentum operator the commutation relations are q p in general the matrix is assumed to be of the form diag j j j where j denotes the real matrix the system dynamics are determined by the system hamiltonian which is a operator on the 
underlying hilbert space for the linear quantum systems under consideration the system hamiltonian will be a quadratic form h x t rx where r is a real symmetric matrix then the corresponding matrix a in is given by a where is defined as in see in this case the system variables x t will satisfy the commutation relations at all times x t x t t for all t that is the system will be physically realizable see remark note that that the hamiltonian h is preserved in time for the system indeed xt xt since r is symmetric and is iii d irect c oupling d istributed c oherent q uantum o bservers in our proposed direct coupling coherent quantum observer the quantum plant is a single quantum harmonic oscillator which is a linear quantum system of the form described by the differential equation t zp t ap xp t cp xp t xp where zp t denotes the vector of system variables to be estimated by the observer and ap cp it is assumed that this quantum plant corresponds to plant qp hamiltonian hp xp t rp xp here xp pp where qp is the plant position operator and pp is the plant momentum operator as in in the sequel we will assume that ap we now describe the linear quantum system of the form which will correspond to the distributed quantum observer see also this system is described by a differential equation of the form t ao xo t zo t co xo t xo where the observer output zo t is the distributed observer no estimate vector and ao rno co r also xo t is a vector of system variables see we assume the distributed observer order no is an even number with n being the number of elements in the distributed quantum observer we also assume that the plant variables commute with the observer variables we will assume that the distributed quantum observer has a chain structure and is coupled to the quantum plant as shown in figure furthermore we write zo iv i mplementation of a d istributed q uantum o bserver we will consider a distributed quantum observer which has a chain structure and is coupled to the quantum plant as shown in figure in this distributed quantum observer there a direct coupling between the quantum plant and the first quantum observer this direct coupling is determined by a coupling hamiltonian which defines the coupling between the quantum plant and the first element of the distributed quantum observer hc xp t rc however in contrast to there is field coupling between the first quantum observer and all other quantum observers in the chain of observers the motivation for this structure is that it would be much easier to implement experimentally than the structure proposed in indeed the subsystem consisting of the quantum plant and the first quantum observer can be implemented using an ndpa and a beamsplitter in a similar way to that described in see also for further details on ndpas and beamsplitters this is illustrated in figure zon where zoi coi xoi for i note that coi the augmented quantum linear system consisting of the quantum plant and the distributed quantum observer is then a quantum system of the form described by equations of the form where t xp t t t t aa t t zp t zo t where xon t cp xp t co xo t con we now formally define the notion of a direct coupled linear quantum observer definition the distributed linear quantum observer is said to achieve consensus convergence for the quantum plant if the corresponding augmented linear quantum system is such that z t zp t zo t dt lim t t ndpa w fig ndpa coupled to a beamsplitter representing the quantum plant and first quantum observer co y beamsplitter also the 
remaining quantum observers in the distributed quantum observer are implemented as simple cavities as shown in figure wia yib ith cavity yia wna wib ynb nth cavity fig optical cavity implementation of the remaining quantum observers in the distributed quantum observer the proposed quantum optical implementation of a distributed quantum observer is simpler than that of however its dynamics are somewhat different than those of the distributed quantum observer proposed in we now proceed to analyze these dynamics indeed using the results of we can write down quantum stochastic differential equations qsdes describing the observer system shown in figure dxp t dt dt dt xp dt dt qp is the vector of position and momentum where xp pp is the operators for the quantum plant and vector of position and momentum operators for the first quantum observer here and are parameters which depend on the parameters of the beamsplitter and the ndpa the parameters and define the coupling hamiltonian matrix defined in as follows rc t in addition the parameters of the beamsplitter and the ndpa need to be chosen as described in in order to obtain qsdes of the required form the qsdes describing the ith quantum observer for i n are as follows xoi dt dxoi jxoi dt dwia dwib dyia xoi dt dwib dyib xoi dt dwia qi is the vector of position and momentum where xoi pi operators for the ith quantum observer see here and are parameters relating to the reflectivity of each of the partially reflecting mirrors which make up the cavity the qsdes describing the n th quantum observer are as follows a dxon jxon dt xon dt a dwn a dyn b a xon dt dwn a qn is the vector of position and momenwhere xon pn tum operators for the n th quantum observer here a is a parameter relating to the reflectivity of the partially reflecting mirror in this cavity in addition to the above equations we also have the following equations which describe the interconnections between the observers as in figure w a wib y b for i n in order to describe the augmented system consisting of the quantum plant and the quantum observer we now combine equations and indeed starting with observer n we have from dyn b a xon dt dy n a but from with i n dy n a n b xo n dt dw n b therefore dyn b a xon dt n b xo n dt dw n b a xon dt n b xo n dt dyn b using hence n b a dyn b xon dt xo n dt from this it follows using that dwn a a xon dt dyn b n b a xon dt xo n dt then using we obtain the equation n b a dxon jxon dt xo n dt we now consider observer n indeed it follows from and with i n that dxo n jxo n dt n a n b xo n dt n a dw n a n b dyn b jxo n dt n a n b xo n dt n a dw n a a n b xon dt n b xo n dt n a xo n dt jxo n dt a n b xon dt n a dw n a using now using and with i n it follows that dy n a n b xo n dt dw n b n b xo n dt dy n b n b xo n dt n a xo n dt n a using with i n hence using with i n it follows that n b xo n dt n a xo n dt dy n a n a where therefore dy n a n b n a xo n dt xo n dt ao substituting this into we obtain dxo n jxo n dt n b a xon dt n b n a xo n dt continuing this process we obtain the following qsdes for the variables xoi dxoi jxoi dt a xo dt b xo dt i j i j i dt dt xp dt we now observe that the plant equation dxp t dt j i i j and i i bo b for i n to construct a suitable distributed quantum observer we will further assume that for i n finally for we obtain i j cp where and co implies that the quantity this choice of the matrix co means that different quadratures are used for the outputs of the elements of the distributed quantum observer with a phase rotation as 
we move from observer element to element along the chain of observers in order to construct suitable values for the quantities and we require that zp xp satisfies dzp t dt since j is a matrix therefore zp t zp zp ao bo zp for all t we now combine equations and write them in form indeed let xo xon then we can write where j n this will ensure that the quantity zp xe xo will satisfy the differential equation ao xo bo zp ao xe this combined with the fact that t co where j n n t zp zp will be used in establishing condition for the distributed quantum observer now we require ao bo n j ro i j j i j xto ro xo xton kxon an an an ao where ao an cn and for i n and to show that the above candidate distributed quantum observer leads to the satisfaction of the condition we first note that xe defined in will satisfy if we can show that z t lim xe t dt t t then it will follow from and that is satisfied in order to establish we first note that we can write ao i j qi for i n also define pi the complex scalars ai qi for i n then it is straightforward to verify that where xoi that is we will assume that n j i xon j j we will now show that the symmetric matrix ro is positivedefinite lemma the matrix ro is positive definite proof in order to establish this lemma let xo j n j n this will be satisfied if and only if n j i here denotes the complex conjugate transpose of a vector from this it follows that the real symmetric matrix ro is if and only if the complex hermitian matrix is to prove that is we first substitute the equations and into the definition of to obtain where and now we can write ao an an an an an thus furthermore ao if and only if an that is the null space of is given by n span n the fact that and implies that in order to show that suppose that ao is a vector in n it follows that ao ao ao since and ao must be contained in the null space of and the null space of therefore ao must be of the form ao n where however then ao and hence ao can not be in the null space of thus we can conclude that the matrix is positive definite and hence the matrix ro is positive definite this completes the proof of the lemma we now verify that the condition is satisfied for the distributed quantum observer under consideration this proof follows along very similar lines to the corresponding proof given in we recall from remark that the quantity t xe t ro xe t remains constant in time for the linear system ao xe xe that is xe t t ro xe t xe t ro xe however xe t t xe and ro therefore it follows from that p p ro t xe k ro kxe k for all xe and t hence t k s ro ro for all t now since and ro are z t t dt t e ro ro and therefore it follows from that z t t dtk k t t k e ro ro k t t ke kkro k k s ro kr k ro o k as t hence k t t lim t z xe t dtk z t k t xe dtk lim t t z t lim t dtkkxe k k t t this implies t t lim z t xe t dt and hence it follows from and z t lim zo t dt t t also implies t t lim z t zp t dt that zp zp therefore condition is satisfied thus we have established the following theorem theorem consider a quantum plant of the form where ap then the distributed direct coupled quantum observer defined by equations achieves consensus convergence for this quantum plant r eferences petersen time averaged consensus in a direct coupled distributed coherent quantum observer in proceedings of the american control conference chicago il july vladimirov and petersen coherent quantum filtering for physically realizable linear quantum plants in proceedings of the european control conference zurich switzerland july miao espinosa petersen 
ugrinovskii and james coherent quantum observers for quantum systems in australian control conference perth australia november miao james and petersen coherent observers for linear quantum stochastic systems automatica vol pp petersen a direct coupling coherent quantum observer in proceedings of the ieee on systems and control antibes france october also available arxiv a direct coupling coherent quantum observer for a single qubit finite level quantum system in proceedings of australian control conference canberra australia november also arxiv time averaged consensus in a direct coupled coherent quantum observer network for a single qubit finite level quantum system in proceedings of the asian control conference kota kinabalu malaysia may james nurdin and petersen h control of linear quantum stochastic systems ieee transactions on automatic control vol no pp nurdin james and petersen coherent quantum lqg control automatica vol no pp shaiju and petersen a frequency domain condition for the physical realizability of linear quantum systems ieee transactions on automatic control vol no pp gardiner and zoller quantum noise berlin springer bachor and ralph a guide to experiments in quantum optics ed weinheim germany petersen and huntington a possible implementation of a direct coupling coherent quantum observer in proceedings of australian control conference gold coast australia november petersen and huntington implementation of a direct coupling coherent quantum observer including observer measurements in proceedings of the american control conference boston ma july gough and james the series product and its application to quantum feedforward and feedback networks ieee transactions on automatic control vol no pp zhang and james direct and indirect couplings in coherent feedback control of linear quantum systems ieee transactions on automatic control vol no pp
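A small numerical sketch of the two structural facts the convergence argument leans on for closed linear quantum systems: with a quadratic Hamiltonian H = (1/2) x^T R x, R real symmetric, commutation matrix Theta = diag(J, ..., J) and dynamics dx/dt = A x with A = 2 Theta R, both the quadratic form x^T R x and the commutation matrix are preserved in time. The mode number and the random symmetric R below are illustrative assumptions; this is not the specific plant/observer system constructed in the paper.

```python
import numpy as np
from scipy.linalg import expm

n_modes = 3
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Theta = np.kron(np.eye(n_modes), J)            # canonical commutation matrix diag(J, ..., J)

rng = np.random.default_rng(1)
M = rng.normal(size=(2 * n_modes, 2 * n_modes))
R = (M + M.T) / 2                              # an arbitrary real symmetric Hamiltonian matrix
A = 2 * Theta @ R

x0 = rng.normal(size=2 * n_modes)
for t in (0.0, 0.7, 3.1):
    Phi = expm(A * t)
    x_t = Phi @ x0
    print(t,
          x_t @ R @ x_t,                                  # constant in t: the Hamiltonian is preserved
          np.allclose(Phi @ Theta @ Phi.T, Theta))        # True: commutation relations preserved
```

Conservation of the quadratic form x_e^T R_o x_e along the error dynamics, combined with the positive definiteness of R_o established in the lemma, is exactly what produces the time-averaged convergence in the theorem.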
3
valuation theory generalized ifs attractors and fractals mar jan dobrowolski and kuhlmann abstract using valuation rings and valued fields as examples we discuss in which ways the notions of topological ifs attractor and fractal space can be generalized to cover more general settings given functions fn on a set x we will associate to them an iterated function system ifs denoted by f fn where we view f as a function on the power set p x defined by p x s f s n fi s one of the basic approaches to calling a space x fractal is to ask that there is an iterated function system f such that f x x and that the functions in the system satisfy certain additional forms of being contracting definition a compact metric space x d is called fractal if there is a system f fn with f x x where the functions fi are contracting that is d fi x fi y d x y for any distinct x y x in the absence of a metric one has to find other ways of encoding what is meant by contracting in banakh and nowak give a topological analogue for the common definition of fractal that uses iterated function systems for a detailed continuation of this approach see definition a compact topological space x is called fractal if there is an iterated function system f fn consisting of continuous functions fi x x such that f x x and the following shrinking condition is satisfied sc for every open covering c of x there is some k n such that for every sequence ik n k there is u c with fik x u date november mathematics subject classification primary secondary jan dobrowolski and kuhlmann clearly it suffices to check sc only for finite coverings if we fix a basis for the open sets then it suffices to check sc only for finite coverings consisting of basic sets as every finite covering can be refined to such a covering for the topology induced by a valuation v on a field k with value group vk one can take the collection of ultrametric balls a b k v a b where vk and a k as a basis note that by the ultrametric triangle law this set is closed under nonempty intersections for the same reason if v a b then a b the same works when we restrict v to a subring r of k except that then the values of the elements in r just form a linearly ordered subset of vk example we take a prime p and denote by fp the finite field with p elements then fp consists of the elements i i pz i we consider the laurent series ring x j r fp t ij t ij fp the valuation vt on fp t is defined by x ij tj if vt where z for i p we define a function fi by x x x j j i ij t t i t ij tj fi where i is understood to be an element of fp then fi r i tr and therefore the iterated function system f satisfies f r s p i tr each ultrametric ball in r with respect to the valuation of fp t is of the form x x ij tj b r t divides b ij tj bm for some integer m which we call the radius of the ball the empty sum is understood to be given any finite open covering of r consisting of ultrametric balls we take m to be the maximum of the radii of all balls in the covering then the covering can be refined to a covering of the form x bm p ij tj for every m and p we have that x r bm ij tj since the functions fi are continuous in the topology induced by the ultrametric an argument will be given below in the more general case of discrete valuation rings we see that fp t with its ultrametric balls is fractal in the sense of definition here is an obvious generalization of the previous example example we work in the same situation as in the last example but now fix an integer and for every p we set x x x j j ij t i j t t ij 
tj then r x j tj r and therefore the iterated function system f fs s s satisfies f r x stj r r p we now have that x x j k j t r bm t for every m and we generalize our observations to discrete valuation rings which in general can not be presented in power series form in particular not in mixed characteristic we take a discrete valuation ring r with maximal ideal m and choose a uniformizing parameter t r the value of t is the smallest positive element in the value set of further we choose a system of representatives s r for the residue field then for every s s we define a function fs by fs a s ta for a o then fs r s tr and therefore fs r s tr r jan dobrowolski and kuhlmann for every m and s we have that r bm x sj tj if a b r with a b bm then fs a fs b t a b this shows that each fs is contracting and hence continuous in the topology induced by the ultrametric if is finite then we have finitely many functions and obtain proposition every discrete valuation ring with finite residue field and equipped with the canonical ultrametric is fractal under both definitions given above note that for any topological space x the existencesof a continuous ifs f fn satisfying conditions sc and x i fi x from definition implies that x is by the following example it can be seen that these conditions do not imply that x is hausdorff so definition could be also considered for quasicompact spaces example let x be equipped with the topology in which the open sets are and the cofinite sets define x x by x x then the system consists of continuous functions and satisfies conditions sc and x i fi x the following definition seems to be the weakest reasonable generalization of definition to possibly infinite function systems definition let x be a topological space and fi i i any set of continuous mappings x x satisfying sc for any finite open covering u of x there is a natural number l such that for any gl fi i i the image x is contained in some u we will say that x is a topological attractor for fi i i if x cl fi x for any cardinal number we will say that x is a topological if x is an attractor for some set of continuous functions satisfying sc of cardinality at most for normal spaces the property of being a implies a bound on the weight the minimal cardinality of a basis of the topology proposition suppose x is a normal space which is a then w x proof choose a system of functions f fi i i of cardinality at s most satisfying sc such that x is an attractor for f x cl fi x claim for any natural number l we have that x cl gl x gl proof of the claim we proceed by induction on suppose that holds then for every i i we get by the continuity of fi that fi x cl fi gl x gl cl fi gl x gl cl s x cl gl x thus we obtain that x cl fi x cl this completes the proof of the claim define s x gl x l i f l clearly we will show that b is a basis of x so take any open subset u of x and x u since x is normal we can choose open sets such that x cl cl u let l be as in the condition sc for f and the covering x cl of x define j gl f l gl x x cl and s w x cl gl gl x b since is disjoint from s to check that w gl gl x we get that x w it remains s u take any y x u we will show that y cl gl gl x take any open neighbourhood z of y by the claim gl x meets z x cl for some hl f then the image gl x is not contained in vs so it is contained in x cl and hl j therefore z meets gl gl x and we are done the above proposition applies in particular to compact spaces which are known to be normal in particular we obtain that every topological ifsattractor has a 
countable basis thus by the urysohn metrization theorem we get corollary every topological is metrizable jan dobrowolski and kuhlmann condition sc is not satisfied in some natural examples where the metric shrinking condition is satisfied liml il diam x example let x be the baire space which is homeomorphic to k t considered with the valuation topology where k is any field of cardinality for any i define fi x x as follows fi x i and fi x n x n for n then sc is not satisfied s for fi i which is witnessed by the covering u where u n x x x n x x n thus we want to consider another topological shrinking condition in which we are allowed to choose a basis from which the covering sets are taken however to make it possible to cover in this way the whole space which is not assumed to be compact we allow one of the covering sets to be not in the fixed basis this leads to the following definition definition a family of functions fi on a topological space x satisfies if there is a basis b of x such that for every finite open covering c of x containing at most one set which is not in b there is some k n such that for every sequence ik i k there is u c with fik x u every space is an attractor for the set of all constant functions from x to x is covered by their images we will say that x is a weak attractor if it is an attractor for a set of functions satisfying of a cardinality smaller than we will say that x is a attractor if it is an attractor for a finite set of functions satisfying clearly we have remark if x is a compact space then it is a attractor if and only if it is a topological ifs attractor by the following example it can be seen that being a attractor does not imply compactness example let x be considered with the discrete topology define x x by n and n n then x is a attractor for so x is a attractor proof we choose a basis b consisting of all singletons consider any covering of x of a form u nl then it is sufficient to take k max nl example let x and let fi x x i be as in example then fi i satisfies so x is a weak attractor more generally for any cardinal number the space is an attractor for a set of functions of cardinality so it is a weak attractor if this holds for example for all cardinals with countable cofinality so for unboundedly many cardinals proof for any define by x and x n x n for n we choose the standard basis of b ax x k where ax y x y write k if x choose any open covering of of the form u axn put k max for any sequence we have that ay where y i for all i k so this image is either contained in one of the sets axn or disjoint from all of them and thus contained in u proposition suppose a is a densely ordered abelian group and a a a a is the associated absolute value consider a collection of functions fi a a i i suppose that there is a sequence ai i of positive elements of a which converges to and that for every k and any sequence ik i k we have diam fik a ai then fi a a i i satisfies where we consider a with the order topology proof we choose a basis b of the order topology on a consisting of all open intervals we consider any covering of a of the form u an bn for any i there is ci such that each of the intervals ai ci ai ci bi ci bi ci is contained in one of the sets from the covering now choose k such that for every sequence ik i k we have diam a c min cn bn then for a a a c a c is a subset of some of the sets from the covering otherwise by the choice of ci s a would be at distance c from all ai s and bi s and hence a could not belong any of the intervals ai bi as in that 
case we would have a c a c ai bi but that would mean that a c a c u by the above any set of diameter smaller than c is contained in one of the sets from the covering so we are done corollary r is a weak attractor proof take a continuous bijection r which is lipschitz with constant define fn x x for any integer then clearly the family fn n z satisfies the assumptions of proposition so we obtain that it satisfies of course r is an attractor for that family and has a bigger cardinality a fractal space is compact if we have a space that is only locally compact one can ask whether it is locally fractal that is whether every element is contained in a fractal subspace example we consider the laurent series field x k fp t ij tj z ij fp jan dobrowolski and kuhlmann the valuation vt on fp t is defined by where is now an arbitrary integer for every k z the function tk fp t c c fp t is a homeomorphism topology induced by the valuation on the other hand k tk fp t so we see that fp t is the union over an increasing chain of mutually homeomorphic fractal spaces however we wish to show that fp t is locally fractal in a stronger sense the idea is to write fp t as a union over a collection of mutually homeomorphic fractal subspaces and extend the functions we have used for fp t in a suitable way so that they work simultaneously for all of these subspaces to this end we observe that for any two a b k the function a c c a b b is a homeomorphism note that since there are only finitely many elements in that are s modulo we can write it as a finite union of the form j aj example we extend the functions fi we used for fp t by setting x x x x x j j j j fi ij t ij t i t ij t i t ij tj aj tj fp t with aj fp we have that x x j aj tj fi fp t aj t fp t fi a fi for every a and therefore n fi a x j aj t n fi fp t x aj tj fp t a on the other hand for every m and p we have that x x a bm aj tj ij tj a for arbitrary discretely valued fields k v with valuation ring r and valuation ideal m we can proceed as follows as before we choose a uniformizing parameter t k and a system of representatives s r for the residue field kv we set x k sj tj z snd s then for every a r there is a unique element k such that a for every s s we define a function fs k k by fs a s t a for every a k we obtain that fs a s tr and therefore fs a s tr r a on the other hand for every m and s we have that x a bm sj tj a take b c a then b c and if b c bm with m then and fs b fs c s t b s t c t b c this shows that each fs is contracting and hence continuous in the topology induced by the ultrametric we define definition a locally compact metric space x d is locally fractal if it is the union over a collection of mutually homeomorphic subspaces xj j j and there is a system f fn of functions fi x x such that for every j j xj is fractal the restrictions of the functions fi to xj definition a locally compact topological space x is locally fractal if it is the union over a collection of mutually homeomorphic subspaces xj j j and there is a system f fn of functions fi x x such that for every j j xj is topologically fractal the restrictions of the functions fi to xj note that we do not require the functions fi to be continuous or contracting or to satisfy sc on all of x indeed the functions we constructed above have the property that if then fi a b we have proved proposition every discretely valued field with finite residue field is locally fractal under both definitions references banakh and nowak a peano continuum which is not an ifs attractor proc amer math soc 
Banakh, Novosad, Nowak, and Strobin, Contractive function systems, their attractors and metrization, Topological Methods in Nonlinear Analysis.
Jan Dobrowolski, Faculty of Mathematics and Physical Sciences, University of Leeds, Leeds, UK. E-mail address:
Kuhlmann, Institute of Mathematics, ul. Wielkopolska, Szczecin, Poland. E-mail address: fvk
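The contraction argument for the maps f_s(a) = s + t a used in the locally fractal construction above can be restated compactly. The following LaTeX fragment is a hedged reconstruction from the garbled formulas (the valuation v, uniformizing parameter t, valuation ring R, and representative set S are as introduced in the text) and should be checked against the original paper.

```latex
% Hedged reconstruction: the maps f_s on a discretely valued field (K, v)
% with valuation ring R, uniformizing parameter t and residue representatives S.
\[
  f_s(a) \;=\; s + t\,a \qquad (s \in S,\ a \in K),
  \qquad f_s(R) \;=\; s + tR,
  \qquad R \;=\; \bigcup_{s \in S} \bigl(s + tR\bigr),
\]
% so the images of the valuation ring under the maps f_s cover it.
% Contractivity in the valuation ultrametric: for all b, c in K,
\[
  v\bigl(f_s(b) - f_s(c)\bigr) \;=\; v\bigl(t\,(b - c)\bigr)
  \;=\; v(t) + v(b - c) \;>\; v(b - c),
\]
% hence each f_s strictly shrinks distances, and in particular is continuous.
```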
0
optimal weighted methods aug albert and giovanni august abstract we consider the problem of reconstructing an unknown bounded function u defined on a domain x rd from noiseless or noisy samples of u at n points xi n we measure the reconstruction error in a norm x for some given probability measure given a linear space vm with dim vm m n we study in general terms the weighted approximations from the spaces vm based on independent random samples it is well known that approximations can be inaccurate and unstable when m is too close to n even in the noiseless case recent results from have shown the interest of using weighted least squares for reducing the number n of samples that is needed to achieve an accuracy comparable to that of best approximation in vm compared to standard least squares as studied in the contribution of the present paper is twofold from the theoretical perspective we establish results in expectation and in probability for weighted least squares in general approximation spaces vm these results show that for an optimal choice of sampling measure and weight w which depends on the space vm and on the measure stability and optimal accuracy are achieved under the mild condition that n scales linearly with m up to an additional logarithmic factor in contrast to the present analysis covers cases where the function u and its approximants from vm are unbounded which might occur for instance in the relevant case where x rd and is the gaussian measure from the numerical perspective we propose a sampling method which allows one to generate independent and identically distributed samples from the optimal measure this method becomes of interest in the multivariate setting where is generally not of tensor product type we illustrate this for particular examples of approximation spaces vm of polynomial type where the domain x is allowed to be unbounded and high or even infinite dimensional motivated by certain applications to parametric and stochastic pdes ams classification numbers keywords multivariate approximation weighted least squares error analysis convergence rates random matrices conditional sampling polynomial approximation introduction let x be a borel set of rd we consider the problem of estimating an unknown function u x r from pointwise data y i n which are either noiseless or noisy observations of u at points xi n from x in numerous applications of interest some prior information is either established or assumed on the function u such information may take various forms such as this research is supported by institut universitaire de france and the erc adv project bread upmc univ paris cnrs umr laboratoire lions place jussieu paris france email cohen sorbonne upmc univ paris cnrs umr laboratoire lions place jussieu paris france email migliorati sorbonne i regularity properties of u in the sense that it belongs to a given smoothness class ii decay or sparsity of the expansion of u in some given basis iii approximability of u with some prescribed error by given spaces note that the above are often related to one another and sometimes equivalent since many smoothness classes can be characterized by prescribed approximation rates when using certain spaces or truncated expansions in certain bases this paper uses the third type of prior information taking therefore the view that u can be well approximated in some space vm of functions defined everywhere on x such that dim vm we work under the following mild assumption f or any x x there exists v vm such that v x this assumption 
holds for example when vm contains the constant functions typically the space vm comes from a family vj of nested spaces with increasing dimension such as algebraic or trigonometric polynomials or piecewise polynomial functions on a hierarchy of meshes we are interested in measuring the error in the x norm kvk x where is a given probability measure on x we denote by the associated inner product one typical strategy is to pick the estimate from a space vm such that dim vm the ideal estimator is given by the x orthogonal projection of u onto vm namely pm u argmin ku vk in general this estimator is not computable from a finite number of observations the best approximation error em u min ku vk ku pm uk thus serves as a benchmark for a numerical method based on a finite sample in the subsequent analysis we make significant use of an arbitrary x orthonormal basis lm of the space vm we also introduce the notation em u min ku where l is meant with respect to and observe that em u em u for any probability measure the weighted method consists in defining the estimator as n uw argmin i w xi y i n where the weights wi are given in the noiseless case y i u xi this also writes argmin ku vkn where the discrete seminorm is defined by n kvkn i w xi n this seminorm is associated with the product if we expand the solution to as pm vj lj the vector v vj m is the solution to the normal equations gv d where the matrix g has entries gj k hlj lk in and where the data vector d dj m is given pn by dj wi y i lj xi this system always has at least one solution which is unique when g is nonsingular when g is singular we may define uw as the unique minimal norm solution to note that g is nonsingular if and only if k kn is a proper norm on the space vm then if the data are noisefree that is when y i u xi we may also write n uw pm u n where pm is the orthogonal projection onto vm for the norm k kn in practice for the estimator to be easily computable it is important that the functions lm have explicit expressions that can be evaluated at any point in x so that the system can be assembled let us note that computing this estimator by solving only requires that lm is a basis of the space vm not necessarily orthonormal in x yet since our subsequent analysis of this estimator makes use of an x orthonormal basis we simply assume that lm is of such type in our subsequent analysis we sometimes work under the assumption of a known uniform bound we introduce the truncation operator z z sign z min and we study the truncated weighted approximation defined by ut uw note that in view of we have ut uw in the pointwise sense and therefore ku ut k ku uw the truncation operator aims at avoiding unstabilities which may occur when the matrix g is in this paper we use randomly chosen points xi and corresponding weights wi w xi distributed in such a way that the resulting random matrix g concentrates towards the identity i as n increases therefore if no bound is known an alternative strategy consists in setting to zero the estimator when g deviates from the identity by more than a given value in the spectral norm we recall that for m m matrices x this norm is defined as more precisely we introduce the conditioned approximation defined by uw if kg uc otherwise the choice of as a threshold for the distance between g and i in the spectral norm is related to our subsequent analysis however the value could be be replaced by any real number in up to some minor changes in the formulation of our results note that kg cond g it is well known that if n m 
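To make the weighted least-squares estimator described above concrete, here is a minimal numpy sketch that assembles the Gramian G and the data vector d from weighted samples, applies the conditioning test on the deviation of G from the identity used by the conditioned estimator, and solves the normal equations Gv = d. The basis evaluator, weight function, and samples are user-supplied placeholders, and the threshold value 1/2 follows the text; this is an illustration, not the authors' reference implementation.

```python
import numpy as np

def weighted_least_squares(x, y, w, basis, threshold=0.5):
    """Weighted least-squares fit on span{L_1, ..., L_m}.

    x     : array of n sample points
    y     : (n,) array of observations y_i
    w     : (n,) array of weights w(x_i)
    basis : callable returning the (n, m) matrix of basis values L_j(x_i)
    Returns the coefficient vector v of the estimator, or the zero vector
    when the Gramian deviates from the identity by more than `threshold`
    in spectral norm (the conditioned estimator u_C is set to zero).
    """
    n = len(y)
    L = basis(x)                          # (n, m) design matrix L_j(x_i)
    m = L.shape[1]
    A = np.sqrt(w)[:, None] * L / np.sqrt(n)
    G = A.T @ A                           # G_jk = (1/n) sum_i w(x_i) L_j(x_i) L_k(x_i)
    d = (L * (w * y)[:, None]).sum(axis=0) / n
    if np.linalg.norm(G - np.eye(m), 2) > threshold:
        return np.zeros(m)                # conditioned estimator set to zero
    return np.linalg.solve(G, d)          # normal equations G v = d
```

For the standard least-squares method one simply takes w identically equal to one and draws the points from the error measure itself.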
is too much close to m weighted methods may become unstable and inaccurate for most sampling distributions for example if x and vm is the space of algebraic polynomials of degree m then with m n the estimator coincides with the lagrange polynomial interpolation which can be highly unstable and inaccurate in particular for equispaced points the question that we want to address here in general terms is therefore given a space vm and a measure how to best choose the samples y i and weights wi in order to ensure that the x error ku is comparable to em u with n being as close as possible to m for uw ut uc we address this question in the case where the xi are randomly chosen more precisely we draw independently the xi according to a certain probabiity measure defined on x a natural prescription for the success of the method is that kvkn approaches kvk as n tends to therefore one first obvious choice is to use and wi i n that is sample according to the measure in which we plan to evaluate the error and use equal weights when using equal weights wi the weighted estimator becomes the standard leastsquares estimator as a particular case the strategy was analyzed in through the introduction of the function m x x x km x which is the diagonal of the integral kernel of the projector pm this function only depends on vm and it is strictly positive in x due to assumption its reciprocal function is characterized by min km x v x and is called christoffel function in the particular case where vm is the space of algebraic polynomials of total degree m see obviously the function km satisfies z km x we define km km vm kkm and recall the following results from for the standard method with the weights and the sampling measure chosen as in theorem for any r if m and n are such that the condition km n ln with r ln n is satisfied then the following hold i the matrix g satisfies the tail bound pr kg ii if u x satisfies a uniform bound then the truncated estimator satisfies in the noiseless case e ku ut n em u where n ln n as n and as in iii if u x then the truncated and nontruncated estimators satisfy in the noiseless case ku ut k ku uw k em u with probability larger than the second item in the above result shows that the optimal accuracy em u is met in expectation up to an additional term of order when em u has polynomial decay o we are ensured that this additional term can be made negligible by taking r strictly larger than which amounts in taking r small enough condition imposes a minimal number of samples to ensure stability and accuracy of standard least squares since implies that km m the fulfillment of this condition requires that n is at least of the order m ln m however simple examples show that the restriction can be more severe for example if vm on x and with being the uniform probability measure in this case one choice for the lj are the legendre polynomials with proper normalization klj so that km and therefore condition imposes that n is at least of order ln m other examples in the multivariate setting are discussed in which show that for many relevant approximation spaces vm and probability measures the behaviour of km is superlinear in m leading to a very demanding regime in terms of the needed number n of samples in the case of multivariate downward closed polynomial spaces precise upper bounds for km have been proven in for measures associated to jacobi polynomials in addition note that the above theory does not cover simple situations such as algebraic polynomials over unbounded domains for example 
x r equipped with the gaussian measure since the orthonormal polynomials lj are unbounded for j and thus km if m main results in the present paper we show that these limitations can be overcome by using a proper weighted leastsquares method we thus return to the general form of the discrete norm used in the definition of the weighted estimator we now use a sampling measure which generally differs from and is such that r where w is a positive function defined everywhere on x and such that x and we then consider the weighted method with weights given by wi w xi with such a choice the norm kvkn again approaches kvk as n increases the particular case and w corresponds to the standard method analyzed by theorem note that changing the sampling measure is a commonly used strategy for reducing the variance in monte carlo methods where it is referred to as importance sampling with lj again denoting the x orthonormal basis of vm we now introduce the function x km w x m x w x x which only depends on vm and w as well as km w km w vm w kkm w r note that since the wlj are an x orthonormal basis of wvm we find that x km w m and thus km w we prove in this paper the following generalization of theorem theorem for any r if m and n are such that the condition ln n with ln n pr kg km w is satisfied then the following hold i the matrix g satisfies the tail bound ii if u x satisfies a uniform bound then the truncated weighted estimator satisfies in the noiseless case e ku ut n em u where n ln n as n and as in iii if u x then the nontruncated weighted estimators satisfy in the noiseless case ku uw k em u with probability larger than iv if u x then the conditioned weighted estimator satisfies in the noiseless case e ku uc n em u where n ln n as n and as in let us mention that the quantity km w has been considered in where similar stability and approximation results have been formulated in a slightly different form see in particular theorem therein in the specific framework of total degree polynomial spaces the interest of theorem is that it leads us in a natural way to an optimal sampling strategy for the weighted method we simply take w m m pm km and with such a choice for w one readily checks that km m r is a probability measure on x since x km in addition we have for this particular choice that km w wkm m and therefore km w we thus obtain the following result as a consequence of theorem which shows that the above choice of w and allows us to obtain estimates for the truncated weighted estimator under the minimal condition that n is at least of the order m ln m corollary for any r if m and n are such that the condition n ln with ln n is satisfied then the conclusions i ii iii and iv of theorem hold for weighted least squares with the choice of w and given by and one of the interests of the above optimal sampling strategy is that it applies to polynomial approximation on unbounded domains that were not covered by theorem in particular x r equipped with the gaussian measure in this case the relevant target functions u are often nonuniformly bounded and therefore the results in items ii and iii of theorem do not apply the result in item iv for the conditioned estimator uc remains valid since it does not require uniform boundedness of u let us remark that all the above results are independent of the dimension d of the domain x however raising d has the unavoidable effect of restricting the classes of functions for which the best approximation error em u or em u have some prescribed decay due to the curse of 
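The optimal pair of weight and sampling measure above is driven by the inverse Christoffel function k_m(x), the sum of the squared orthonormal basis functions. As a hedged illustration, restricted to the univariate Legendre case with the uniform measure on [-1, 1] (only one of the settings considered in the text), k_m and the optimal weight w = m / k_m can be evaluated as follows.

```python
import numpy as np
from numpy.polynomial import legendre

def christoffel_weight(x, m):
    """Inverse Christoffel function k_m(x) = sum_j |L_j(x)|^2 and the optimal
    weight w(x) = m / k_m(x), for the uniform probability measure on [-1, 1]
    and the space of polynomials of degree < m (illustrative choice only)."""
    x = np.asarray(x, dtype=float)
    km = np.zeros_like(x)
    for j in range(m):
        coeff = np.zeros(j + 1)
        coeff[j] = 1.0
        # Legendre polynomial, orthonormal w.r.t. the uniform probability measure
        Lj = np.sqrt(2 * j + 1) * legendre.legval(x, coeff)
        km += Lj ** 2
    return km, m / km
```

Since k_m integrates to m against the reference measure, the function k_m / m is indeed a probability density with respect to it, which is the optimal sampling measure described in the text.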
dimensionality note that the optimal pair w described by and depends on vm that is w wm and this raises a difficulty for properly choosing the samples in settings where the choice of vm is not fixed such as in adaptive methods in certain particular cases it is known that wm and admit limits and as m and are globally equivalent to these limits one typical example is given by the univariate polynomial spaces vm when x and where is a jacobi weight and dx is the lebesgue measure on x in this case is the pluripotential equilibrium measure dx see and one has m for some fixed constants c c thus in such a case the above corollary also holds for the choice w and under the condition m cc lnnn the development of sampling strategies in cases of varying values of m without such asymptotic equivalences is the object of current investigation a closely related weighted strategy was recently proposed and analyzed in in the polynomial framework there the authors propose to use the renormalized christoffel function in the definition of the weights however sampling from the fixed pluripotential equilibrium measure due to the fact that differs from the main estimate obtained in see therein does not have the same simple form of a direct comparison between ku ut k and em u as in ii of theorem in particular it involves an extra term d f which does not vanish even as n one intrinsic difficulty when using the optimal pair w wm described by and is the effective sample generation in particular in the multivariate framework since the measure is generally not of tensor product type one possible approach is to use markov chain monte carlo methods such as the algorithm as explored in in such methods the samples are mutually correlated and only asymptotically distributed according to the desired sampling measure one contribution of the present paper is to propose a straightforward and effective sampling strategy for generating an arbitrary finite number n of independent samples identically distributed according to this strategy requires that has tensor product structure and that the spaces vm are spanned by tensor product bases such as for multivariate polynomial spaces in which case is generally not of tensor product type the rest of our paper is organized as follows the proof of theorem is given in in a concise form since it follows the same lines as the original results on standard least squares from we devote to analog results in the case of samples affected by additive noise proving that the estimates are robust under condition the proposed method for sampling the optimal measure is discussed in and we illustrate its effectiveness in by numerical examples proof of theorem the proof is structurally similar to that of theorem given in for items i and ii and in for item pn iii therefore we only sketch it we observe that g xi where the xi are copies of the rank random matrix x x x w x lj x lk x j m with x a random variable distributed over x according to one obviously has e x i we then invoke the chernov bound from to obtain that if r almost surely then for any pr kg c exp r with ln taking and observing that kx x m x km w x x w x n n k which yields in item i we may thus take r m w n for the proof of in item ii we first consider the event where kg in this case we write n n ku ut u uw ku uw ku pm kpm n where we have used that pm pm u pm u and that g is orthogonal to vm and thus ku ut em u where a aj m is solution of the system ga b m x g u pm u and b hg lk in m since it follows that ku ut em u in the event where kg m x lk 
in we simply write ku ut k it follows that e ku ut em u m x e lk in for the second term we have n n xx e w xi w xj g xi g xj lk xi lk xj n n w x g x lk x ne x g x lk x n z lk x x x n n x z w x x x n x e lk in where we have used the fact that g is x to vm and thus to lk summing over k we obtain m x e lk in km w em u n ln n and we therefore obtain for the proof of in item iii we place ourselves in the event where kg this property also means that hgv v rm which can be expressed as a norm equivalence over vm v vm we then write that for any v vm n ku pnm uk ku vk kv pm uk n ukn ku vk pm ku vk vkn ku n n where we have used the pythagorean identity ku ku pm kv pm and the fact that both k k and k kn are dominated by k since v is arbitrary we obtain finally in item iv is proven in a very similar way as in item ii by writing that in the event kg we have ku uc k kuk so that e ku uc em u m x and we conclude in the same way e lk in the noisy case in a similar way as in we can analyze the case where the observations of u are affected by an additive noise in practical situations the noise may come from different sources such as a discretization error when u is evaluated by some numerical code or a measurement error the first one may be viewed as a perturbation of u by a deterministic funtion h that is we observe y i u xi h xi the second one is typically modelled as a stochastic fluctuation that is we observe y i u xi i where i are independent realizations of the centered random variable y u x here we do not necessarily assume and x to be independent however we typically assume that the noise is centered that is e and we also assume uniformly bounded conditional variance sup e note that we may also consider consider a noncentered noise which amounts in adding the two contributions that is y i u xi i i h xi i with h x e the following result shows that the estimates in theorem are robust under the presence of such an additive noise theorem for any r if m and n are such that condition is satisfied then the following hold for the noise model i if u x satisfies a uniform bound then the truncated weighted estimator satisfies k m w e ku ut n em u n n ii if u x then the conditioned weighted estimator satisfies k m w n r as n with as in and k m w x km w e ku uc n em u n where in both cases n ln n proof we again first consider the event where kg in this case we write ku ut k ku uw k n and use the decomposition u uw g pm g h where g u pm u as in the proof of theorem and h stands for the solution to the problem for the noise data i n therefore n n n ku uw kpm g m x where n nj m is solution to gn b b n x n i w xi lk xi m bk m since it follows that ku ut em u m x lk in m x compared to the proof of theorem we need to estimate the expectation of the third term on the right side for this we simply write that e n n xx e i w xi lk xi j w xj lk xj for i j we have e i w xi lk xi j w xj lk xj e x lk x e h x w x lk x z hwlk x lk note that the first and second expectations are with respect to the joint density of x and the third one with respect to the density of x that is for i j we have e i w xi lk xi e x lk x z e x lk x x z e x lk x zx e w x x x z x e w x x x z x w x x x summing up on i j and k and using condition we obtain that m x k m w k m w km w e n n n log n n for the rest we proceed as for item ii and iv in the proof of theorem using that in the event we have ku ut k and ku uc k kuk remark note that for the standard method corresponding to the case where w we know that k m w the noise term thus takes the stardard form n 
as seen for example in theorem or in theorem of note that in any case condition implies that this term is bounded by log the conclusions of theorem do not include the estimate in probability similar to item iii in theorem we can obtain such an estimate in the case of a bounded noise where we assume that h x and is a bounded random variable or equivalently assuming that is a bounded random variable that is we use the noise model with d for this bounded noise model we have the following result theorem for any r if m and n are such that condition is satisfied then the following hold for the the noise model under if u x then the nontruncated weighted estimator satisfies ku uw k em u with probability larger than proof similar to the proof of iii in theorem we place ourselves in the event where kg use the norm equivalence we then write that for any v vm and n n ku uw k ku vk kv pm uk kpm the first two terms already appeared in the noiseless case and can be treated in the same way the new n term pm corresponds to the weighted approximation from the noise vector and satisfies n n kpm this leads to random sampling from the analysis in the previous sections prescribes the use of the optimal sampling measure defined in for drawing the samples xn in the weighted method in this section we discuss numerical methods for generating independent random samples according to this measure in a specific relevant multivariate setting here we make the assumption that x xi is a cartesian product of univariate real domains xi and that is a product measure that is d o where each is a measure defined on xi we assume that each is of the form t t dt for some nonnegative continuous function and therefore x x dx x d y xi x xd x in particular is absolutely continuous with respect to the lebesgue measure we consider the following general setting for each i d we choose a univariate basis orthonormal in xi we then define the tensorized basis x d y xi which is orthonormal in x we consider general subspaces of the form vm span for some set such that thus we may rename the as lj m after a proper ordering has been chosen for example in the lexicographical sense for the given set of interest we introduce max and max d the measure is thus given by x x dx where x m x x x x x x m x x we now discuss our sampling method for generating n independent random samples xn identically distributed according to the multivariate density note that this density does not have a product structure despite is a product density there exist many methods for sampling from multivariate densities in contrast to markov chain monte carlo methods mentioned in the introduction the method that we next propose exploits the particular structure of the multivariate density in order to generate independent samples in a straightforward manner and sampling only from univariate densities given the vector x xd of all the coordinates for any a d we introduce the notation xa xi d a xi and dxa o dxi o xa y xi in the following we mainly use the particular sets xa xi aq q and q d so that any x x may be written as x xaq using such a notation for any q d we associate to the joint density its marginal density of the first q variables namely z xaq xaq since is an orthonormal basis of xi for any q d and any we obtain that z xaq xaq xaq q y xi xaq xaq therefore the marginal density can be written in simple form as q xaq xy xi xaq sequential conditional sampling based on the previous notation and remarks we propose an algorithm which generates n samples xk xkd x with k n that are 
independent and identically distributed realizations from the density in in the multivariate case the coordinates can be arbitrarily reordered start with the first coordinate and sample n points from the univariate density r t t t t x t which coincides with the marginal of calculated in in the univariate case d the algorithm terminates in the multivariate case d by iterating q from to d consider the qth coordinate xq and sample n points xnq xq in the following way for any k n given the values that have been calculated at the previous q steps sample the point xkq xq from the univariate density j k p q xj t k xq r t t p j k xj the expression on the side of is continuous at any t xq and at any assumption ensures that the denominator of is strictly positive for any possible choice of and also ensures that the marginal is strictly positive at any point such that for any t xq and any such that the density satisfies t where the densities and are the marginals defined in and evaluated at the points t x q and respectively from using and simplifying the term k xj one obtains the side of the side of equation is well defined for any t xq and any such that and it is not defined at the points such that where vanishes nonetheless has finite limits at any point t xaq and these limits equal expression according to technical terminology the side of equation is the conditional density of xq given with respect to the density and is the continuous extension to xaq of this conditional density the densities defined in can be concisely rewritten for any q d as x t t where the nonnegative weights are defined as if q q zj p if q d q j zj p for any since each density in is a convex combination of the densities note that if the orthonormal basis have explicit expressions and can be evaluated at any point in xq then the same holds for the univariate densities in particular in the polynomial case for standards univariate densities such as uniform chebyshev or gaussian the orthonormal polynomials have expressions which are explicitely computable for example by recursion formulas in algorithm we summarize our sampling method that sequentially samples the univariate densities to generate independent samples from the multivariate density in the univariate case d the algorithm does not run the innermost loop and only samples from in the multivariate case d the algorithm runs also the innermost loop and conditionally samples also from our algorithm therefore relies on accurate sampling methods for the relevant univariate densities algorithm sequential conditional sampling for input n d for i output xn for k to n do for any sample from t t t for q to d do q j k xj p q xkj p t for any sample xkq from t t t end for xk xkd end for p t we close this section by discussing two possible methods for sampling from such densities rejection sampling and inversion transform sampling both methods equally apply to any univariate density and therefore we present them for any q arbitrarily chosen from to rejection sampling rs for applying this method one needs to find a suitable univariate density whose support contains the support of and a suitable real mq such that t mq t t supp the density should be easier to sample than efficient pseudorandom number generators for sampling from are available the value of mq should be the smallest possible for sampling one point from using rs sample a point z from and sample u from the standard uniform u then check if u z z if this is the case then accept z as a realization from otherwise reject z and 
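The sequential conditional sampling algorithm summarised above can be sketched as follows, specialised to the tensorised Legendre basis with the uniform measure on [-1, 1]^d and to the rejection-sampling option with the reference measure as proposal and the sup-norm bound mentioned in the text. The index set, helper names, and this particular basis choice are assumptions made for the sake of a runnable example.

```python
import numpy as np
from numpy.polynomial import legendre

def _phi(j, t):
    """Univariate Legendre polynomial of degree j, orthonormal w.r.t. the
    uniform probability measure on [-1, 1]."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    return np.sqrt(2 * j + 1) * legendre.legval(t, c)

def sample_optimal_measure(Lambda, n, rng=None):
    """Draw n i.i.d. points on [-1, 1]^d from the density
    (1/m) * sum_{nu in Lambda} prod_i phi_{nu_i}(x_i)^2 times the uniform measure.

    Lambda : list of multi-indices (tuples of length d)."""
    if rng is None:
        rng = np.random.default_rng()
    Lambda = [tuple(nu) for nu in Lambda]
    m, d = len(Lambda), len(Lambda[0])
    samples = np.empty((n, d))
    for k in range(n):
        beta = np.full(m, 1.0 / m)          # mixture weights, updated coordinate by coordinate
        for q in range(d):
            degs = np.array([nu[q] for nu in Lambda])
            M = 2 * degs.max() + 1          # sup-norm bound of phi_j^2 over the degrees in use
            while True:                      # rejection sampling from the conditional density
                t = rng.uniform(-1.0, 1.0)   # proposal: the uniform reference measure
                dens = sum(b * _phi(j, t) ** 2 for b, j in zip(beta, degs))
                if rng.uniform() * M <= dens:
                    samples[k, q] = t
                    break
            # condition on the accepted coordinate for the next step
            beta = beta * np.array([_phi(j, samples[k, q]) ** 2 for j in degs])
            beta = beta / beta.sum()
    return samples
```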
restart sampling z and u from beginning on average acceptance occurs once every mq trials therefore for a given q sampling one point from by rs requires on average mq evaluations of the function t t x t t mq t mq t this amounts in evaluating mq times the terms and a subset of the terms depending on the coefficients depend on the terms for j q which have been already evaluated when sampling the previous coordinates q thus if we use rs for sampling the univariate densities the overall computational cost of algorithm for sampling n points xn x is on average pd proportional to n mq when the basis functions form a bounded orthonormal system an immediate and simple choice of the parameters in the algorithm is mq max and t t with such a choice we can quantify more precisely the average computational cost of sampling n points in dimension when are the chebyshev polynomials whose norms satisfy we pd obtain the bound when are the legendre polynomials pd whose norms satisfy we have the crude estimate in general when are jacobi polynomials similar upper bounds can be derived and the dependence of these bounds on n and d is linear inversion transform sampling its let xq be the cumulative distribution function associated to the univariate density in the following only when using the its method we make the further assumption that vanishes at most a finite number of times in xq such an assumption is fulfilled in many relevant situations when is the density associated to jacobi or hermite polynomials orthonormal in xq together with assumption this ensures that the function t t is continuous and strictly increasing on xq hence is a bijection between xq and and it has a unique inverse xq which is continuous and strictly increasing on sampling from q using its can therefore be performed as follows sample n independent realizations un identically distributed according to the standard uniform u and obtain the n independent samples from as n q u u for any u computing z q u xq is equivalent to find the unique solution z xq to z u this can be executed by elementary numerical methods the bisection method or newton s method in alternative to using methods one can build an interpolant operator iq of q and then approximate u iq u for any u such an interpolant iq can be constructed for example by piecewise linear interpolation from the data tqsq tqsq at sq suitable points tqsq in xq both methods and the interpolation method require evaluating the function pointwise in xq in general these evaluations can be computed using standard univariate quadrature formulas when are orthogonal polynomials the explicit expression of the primitive of can be used for directly evaluating the function finally we discuss the overall computational cost of algorithm for sampling n points xn x when using its for sampling the univariate densities with the bisection method this overall cost amounts pd to n wq where is the maximum number of iterations for locating the zero in xq up to some desired tolerance and wq is the computational cost of each iteration with the interpolation of q the overall cost amounts to n evaluations of each interpolant iq in addition to the cost for building the interpolants which does not depend on examples and numerical illustrations this section presents the numerical performances of the weighted method compared to the standard method in three relevant situations where can be either the uniform measure the chebyshev measure or the gaussian measure in each one of these three cases we choose w and in the weighted 
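Alternatively, the inversion transform sampling step for any of the univariate densities can be sketched as below. The sketch uses a bracketing root finder (Brent's method) in place of plain bisection and assumes the density is supplied on a bounded interval; it is not the authors' implementation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def inverse_transform_sample(density, a, b, n, rng=None):
    """Draw n samples from a continuous univariate density on [a, b] by
    inverting its cumulative distribution function with a bracketing root
    finder; the density is assumed to vanish at most finitely many times."""
    if rng is None:
        rng = np.random.default_rng()
    Z = quad(density, a, b)[0]                 # normalisation constant (should be close to 1)
    def cdf(t):
        return quad(density, a, t)[0] / Z
    u = np.clip(rng.uniform(size=n), 1e-12, 1.0 - 1e-12)
    # F is continuous and strictly increasing, so F(t) - u has a unique root in [a, b]
    return np.array([brentq(lambda t, ui=ui: cdf(t) - ui, a, b) for ui in u])
```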
method from and as prescribed by our analysis in corollary for standard least squares we choose w and as in our tests focus on the condition number of the gramian matrix that quantifies the stability of the linear system and the stability of the weighted and standard estimators a meaningful quantity is therefore the probability pr cond g where through the value three of the threshold is related to the parameter in the previous analysis for any n and m from the probability is larger than pr kg from corollary under condition between n m and r the gramian matrix of weighted least squares satisfies and therefore the probability is larger than for standard least squares from theorem the gramian matrix satisfies with probability larger than but under condition in the numerical tests the probability is approximated by empirical probability obtained by counting how many times the event cond g occurs when repeating the random sampling one hundred times all the examples presented in this section confine to multivariate approximation spaces of polynomial type one natural assumption in this case is to require that the set is downward closed that is satisfies and where means that for all i then vm is the polynomial space spanned by the monomials d y zj j z z and the orthonormal basis is provided by taking each to be a sequence of univariate orthonormal polynomials of xi in both the univariate and multivariate forthcoming examples the random samples from the measure are generated using algorithm the univariate densities are sampled using the inversion transform sampling method the inverse of the cumulative distribution function is approximated using the interpolation technique univariate examples in the univariate case d let the index set be m and vm span z k k m we report in fig the probability approximated by empirical probability when g is the gramian matrix of the weighted method different combinations of values for m and n are tested with three choices of the measure uniform gaussian and chebyshev the results do not show perceivable differences among the performances of weighted least squares with the three different measures in any of the three cases ln n is enough to obtain an empirical probability equal to one that cond g this confirms that condition with any choice of r ensures since it demands for a larger number of samples fig shows the probability when g is the gramian matrix of standard least squares with the uniform measure the condition ln n is enough to have with empirical probability larger uniform measure gaussian measure chebyshev measure figure weighted least squares p r cond g d left uniform measure center gaussian measure right chebyshev measure uniform measure gaussian measure chebyshev measure figure standard least squares p r cond g d left uniform measure center gaussian measure right chebyshev measure than when is the gaussian measure stability requires a very large number of evaluations roughly ln n linearly proportional to exp for the univariate chebyshev measure it is proven that standard least squares are stable under the same minimal condition as for weighted least squares in accordance with the theory the numerical results obtained in this case with weighted and standard least squares are indistinguishable see fig and fig multivariate examples afterwards we present some numerical tests in the multivariate setting these tests are again based as in the previous section on approximating the probability by empirical probability in dimension d larger than one there are many 
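The stability probability monitored in these experiments, the probability that the condition number of the Gramian stays below three, estimated over one hundred repetitions, can be approximated with a short routine such as the following; the sampler, basis evaluator, and weight function are user-supplied placeholders.

```python
import numpy as np

def empirical_stability(sample_fn, basis, weight_fn, n, m, reps=100, seed=0):
    """Estimate Pr[cond(G) <= 3] over `reps` independent draws of n points.

    sample_fn(n, rng) : draws n sample points
    basis(x)          : returns the (n, m) matrix of basis values L_j(x_i)
    weight_fn(x)      : returns the (n,) array of weights w(x_i)
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = sample_fn(n, rng)
        L = basis(x)
        w = weight_fn(x)
        G = (L * w[:, None]).T @ L / n      # (1/n) sum_i w(x_i) L_j(x_i) L_k(x_i)
        if np.linalg.cond(G) <= 3.0:
            hits += 1
    return hits / reps
```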
possible ways to enrich the polynomial space the number of different downward closed sets whose cardinality equals m gets very large already for moderate values of m and therefore we present the numerical results for a chosen sequence of polynomial spaces such that where each is downward closed dim j and the starting set contains only the null all the tests in fig and fig have been obtained using the same sequence of increasingly embedded polynomial spaces for both weighted and standard least squares and for the three choices of the measures such a choice allows us to establish a fair comparison between the two methods and among different measures without the additional variability arising from modifications to the polynomial space uniform measure gaussian measure chebyshev measure figure weighted least squares p r cond g d left uniform measure center gaussian measure right chebyshev measure uniform measure gaussian measure chebyshev measure figure standard least squares p r cond g d left uniform measure center gaussian measure right chebyshev measure we report the results obtained for the tests in dimension d the results in fig confirm that weighted least squares always yield an empirical probability equal to one that cond g provided that log n this condition ensures that with any choice of r implies thus verifying corollary again the results do not show significant differences among the three choices of the measure a straight line with the same slope for all the three cases uniform chebyshev and gaussian separates the two regimes corresponding to empirical probabilities equal to zero and one compared to the univariate case in fig the results in fig exhibit a sharper transition between the two extreme regimes and an overall lower variability in the transition regime the results for standard least squares with d are shown in fig in the case of the uniform measure in fig stability is ensured if ln n which is more demanding than the condition ln n needed for the stability of weighted least squares in fig but much less strict than the condition required with standard least squares in the univariate case where ln n scales like these phenomena have already been observed and described in similar results as those with the uniform measure are obtained with the chebyshev measure in fig where again standard least squares achieve stability using more evaluations than weighted least squares in fig the case of the gaussian measure drastically differs from the uniform and chebyshev cases the results in fig clearly indicate that a very large number of evaluations n compared to m is required to achieve stability of standard least squares let us mention that analogous results as those presented in figs and for weighted least squares have been obtained also in other dimensions and with many other sequences of increasingly embedded polynomial spaces in the next tables we report some of these results for selected values of d we choose n and m that satisfy condition with r and report in table the empirical probabilities that approximate again calculated over one hundred repetitions this table provides multiple comparisons weighted least squares versus standard least squares for the three choices of the measure uniform gaussian and chebyshev and with d varying between and method d d d weighted ls weighted ls weighted ls uniform gaussian chebyshev standard ls standard ls standard ls uniform gaussian chebyshev table p r cond g with n and m weighted least squares versus standard least squares uniform versus 
gaussian versus chebyshev d method d d d weighted ls weighted ls weighted ls uniform gaussian chebyshev standard ls standard ls standard ls uniform gaussian chebyshev table average of cond g with n and m weighted least squares versus standard least squares uniform versus gaussian versus chebyshev d in table all the empirical probabilities related to results for weighted least squares are equal to one and confirm the theory since for the chosen values of n m and r the probability is larger than this value is computed using estimate from the proof of theorem in contrast to weighted least squares whose empirical probability equal one independently of and d the empirical probability of standard least squares does depend on the chosen measure and to some extent on the dimension d as well with the uniform measure the empirical probability that approximates equals zero when d or d equals when d and equals one when d d or d in the gaussian case standard least squares always feature null empirical probabilities with the chebyshev measure the condition number of g for standard least squares is always lower than three for any tested value of in addition to the results in table further information are needed for assessing how severe is the lack of stability when obtaining null empirical probabilities to this aim in table we also report the average value of cond g obtained when averaging the condition number of g over the same repetitions used to estimate the empirical probabilities in table the information in table are complementary to those in table on the one hand they point out the stability and robustness of weighted least squares showing a tamed condition number with any measure and any dimension on the other hand they provide further insights on stability issues of standard least squares and their dependence on and for standard least squares with the uniform measure the average condition number reduces as the dimension d increases in agreement with the conclusion drawn from table the gramian matrix of standard least squares with the gaussian measure is very for all tested values of for standard least squares with the chebyshev measure the averaged condition number of g is only slightly larger than the one for weighted least squares it is worth remarking that the results for standard least squares in fig table and table are sensitive to the chosen sequence of polynomial spaces testing different sequences might produce different results that however necessarily obey to the estimates proven in theorem with uniform and chebyshev measures when n m and r satisfy condition many other examples with standard least squares have been extensively discussed in previous works also in situations where n m and r do not satisfy condition and therefore theorem does not apply in general when n m and r do not satisfy there exist multivariate polynomial spaces of dimension m such that the gramian matrix of standard least squares with the uniform and chebyshev measures does not satisfy examples of such spaces are discussed in using these spaces would yield null empirical probabilities in table for standard least squares with the uniform and chebyshev measures for weighted least squares when n m and r satisfy condition any sequence of polynomial spaces yields empirical probabilities close to one according to corollary indeed such a robustness with respect to the choices of of the polynomial space and of the dimension d represents one of the main advantages of the weighted approach references chardon cohen and daudet 
Sampling and reconstruction of solutions to the Helmholtz equation, Sampl. Theory Signal Image Process.
Chkifa, Cohen, Migliorati, Nobile, and Tempone, Discrete least squares polynomial approximation with random evaluations: application to parametric and stochastic elliptic PDEs.
Cohen, Davenport, and Leviatan, On the stability and accuracy of least squares approximations, Found. Comput. Math.
Doostan and Hampton, Coherence motivated sampling and convergence analysis of least squares polynomial chaos regression, Comput. Methods Appl. Mech. Engrg.
Jakeman, Narayan, and Zhou, A Christoffel function weighted least squares algorithm for collocation approximations, preprint.
Migliorati, Multivariate Markov-type and Nikolskii-type inequalities for polynomials associated with downward closed sets, J. Approx. Theory.
Migliorati, Nobile, von Schwerin, and Tempone, Analysis of discrete L2 projection on polynomial spaces with random evaluations, Found. Comput. Math.
Migliorati, Nobile, and Tempone, Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points, J. Multivar. Anal.
Saff and Totik, Logarithmic Potentials with External Fields, Springer.
Nevai, Géza Freud, orthogonal polynomials and Christoffel functions: a case study, J. Approx. Theory.
Nevai and Totik, Szegő's extremum problem on the unit circle, Annals of Mathematics.
Tropp, User-friendly tail bounds for sums of random matrices, Found. Comput. Math.
10
how to construct deep recurrent neural networks razvan caglar kyunghyun and yoshua apr d informatique et de recherche de pascanur gulcehrc department of information and computer science aalto university school of science abstract in this paper we explore different ways to extend a recurrent neural network rnn to a deep rnn we start by arguing that the concept of depth in an rnn is not as clear as it is in feedforward neural networks by carefully analyzing and understanding the architecture of an rnn however we find three points of an rnn which may be made deeper function transition and function based on this observation we propose two novel architectures of a deep rnn which are orthogonal to an earlier attempt of stacking multiple recurrent layers to build a deep rnn schmidhuber el hihi and bengio we provide an alternative interpretation of these deep rnns using a novel framework based on neural operators the proposed deep rnns are empirically evaluated on the tasks of polyphonic music prediction and language modeling the experimental result supports our claim that the proposed deep rnns benefit from the depth and outperform the conventional shallow rnns introduction recurrent neural networks rnn see rumelhart et have recently become a popular choice for modeling sequences rnns have been successfully used for various task such as language modeling see graves pascanu et mikolov sutskever et learning word embeddings see mikolov et online handwritten recognition graves et and speech recognition graves et in this work we explore deep extensions of the basic rnn depth for feedforward models can lead to more expressive models pascanu et and we believe the same should hold for recurrent models we claim that unlike in the case of feedforward neural networks the depth of an rnn is ambiguous in one sense if we consider the existence of a composition of several nonlinear computational layers in a neural network being deep rnns are already deep since any rnn can be expressed as a composition of multiple nonlinear layers when unfolded in time schmidhuber el hihi and bengio earlier proposed another way of building a deep rnn by stacking multiple recurrent hidden states on top of each other this approach potentially allows the hidden state at each level to operate at different timescale see hermans and schrauwen nonetheless we notice that there are some other aspects of the model that may still be considered shallow for instance the transition between two consecutive hidden states at a single level is shallow when viewed has implications on what kind of transitions this model can represent as discussed in section based on this observation in this paper we investigate possible approaches to extending an rnn into a deep rnn we begin by studying which parts of an rnn may be considered shallow then for each shallow part we propose an alternative deeper design which leads to a number of deeper variants of an rnn the proposed deeper variants are then empirically evaluated on two sequence modeling tasks the layout of the paper is as follows in section we briefly introduce the concept of an rnn in section we explore different concepts of depth in rnns in particular in section we propose two novel variants of deep rnns and evaluate them empirically in section on two tasks polyphonic music prediction et and language modeling finally we discuss the shortcomings and advantages of the proposed models in section recurrent neural networks a recurrent neural network rnn is a neural network that simulates a dynamical 
system that has an input xt an output yt and a hidden state ht in our notation the subscript t represents time the dynamical system is defined by ht fh xt yt fo ht where fh and fo are a state transition function and an output function respectively each function is parameterized by a set of parameters h and o n n n n given a set of n training sequences d xtn ytn the parameters of an rnn can be estimated by minimizing the following cost function j n n n n tn xx n n d yt fo ht n n where ht fh xt and d a b is a predefined divergence measure between a and b such as euclidean distance or conventional recurrent neural networks a conventional rnn is constructed by defining the transition function and the output function as ht fh xt w u xt yt fo ht xt v ht where w u and v are respectively the transition input and output matrices and and are nonlinear functions it is usual to use a saturating nonlinear function such as a logistic sigmoid function or a hyperbolic tangent function for an illustration of this rnn is in fig a the parameters of the conventional rnn can be estimated by for instance stochastic gradient descent sgd algorithm with the gradient of the cost function in eq computed by backpropagation through time rumelhart et deep recurrent neural networks why deep recurrent neural networks deep learning is built around a hypothesis that a deep hierarchical model can be exponentially more efficient at representing some functions than a shallow one bengio a number of recent theoretical results support this hypothesis see le roux and bengio delalleau and bengio pascanu et for instance it has been shown by delalleau and bengio that a deep network may require exponentially less units to represent the same function compared to a shallow network furthermore there is a wealth of empirical evidences supporting this hypothesis see goodfellow et hinton et a these findings make us suspect that the same argument should apply to recurrent neural networks depth of a recurrent neural network yt ht xt figure a conventional recurrent neural network unfolded in time the depth is defined in the case of feedforward neural networks as having multiple nonlinear layers between input and output unfortunately this definition does not apply trivially to a recurrent neural network rnn because of its temporal structure for instance any rnn when unfolded in time as in fig is deep because a computational path between the input at time k t to the output at time t crosses several nonlinear layers a close analysis of the computation carried out by an rnn see fig a at each time step individually however shows that certain transitions are not deep but are only results of a linear projection followed by an nonlinearity it is clear that the ht ht yt and xt ht functions are all shallow in the sense that there exists no intermediate nonlinear hidden layer we can now consider different types of depth of an rnn by considering those transitions separately we may make the transition deeper by having one or more intermediate nonlinear layers between two consecutive hidden states and ht at the same time the function can be made deeper as described previously by plugging multiple intermediate nonlinear layers between the hidden state ht and the output yt each of these choices has a different implication deep function a model can exploit more structure from the input by making the function deep previous work has shown that representations of deep networks tend to better disentangle the underlying factors of variation than the original input 
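Returning to the conventional RNN defined above, a one-step numpy sketch of the transition and output equations might look as follows; tanh stands in for the saturating nonlinearities mentioned in the text, and for next-symbol prediction the output nonlinearity would in practice be a softmax.

```python
import numpy as np

def rnn_step(x_t, h_prev, W, U, V, phi=np.tanh, psi=np.tanh):
    """One step of the conventional RNN:
       h_t = phi(W h_{t-1} + U x_t),   y_t = psi(V h_t),
    where W, U, V are the transition, input, and output matrices."""
    h_t = phi(W @ h_prev + U @ x_t)
    y_t = psi(V @ h_t)
    return h_t, y_t
```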
goodfellow et glorot et and flatten the manifolds near which the data concentrate bengio et we hypothesize that such representations should make it easier to learn the temporal structure between successive time steps because the relationship between abstract features can generally be expressed more easily this has been for instance illustrated by the recent work mikolov et showing that word embeddings from neural language models tend to be related to their temporal neighbors by simple algebraic relationships with the same type of relationship adding a vector holding over very different regions of the space allowing a form of analogical reasoning this approach of making the function deeper is in the line with the standard practice of replacing input with extracted features in order to improve the performance of a machine learning model see bengio recently chen and deng reported that a better speech recognition performance could be achieved by employing this strategy although they did not jointly train the deep function together with other parameters of an rnn deep function a deep function can be useful to disentangle the factors of variations in the hidden state making it easier to predict the output this allows the hidden state of the model to be more compact and may result in the model being able to summarize the history of previous inputs more efficiently let us denote an rnn with this deep function a deep output rnn instead of having feedforward intermediate layers between the hidden state and the output et al proposed to replace the output layer with a conditional yt yt yt yt yt z ht xt a rnn ht ht xt b xt b dt s ht xt c zt ht xt d stacked rnn figure illustrations of four different recurrent neural networks rnn a a conventional rnn b deep transition dt rnn b with shortcut connections c deep transition deep output dot rnn d stacked rnn erative model such as restricted boltzmann machines or neural autoregressive distribution estimator larochelle and murray in this paper we only consider feedforward intermediate layers deep transition the third knob we can play with is the depth of the transition the state transition between the consecutive hidden states effectively adds a new input to the summary of the previous inputs represented by the hidden state previous work with rnns has generally limited the architecture to a shallow operation affine transformation followed by an nonlinearity instead we argue that this procedure of constructing a new summary or a hidden state from the combination of the previous one and the new input should be highly nonlinear this nonlinear transition could allow for instance the hidden state of an rnn to rapidly adapt to quickly changing modes of the input while still preserving a useful summary of the past this may be impossible to be modeled by a function from the family of generalized linear models however this highly nonlinear transition can be modeled by an mlp with one or more hidden layers which has an universal approximator property see hornik et an rnn with this deep transition will be called a deep transition rnn throughout remainder of this paper this model is shown in fig b this approach of having a deep transition however introduces a potential problem as the introduction of deep transition increases the number of nonlinear steps the gradient has to traverse when propagated back in time it might become more difficult to train the model to capture longterm dependencies bengio et one possible way to address this difficulty is to introduce shortcut 
connections see raiko et in the deep transition where the added shortcut connections provide shorter paths skipping the intermediate layers through which the gradient is propagated back in time we refer to an rnn having deep transition with shortcut connections by dt s see fig b furthermore we will call an rnn having both a deep function and a deep transition a deep output deep transition rnn see fig c for the illustration of if we consider shortcut connections as well in the hidden to hidden transition we call the resulting model dot s an approach similar to the deep transition has been proposed recently by pinheiro and collobert in the context of parsing a static scene they introduced a recurrent convolutional neural network rcnn which can be understood as a recurrent network whose the transition between consecutive hidden states and input to hidden state is modeled by a convolutional neural network the rcnn was shown to speed up scene parsing and obtained the result in stanford background and sift flow datasets ko and dieter proposed deep transitions for gaussian process models earlier valpola and karhunen used a deep neural network to model the state transition in a nonlinear dynamical model stack of hidden states an rnn may be extended deeper in yet another way by stacking multiple recurrent hidden layers on top of each other schmidhuber el hihi and bengio jaeger graves we call this model a stacked rnn srnn to distinguish it from the other proposed variants the goal of a such model is to encourage each recurrent level to operate at a different timescale it should be noticed that the and the srnn extend the conventional shallow rnn in different aspects if we look at each recurrent level of the srnn separately it is easy to see that the transition between the consecutive hidden states is still shallow as we have argued above this limits the family of functions it can represent for example if the structure of the data is sufficiently complex incorporating a new input frame into the summary of what had been seen up to now might be an arbitrarily complex function in such a case we would like to model this function by something that has universal approximator properties as an mlp the model can not rely on the higher layers to do so because the higher layers do not feed back into the lower layer on the other hand the srnn can deal with multiple time scales in the input sequence which is not an obvious feature of the the and the srnn are however orthogonal in the sense that it is possible to have both features of the and the srnn by stacking multiple levels of to build a stacked which we do not explore more in this paper formal descriptions of deep rnns here we give a more formal description on how the deep transition recurrent neural network dtrnn and the deep output rnn as well as the stacked rnn are implemented deep transition rnn we noticed from the state transition equation of the dynamical system simulated by rnns in eq that there is no restriction on the form of fh hence we propose here to use a multilayer perceptron to approximate fh instead in this case we can implement fh by l intermediate layers such that ht fh xt wl u xt where and wl are the nonlinear function and the weight matrix for the layer this rnn with a multilayered transition function is a deep transition rnn an illustration of building an rnn with the deep state transition function is shown in fig b in the illustration the state transition function is implemented with a neural network with a single intermediate layer 
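The deep transition variant replaces this shallow map by an MLP between consecutive states. The sketch below is one plausible reading of the construction just described, with an optional shortcut matrix standing in for the shortcut connections of the DT(S)-RNN; the exact wiring of the shortcut path is an assumption made for illustration.

```python
import numpy as np

def dt_rnn_step(x_t, h_prev, U, Ws, phi=np.tanh, shortcut=None):
    """One step of a deep transition RNN: an MLP maps (h_{t-1}, x_t) to h_t,
       h_t = phi(W_L ... phi(W_2 phi(W_1 h_{t-1} + U x_t)) [+ S h_{t-1}]),
    where Ws = [W_1, ..., W_L] and the optional matrix S provides a shortcut
    connection skipping the intermediate layers (DT(S)-RNN variant)."""
    z = phi(Ws[0] @ h_prev + U @ x_t)        # first intermediate layer
    for i in range(1, len(Ws)):
        pre = Ws[i] @ z                      # deeper layers of the transition MLP
        if i == len(Ws) - 1 and shortcut is not None:
            pre = pre + shortcut @ h_prev    # short path to ease gradient flow back in time
        z = phi(pre)
    return z                                 # h_t
```

With a single matrix in Ws this reduces to the conventional shallow transition, which is one way to see that the deep transition strictly generalizes it.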
This formulation allows the RNN to learn a highly nonlinear transition between consecutive hidden states. Deep output RNN. Similarly, we can use a multilayer perceptron with $L$ intermediate layers to model the output function $f_o$ from the earlier equation, such that
$$y_t = f_o(h_t) = \phi_L\!\left(V_L\, \phi_{L-1}\!\left(\cdots \phi_1\!\left(V_1 h_t\right)\right)\right),$$
where $\phi_l$ and $V_l$ are the nonlinear function and the weight matrix for the $l$-th layer. An RNN implementing this kind of multilayered output function is a deep output recurrent neural network. Fig. c draws a deep output, deep transition RNN implemented using both the deep transition and the deep output, with a single intermediate layer each. Stacked RNN. The stacked RNN (Schmidhuber; El Hihi and Bengio) has multiple levels of transition functions, defined by
$$h_t^{(l)} = f_h^{(l)}\!\left(h_t^{(l-1)}, h_{t-1}^{(l)}\right) = \phi_l\!\left(W_l\, h_{t-1}^{(l)} + U_l\, h_t^{(l-1)}\right),$$
where $h_t^{(l)}$ is the hidden state of the $l$-th level at time $t$; when $l = 1$, the state is computed using $x_t$ instead of $h_t^{(l-1)}$. The hidden states of all the levels are recursively computed from the bottom level $l = 1$. Once the hidden state of the top level is computed, the output can be obtained using the usual formulation given earlier. Alternatively, one may use all the hidden states to compute the output (Hermans and Schrauwen), and each hidden state at each level may also be made to depend on the input as well (Graves); both of these can be considered approaches using the shortcut connections discussed earlier. An illustration of the stacked RNN is in Fig. d. Another perspective: neural operators. In this section, we briefly introduce a novel approach with which the already discussed deep transition (DT) and deep output (DO) recurrent neural networks may be built. We call this approach, which is based on building an RNN from a set of predefined neural operators, an operator-based framework. In this framework, one first defines a set of operators, each of which is implemented by a multilayer perceptron (MLP). For instance, a plus operator ⊕ may be defined as a function receiving two vectors x and h and returning their summary x ⊕ h, where we may constrain that the dimensionalities of h and x ⊕ h are identical. Additionally, we can define another operator B, which predicts the most likely output symbol given a summary h, returning B(h). It is possible to define many other operators, but in this paper we stick to these two, which are sufficient to express all the proposed types of RNNs. (Figure: a view of an RNN under the operator-based framework; ⊕ and B are the plus and predict operators, respectively.) It is clear that the plus operator ⊕ and the predict operator B correspond to the transition function and the output function in the earlier equations. Thus, at each step, an RNN can be thought of as performing the plus operator to update the hidden state given an input, $h_t = x_t \oplus h_{t-1}$, and then the predict operator to compute the output, $y_t = B(h_t) = B(x_t \oplus h_{t-1})$; see the figure for an illustration of how an RNN can be understood from this framework. Each operator can be parameterized as an MLP with one or more hidden layers, hence a neural operator, since we cannot simply expect the operation to be linear with respect to the input vectors. By using an MLP to implement the operators, the proposed deep transition, deep output RNN arises naturally. This framework provides insight into how the constructed RNN may be regularized; for instance, one may regularize the model such that the plus operator is commutative. However, in this paper we do not explore this approach further. Note that this is different from Mikolov et al., where the learned embeddings of words happened to be suitable for algebraic operators; the framework proposed here is rather geared toward learning these operators directly. Experiments. We train the four types of RNNs described in
this paper on a number of benchmark datasets to evaluate their performance for each benchmark dataset we try the task of predicting the next symbol the task of predicting the next symbol is equivalent to the task of modeling the distribution over a sequence for each sequence xt we decompose it into p xt p t y p xt and each term on the side will be replaced with a single timestep of an rnn in this setting the rnn predicts the probability of the next symbol xt in the sequence given the all previous symbols then we train the rnn by maximizing the we try this task of modeling the joint distribution on three different tasks polyphonic music prediction and language modeling we test the rnns on the task of polyphonic music prediction using three datasets which are nottingham jsb chorales and musedata et on the task of characterlevel and language modeling we use penn treebank corpus marcus et model descriptions we compare the conventional recurrent neural network rnn deep transition rnn with shortcut connections in the transition mlp dt s deep rnn with shortcut connections in the hidden to hidden transition mlp dot s and stacked rnn srnn see fig a d for the illustrations of these models notthingam music jsb chorales musedata language units parameters units parameters units parameters units parameters units parameters rnn dt s dot s srnn layers table the sizes of the trained models we provide the number of hidden units as well as the total number of parameters for dt s the two numbers provided for the number of units mean the size of the hidden state and that of the intermediate layer respectively for dot s the three numbers are the size of the hidden state that of the intermediate layer between the consecutive hidden states and that of the intermediate layer between the hidden state and the output layer for srnn the number corresponds to the size of the hidden state at each level the size of each model is chosen from a limited set to minimize the validation error for each polyphonic music task see table for the final models in the case of language modeling tasks we chose the size of the models from and for and tasks respectively in all cases we use a logistic sigmoid function as an nonlinearity of each hidden unit only for the language modeling we used rectified linear units glorot et for the intermediate layers of the output function which gave lower validation error training we use stochastic gradient descent sgd and employ the strategy of clipping the gradient proposed by pascanu et al training stops when the validation cost stops decreasing polyphonic music prediction for nottingham and musedata datasets we compute each gradient step on subsequences of at most steps while we use subsequences of steps for jsb chorales we do not reset the hidden state for each subsequence unless the subsequence belongs to a different song than the previous subsequence the cutoff threshold for the gradients is set to the hyperparameter for the learning rate is tuned manually for each dataset we set the hyperparameter to for nottingham for musedata and for jsb chroales they correspond to two epochs a single epoch and a third of an epoch respectively the weights of the connections between any pair of hidden layers are sparse having only nonzero incoming connections per unit see sutskever et each weight matrix is rescaled to have a unit largest singular value pascanu et the weights of the connections between the input layer and the hidden state as well as between the hidden state and the output layer are initialized 
randomly from the white gaussian distribution with its standard deviation fixed to and respectively in the case of deep output functions dot s the weights of the connections between the hidden state and the intermediate layer are sampled initially from the white gaussian distribution of standard deviation in all cases the biases are initialized to to regularize the models we add white gaussian noise of standard deviation to each weight parameter every time the gradient is computed graves language modeling we used the same strategy for initializing the parameters in the case of language modeling for modeling the standard deviations of the white gaussian distributions for the weights and the weights we used and respectively while those hyperparameters were both for modeling in the case of dot s we sample the weights of between the hidden state and the rectifier intermediate layer of the output function from the white gaussian distribution of standard deviation when using rectifier units language modeling we fix the biases to in language modeling the learning rate starts from an initial value and is halved each time the validation cost does not decrease significantly mikolov et we do not use any regularization for the modeling but for the modeling we use the same strategy of adding weight noise as we do with the polyphonic music prediction for all the tasks polyphonic music prediction and language modeling the stacked rnn and the dot s were initialized with the weights of the conventional rnn and the dt s which is similar to pretraining of a feedforward neural network see hinton and salakhutdinov we use a ten times smaller learning rate for each parameter that was pretrained as either rnn or dt s notthingam jsb chorales musedata rnn dt s dot s srnn dot s table the performances of the four types of rnns on the polyphonic music prediction the numbers represent negative on test sequences we obtained these results using dot s with lp units in the deep transition maxout units in the deep output function and dropout gulcehre et result and analysis polyphonic music prediction the on the test set of each data are presented in the first four columns of tab we were able to observe that in all cases one of the proposed deep rnns outperformed the conventional shallow rnn though the suitability of each deep rnn depended on the data it was trained on the best results obtained by the dt s on notthingam and jsb chorales are close to but we use at each update the following learning rate max where and indicate tively when the learning rate starts decreasing and how quickly the learning rate decreases in the experiment we set to coincide with the time when the validation error starts increasing for the first time worse than the result obtained by rnns trained with the technique of fast dropout fd which are and respectively bayer et in order to quickly investigate whether the proposed deeper variants of rnns may also benefit from the recent advances in feedforward neural networks such as the use of activation and the method of dropout we have built another set of dot s that have the recently proposed lp units gulcehre et in deep transition and maxout units goodfellow et in deep output function furthermore we used the method of dropout hinton et instead of weight noise during training similarly to the previously trained models we searched for the size of the models as well as other learning hyperparameters that minimize the validation performance we however did not pretrain these models the results obtained by 
the dot s having lp and maxout units trained with dropout are shown in the last column of tab on every music dataset the performance by this model is significantly better than those achieved by all the other models as well as the best results reported with recurrent neural networks in bayer et this suggests us that the proposed variants of deep rnns also benefit from having activations and using dropout just like feedforward neural networks we reported these results and more details on the experiment in gulcehre et we however acknowledge that the results for the both datasets were obtained using an rnn combined with a conditional generative model such as restricted boltzmann machines or neural autoregressive distribution estimator larochelle and murray in the output et rnn dt s dot s srnn table the performances of the four types of rnns on the tasks of language modeling the numbers represent and perplexity computed on test sequence respectively for the and modeling tasks the results obtained with shallow rnns the results obtained with rnns having term memory units language modeling on tab we can see the perplexities on the test set achieved by the all four models we can clearly see that the deep rnns dt s dot s and srnn outperform the conventional shallow rnn significantly on these tasks dot s outperformed all the other models which suggests that it is important to have highly nonlinear mapping from the hidden state to the output in the case of language modeling the results by both the dot s and the srnn for modeling surpassed the previous best performance achieved by an rnn with long memory lstm units graves as well as that by a shallow rnn with a larger hidden state mikolov et even when both of them used dynamic the results we report here are without dynamic evaluation for modeling the results were obtained using an optimization method with a specific type of rnn architecture called mrnn mikolov et or a regularization technique called adaptive weight noise graves our result however is better than the performance achieved by conventional shallow rnns without any of those advanced note that it is not trivial to use activation functions in conventional rnns as this may cause the explosion of the activations of hidden states however it is perfectly safe to use activation functions at the intermediate layers of a deep rnn with deep transition reported by mikolov et al using mrnn with optimization technique reported by mikolov et al using the dynamic evaluation reported by graves using the dynamic evaluation and weight noise dynamic evaluation refers to an approach where the parameters of a model are updated as the data is predicted regularization methods mikolov et where they reported the best performance of using an rnn trained with the learning algorithm martens and sutskever discussion in this paper we have explored a novel approach to building a deep recurrent neural network rnn we considered the structure of an rnn at each timestep which revealed that the relationship between the consecutive hidden states and that between the hidden state and output are shallow based on this observation we proposed two alternative designs of deep rnn that make those shallow relationships be modeled by deep neural networks furthermore we proposed to make use of shortcut connections in these deep rnns to alleviate a problem of difficult learning potentially introduced by the increasing depth we empirically evaluated the proposed designs against the conventional rnn which has only a single hidden layer and 
against another approach of building a deep rnn stacked rnn graves on the task of polyphonic music prediction and language modeling the experiments revealed that the rnn with the proposed deep transition and deep output dot s rnn outperformed both the conventional rnn and the stacked rnn on the task of language modeling achieving the result on the task of language modeling for polyphonic music prediction a different deeper variant of an rnn achieved the best performance for each dataset importantly however in all the cases the conventional shallow rnn was not able to outperform the deeper variants these results strongly support our claim that an rnn benefits from having a deeper architecture just like feedforward neural networks the observation that there is no clear winner in the task of polyphonic music prediction suggests us that each of the proposed deep rnns has a distinct characteristic that makes it more or less suitable for certain types of datasets we suspect that in the future it will be possible to design and train yet another deeper variant of an rnn that combines the proposed models together to be more robust to the characteristics of datasets for instance a stacked dt s may be constructed by combining the dt s and the srnn in a quick additional experiment where we have trained dot s constructed using nonsaturating nonlinear activation functions and trained with the method of dropout we were able to improve the performance of the deep recurrent neural networks on the polyphonic music prediction tasks significantly this suggests us that it is important to investigate the possibility of applying recent advances in feedforward neural networks such as novel activation functions and the method of dropout to recurrent neural networks as well however we leave this as future research one practical issue we ran into during the experiments was the difficulty of training deep rnns we were able to train the conventional rnn as well as the dt s easily but it was not trivial to train the dot s and the stacked rnn in this paper we proposed to use shortcut connections as well as to pretrain them either with the conventional rnn or with the dt s we however believe that learning may become even more problematic as the size and the depth of a model increase in the future it will be important to investigate the root causes of this difficulty and to explore potential solutions we find some of the recently introduced approaches such as advanced regularization methods pascanu et and advanced optimization algorithms see pascanu and bengio martens to be promising candidates acknowledgments we would like to thank the developers of theano bergstra et bastien et we also thank justin bayer for his insightful comments on the paper we would like to thank nserc compute canada and calcul for providing computational resources razvan pascanu is supported by a deepmind fellowship kyunghyun cho is supported by fics finnish doctoral programme in computational sciences and the academy of finland finnish centre of excellence in computational inference research coin references bastien lamblin pascanu bergstra goodfellow bergeron bouchard and bengio y theano new features and speed improvements deep learning and unsupervised feature learning nips workshop bayer osendorfer korhammer chen urban and van der smagt on fast dropout and its applicability to recurrent networks bengio y learning deep architectures for ai found trends mach bengio simard and frasconi learning dependencies with gradient descent is difficult ieee 
transactions on neural networks bengio mesnil dauphin and rifai better mixing via deep representations in icml bergstra breuleux bastien lamblin pascanu desjardins turian wardefarley and bengio y theano a cpu and gpu math expression compiler in proceedings of the python for scientific computing conference scipy oral presentation bengio and vincent modeling temporal dependencies in sequences application to polyphonic music generation and transcription in icml chen and deng a new method for learning deep recurrent neural networks delalleau and bengio y shallow deep networks in nips el hihi and bengio y hierarchical recurrent neural networks for dependencies in nips mit press glorot bordes and bengio y deep sparse rectifier neural networks in aistats glorot bordes and bengio y domain adaptation for sentiment classification a deep learning approach in icml goodfellow le saxe and ng a measuring invariances in deep networks in nips pages goodfellow mirza courville and bengio y maxout networks in icml graves a practical variational inference for neural networks in zemel bartlett pereira and weinberger editors advances in neural information processing systems pages graves a generating sequences with recurrent neural networks graves liwicki fernandez bertolami bunke and schmidhuber j a novel connectionist system for improved unconstrained handwriting recognition ieee transactions on pattern analysis and machine intelligence graves mohamed and hinton speech recognition with deep recurrent neural networks icassp gulcehre cho pascanu and bengio y pooling for deep feedforward and recurrent neural networks hermans and schrauwen b training and analysing deep recurrent neural networks in advances in neural information processing systems pages hinton deng dahl mohamed jaitly senior vanhoucke nguyen sainath and kingsbury b deep neural networks for acoustic modeling in speech recognition ieee signal processing magazine hinton and salakhutdinov reducing the dimensionality of data with neural networks science hinton srivastava krizhevsky sutskever and salakhutdinov improving neural networks by preventing of feature detectors technical report hornik stinchcombe and white multilayer feedforward networks are universal approximators neural networks jaeger discovering multiscale dynamical features with hierarchical echo state networks technical report jacobs university ko and dieter bayesian filtering using gaussian process prediction and observation models autonomous robots larochelle and murray i the neural autoregressive distribution estimator in proceedings of the fourteenth international conference on artificial intelligence and statistics aistats volume of jmlr w cp le roux and bengio y deep belief networks are compact universal approximators neural computation marcus marcinkiewicz and santorini b building a large annotated corpus of english the penn treebank computational linguistics martens j deep learning via optimization in bottou and littman editors proceedings of the international conference on machine learning pages acm martens and sutskever i learning recurrent neural networks with optimization in proc icml acm mikolov statistical language models based on neural networks thesis brno university of technology mikolov burget cernocky and khudanpur recurrent neural network based language model in proceedings of the annual conference of the international speech communication association interspeech volume pages international speech communication association mikolov kombrink burget cernocky and khudanpur 
extensions of recurrent neural network language model in proc ieee international conference on acoustics speech and signal processing icassp mikolov sutskever deoras le kombrink and cernocky j subword language modeling with neural networks unpublished mikolov sutskever deoras le kombrink and cernocky j subword language modeling with neural networks preprint http mikolov sutskever chen corrado and dean j distributed representations of words and phrases and their compositionality in advances in neural information processing systems pages mikolov chen corrado and dean j efficient estimation of word representations in vector space in international conference on learning representations workshops track pascanu and bengio y revisiting natural gradient for deep networks technical report pascanu mikolov and bengio y on the difficulty of training recurrent neural networks in icml pascanu montufar and bengio y on the number of response regions of deep feed forward networks with linear activations pinheiro and collobert recurrent convolutional neural networks for scene labeling in proceedings of the international conference on machine learning pages raiko valpola and lecun y deep learning made easier by linear transformations in perceptrons in proceedings of the fifteenth internation conference on artificial intelligence and statistics aistats volume of jmlr workshop and conference proceedings pages jmlr w cp rumelhart hinton and williams j learning representations by backpropagating errors nature schmidhuber j learning complex extended sequences using the principle of history compression neural computation sutskever martens and hinton generating text with recurrent neural networks in getoor and scheffer editors proceedings of the international conference on machine learning icml pages new york ny usa acm sutskever martens dahl and hinton on the importance of initialization and momentum in deep learning in icml valpola and karhunen j an unsupervised ensemble learning method for nonlinear dynamic models neural
significantly improving lossy compression for scientific data sets based on multidimensional prediction and quantization jun dingwen tao sheng di zizhong chen and franck university of california riverside ca usa chen argonne national laboratory il usa cappello university of illinois at il usa s hpc applications are producing extremely large amounts of data such that data storage and analysis are becoming more challenging for scientific research in this work we design a new lossy compression algorithm for scientific data our key contribution is significantly improving the prediction hitting rate or prediction accuracy for each data point based on its nearby data values along multiple dimensions we derive a series of multilayer prediction formulas and their unified formula in the context of data compression one serious challenge is that the data prediction has to be performed based on the preceding decompressed values during the compression in order to guarantee the error bounds which may degrade the prediction accuracy in turn we explore the best layer for the prediction by considering the impact of compression errors on the prediction accuracy moreover we propose an adaptive quantization encoder which can further improve the prediction hitting rate considerably the data size can be reduced significantly after performing the variablelength encoding because of the uneven distribution produced by our quantization encoder we evaluate the new compressor on production scientific data sets and compare it with many other compressors gzip fpzip zfp and isabela experiments show that our compressor is the best in class especially with regard to compression factors or bitrates and compression errors including rmse nrmse and psnr our solution is better than the solution by more than a increase in the compression factor and reduction in the normalized root mean squared error on average with reasonable error bounds and bitrates i ntroduction one of the most challenging issues in performing scientific simulations or running parallel applications today is the vast amount of data to store in disks to transmit on networks or to process in postanalysis the accelerated cosmology code hacc for example can generate pb of data for a single simulation yet a system such as the mira supercomputer at the argonne leadership computing facility has only pb of file system storage and a single user can not request of the total storage capacity for a simulation climate research also deals with a large volume of data during simulation and postanalysis as indicated by nearly pb of data were produced by the community earth system model for the coupled model intercomparison project cmip which further introduced tb of postprocessing data submitted to the earth system grid estimates of the raw data requirements for the project exceed pb data compression offers an attractive solution for largescale simulations and experiments because it enables significant reduction of data size while keeping critical information available to preserve discovery opportunities and analysis accuracy lossless compression preserves of the information however it suffers from limited compression factor up to in general which is far less than the demand of scientific experiments and simulations therefore only lossy compression with error controls can fulfill user needs in terms of data accuracy and of execution demand the key challenge in designing an efficient errorcontrolled lossy compressor for scientific research applications is the large diversity 
of scientific data many of the existing lossy compressors such as and isabela try to predict the data by using method or spline interpolation method the effectiveness of these compressors highly relies on the smoothness of the data in local regions however simulation data often exhibits fairly sharp or spiky data changes in small data regions which may significantly lower the prediction accuracy of the compressor and eventually degrade the compression quality numarck and ssem both adopt a quantization step in terms of the distribution of the data or quantile which can mitigate the dependence of smoothness of data however they are unable to strictly control the compression errors based on the bounds zfp uses an optimized orthogonal data transform that does not strongly rely on the data smoothness either however it requires an alignment step which might not respect the user error bound when the data value range is huge as shown later in the paper and its optimized transform coefficients are highly dependent on the compression data and can not be modified by users in this work we propose a novel lossy compression algorithm that can deal with the irregular data with spiky changes effectively will still strictly respecting error bounds specifically the critical contributions are threefold we propose a multidimensional prediction model that can significantly improve the prediction hitting rate or prediction accuracy for each data point based on its nearby data values in multiple dimensions unlike previous work that focuses only on prediction extending the prediction to multiple dimensions is challenging prediction requires solving more complicated surface equation system involving many more variables which become intractable especially when the number of data points used in the prediction is relatively high however since the data used in the prediction must be preceding decompressed values in order to strictly control the compression errors the prediction accuracy is degraded significantly if many data points are selected for the prediction in this paper not only do we derive a generic formula for the multidimensional prediction model but we also optimize the number of data points used in the prediction by an analysis with realworld data cases we design an adaptive quantization and encoding model in order to optimize the compression quality such an optimization is challenging in that we need to design the adaptive solution based on very careful observation on masses of experiments and the encoding has to be tailored and reimplemented to suit variable numbers of quantization intervals we implement the new compression algorithm namely and release the source code under a bsd license we comprehensively evaluate the new compression method by using multiple production scientific data sets across multiple domains such as climate simulation scientific research and hurricane simulation we compare our compressor with five compressors gzip fpzip zfp and isabela experiments show that our compressor is the best in class especially with regard to both compression factors or and compression errors including rmse nrmse and psnr on the three tested data sets our solution is better than the solution by nearly a increase in the compression factor and reduction in the normalized root mean squared error on average the rest of the paper is organized as follows in section ii we formulate the lossy compression issue we describe our novel compression method in section iii an optimized multidimensional prediction model 
with best-layer analysis, and Section IV an adaptive error-controlled quantization and variable-length encoding model. In Section V, we evaluate the compression quality using multiple production scientific data sets. In Section VI, we discuss the use of our compressor in parallel for large data sets and perform an evaluation on a supercomputer. In Section VII, we discuss the related work, and in Section VIII we conclude the paper with a summary and present our future work. II. Problem and Metrics Description. In this paper, we focus mainly on the design and implementation of a lossy compression algorithm for scientific data sets with given error bounds in high-performance computing (HPC) applications. These applications can generate multiple snapshots that contain many variables, and each variable has a specific data type, for example, multidimensional floating-point array or string data. Since the major type of scientific data is floating-point, we focus our lossy compression research on how to compress multidimensional floating-point data sets within reasonable error bounds. We also want to achieve better compression performance, measured by the following metrics: (1) pointwise compression error between the original and reconstructed data sets, for example, absolute error and relative error; (2) average compression error between the original and reconstructed data sets, for example, RMSE, NRMSE, and PSNR; (3) correlation between the original and reconstructed data sets; (4) compression factor or bit-rate; and (5) compression and decompression speed. We describe these metrics in detail below. Let us first define some necessary notations. Let the original multidimensional data set be $X = \{x_1, x_2, \ldots, x_N\}$, where each $x_i$ is a floating-point scalar, and let the reconstructed data set recovered by the decompression process be $\tilde{X} = \{\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_N\}$. We denote the value range of $X$ by $R_X$, that is, $R_X = x_{max} - x_{min}$. We now discuss the metrics we use in measuring the performance of a compression method. Metric 1: For data point $i$, let $e^{abs}_i = x_i - \tilde{x}_i$ be the absolute error, and let $e^{rel}_i = e^{abs}_i / R_X$ be the relative error. (Note that, unlike the pointwise relative error that is compared with each data value, this relative error is compared with the value range.) In our compression algorithm, one should set either one bound or both bounds for the absolute error and the relative error, depending on the compression accuracy requirement. The compression errors are guaranteed to be within the error bounds, which can be expressed as $|e^{abs}_i| \le eb_{abs}$ or $|e^{rel}_i| \le eb_{rel}$ for $1 \le i \le N$, where $eb_{abs}$ is the absolute error bound and $eb_{rel}$ is the relative error bound. Metric 2: To evaluate the average error in the compression, we first use the popular root mean squared error (RMSE),
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(e^{abs}_i\right)^2}.$$
Because of the diversity of variables, we further adopt the normalized RMSE (NRMSE),
$$NRMSE = \frac{RMSE}{R_X}.$$
The peak signal-to-noise ratio (PSNR) is another commonly used average-error metric for evaluating a lossy compression method, especially in visualization. It is calculated as
$$PSNR = 20\,\log_{10}\!\left(\frac{R_X}{RMSE}\right).$$
PSNR measures the size of the RMSE relative to the peak size of the signal. Logically, a lower value of RMSE or NRMSE means less error, but a higher value of PSNR represents less error. Metric 3: To evaluate the correlation between the original and reconstructed data sets, we adopt the Pearson correlation coefficient
$$\rho = \frac{cov(X, \tilde{X})}{\sigma_X\,\sigma_{\tilde{X}}},$$
where $cov(X, \tilde{X})$ is the covariance. This coefficient is a measurement of the linear dependence between two variables, giving a value between $-1$ and $+1$, where $1$ is total positive linear correlation. The APAX profiler suggests that the correlation coefficient between original and reconstructed data should be five nines (0.99999) or better. Metric 4: To evaluate the size reduction as a result of the compression, we use the compression factor CF, defined as
f ilesize forig cf f f ilesize fcomp already processed points including all colors to be predicted point first layer second layer third layer or the f ilesizebit fcomp br f n where f ilesizebit is the file size in bits and n is the data size the represents the amortized storage cost of each value for a data set the is bits per value before a compression while the will be less than bits per value after a compression also cf and br have a mathematical relationship as br f cf f so that a lower means a higher compression factor metric to evaluate the speed of compression we compare the throughput bytes per second based on the execution time of both compression and decompression with other compressors iii p rediction m odel based on m utidimensional s cientific data s ets in sections iii and iv we present our novel compression algorithm at a high level the compression process involves three steps predict every data value through our proposed multilayer prediction model adopt an quantization encoder with an adaptive number of intervals and perform a encoding technique based on the uneven distributed quantization codes in this section we first present our new multilayer prediction model designed for multidimensional scientific data sets then we give a solution for choosing the best layer for our multilayer prediction model we illustrate how our prediction model works using data sets as an example a prediction model for multidimensional scientific data sets consider a data set on a uniform grid of size m n where m is the size of second dimension and n is the size of first dimension we give each data point a global coordinate i j where i m and j n in our compression algorithm we process the data point by point from the low dimension to the high dimension assume that the coordinates of the current processing data point are and the processed data points are i j where i or i j as shown in figure the figure also shows our definition of layer around the processing data point we denote the data subset and by n since the data subset contains the layer from the first one to the nth one we call data now we build a prediction model for data sets using the n symmetric processed data points in the data subset to predict data fourth layer figure example of data set showing the processed processing data and the data in different layers of the prediction model first let us define a surface called the prediction surface with the maximum order of as follows i x i j f x y ai j x y the surface f x y has n coefficients so we can construct a linear system with n equations by using the coordinates and values of n data points and then solve this system for these n coefficients consequently we build the prediction surface f x y however the problem is that not every linear system has a solution which also means not every set of n data is able to be on the surface at the same time fortunately we demonstrate that the linear system constructed by the n data in can be solved with an explicit solution also we demonstrate that f can be expressed by the linear combination of the data values in now let us give the following theorem and proof theorem the n data in will determine a surface f x y shown in equation and the value of f equals v where nk is the binomial coefficient and v i j is the data value of i j in proof we transform the coordinate of each data point in to a new coordinate as then using their new coordinates and data values we can construct a linear system with n equations as i x v ai j where let us denote f as 
follows x n n k f i p for any coefficient al m v ai j only has al m which is one term containing also from equations and al m f contains and because x n x x table i f ormulas of layer prediction for two dimensional al m data sets n n n n x n n n x n n for l m either l or m is smaller than from the theory of finite differences p also i ni p x for any polynomial p x of p degree less than n so either or p therefore f contains al m so l p f al m and f n n v we transform the current coordinate to the previous one reversely namely thus f n n v from this theorem we know that the value of on the prediction surface f can be expressed by the linear combination of the data values in hence we can use the value of f as our predicted value for v in other words we build our prediction model using the data values in as follows f n n v we call this prediction model using data subset the prediction model consequently our proposed model can be called a multilayer prediction model also we can derive a generic formula of the multilayer prediction model for any dimensional data sets because of space limitations we give the formula as follows kd d x y f xd kj kd n kj v xd kd where d is the dimensional size of the data set and n represents the used in the prediction model note that lerenzo predictor is a special case of our multidimensional prediction model when n prediction formula f v v v f v v v f v v f v v analysis of the best layer for multilayer prediction model in subsection we developed a general prediction model for multidimensional data sets based on this model we need to answer another critical question how many layers should we use for the prediction model during the compression process in other words we want to find the best n for equation why does there have to exist a best n we will use twodimensional data sets to explain we know that a better n can result in a more accurate data prediction and a more accurate prediction will bring us a better compression performance including improvements in compression factor compression error and speed on the one hand a more accurate prediction can be achieved by increasing the number of layers which will bring more useful information along multiple dimensions on the other hand we also note that data from further distance will bring more uncorrelated information noise into the prediction which means that too many layers will degrade the accuracy of our prediction therefore we infer that there has to exist a best number of layers for our prediction model how can we get the best n for our multilayer prediction model for a data set we first need to get prediction formulas for different layers by substituting and so forth into the generic formula of our prediction model as shown in equation the formulas are shown in table i then we introduce a term called the prediction hitting rate which is the proportion of the predictable data in the whole data set we define a data point as predictable data if the difference between its original value and predicted value is not larger than the error bound we denote the ph prediction hitting rate by rp h nn where np h is the number of predictable data points and n is the size of the data set in the climate simulation atm data sets example the hitting rates are calculated in table ii based on the prediction methods described above here the second column shows the prediction hitting rate by using the original data values orig denoted by rp h in this case prediction will be orig rp h decomp rp h quantization the design of our 
quantization is shown in figure first we calculate the predicted value by using the multilayer prediction model proposed in the preceding section we call this predicted value the predicted value represented by the red dot in fig then we expand values from the predicted value by scaling the error bound linearly we call these values predicted values represented by the orange dots in fig the distance between any two adjacent predicted values equals twice the error bound note that each predicted value will also be expanded one more error bound in both directions to form an interval with the length of twice the error bound this will ensure that all the intervals are not overlapped if the real value of the data point falls into a certain interval we mark it as predictable data and use its corresponding predicted value from the same interval to represent the real value in the compression in this case the difference between the real value and predicted value is always lower than the error bound however if the real value doesn t fall into any interval we mark the data point as unpredictable data since there are intervals we use codes to encode these intervals since all the predictable data can be real value predicted value error bound predicted value error bound error bound predicted value error bound predicted value iv aeqve a daptive e rror controlled q uantization and variable length e ncoding in this section we present our adaptive quantization and encoding model namely aeqve which can further optimize the compression quality first we introduce our quantization method which is completely different from the traditional one second using the same logic from subsection we develop an adaptive solution to optimize the number of intervals in the errorcontrolled quantization third we show the fairly uneven distribution produced by our quantization encoder finally we reduce the data size significantly by using the variablelength encoding technique on the quantization codes predicted value error bound more accurate than other layers if performing the prediction on the original data values however in order to guarantee that the compression error absolute or relative falls into the error bounds the compression algorithm must use the preceding decompressed data values instead of the original data values therefore the last column of table ii shows the hitting rate of the prediction by using decomp preceding decompressed data values denoted by rp h in this case prediction will become the best one for the compression algorithm on atm data sets since the best layer n is different scientific data sets may have different best layers thus we give users an option to set the value of layers in the compression process the default value in our compressor is n prediction model based on original and decompressed data values on atm data sets quan za on code table ii p rediction hitting rate using different layers for the figure design of quantization based on linear scaling of the error bound encoded as the code of its corresponding interval and since all the unpredictable data will be encoded as another code we need m bits to encode all codes for example we use the codes of to encode predictable data and use the code of to encode unpredictable data this process is quantization encoding note that our proposed quantization is totally different from the traditional quantization technique vector quantization used in previous lossy compression such as ssem and numarck in two properties uniformity and the vector quantization 
method is nonuniform whereas our quantization is uniform specifically in vector quantization the more concentratedly the data locates the shorter the quantization interval will be while the length of our quantization intervals is fixed twice the error bound therefore in vector quantization the compression error can not be controlled for every data point especially the points in the intervals with the length longer than twice the error bound thus we call our quantization method as quantization the next question is how many quantization intervals should we use in the quantization we leave this question to subsection first we introduce a technique we will adopt after the quantization figure shows an example of the distribution of quantization codes produced by our quantization encoder which uses quantization intervals to represent predictable data from this figure we see that the distribution of quantization codes is uneven and that the degree of nonuniformity of the distribution depends on the accuracy of the previous prediction in information and coding theory a strategy called encoding is used to compress the nonuniform distribution source in encoding more common symbols will be generally represented using fewer bits than less common symbols for uneven distribution we can employ the encoding to reduce the data size significantly note that encoding is a process of lossless data compression specifically we use the most popular hi ng rate a quantization code encoding strategy huffman coding here we do not describe the huffman coding algorithm in detail but we note that huffman coding algorithm implemented in all the lossless compressors on the market can deal only with the source byte by byte hence the total number of the symbols is as higher as to in our case however we do not limit m to be no greater than hence if m is larger than more than quantization codes need to be compressed using the huffman coding thus in our compression we implement a highly efficient huffman coding algorithm that can handle a source with any number of quantization codes adaptive scheme for number of quantization intervals in subsection our proposed compression algorithm encodes the predictable data with its corresponding quantization code and then uses encoding to reduce the data size a question remaining how many quantization intervals should we use we use an m bit code to encode each data point and the unpredictable data will be stored after a reduction of analysis however even binaryrepresentation analysis can reduce the data size to a certain extent storing the unpredictable data point has much more overhead than storing the quantization codes therefore we should select a value for the number of quantization intervals that is as small as possible but can provide a sufficient prediction hitting rate note that the rate depends on the error bound as shown in figure if the error bound is too low ebrel the compression is close to lossless and achieving a high prediction hitting rate is difficult hence we focus our research on a reasonable range of error bounds ebrel now we introduce our adaptive scheme for the number of quantization intervals used in the compression algorithm figure shows the prediction hitting rate with different relative error bounds using different numbers of quantization intervals on atm data sets and hurricane data sets it indicates that the prediction hitting rate will suddenly descend at a certain error bound from over to a relatively low value for example if using quantization intervals the 
prediction hitting rate will drop from to at ebrel thus we consider that quantization intervals can cover only the relative error bound higher than however different numbers of quantization intervals have different capabilities to cover different error bounds generally more quantization intervals will cover lower error bounds baker et al point out that ebrel is enough for climate research simulation data sets such as atm data sets thus based on fig for atm data sets using intervals error bound b figure distribution produced by quantization encoder on atm data sets of a relative error bound and b relative error bound with quantization intervals m intervals a hi ng rate quantization code intervals error bound b figure prediction hitting rate with decreasing error bounds using different quantization intervals on a atm data sets and b hurricane data sets and intervals are good choices for ebrel and ebrel respectively but for hurricane data sets we suggest using intervals for ebrel and intervals for ebrel in our compression algorithm a user can determine the number of quantization intervals by setting a value for m quantization intervals however if it is unable to achieve a good prediction hitting rate smaller than in some error bounds our compression algorithm will suggest that the user increases the number of quantization intervals on contrast the user should reduce the number of quantization intervals until a further reduction results the prediction hitting rate smaller than in practice sometimes a user s requirement for compression accuracy is stable therefore the user can tune a good value for the number of quantization intervals and get optimized compression factors in the following compression algorithm in figure outlines our proposed lossy compression algorithm note that the input data is a ddimensional array of the size n n n n d where n is the size of the lowest dimension and n d is the size of the highest dimension before processing the data line our algorithm needs to compute the n d coefficients based on equation of the prediction method only once line while processing the data line first the algorithm computes the predicted value for the current processing data point using the prediction method line next the algorithm computes the difference between the original and predicted data value and encodes the data point using quantization codes line then if the data point is unpredictable the algorithm adopts the binaryrepresentation analysis line proposed in to reduce its storage lastly the algorithm computes and records the decompressed value for the future prediction line after processing each data point line the algorithm will compress the quantization codes using the encoding technique line and count the number of predictable data points line if the prediction hitting rate is lower than the threshold our algorithm will suggest that the user increases the quantization interval number line table iii d escription of data sets used in empirical performance evaluation atm aps hurricane o o o o o o o n o n figure proposed lossy compression algorithm using prediction and aeqve model the computation complexity of each step is shown in figure note that lines and are o since they depend only on the number of layers n used in the prediction rather than the data size n although line is o analysis is more than the other o operations such as lines and and hence increasing the prediction hitting rate can result in faster compression significantly and since we adopt the huffman coding algorithm for 
the encoding and the total number of the symbols quantization intervals is such as line is its theoretical complexity o n log o mn o n therefore the overall complexity is o n e mpirical p erformance e valuation in this section we evaluate our compression algorithm namely on various data sets atm data sets from climate simulations aps data sets from scientific research and hurricane data sets from a hurricane simulation as shown in table iii also we compare our compression algorithm with losseless gzip and fpzip and lossy compressors zfp and isabela based on the metrics mentioned in section iii we conducted our experiments on a single core of an imac with ghz intel core processors and gb of mhz ram data source climate simulation instrument hurricane simulation dimension size data size tb gb gb file number a compression factor first we evaluated our compression algorithm based on the compression factor figure compares the compression factors of and five other compression methods gzip fpzip zfp and isabela with reasonable relative error bounds namely and respectively specifically we ran different compressors using the absolute error bounds computed based on the above listed ratios and the global data value range and then checked the compression results figure indicates that has the best compression factor within these reasonable error bounds for example with ebrel for atm data sets the average compression factor of is which is higher than zfp s higher than s higher than isabela s higher than fpzip s and higher than gzip s for aps data sets the average compression factor of is which is higher than zfp s higher than s higher than isabela higher than fpzip s and higher than gzip s for the hurricane data sets the average compression factor of is which is higher than zfp s higher than s higher than isabela s higher than fpzip s and higher than gzip s note that isabela can not deal with some low error bounds thus we plot its compression factors only until it fails we note that zfp might not respect the error bound because of the alignment when the value range is huge for example the variable cdnumc in the atm data sets its value range is from to and the compression error of the data point with the value is if using zfp with ebabs when the value range is not such huge the maximum compression error of zfp is much lower than the input error bound whereas the maximum compression errors of the other lossy compression methods including are exactly the same as the input error bound this means that zfp is overconservative with regard to the user s accuracy requirement table v shows the maximum compression errors of and zfp with different error bounds for a fair comparison we also evaluated by setting its input error bound as the maximum compression error of zfp which will make the maximum compression errors of and zfp the same the comparison of compression factors is shown in figure for example with the same maximum compression error of our average compression factor is higher than zfp s on the atm data sets with the same maximum compression error of our average compression factor is higher than zfp s on the hurricane data sets we note that zfp is designed for a fixed whereas sz including and and isabela are designed for a fixed maximum compression error thus for a fair comparison we plot the curve for all the table iv c omparison of p earson correlation coefficient using various lossy compressors with different maximum compression errors maximum erel atm zfp maximum erel our compression fpzip gzip a fpzip 
gzip compression factor our compression relaave error bound b figure comparison of compression factors with same maximum compression error using and zfp on a atm and b hurricane data sets our compression fpzip gzip rela ve error bound c figure comparison of compression factors using different lossy compression methods on a atm b aps and c hurricane data sets with different error bounds table v m aximum compression errors normalized to value range using and zfp with different user set value range based error bounds b rela ve error bound compression factor compression factor our compression a relaave error bound our compression rela ve error bound ebrel compression factor compression factor hurricane zfp atm zfp hurricane zfp lossy compressors and compare the distortion quality with the same rate here rate means in and we will use the peak ratio psnr to measure the distortion quality psnr is calculated by the equation in decibel generally speaking in the curve the higher the more bits per value in compressed storage the higher the quality higher psnr of the reconstructed data after decompression figure shows the curves of the different lossy compressors on the three scientific data sets the figure indicates that our lossy compression algorithm has the best curve on the data sets atm and aps specifically when the equals cf for the atm data sets the psnr of is about db which is db higher than the zfp s db this db improvement in psnr represents an increase in accuracy or reduction in rmse of more than times also the accuracy of our compressor is more than times that of and times than of isabela for aps data sets the psnr of is about db which is db higher than zfp s db this db improvement in psnr represents an increase in accuracy of times also the accuracy of our compressor is times that of and times that of isabela for the hurricane data sets the curves illustrate that at low the psnr of is close to that of zfp in the other cases of bitrate higher than our psnr is better than zfp s specifically when the is our psnr is about db which is db higher times in accuracy than zfp s db and db higher times in accuracy than s db note that we test and show the cases only with the lower than for the three data sets which means the compression factors are higher than as we mentioned in section i some lossless compressors can provide a compression factor up to it is reasonable to assume that users are interested in lossy compression only if it provides a compression factor of or higher pearson correlation next we evaluated our compression algorithm based on the pearson correlation coefficient between table vi c ompression and decompression speeds s using and zfp with different value range based relative error bounds ebrel atm zfp comp decomp comp decomp aps zfp comp decomp comp decomp psnr db our compression rate a psnr db our compression rate b psnr db rate c figure using different lossy compression methods on a atm b aps and c hurricane data sets the original and the decompressed data table iv shows the pearson correlation coefficients using different lossy compression methods with different maximum compression errors because of space limitations we compare only with zfp and since from the previous evaluations they outperform isabela significantly we note that we use the maximum compression error of zfp as the input error bound of and to make sure that all three lossy compressors have the same maximum compression error from table iv we know that all three compressors have five nines or better coefficients 
marked with bold from to lower relative error bounds on the atm data sets and from to lower relative error bounds on the hurricane data sets these results mean has accuracy in the pearson correlation of decompressed data similar to that of zfp and speed now let us evaluate the compression and decompression speed of our compressor we evaluate the compression and decompression speed of different lossy compressors with different error bound in megabytes per hurricane zfp comp decomp comp decomp second first we compare the overall speed of with and isabela s for the atm and aps data sets on average our compressor is faster than and faster than isabela for the hurricane data sets on average is faster than and faster than isabela due to space limitations we do not show the specific values of and isabela we then compare the speed of and zfp table vi shows the compression and decompression speed of and zfp it illustrates that on average s compression is slower than zfp s and decompression is slower than zfp s our compression has not been optimized in performance because the primary objective was to reach high compression factors therefore we plan to optimize our compression for different architectures and data sets in the future autocorrelation of compression error finally we analyze the autocorrelation of the compression errors since some applications require the compression errors to be uncorrelated we evaluate the autocorrelation of the compression errors on the two typical variables in the atm data sets freqsh and snowhlnd the compression factors of freqsh and snowhlnd are and using with ebrel thus to some extent freqsh can represent relatively data sets while snowhlnd can represent relatively data sets figure shows the first autocorrelation coefficients of our and zfp s compression errors on these two variables it illustrates that on the freqsh the maximum autocorrelation coefficient of is which is much lower than zfp s however on the snowhlnd the maximum autocorrelation coefficient of is about which is higher than zfp s we also evaluate the autocorrelation of and zfp on the aps and hurricane data sets and observe that generally s autocorrelation is lower than zfp s on the relatively data sets whereas zfp s autocorrelation is lower than s on the relatively data sets we therefore plan to improve the autocorrelation of compression errors on the relatively data sets in the future the effect of compression error autocorrelation being application specific lossy compressor users might need to understand this effect before using one of the other compressor vi d iscussion in this section we first discuss the parallel use of our compressor for data sets we then perform an empirical performance evaluation on the full tb atm data sets using cores nodes each node with two intel xeon processors and gb memory and each processor has cores from the blues cluster at argonne our compression c zfp b our compression a table vii s trong scalability of parallel compression using with different number of processes on b lues zfp d figure autocorrelation analysis first coefficients of compression errors with increasing delays using our lossy compressor and zfp on variable freqsh a and b and variable snowhlnd c and d in atm data sets compression wri ng compressed data wri ng ini al data number of processes number of nodes comp speed speedup parallel efficiency table viii s trong scalability of parallel decompression using with different number of processes on b lues number of processes number of nodes decomp speed 
speedup parallel efficiency number of processes a decompression reading compressed data reading ini al data number of processes b figure comparison of time to and compressed data against time to initial data on blues parallel compression can be classified into two categories compression and compression our compressor can be easily used as an compressor embedded in a parallel application each process can a fraction of the data that is being held in its memory for compression an mpi program or a script can be used to load the data into multiple processes and run the compression separately on them atm data sets as shown in table iii for example have a total of files and aps data sets have files the users can load these files by multiple processes and run our compressor in parallel without communications we present the strong scalability of the parallel compression and decompression without the data time in table vii and viii with different scales ranging from to processes on the blues cluster in the experiments we set ebrel for all the compression the number of processes is increased in two stages at the first stage we launch one process per node and increase the ber of nodes until the maximum number we can request at the second stage we run the parallel compression on nodes while changing the number of processes per node we measure the time of without the time and use the maximum time among all the processes we test each experiment five times and use the average time to calculate their speeds speedup and parallel efficiency as shown in the tables the two tables illustrates that the parallel efficiency of our compressor can stay nearly from to processes which demonstrates that our have linear speedup with the number of processors however the parallel efficiency is decreased to about when the total number of processes is greater than more than two processes per node this performance degradation is due to node internal limitations note that the speeds of a singe process in table vii and viii are different from ones in table vi since we run the sequential and parallel compression on two different platforms figure compares the time to and the compressed data against the time to the initial data each bar represents the sum of time the compressed data and the initial data we normalize the sum to and plot a dash line at to ease the comparison it illustrates that the time of writing and reading initial data will be much longer than the time of writing and reading compressed data plus the time of compression and decompression on the blues when the number of processors is or more this demonstrates our compressor can effectively reduce the total time when dealing with the atm data sets we also note that the relative time spent in will increase with the number of processors because of inevitable bottleneck of the bandwidth when data simultaneously by many processes by contract our have linear speedup with the number of processors which means the performance gains should be greater with increasing scale vii r elated w ork scientific data compression algorithms fall into two categories losseless compression and lossy compression popular lossless compression algorithms include gzip and fpzip however the mainly limitation of the lossless compressors is their fairly low compression factor up to in general in order to improve the compression factor several lossy data compression algorithms were proposed in recent years isabela performs data compression by interpolation after sorting the data series but isabela 
has to use extra storage to record the original index for each data point because of the loss of the location information in the data series thus it suffers from a low compression factor especially for large numbers of data points lossy compressors using vector quantization such as numarck and ssem can not guarantee the compression error within the bound and have a limitation of the compression factor as demonstrated in the difference between numarck and ssem is that numarck uses vector quantization on the differences between adjacent two iterations for each data whereas ssem uses vector quantization on the high frequency data after wavelet transform zfp is a lossy compressor using alignment orthogonal block transform encoding however it might not respect the error bound when the data value range is huge viii c onclusion and f uture w ork in this paper we propose a novel lossy compression algorithm we evaluate our compression algorithm by using multiple production scientific data sets across multiple domains and we compare it with five compressors based on a series of metrics we have implemented and released our compressor under a bsd license the key contributions are listed below we derive a generic model for the multidimensional prediction and optimize the number of data points used in the prediction to achieve significant improvement in the prediction hitting rate we design an adaptive quantization and encoding model aeqve to deal effectively with the irregular data with spiky changes our average compression factor is more than compared with the compressor with reasonable error bounds and our average compression error has more than reduction over the with on the atm aps and hurricane data sets we encourage users to evaluate our lossy compressor and compare with existing compressors on more scientific data sets in the future work we plan to optimize our compression for different architectures and data sets we will also further improve the autocorrelation of our compression on the data sets with relatively high compression factors acknowledgments this research was supported by the exascale computing project ecp project number a collaborative effort of two doe organizations the office of science and the national nuclear security administration responsible for the planning and preparation of a capable exascale ecosystem including software applications hardware advanced system engineering and early testbed platforms to support the nation s exascale computing imperative the submitted manuscript has been created by uchicago argonne llc operator of argonne national laboratory argonne argonne a department of energy office of science laboratory is operated under contract no r eferences a simulation of a hurricane from the national center for atmospheric research http online austin advanced photon source synchrotron radiation news baker xu dennis levy nychka mickelson edwards vertenstein and wegener a methodology for evaluating the impact of data compression on climate simulation data in hpdc pages bernholdt bharathi brown chanchio chen chervenak cinquini drach i foster fox et al the earth system grid supporting the next generation of climate modeling research proceedings of the ieee brenner and scott the mathematical theory of finite element methods volume springer science business media chen son hendrix agrawal liao and choudhary numarck machine learning algorithm for resiliency and checkpointing in sc pages community earth simulation model cesm http online deutsch gzip file format specification 
version di and cappello fast lossy hpc data compression with sz in ipdps pages gleckler durack stouffer johnson and forest global ocean heat uptake doubles in recent decades nature climate change ibarria lindstrom rossignac and szymczak compression and decompression of large ndimensional scalar fields in computer graphics forum volume pages wiley online library lakshminarasimhan shah ethier ku chang klasky latham ross and samatova isabela for effective in situ compression of scientific data concurrency and computation practice and experience lindstrom compressed arrays tvcg lindstrom and isenburg fast and efficient compression of data tvcg ratanaworabhan ke and burtscher fast lossless compression of scientific data in dcc pages sasaki sato endo and matsuoka exploration of lossy compression for in ipdps pages wegener universal numerical encoder and profiler reduces computing s memory wall with software fpga and soc implementations in dcc page ziv and lempel a universal algorithm for sequential data compression ieee transactions on information theory
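The evaluation above compares compressors in terms of compression factor, rate (bits per value), maximum compression error normalized to the value range, PSNR, and the Pearson correlation between original and decompressed data, but the extracted text has dropped the defining equations and numbers. The following is a minimal sketch of how these standard metrics can be computed, assuming NumPy; the function names and the convention of using the value range as the PSNR peak are illustrative assumptions, not code from the paper.

```python
# Sketch of the evaluation metrics discussed above (compression factor, rate,
# value-range-normalized maximum error, PSNR, Pearson correlation).
# Assumes NumPy; names are illustrative and not taken from the paper's code.
import numpy as np

def compression_factor(original_bytes: int, compressed_bytes: int) -> float:
    # Ratio of original size to compressed size (higher is better).
    return original_bytes / compressed_bytes

def rate_bits_per_value(compressed_bytes: int, num_values: int) -> float:
    # Average number of bits used per value in compressed storage.
    return 8.0 * compressed_bytes / num_values

def max_relative_error(original: np.ndarray, decompressed: np.ndarray) -> float:
    # Maximum point-wise error normalized to the value range of the data set,
    # i.e., the quantity that a value-range-based error bound (ebrel) constrains.
    value_range = original.max() - original.min()
    return float(np.abs(original - decompressed).max() / value_range)

def psnr_db(original: np.ndarray, decompressed: np.ndarray) -> float:
    # Peak signal-to-noise ratio in decibels, taking the value range as the peak.
    value_range = original.max() - original.min()
    rmse = np.sqrt(np.mean((original - decompressed) ** 2))
    return float(20.0 * np.log10(value_range / rmse))

def pearson_correlation(original: np.ndarray, decompressed: np.ndarray) -> float:
    # Pearson correlation coefficient between original and decompressed data
    # ("five nines" means a coefficient of at least 0.99999).
    return float(np.corrcoef(original.ravel(), decompressed.ravel())[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=1_000_000)
    decompressed = data + rng.uniform(-1e-4, 1e-4, size=data.shape)  # stand-in for a decoded array
    print(psnr_db(data, decompressed),
          max_relative_error(data, decompressed),
          pearson_correlation(data, decompressed))
```

Under this convention an improvement of about 6 dB in PSNR corresponds to roughly halving the RMSE, which is how statements of the form "a … dB improvement represents a …× increase in accuracy" in the text above can be read.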
7
dec change point detection in autoregressive models with no moment assumptions fumiya akashi holger dette waseda university bochum department of applied mathematics mathematik tokyo japan bochum germany yan liu waseda university department of applied mathematics tokyo japan abstract in this paper we consider the problem of detecting a change in the parameters of an autoregressive process where the moments of the innovation process do not necessarily exist an empirical likelihood ratio test for the existence of a change point is proposed and its asymptotic properties are studied in contrast to other work on change point tests using empirical likelihood we do not assume knowledge of the location of the change point in particular we prove that the maximizer of the empirical likelihood is a consistent estimator for the parameters of the autoregressive model in the case of no change point and derive the limiting distribution of the corresponding test statistic under the null hypothesis we also establish consistency of the new test a nice feature of the method consists in the fact that the resulting test is asymptotically distribution free and does not require an estimate of the long run variance the asymptotic properties of the test are investigated by means of a small simulation study which demonstrates good finite sample properties of the proposed method keywords and phrases empirical likelihood change point analysis infinite variance autoregressive processes ams subject classification introduction the problem of detecting structural breaks in time series has been studied for a long time since the seminal work of page who proposed a sequential scheme for identifying changes in the mean of a sequence of independent random variables numerous authors have worked on this problem a large part of the literature concentrates on cusum tests which are nonparametric by design see aue and for a recent review and some important references other authors make distributional assumptions to construct tests for structural breaks for example gombay and suggested a likelihood ratio procedure to test for a change in the mean and extensions of this method can be found in the monograph of and and the reference therein an important problem in this context is the detection of changes in the parameters of an autoregressive process and we refer to the work of andrews bai davis et al lee et al and berkes et al among others who proposed and likelihood ratio tests in practice however the distribution of random variables is rarely known and its misspecification may result in an invalid analysis using likelihood ratio methods one seminal method to treat the likelihood ratio empirically has been investigated by owen qin and lawless in a general context and extended by chuang and chan to estimate and test parameters in an autoregressive model in change point analysis the empirical likelihood approach can be viewed as a compromise between the completely parametric likelihood ratio and nonparametric cusum method baragona et al used this concept to construct a test for changepoints and showed that in the case where the location of the break points is known the limiting distribution of the corresponding test statistic is a distribution ciuperca and salloum considered the change point problem in a model with independent data without assuming knowledge of its location and derived an extreme value distribution as limit distribution of the empirical likelihood ratio test statistic these findings are similar in spirit to the 
meanwhile classical results in and who considered the likelihood ratio test the purpose of the present paper is to investigate an empirical likelihood test for a change in the parameters of an autoregressive process with infinite variance more precisely we do not assume the existence of any moments our work is motivated by the fact that in many fields such as electrical engineering hydrology finance and physical systems one often observes data see nolan or samoradnitsky and taqqu among many others to deal with such data many authors have developed methods for example chen et al constructed a robust test for a linear hypothesis of the parameters based on least absolute deviation ling and pan et al proposed least absolute estimators for parametric time series models with an infinite variance innovation process and show the asymptotic normality of the estimators however the limit distribution of the statistics usually contains the unknown probability density of the innovation process which is difficult to estimate for example ling and pan et al used kernel density estimators for this purpose but the choice of the corresponding bandwidth is not clear and often depends on users to circumvent problems of this type in the context of change point analysis we combine in this paper quantile regression and empirical likelihood methods as a remarkable feature the asymptotic distribution of the proposed test statistic does not involve unknown quantities of the model even if we consider autoregressive models with an infinite variance in the innovation process we would also like to emphasize that the nonparametric cusum tests proposed by bai for detecting structural breaks in the parameters of an autoregressive process assume the existence of the variance of the innovations however an alternative to the method proposed here are cusum tests based on quantile regression which has been cently considered by qu su and xiao and zhou et al among others the remaining part of this paper is organized as follows in section we introduce the model the testing problem and the empirical likelihood ratio test statistic our main results are given in section where we derive the limit distribution of the proposed test statistic and prove consistency the finite sample properties of the proposed test are investigated in section by means of a simulation study we also compare the test proposed in this paper with the cusum test using quantile regression see qu while the empirical likelihood based test suggested here is competitive with the cusum test using quantile regression when the innovation process is gaussian it performs remarkably better than the cusum test of qu if the innovation process has heavy tails moreover the new test is robust with respect even when the process is nearly a unit root process finally rigorous proofs of the results relegated to section change point tests using empirical likelihood throughout this paper the following notations and symbols are used the set of all integers and real numbers are denoted as z and r respectively for any sequence of random vectors an n we denote by l p a and an an convergence in probability and law to a random vector a respectively the transpose of a matrix m is denoted by m and km k tr m m is the frobenius norm we denote the zero vector the j k zero matrix and the l l identity matrix by and respectively consider the autoregressive model of order p ar p model defined by yt et where and rp and assume that the innovation process et t z is a sequence of independent and 
identically distributed random variables with vanishing median let yn be an observed stretch from the model for where denotes the true parameter this paper focuses on a posteriori type change point problem for the parameters in the ar p process more precisely we consider the model et t k yt et k t n for some vector rp where k n is the unknown time point of the change the testing problem for a change point in the autoregressive process can then be formulated by the following hypotheses against note that we neither assume knowledge of the change point k if the null hypothesis is not true nor of the true value rp if the null hypothesis holds for the testing problem we construct an empirical likelihood ratio elr test to be precise let i denote the indicator function as the median of et is zero the moment condition hn o i e i yt a holds under the null hypothesis in where is any measurable function of independent of et motivated by the moment conditions we first introduce the moment function g yt p i yt t n where yt p yt and a a x x is an m p q function a function w a and w some positive weight function we can choose the weight function w and arbitrarily provided that assumption in section holds in particular we can use a x x which corresponds to the case q see also section note that under the null hypothesis we have that e g yt p for all t let rn k be vk vn be a vector in the unit cube n then the empirical likelihood el for before the change point k n and after the change point is defined by k n y y o ln k sup vi vj rn k pn k mn k where pn k and mn k are subsets of the cube n defined as n n pn k rn k k x vi n x o vj and k n o n x x p n vj g yjp mn k rn k vi g yi note that the unconstrained maximum el is represented as ln k e sup n ny vi rn k pn k o k n k and hence the logarithm of the empirical likelihood ratio elr statistic is given by ln k ln k e k n y y o log sup kvi n k vj rn k pn k mn k ln k log k hx p log g yi n x i log g yjp where is obtained by the lagrange multiplier method and the multipliers rm satisfy k x n x g yjp g yi p g yi p g yjp we finally define the test statistic for the change point problem since the maximum elr under is given by pn k sup k one may define the elr test statistic by tn max nc pn k where for fixed constants note that we do not consider the maximum of pn k k n as pn k can not be estimated accurately for small and large values of k see theorem in section for more details the asymptotic properties of a weighted version of this statistic are investigated in the following section remark the approach presented here can be naturally extended to the general regression models to be precise suppose that qy inf y p yt y denotes the of yt conditional on and assume that qy the moment condition e g yt p still holds under the null hypothesis if we define g yt p yt and u i u remark the method can also be extended to develop change point analysis based on the generalized empirical likelihood gel a gel test statistic for the change point problem can be defined by ln k h sup k x p g yi sup n x i g yjp where is a concave twice differentiable function defined on an open interval of the real line that contains the point with typical examples for the choice of are given by log and using lagrangian multipliers it is easy to see that the choice log yields the empirical likelihood method discussed so far the class associated with is called the family see cressie and read main results in this section we state our main results throughout this paper let f and f denote the distribution 
function and the probability density function of et respectively we impose the following assumptions assumption i int b where the parameter space b is a compact set in rp with nonempty interior ii z z p for and b iii the median of et is zero iv the distribution function f of et is continuous and differentiable at the point with positive derivative f f assumption e ka ka assumption the matrix e g yt p g yt p is positive definite assumption i there exists a constant such that e ii let vt sign et then the sequence vt t z is strong mixing p with mixing coefficients that satisfy the maximum el estimator k is defined by k k k sup k and the consistency with corresponding rate of convergence of this statistic are given in the following theorem theorem suppose that assumptions hold and define k rn for some r then under the null hypothesis we have as n op as seen from theorem tn is not accurate for small k and n k as the result does not hold if o or n k o in addition the elr statistic is not computable for small k and n for this reason we consider in the following discussion the trimmed and of el ratio test statistic defined by pn k max h n where h is a given weight function n n and if takes a significant large value we have enough reason to reject the null hypothesis of no change point we also need a further assumption to control a remainder terms in the stochastic expansion of assumption r h r with this additional assumption the limit distribution of the test statistic can be derived in the following theorem theorem suppose that assumptions hold then under the null hypothesis of no change point n o l t sup r h r b r rb h r b qb as n here b r r is an vector of independent brownian motions and the matrix q is defined by q where denotes the square root of a nonnegative definite matrix a g g g and e g yt p g yt p e a test for the hypotheses in is now easily obtained by rejecting the null hypothesis in whenever where is the of the distribution of the random variable t defined on the side of equation using an appropriate estimate of the matrix q theorem suppose that assumptions and the alternative hold then we have p as n theorem shows that the power of the test approaches at any fixed alternative in other words the test is consistent finite sample properties in this section we illustrate the finite sample properties of the elr test for the hypothesis by means of small simulation study for this purpose we consider the ar model yt et where the coefficient satisfies t k t k n for the calculation of the elr statistic in we use the functions a x x and h r r r throughout this section following ling the are chosen as where c and c is the of the sample yn the trimming parameters in the definition of the statistic are chosen as and the critical value in is obtained as the empirical quantile of the samples l l max b b l where b b are independent standard brownian motions note that in this case the matrix in is given by q in figures we display the rejection probabilities of the elr test for the hypothesis where the nominal level is chosen as the horizontal and vertical axes show respectively the values of and the rejection rate of the hypothesis at this point is fixed as the sample sizes are given by n and and the distribution of the innovation process is a standard normal distribution figure a with degrees of freedom figure and a cauchy distribution figure we also consider two values of the parameter r in the definition of the change point k rn that is r and r we observe that for small sample sizes the test is 
slightly conservative and that the approximation of the nominal level improves with increasing sample size the alternatives are rejected with reasonable probabilities where the power is larger in the case r than for r a comparison of the different distributions in figures shows that the power is lower for standard normal distributed innovations while an error process with a cauchy distribution yields the largest rejection probabilities other simulations show a similar picture and the results are omitted for the sake of brevity figure simulated rejection probabilities of the elr test in the ar model with normal distributed innovations a r b r rejection rate rejection rate n n n n n n figure simulated rejection probabilities of the elr test in the ar model with innovations a r b r rejection rate rejection rate n n n n n n figure simulated rejection probabilities of the elr test in the ar model with cauchy distributed innovations a r b r rejection rate rejection rate n n n n n n in the second part of this section we compare the new test defined by with the cusum test in qu which uses quantile regression the test statistic for the median in qu is defined by sup n n k where k k is the sup norm is the median regressor n x x x and the matrix x is given by x xn in figures we display the rejection probabilities of the test based on the statistic tn in in and in for the hypothesis where the nominal level is chosen as the horizontal and vertical axes show respectively the values of and the rejection rate of the hypothesis h at this point is fixed as the distribution of the innovation process is a standard normal distribution figure a with degree of freedom figure and a cauchy distribution figure and the sample sizes are given by n and in each case again we consider two different locations for the change point k corresponding to the values r and r we observe that all tests derived from the three statistics tn in corresponding to the weight function h r in corresponding to the weight function h r r r and in are slightly conservative and that the approximation of the nominal level improves with increasing sample size see figure for the value the approximation is usually more accurate for r next we compare the power of the different tests for different distributions of the innovations in the case of gaussian innovations all tests shows a similar behavior see figure and only if the case n and r the elr test based on the unweighted statistic tn shows a better performance as the tests based on and moreover for gaussian innovations all three tests show a remarkable robustness against that is in figure we display corresponding results for innovations the differences in the approximation of the nominal level are negligible if r we do not observe substantial differences in the power between the three tests independently of the sample size on the other hand if r the tests based on elr statistics and tn yield larger rejection probabilities than the test see the right part of figure figure interestingly the unweighted test based on tn shows a better performance than the test based on in these cases again all tests are robust with respect to finally in figure we display the rejection probabilities of the three tests for cauchy distributed innovations where we again do not observe differences in the approximation of the nominal level on the other hand the differences in power between the tests based on elr and quantile regression are remarkable in all cases the elr tests based on tn and have substantially more power 
than the test based on the elr test based on the unweighted statistic tn shows a better performance than the elr test based on this superiority is less pronounced in the case r but clearly visible for r finally in contrast to the test based on the elr tests based on tn and are robust against for cauchy distributed innovations and clearly detect a change in the parameters in these cases figure simulated rejection probabilities of various change point tests based on the statistics tn and defined in and respectively the model is given by an ar model with normal distributed innovations i n a r b r rejection rate rejection rate tn tn tn tn ii n a r b r rejection rate rejection rate tn tn tn tn iii n a r b r rejection rate rejection rate tn tn tn tn figure simulated rejection probabilities of various change point tests based on the statistics tn and defined in and respectively the model is given by an ar model with innovations i n a r b r rejection rate rejection rate tn tn tn tn ii n a r b r rejection rate rejection rate tn tn tn tn iii n a r b r rejection rate rejection rate tn tn tn tn figure simulated rejection probabilities of various change point tests based on the statistics tn and defined in and respectively the model is given by an ar model with cauchy distributed innovations i n a r b r rejection rate rejection rate tn tn tn tn ii n a r b r rejection rate rejection rate tn tn tn tn iii n a r b r rejection rate rejection rate tn tn tn tn proofs this section gives rigorous proofs of all results in this paper in what follows c will denote a generic positive constant that varies in different places with probability approaching one will be abbreviated as moreover we use the following notations throughout this section gi g yi p g e g yi p k k log gi k n x log gj n k rm gi for all i k k rm gj for all j k n n n p g yi gi n n k gi k and k n x gj n k proof of theorem we start proving several auxiliary results which are required in the proof of theorem lemma suppose that assumption i holds for let k ck c n k then as n we have sup p max gi sup also for all b max p gj proof let bi kgi by assumption i we can choose such that k e is finite then for any we can define m and obtain p bi m k k x p bi m k x k x p m k e m k consequently maxi bi op k and by the inequality we have sup max gi sup bi op k which implies sup p gi similarly it follows that sup max p gj therefore for all b which completes the proof of lemma lemma suppose that assumptions hold and there exists a sequence n b such that p n n op k and k n k op n k as n denote n by then under arg max and arg max k exist moreover as n we have op k op n k op k k op n k proof we only show the statement for the corresponding statement for follows by similar arguments since is a closed set it follows that arg max exists note that is a concave function of from lemma it follows that is continuously twice differentiable with respect to by a taylor expansion at there exists a point on the line joining and such that x i gi gi k where gi note that the definition of gi implies gi gi for any b by lemma we have uniformly with respect to i furthermore the ergodicity of xt t z implies that the random variable x converges to in probability hence the minimum eigenvalue of is bounded away from and we obtain k i x c gi gi k k k dividing both sides of by we get op k op k and hence int again by lemma the concavity of and the convexity of it follows that exists and op k these results also imply that op k by similar arguments we can show the corresponding results for and k 
next let us consider the estimator k of theorem recall that k is the minimizer of ln k k sup n k sup k k let us define k k n k k and k arg max k k k arg max k k k k lemma suppose that assumptions hold then under the null hypothesis of no change point we have op k k k op n k as n l proof define k l k for l k k k k k n k k k k then it follows from that k k k k k k k k k and k which implies the inequality k k k k k k k k by similar arguments as used in and we have k k k k n k where is the same constant as in the proof of lemma on the other hand we have the following inequality k k k k inf sup k k k k sup k k k sup n k sup k k applying lemma with n yields sup op k sup k op n k and from and we get op finally from and we have k k k op k n k which implies k and k k k k k op k k k op k establishing the assertion of lemma proof proof of theorem by lemma we have op then it follows from the triangular inequality and uniform law of large numbers that kg k kg k k sup kg k k op since g has a unique zero at the function kg k must be bounded away from zero outside any neighborhood of therefore must be inside any p neighborhood of and therefore next we show that op as k rn by lemma we have n o k n k k op k and the central limit theorem implies h n oi op k n k op further g k k op which yields kg k g k k k h n oi k op op k n k moreover similar arguments as given in newey and mcfadden on page the differentiability of kg k and the estimate kg k k show that h n oi k k op op k n k and hence h n oi op k op op k n k if k rn the side of is of order op which completes the proof of theorem proof of theorem we first show that in is well approximated by some function near its optima using a similar reasoning as in parente and smith for this purpose let us define k k and k k n k k furthermore hereafter redefine k arg min sup k arg max m k and k arg max k m lemma suppose that assumptions hold then under op as n proof it is sufficient to show that i op ii op iii op for a proof of i we note that a taylor expansion leads to k k k k k i x k a a k k i where is on the line joining the points k and observing the definition of k k this yields the estimate k k k k k g k k i x a a k n k k i p since by theorem we can take n in lemma and obtain op then recalling the first term in where k is replaced by k becomes k g k n o k g g g n k n k n k n op op op moreover the second term in is of order op k hence we get k op k and similarly k k k k k op n k combining these estimates yields op which is the statement i for a proof of ii we first show op note that the function k is smooth in and then the first order conditions for an interior global maximum k n k k g k n k g k these conditions can be k k are satisfied for the point k rewritten in matrix form as k k k k g g n k n k k k with the notations g h k h h pn k n n the system is equivalent to k k k n k k h n k h h k n k h k n k h k n k h n k pn k h n k k h consequently and are of order op op k and op n k respectively therefore by the same arguments as given in the proof of i it follows that op this relationship and the fact that k k k and k k k are the saddle points of the functions and respectively imply that op op op on the other hand op op op thus and lead to op finally we can prove iii by similar arguments that op op op op and consequently which implies iii proof proof of theorem by lemma and it follows that sup rn n rn op n k n h rn op rn op where k rn x g yt p r n rn k k k k k k k k k u u u and x denotes the integer part of real number x as shown in lemma max sup rn op second from 
assumption and lemma in phillips it follows that n o l r r b r r for any vector c rm where b r r is an standard brownian motion hence the device and the continuous mapping theorem lead to n o k max h n n h o sup r rn h rn l sup n h r r o kb r rb k h r b qb proof of theorem proof without loss of generality suppose that this implies that there exist a neighborhood u of and a neighborhood u of such that u u under the alternative it follows that u or u note that e g ytp for t k and e g ytp for k t from a uniform law of large numbers or k k is outside a neighborhood of for any sufficiently large now if we consider instead of and k instead of k in we find as in that can be approximated by n k k n h rn op this time however we have n k k since k for any sufficiently k large this completes the proof of theorem acknowledgements the authors would like to thank martina stein who typed this manuscript with considerable technical expertise the work of authors was supported by jsps for young scientists b waseda university grant for special research projects and the deutsche forschungsgemeinschaft sfb statistik nichtlinearer dynamischer prozesse teilprojekt and references andrews tests for parameter instability and structural change with unknown change point econometrica journal of the econometric society aue and structural breaks in time series journal of time series analysis bai j on the partial sums of residuals in autoregressive and moving average models journal of time series analysis bai j convergence of the sequential empirical processes of residuals in arma models annals of statistics baragona battaglia cucina et al empirical likelihood for break detection in time series electronic journal of statistics berkes ling and schauer testing for structural change of ar model to threshold ar model journal of time series analysis chen ying zhang and zhao analysis of least absolute deviation biometrika chuang and chan empirical likelihood for autoregressive models with applications to unstable time series statistica sinica ciuperca and salloum empirical likelihood test in a posteriori nonlinear model metrika cressie and read multinomial tests journal of the royal statistical society series b methodological and limit theorems in analysis john wiley davis huang and yao testing for a change in the parameter values and order of an autoregressive model annals of statistics gombay and asymptotic distributions of maximum likelihood tests for change in the mean biometrika lee ha na and na the cusum test for parameter change in time series models scandinavian journal of statistics ling least absolute deviation estimation for infinite variance autoregressive models journal of the royal statistical society series b statistical methodology newey and mcfadden large sample estimation and hypothesis testing handbook of econometrics nolan stable distributions models for heavy tailed data boston birkhauser in progress chapter online at owen a b empirical likelihood ratio confidence intervals for a single functional biometrika page continuous inspection schemes biometrika page control charts with warning lines biometrika pan wang and yao weighted least absolute deviations estimation for arma models with infinite variance econometric theory parente and smith gel methods for nonsmooth moment indicators econometric theory phillips time series regression with a unit root econometrica qin and lawless empirical likelihood and general estimating equations the annals of statistics qu z testing for structural change in regression 
quantiles journal of econometrics samoradnitsky and taqqu stable random processes stochastic models with infinite variance volume crc press su and xiao testing for parameter stability in quantile regression models statistics probability letters zhou wang and tang sequential change point detection in linear quantile regression models statistics probability letters
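The simulation study above considers an AR(1) model whose coefficient changes at the point k = rn, with innovation distributions (standard normal, Student's t, Cauchy) for which moments need not exist. Since the extraction has dropped the concrete formulas, the following is a minimal sketch in Python of that data-generating process together with an illustrative bounded, sign-based estimating function of the kind such median-based methods build on; the instrument a(x) = sign(x) and all names are assumptions for illustration, not the paper's exact choices.

```python
# Sketch of the finite-sample setting described above: an AR(1) model whose
# coefficient changes from theta0 to theta1 at k = r*n, with heavy-tailed
# (Cauchy) innovations so that no moments exist.
import numpy as np

def simulate_ar1_change(n, theta0, theta1, r, rng, dist="cauchy"):
    k = int(r * n)                      # location of the change point
    y = np.zeros(n)
    for t in range(1, n):
        theta = theta0 if t < k else theta1
        if dist == "cauchy":
            e = rng.standard_cauchy()   # no finite moments
        elif dist == "t3":
            e = rng.standard_t(3)
        else:
            e = rng.standard_normal()
        y[t] = theta * y[t - 1] + e
    return y

def sign_moment(y, theta):
    # Illustrative median-based estimating function:
    # a(y_{t-1}) * (1/2 - 1{y_t - theta*y_{t-1} <= 0}).
    # Because it only uses signs of residuals, it stays bounded even for
    # Cauchy innovations, which is why no moment assumptions are needed.
    resid = y[1:] - theta * y[:-1]
    a = np.sign(y[:-1])                 # a simple bounded instrument a(x)
    return a * (0.5 - (resid <= 0.0))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = simulate_ar1_change(n=1000, theta0=0.3, theta1=0.7, r=0.5, rng=rng)
    g = sign_moment(y, theta=0.3)
    k = 500
    # The empirical means of g before and after the split drift apart when a
    # change is present, which is what a change-point statistic exploits.
    print(g[:k].mean(), g[k:].mean())
```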
10
feedback generation for performance problems in introductory programming assignments sumit gulwani ivan microsoft research usa tu wien austria sep sumitg radicek florian zuleger tu wien austria zuleger abstract general terms providing feedback on programming assignments manually is a tedious error prone and task in this paper we motivate and address the problem of generating feedback on performance aspects in introductory programming assignments we studied a large number of functionally correct student solutions to introductory programming assignments and observed there are different algorithmic strategies with varying levels of efficiency for solving a given problem these different strategies merit different feedback the same algorithmic strategy can be implemented in countless different ways which are not relevant for reporting feedback on the student program we propose a programming language extension that allows a teacher to define an algorithmic strategy by specifying certain key values that should occur during the execution of an implementation we describe a dynamic analysis based approach to test whether a student s program matches a teacher s specification our experimental results illustrate the effectiveness of both our specification language and our dynamic analysis on one of our benchmarks consisting of functionally correct implementations to programming problems we identified strategies that we were able to describe using our specification language in minutes after inspecting around implementations our dynamic analysis correctly matched each implementation with its corresponding specification thereby automatically producing the intended feedback algorithms languages performance categories and subject descriptors software engineering testing and debugging artificial intelligence automatic analysis of algorithms the second and third author were supported by the vienna science and technology fund wwtf grant permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page copyrights for components of this work owned by others than the author s must be honored abstracting with credit is permitted to copy otherwise or republish to post on servers or to redistribute to lists requires prior specific permission a fee request permissions from permissions november hong kong china copyright is held by the s publication rights licensed to acm acm http keywords education moocs performance analysis trace specification dynamic analysis introduction providing feedback on programming assignments is a very tedious and task for a human teacher even in a standard classroom setting with the rise of massive open online courses moocs which have a much larger number of students this challenge is even more pressing hence there is a need to introduce automation around this task immediate feedback generation through automation can also enable new pedagogical benefits such as allowing resubmission opportunity to students who submit imperfect solutions and providing immediate diagnosis on class performance to a teacher who can then adapt her instruction accordingly recent research around automation of feedback generation for programming problems has focused on guiding students to functionally correct programs either by providing counterexamples generated using test input generation tools or 
generating repairs however aspects of a program especially performance are also important we studied several programming sessions of students who submitted solutions to introductory c programming problems on the platform in such a programming session a student submits a solution to a specified programming problem and receives a counterexample based feedback upon submitting a functionally incorrect attempt generated using pex the student may then inspect the counterexample and submit a revised attempt this process is repeated until the student submits a functionally correct attempt or gives up we studied different problems and observed that of the different programming sessions led to functionally correct solutions however unfortunately on average around of these functionally correct solutions had different kinds of performance problems in this paper we present a methodology for semiautomatically generating appropriate performance related feedback for such functionally correct solutions from our study we made two observations that form the basis of our feedback generation methodology i there are different algorithmic strategies with varying levels of efficiency for solving a given problem rithmic strategies capture the global insight of a solution to a programming problem while also defining key performance characteristics of the solution different strategies merit different feedback ii the same algorithmic strategy can be implemented in countless different ways these differences originate from local implementation choices and are not relevant for reporting feedback on the student program in order to provide meaningful feedback to a student it is important to identify what algorithmic strategy was employed by the student program a profiling based approach that measures running time of a program or use of static bound analysis techniques is not sufficient for our purpose because different algorithmic strategies that necessitate different feedback may have the same computational complexity also a simple pattern matching based approach is not sufficient because the same algorithmic strategy can have syntactically different implementations our key insight is that the algorithmic strategy employed by a program can be identified by observing the values computed during the execution of the program we allow the teacher to specify an algorithmic strategy by simply annotating at the source code level certain key values computed by a sample program that implements the corresponding algorithm strategy using a new language construct called observe fortunately the number of different algorithmic strategies for introductory programming problems is often small at most per problem in our experiments these can be easily enumerated by the teacher in an iterative process by examining any student program that does not match any existing algorithmic strategy we refer to each such step in this iterative process as an inspection step we propose a novel dynamic analysis that decides whether the student program also referred to as an implementation matches an algorithm strategy specified by the teacher in the form of an annotated program also referred to as a specification our dynamic analysis executes a student s implementation and the teacher s specification to check whether the key values computed by the specification also occur in the corresponding traces generated from the implementation we have implemented the proposed framework in c and evaluated our approach on programming problems on attempted by several 
hundreds of students and on new problems that we hosted on as part of a programming course attempted by students in the course experimental results show that i the manual teacher effort required to specify various algorithmic strategies is a small fraction of the overall task that our system automates in particular on our mooc style benchmark of functionally correct implementations to programming problems we specified strategies in minutes after inspecting implementations on our standard classroom style benchmark of functionally correct implementations to programming problems we specified strategies in minutes after inspecting implementations ii our methodology for specifying and matching algorithmic strategies is both expressive and precise in particular we were able to specify all strategies using our specification language and our dynamic analysis correctly matched each implementation with the intended strategy this paper makes the following contributions we observe that there are different algorithmic strategies used in functionally correct attempts to introductory programming assignments these strategies merit different performance related feedback we describe a new language construct called observe for specifying an algorithmic strategy we describe a dynamic analysis based approach to test whether a student s implementation matches the teacher s specification our experimental results illustrate the effectiveness of our specification language and dynamic analysis overview in this section we motivate our problem definition and various aspects of our solution by means of various examples motivation fig shows our running examples programs a i im show some sample implementations for the anagram problem which involves testing whether the two input strings can be permuted to become equal on the platform all programs are examples of inefficient implementations because of their quadratic asymptotic complexity an efficient solution for example is to first collect in an array the number of occurrences of each character in both strings and then compare them leading to linear asymptotic complexity algorithmic strategies in implementations im we identify three different algorithmic strategies implementations iterate over one of the input strings and for each character in that string count the occurrences of that character in both strings counting strategy implementations sort both input strings and check if they are equal sorting strategy implementations iterate over one of the input strings and remove corresponding characters from the other string removing strategy implementation details an algorithmic strategy can have several implementations in case of counting strategy implementation calls manually implemented method countchar to count the number of characters in a string lines and while implementation uses a special c construct lines and and implementation uses the library function split for that task lines and in case of the sorting strategy implementation employs binary insertion sort while implementation employs bubble sort and implementation uses a library call lines and we also observe different ways of removing a character from a string in implementations desired feedback each of the three identified strategies requires separate feedback independent of the underlying implementation details to help a student understand and fix the performance issues for the first strategy implementations a possible feedback might be calculate the number of characters in each string in a preprocessing 
phase instead of each iteration of the main loop for the second strategy it might be instead of sorting input strings compare the number of character occurrences in each string and for the third strategy use a more efficient to remove characters specifying algorithmic strategies key values our key insight is that different implementations that employ the same algorithmic strategy generate the same key values during their execution on the same input bool puzzle string s string t if return false else return c c c b bool puzzle string s string t if return false foreach var item in s if item item return false return true a c bool puzzle string s string t bool puzzle string s string t if return false var sa var ta char sa sa char ta ta for int j return ta for int i if sa i sa char i f sa i sa bool puzzle string s string t if ta i ta char i if return false ta i ta ta temp foreach char c in int index c for int k k if index return false s index if sa k ta k return false return true return true e g puzzle string s string t if string tt t t s s tt bool puzzle string s string t for int i i char sc int char tc for int j j char c if s j s i if return false if observe s j for int i c sc i for int j if observefun split if tc j observe tc j for int j j if t j s i break if observe t j if return false return true if observefun split observe i j counting specification cs bool puzzle string s string t if return false char taux for int i i bool compareletterstring string a string b char sc s i var la x boolean exists false var lb x for int j j return lb if sc taux j exists true taux j break if exists false return false return true m n custom data equality cde void puzzle string s string t bool puzzle string s string t if string tt t t s s tt if int cs new int ct new int return false cover tochararray char cs cover tochararray char ct cover int hash new int for int i i for int i cs int s i hash i observe cs foreach char ch in cs for int i i hash int ch if cs int t i foreach char ch in ct observe cs hash int ch else ct int t i for int i observe ct if hash i return false return true cover bool puzzle string s string t if return false foreach char ch in if countchars s ch countchars t ch return false return true int countchars string s char c int number foreach char ch in if ch c return number p q efficient specification es int binarysearch list char xs char y int low high while low high int mid high low low if y xs mid high mid else if y xs mid low mid else return mid return low char sort string xs var res new list char foreach var x in xs var pos binarysearch res x pos x return bool puzzle string s string t return sort s sort t d insertion bool puzzle string s string t return ispermutation s t bool ispermutation string s string t if s t return true if return false int index s if index return false s t index return ispermutation s t h puzzle string s string t if s char ca ca if ca observe ca k sorting specification ss puzzle string s string t if string tt t t s s tt for int i i if i t return int ni i i int k s ni s ni t k observe t compareletterstring l removing specification rs bool puzzle string s string t if return false int int int int for int i cs int s i for int i ct int t i for int i if cs i ct i return false return true o bool puzzle string s string t if return false string cp t for int i char k s i bool found false for int j if cp j k if j cp char else if j cp j char else cp j char j found true break if found return false return true r computation figure running example implementations and specifications 
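As a minimal sketch of this insight (in Python rather than the paper's C#, with illustrative trace bookkeeping that is not part of the actual framework), the two counting-strategy implementations below record the same key values — the per-character occurrence counts — on the same input, and a specification's observed values can be checked against an implementation's trace with a single left-to-right subsequence pass.

```python
# Two syntactically different implementations of the counting strategy emit the
# same sequence of key values on the same input, so a specification trace can be
# checked as a subsequence of an implementation trace. Names are illustrative.

def counting_v1(s, t, trace):
    if len(s) != len(t):
        return False
    for c in s:
        cs = s.count(c)                    # library call to count occurrences
        ct = t.count(c)
        trace.append(("count_s", cs))      # "observed" key values
        trace.append(("count_t", ct))
        if cs != ct:
            return False
    return True

def counting_v2(s, t, trace):
    if len(s) != len(t):
        return False
    for c in s:
        cs = sum(1 for x in s if x == c)   # manual counting loop
        ct = sum(1 for x in t if x == c)
        trace.append(("count_s", cs))
        trace.append(("count_t", ct))
        if cs != ct:
            return False
    return True

def is_subsequence(spec_trace, impl_trace):
    # One left-to-right pass over impl_trace, i.e., O(len(impl_trace)) equality checks.
    i = 0
    for item in impl_trace:
        if i < len(spec_trace) and spec_trace[i] == item:
            i += 1
    return i == len(spec_trace)

if __name__ == "__main__":
    t1, t2 = [], []
    counting_v1("aba", "baa", t1)
    counting_v2("aba", "baa", t2)
    print(t1 == t2)                        # True: same key values despite different code
    print(is_subsequence(t1, t2))          # the specification trace embeds into the implementation trace
```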
of anagram assignment for example the underlined expressions in the implementations and both produce the key value sequence a b a b a a a b a b a a a b a b a a on the input strings aba and baa our framework allows a teacher to describe an algorithmic strategy by simply annotating certain expressions in a sample implementation using a special language statement observe our framework decides whether or not a student implementation matches a teacher specification by comparing their execution traces on common input s we say that an implementation q matches a specification p if the execution trace of p is a subsequence of the execution trace of q and for every observed expression in p there is an expression in q that has generated the same values we call this matching criterion a trace embedding the notion of trace embedding establishes a fairly strong connection between specification and implementation basically both programs produce the same values at corresponding locations in the same order our notion of trace embedding is an adaptation of the notion of a simulation relation to dynamic analysis choice because of minor differences between implementations of the same strategy keyvalues can differ for example implementation uses a library function to obtain the number of characters while implementations and explicitly count them by explicitly iterating over the string moreover counted values in are incremented by one compared to those in and thus yields a different but related trace split split split split split split on input strings aba and baa to address variations in implementation details we include a choice construct in our specification language the is fixed before the execution thus such a choice is merely a syntactic sugar to succinctly represent multiple similar specifications n variables specifications specifications cs ss and rs denote the specifications for the counting strategy used in implementations sorting strategy used in and removing strategy used in respectively in cs the teacher observes the characters that are iterated over lines and the results of counting the characters lines and and use of library function split lines and also the teacher uses boolean variables line to choose the string over which the main loop iterates as the input strings are symmetric in the anagram problem and to choose between manual and library function implementations which also decides on observed counted values on lines and in ss the teacher observes one of the input strings after sorting and allows that implementations convert input string to uppercase on line and sort the string in reverse order on line notice that it is enough to observe only one sorted input as in the case that the input strings are anagrams the sorted strings are the same in rs the teacher observes the string with removed characters and chooses which string is iterated on line direction of the iteration on line and the direction in which the remove candidate is searched for on line specifications and implementations expression e d v opbin opun v statement s v e e v f vn while v do s skip if v then else observe v e observefun f vn e figure the syntax of l language in this section we introduce an imperative programming language l that supports standard constructs for writing implementations and has some novel constructs for writing specifications the language l the syntax of the language l is stated in fig we discuss the features of the language below expressions a data value d is any value from some data domain set d which 
contains all values in the language in c all integers characters arrays hashsets a variable v belongs to a finite set of variables var an expression is either a data value d a variable v an operator applied to variables or an array access here opbin represents a set of binary operators and opun a set of unary operators we point out that the syntax of l ensures that programs are in three address code operators can only be applied to variables but not to arbitrary expressions the motivation for this choice is that three address code enables us to observe any expression in the program by observing only variables we point out that any program can be automatically translated into three address code by assigning each subexpression to a new variable for example the statement a b can be translated into code as follows a b this enables us to observe the subexpression a b by observing statements the statements of l allow to build simple imperative programs assignments to variables and array elements skip statement composition of statements looping and branching constructs we also allow library function calls in l denoted by v f vn where f f is a library function name from a set of all library functions f there are two special observe constructs which are only available to the teacher and not to the student we discuss the observe statements in below we assume that each statement s is associated with a unique program location and write functions for space reasons we do not define functions here we could easily extend the language to recursive functions in fact we allow recursive functions in our implementation semantics we assume some standard imperative semantics to execute programs written in the language l for c we assume the usual semantics of c the two observe statements have the same semantic meaning of the skip statement computation domain we extend the data domain d by a special symbol which we will use to represent any data value we define the computation domain val associated with our language l as val d f we assume the data domain d is equipped with some equality relation d d for c we have x y iff a and b are of the same type and comparison by the equals method returns true we denote by e the set of all relations over val we define a default equality relation edef e as follows we have x y edef iff x or y or x y we have f xn yn edef iff f and xi yi edef for all i computation trace a computation trace over some finite set of programming locations loc is a finite sequence of pairs loc val we use the notation to denote the set of all computation traces over loc given some and loc loc we denote by the sequence that we obtain by deleting all pairs val from where loc strings to define value representations of both implementations regardless of used characters as equal we call a function loc e a comparison function we define e for every statement observe v e or observefun f vn e for statements where e has been left out we set the default value edef choice we assume that the teacher can use some finite set of boolean variables b nd nd n var these are not available to the student choice allows the teacher to specify variations in implementations as discussed in variables are similar to the input variables in the sense that are assigned before program is executed we note that this results into different program behaviors for a given input student implementation matching in the following we describe how a computation trace is generated for a student implementation q on a given input the computation 
trace is initialized to the empty sequence then the implementation is executed on according to the semantics of during the execution we append pairs to for every assignment statement for e or e we append to we denote by the current value of we point out that we add the complete array to the trace for an assignment to an array variable for a library function call v f vn we append f v vn to we denote the resulting trace by jqk this construction of a computation trace can be achieved by instrumenting the implementation in an appropriate manner teacher specification the teacher uses observe and observefun for specifying the key values she wants to observe during the execution of the specification and for defining an equality relation over computation domain as usual the rectangular brackets and enclose optional arguments in the following we describe how a computation trace is generated for a specification p on a given input the computation trace is initialized to the empty sequence then the specification is executed according to the semantics of during the execution we append pairs to only for observe and observefun statements for observe v e we append v to we denote by v the current value of v for observefun f vn e we append f xn to where xi vi if the ith argument to f has been specified and xi if it has been left out we denote the resulting trace by jp k custom data equality the possibility of specifying an equality relation e e at some location is very useful for the teacher we point out that in practice the teacher has to specify e by an equality function val val true false the teacher can use e to define the equality of similar computation values we show its usage on examples and fig both examples implement the removing strategy discussed in in almost identical ways the only difference is on lines and respectively where implementations use different characters to denote a character removed from a string and in specification rs the teacher uses the equality function compareletterstring defined in cde which compares only letters of two in this section we define what it means for an implementation to partially match or fully match a specification and describe the corresponding matching algorithms the teacher has to determine for each specification which definition of matching has to be applied in case of partial matching we speak of inefficient specifications and in case of full matching of efficient specifications trace embedding we start out by discussing the problem of trace embedding that we use as a building block for the matching algorithms subsequence we call c partial full a matching criterion let val val val n and val val val be two computation traces over some set of locations loc and let be some comparison function as defined in we say is a subsequence of to c written c if there are indices kn m such that for all i n we have and val i val in case of c full we additionally require that and have the same length we refer to val i val as equality check if id the identity relation over val for all i n we obtain the usual definition of subsequence since deciding subsequence c is a central operation in this paper we state complexity of this decision problem it is easy to see that deciding subsequence requires only o m equality checks basically one iteration over is sufficient mapping function let loc and loc be two disjoint sets of locations we call an injective function loc loc a mapping function we lift to a function by applying it to every location we set for val val val val given a 
comparison function a matching criterion c and computation traces and we say that can be embedded in by iff c and write c we refer to as embedding witness executing a program on set of assignments i gives rise to a set of traces one for each assignment i we say that the set of traces can be embedded in by iff c for all i definition trace embedding trace embedding is the problem of deciding for given sets of traces and a comparison function and a matching criterion c if there is a witness mapping function such that c for all i complexity clearly trace embedding is in np assuming equality checks can be done in polynomial time we first guess the mapping function loc loc and then check c for all i which is cheap as discussed above however it turns out that trace embedding is npcomplete even for a singleton set i a singleton computation domain val and the full matching criterion theorem trace embedding is assuming equality checks can be done in polynomial time proof in order to show we reduce permutation pattern to trace embedding first we formally define permutation pattern let n k be positive integers with k let be a permutation of n and let be a permutation of k we say occurs in if there is an injective function k n such that is monotone for all r s k we have r s and k is a subsequence of n permutation pattern is the problem of deciding whether occurs in we now give the reduction of permutation pattern to trace embedding we will construct two traces and over a singleton computation domain val and over the sets of locations loc k and loc n we set i id the identity function on val for every i loc because val is singleton we can ignore values in the rest of the proof we set k and n because every i k occurs exactly twice in and partial and full matching criteria are equivalent so we can ignore the difference we now show that occurs in iff there is an injective function loc loc with we establish this equivalence by two observations first because every i k occurs exactly twice in and we have k n and k n iff second k n iff loc loc is monotone algorithm fig shows our algorithm embed for the trace embedding problem a straightforward algorithmic solution for the trace embedding problem is to simply test all possible mapping functions however there is an exponential number of such mapping functions to the cardinality of loc and loc this exponential blowup seems unavoidable as the combinatorial search space is responsible for the np hardness the core element of our algorithm is a that narrows down the space of possible mapping functions effectively we observe that if and c then there exists a trace embedding restricted to locations and formally the algorithm uses this insight to create a bipartite graph g loc loc of potential mapping pairs in lines a pair of locations g is a potential mapping pair iff there exists a trace embedding restricted to locations and as described above the key idea in finding an embedding witness is to construct a maximum bipartite matching in a maximum bipartite matching has an edge connecting every program embed loc loc c g loc loc for all loc loc for all i if g g break for all maximumbipartitematching g found true for all i if c found false break if found true return true return false figure algorithm for trace embedding problem location from loc to a distinct location in loc and thus gives rise to an injective function we point out that such an injective function does not need to be an embedding witness because by observing only a single location pair at a time it ignores 
the order of locations thus for each maximum bipartite matching the algorithm checks in lines if it is indeed an embedding witness the key strength of our algorithm is that it reduces the search space for possible embedding witnesses the experimental evidence shows that this approach significantly reduces the number of possible matchings and enables a very efficient algorithm in practice as discussed in partial matching we now define the notion of partial matching also referred to simply as matching which is used to check whether an implementation involves at least those inefficiency issues that underlie a given inefficient specification definition partial matching let p be a specification with observed locations loc let be the comparison function specified by p and let q be an implementation whose assignment statements are labeled by loc then implementation q partially matches specification p on a set of inputs i if and only if there exists a mapping function loc loc and an assignment to the variables such that c for all input values i where jp k jqk and c partial fig describes an algorithm for testing if an implementation partially matches a given specification over a given set of input valuations i in lines the implementation q is executed on all input values i in line the algorithm iterates through all assignments to the variables bp of the specification p in lines the specification p is executed on all inputs i with both sets of traces available line calls subroutine embed which returns true iff there exists a trace embedding witness example we now give an example that demonstrates our notion of programs and that contains example applications of algorithms embed and matches in fig we state two implementations a and b and one specification c these programs represent simplified versions transformed into three adress code of after function inlining matches specification p implementation q inputs i loc observed locations in p comparison function specified by p c matching criterion loc assignment locations of q for all i jqk bp variables in p for all assignments to bp for all i jp k if embed loc loc c return true return false figure matching algorithm puzzle s t i n while i n c s i j while j n s j if c j j j while j n t j if c j j i i puzzle s t i n while i n c s i ss split s c st split t c i i a c puzzle s t i n while i n c s i observe c j while j n s j if observe j j j if observefun split while j n t j b if observe j j if observefun split i i figure implementations a b and spec c and sc fig note that every assignment and observe statement is on its own line we denote line i in program x by by location i the argument e has been left out for all locations in the specification thus we have edef for all specification locations algorithm matches runs all three programs on input values s aab and t aba for program a we obtain the following computation trace a a a b similarly for program b we obtain a split aab a split aba a a for specification c we obtain two traces depending on the choice for the variable t a a a b a b f a split split algorithm matches then calls embed to check for trace embedding algorithm embed first constructs a potential graph g which contains an edge for two locations of the specification and the implementation that show the same values for implementation a we obtain the following graph ga notice that shows the same values as the locations in the implementation a however there is only one maximal matching in ga which is also an embedding witness thus implementation a 
matches specification c for implementation b and true we obtain the graph gb t from which we can not construct a maximal matching however for false we obtain gb f which is also an embedding witness thus implementation b matches specification c full matching below we will define the notion of full matching which is used to match implementations against efficient specifications we will require that for every loop and every library function call in the implementation there is a corresponding loop and library function call in the matching specification in order to do so we need some helper definitions observed loop iterations we extend the construction of the implementation trace defined in for each statement while v do s we additionally append element to the trace whenever the loop body s is entered we call a loop iteration let be a embedding witness we say that observes all loop iterations iff between every two loop iterations in there exists a pair val such that in other words we require that between any two iterations of the same loop there exists some observed location observed library function calls we say that observes all library function calls iff for every f val val n in there is a such that definition full matching let p be a specification with observed locations loc let be the comparison function specified by p and let q be an implementation whose assignment statements are labeled by loc then implementation q fully matches specification p on a set of inputs i if and only if there exists a mapping function loc loc and an assignment to the variables such that c for all input valuations i where jp k jqk c full and observes all loop iterations and library function calls we note that procedure embed fig can easily check at line whether the current mapping observes all loop iterations and library function calls it is tedious for a teacher to exactly specify all possible loop iterations and library function calls used in different efficient implementations we add two additional constructs to the language l to simplify this specification task cover statement we extend l by two cover statements cover f vm e and cover v the first statement is the same as the statement observefun f vm e except that we allow the embedding witness to not map to any location in the implementation this enables the teacher to specify that function f vm may appear in the implementation the second statement allows to map to a location that appears at most v times for each appearance of where v is the current value of the specified variable thus cover v enables the teacher to cover any loop with up to v iterations example now we present examples for efficient implementations and and specification es for the anagram problem fig the teacher observes computed values on lines and and uses a choice on line to choose if implementations count the number of characters in each string or decrement one number from another also the teacher allows up to two library function calls and two loops with at most iterations defined by cover statements on lines and extensions in this section we discuss useful extensions to the core material presented above these extensions are part of our implementation but we discuss them separately to make the presentation easier to follow mapping according to definition of trace embedding an embedding witness maps one implementation location to a specification location it constructs a mapping however it is possible that a student splits a computation of some value over multiple locations for example 
in the implementation stated in fig the student removes a character from a string across three different locations on lines and depending on the location of the removed character in the string this requires to map a single location from the specification to multiple locations in the implementation for this reason we extend the notion of trace embedding to mappings loc where for all it is easy to extend procedure embed fig to this setting the potential graph g is also helpful to enumerate every possible mapping however it is costly and unnecessary to search for arbitrary mappings we use heuristics to consider only a few mappings for example one of the heuristics in our implementation checks if the same variable is assigned in different branches of an in example for all three locations there is an assignment to variable cp although mappings may seem more powerful we point out that the teacher can always write a specification that is more succinct than the implementation of the student the above described mappings provide enough expressivity to the teacher behaviour trace embedding requires equal values in the same order in the specification and implementation traces however an implementation can use a library function with behaviour the values returned by a random generator or the iteration order over a set data structure for such library functions we eliminate by fixing one particular behaviour we fix the values returned by a random generator or the iteration order over a set during program instrumentation these fixes do not impact functionally correct programs because they can not rely on some behaviour but allow us to apply our matching techniques implementation and experiments we now describe our implementation and present an experimental evaluation of our framework more details on our experiments can we found on the website experimental setup our implementation of algorithm matches fig is in c and analyzes c programs implementations and specifications are in c we used microsoft s roslyn compiler framework for instrumenting every to record value during program execution data we used preexisting problems from as mentioned in the anagram problem where students are asked to test if two strings could be permuted to become equal the issorted problem where students are asked to test if the input array is sorted and the caesar problem where students are asked to apply caesar cipher to the input string we have chosen these specific problems because they had a high number of student attempts diversity in algorithmic strategies and a problem was explicitly stated for many problems on platform students have to guess the problem from failing examples we also created a new course on the platform with programming problems these problems were assigned as a homework to students in a second year undergraduate course we created this course to understand performance related problems that cs students make as opposed to regular users who might not have previous programming experience we encouraged our students to write efficient implementations by giving more points for performance efficiency than for mere functional correctness we omit the description of the problems here but all descriptions are available on the original course page methodology in the following we describe the methodology by which we envision the technique in the paper to be used the teacher maintains a set of efficient and inefficient specifications a new student implementation is checked against all available specifications if the 
implementation matches some specification the associated feedback is automatically provided to the student otherwise the teacher is notified that there is a new unmatched implementation the teacher studies the implementation and identifies one of the following reasons for its failure to match any existing specification i the implementation uses a new strategy not seen before in this case the teacher creates a new specification ii the existing specification for the strategy used in the implementation is too specific to capture the implementation in this case the teacher refines that existing specification this overall process is repeated for each unmatched implementation new specification a teacher creates a new specification using the following steps i copy the code of the unmatched implementation ii annotate certain values and function calls with observe statements iii remove any unnecessary code not needed in the specification from the implementation iv identify input values for the dynamic analysis for matching v associate a feedback with the specification specification refinement to refine a specification the teacher identifies one of the following reasons as to why an implementation did not match it i the implementation differs in details specified in the specification ii the b time required to specifications a of required inspection steps of matched implementations of matched implementations time min c of required inspection steps d time required to specifications longestequal doublechar longestword runlength vigenere basetobase catdog minimaldelete commonelement c of required inspection steps of matched implementations of inspection steps time min of inspection steps of matched implementations of inspection steps of matched implementations of matched implementations anagram issorted caesar d time required to specifications tableaggsum intersection reverselist sortingstrings minutesbetween maxsum median digitpermutation coins time min figure the number of inspection steps and time required to completely specify assignments problem name correct implement inefficient implement n s i nd anagram issorted caesar doublechar longestequal longestword runlength vigenere basetobase catdog minimaldelete commonelement tableaggsum intersection reverselist sortingstrings minutesbetween maxsum median digitpermutation coins ls performance avg max os oi m i table list of all assignments with the experimental results specification observes more values than those that appear in the implementation iii the implementation uses different data representation in case i the teacher adds a new nondeterministic choice and if necessary observes new values or function calls in case ii the teacher observes less values and in case iii the teacher creates or refines a custom input values our dynamic analysis approach requires the teacher to associate input values with specifications these input values should cause the corresponding implementations to exhibit their behavior otherwise an inefficient implementation might behave similar to an efficient implementation and for this reason match the specification of the efficient implementation this implies that trivial inputs should be avoided for example two strings with unequal lengths constitute a trivial input for the counting strategy since each of its three implementations fig then exit immediately similarly providing a sorted input for the sorting strategy is meaningless we remark that it is easy for a teacher who understands the various strategies to provide good 
input values granularity of feedback we want to point out that the granularity of a feedback depends on the teacher for example in a programming problem where sorting the input value is an inefficient strategy the teacher might not want to distinguish between different sorting algorithms as they do not require a different feedback however in a programming problem where students are asked to implement a sorting algorithm it makes sense to provide a different feedback for different sorting algorithms evaluation we report results on the problems discussed above in table results from manual code study we first observe that a large number of students managed to write a functionally correct implementation on most of the problems umn correct implementations this shows that succeeds in guiding students towards a correct solution our second observation is that for most problems a large fraction of implementations is inefficient column inefficient implementations especially for anagram problem this shows that although students manage to achieve functional correctness efficiency is still an issue recall that in our homework the students were explicitly asked and given extra points for efficiency we also observe that for all except two problems there is at least one inefficient algorithmic strategy and for most problems there are several inefficient algorithmic strategies column n these results highly motivate the need for a tool that can find inefficient implementations and also provide a meaningful feedback on how to fix the problem precision and expressiveness for each programming assignment we used the above described methodology and wrote a specification for each algorithmic strategy both efficient and inefficient we then manually verified that each specification matches all implementations of the strategy hence providing desired feedback for implementations this shows that our approach is precise and expressive enough to capture the algorithmic strategy while ignoring low level implementation details teacher effort to provide manual feedback to students the teacher would have to go through every implementation and look at its performance characteristics in our approach the teacher has to take a look only at a few representative implementations in column s we report the total number of inspection steps that we required to fully specify one programming problem the number of implementations that the teacher would had to go through to provide feedback on all implementations for the problems the teacher would only have to go through out of or around implementations to provide full feedback fig shows the number of matched implementations with each inspection step as well the time it took us to all specifications we measured the time it takes from seeing an unmatched implementation until a matching specification for it in column ls we report the largest ratio of specification and average matched implementation in terms of lines of code we observe that in half of the cases the largest specification is about the same size or smaller than the average matched implementation furthermore the number of the input values that need to be provided by the teacher is across all problems column i in all but one problem issorted one set of input values is used for all specifications also in about one third of the specifications there was no need for variables and the largest number used in one specification is column nd overall our approach requires considerably less teacher effort than providing manual feedback 
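As an illustration of the matching machinery evaluated above, the sketch below shows one way the trace-embedding check behind the embed and matches procedures could be realized. It is a simplified, hypothetical rendering rather than the authors' implementation (which is written in C# on top of Roslyn): it handles a single trace per program, uses plain value equality instead of teacher-supplied comparison functions, ignores loop-iteration and library-call observation, and enumerates injective mappings by brute force where the paper enumerates maximum bipartite matchings of the potential-pair graph. The names project, is_subsequence and embed are illustrative only.

from itertools import permutations

def project(trace, locations):
    # keep only (location, value) pairs whose location is in the given set
    return [(loc, val) for (loc, val) in trace if loc in locations]

def is_subsequence(spec_pairs, impl_pairs, full=False):
    # spec_pairs / impl_pairs: lists of (location, value) pairs; specification
    # locations are assumed to have been renamed through the candidate mapping
    if full and len(spec_pairs) != len(impl_pairs):
        return False
    i = 0
    for pair in impl_pairs:
        if i < len(spec_pairs) and pair == spec_pairs[i]:
            i += 1
    return i == len(spec_pairs)

def embed(spec_trace, impl_trace, spec_locs, impl_locs, full=False):
    # potential-pair graph: q is a candidate image of p only if the
    # single-location projections already embed into each other
    potential = {}
    for p in spec_locs:
        p_vals = [v for (l, v) in spec_trace if l == p]
        potential[p] = [q for q in impl_locs
                        if is_subsequence([(q, v) for v in p_vals],
                                          project(impl_trace, {q}), full)]
    # enumerate injective mappings consistent with the potential pairs
    # (brute force shown here; the paper uses maximum bipartite matchings)
    for image in permutations(impl_locs, len(spec_locs)):
        mapping = dict(zip(spec_locs, image))
        if any(mapping[p] not in potential[p] for p in spec_locs):
            continue
        renamed = [(mapping[l], v) for (l, v) in spec_trace]
        if is_subsequence(renamed, project(impl_trace, set(image)), full):
            return mapping  # an embedding witness
    return None

On example traces, embed returns a dictionary mapping specification locations to implementation locations whenever the observed specification values appear, in order, under some injective renaming of implementation locations, and None otherwise.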
performance we plan to integrate our framework in a mooc platform so performance as for most web applications is critical our implementation consists of two parts the first part is the execution of the implementation and the specification usually small programs on relatively small inputs and obtaining execution traces which is in most cases neglectable in terms of performance the second part is the embed algorithm as discussed in the challenge consists in finding an embedding witness with os observed variables in the specification and oi obi served variables in the implementation there are s possible injective mapping functions for the sortingstrings problem that gives possible mapping functions oi os however our algorithm reduces this huge search space by constructing a bipartite graph g of potential mappings pairs in m we report the number of mapping functions that our tool had to explore for sortingstrings only different mapping functions had to be explored for all values os oi and m we report the maximal number across all specifications in the last column we state the total execution time required to decide if one implementation matches the specification average and maximal note that this time includes execution of both programs exploration of all assignments to boolean variables and finding an embedding witness our tool runs in most cases under half a second per implementation these results show that our tool is fast enough to be used in an interactive teaching environment threats to validity unsoundness our method is unsound in general since it uses a dynamic analysis that explores only a few possible inputs however we did not observe any unsoundness in our large scale experiments if one desires provable soundness an embedding witness could be used as a guess for a simulation relation that can then be formally verified by other techniques otherwise a student who suspects an incorrect feedback can always bring it to the attention of the teacher program size we evaluated our approach on introductory programming assignments although questions about applicability to larger programs might be raised our goal was not to analyze arbitrary programs but rather to develop a framework to help teachers who teach introductory programming with providing performance feedback currently a manual and task difficulty of the specification language although we did not perform case study with instructors we report our experiences with using the proposed language we would also like to point out that writing specifications is a investment which could be performed by an experienced personnel related work automated feedback there has been a lot of work in the area of generating automated feedback for programming assignments this work can be classified along three dimensions a aspects on which the feedback is provided such as functional correctness performance characteristics or modularity b nature of the feedback such as counterexamples bug localization or repair suggestions and c whether static or dynamic analysis is used ihantola present a survey of various systems developed for automated grading of programming assignments the majority of these efforts have focussed on checking for functional correctness this is often done by examining the behavior of a program on a set of test inputs these test inputs can be manually written or automatically generated there has only been little work in testing for properties the assyst system uses a simple form of tracing for counting execution steps to gather performance 
measurements the system counts the number of evaluations done which can be used for very coarse complexity analysis the authors conclude that better error messages are the most important area of improvement the ai community has built tutors that aim at bug localization by comparing source code of the student s and the teacher s programs laura converts the teacher s and the student s program into a graph based representation and compares them heuristically by applying program transformations while reporting mismatches as potential bugs talus matches a student s attempt with a collection of teacher s algorithms it first tries to recognize the algorithm used and then tentatively replaces the expressions in the student s attempt with the recognized algorithm for generating correction feedback in contrast we perform trace comparison instead of source code comparison which provides robustness to syntactic variations striewe and goedicke have proposed localizing bugs by trace comparisons they suggested creating full traces of program behavior while running test cases to make the program behavior visible to students they have also suggested automatically comparing the student s trace to that of a sample solution for generating more directed feedback however no implementation has been reported we also compare the student s trace with the teacher s trace but we look for similarities as opposed to differences recently it was shown that automated techniques can also provide repair based feedback for functional correctness singh et al s sat solving based technology can successfully generate feedback of up to corrections on around of all incorrect solutions from an mit introductory programming course in about seconds on average while test inputs provide guidance on why a given solution is incorrect and bug localization techniques provide guidance on where the error might be repairs provide guidance on how to fix an incorrect solution we also provide repair suggestions that are manually associated with the various teacher specifications but for performance based aspects furthermore our suggestions are not necessarily restricted to small fixes

performance analysis the programming languages and software engineering communities have explored various kinds of techniques to generate performance related feedback for programs symbolic execution based techniques have been used for identifying performance related issues the speed project investigated the use of static analysis techniques for estimating symbolic computational complexity of programs goldsmith et al used dynamic analysis techniques for measuring empirical computational complexity the toddler tool reports a specific pattern of performance bugs namely computations with repetitive and similar memory access patterns the cachetor tool reports memoization opportunities by identifying operations that generate identical values in contrast we are interested in not only identifying whether or not there is a performance issue but also identifying its root cause and generating repair suggestions

references
http
Making programs efficient. http
Microsoft Roslyn CTP. http
Pex for fun. http
Adam and Laurent. LAURA, a system to debug student programs. Artificial Intelligence.
Bose, Buss, and Lubiw. Pattern matching for permutations. Information Processing Letters.
Burnim, Jalbert, Stergiou, et al. Looper: lightweight detection of infinite loops at runtime. In ASE.
Goldsmith, Aiken, and Wilkerson. Measuring empirical computational complexity. In FSE.
Gulwani. Example-based learning in computer-aided STEM education. To appear in Communications of the ACM.
Gulwani, Mehra, and Chilimbi. SPEED: precise and efficient static estimation of program computational complexity. In
POPL.
Gulwani and Zuleger. The reachability-bound problem. In PLDI.
Gupta, Henzinger, Majumdar, Rybalchenko, and Xu. Proving non-termination. In POPL.
Ihantola, Ahoniemi, Karavirta, et al. Review of recent systems for automatic assessment of programming assignments. In Proceedings of the Koli Calling International Conference on Computing Education Research (Koli Calling). ACM, New York, NY, USA.
Jackson and Usher. Grading student programs using ASSYST. In SIGCSE.
Masters. A brief guide to understanding MOOCs. The Internet Journal of Medical Education.
Milner. An algebraic definition of simulation between programs. Technical report, Stanford, CA, USA.
Murray. Automatic program debugging for intelligent tutoring systems. Computational Intelligence.
Nguyen and Xu. Cachetor: detecting cacheable data to remove bloat. In Proceedings of the Joint Meeting on Foundations of Software Engineering. ACM, New York, NY, USA.
Nistor, Song, Marinov, and Lu. Toddler: detecting performance problems via similar memory-access patterns. In Proceedings of the International Conference on Software Engineering (ICSE). IEEE Press, Piscataway, NJ, USA.
Saikkonen, Malmi, and Korhonen. Fully automatic assessment of programming exercises. In Proceedings of the Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE). ACM, New York, NY, USA.
Singh, Gulwani, and Solar-Lezama. Automated feedback generation for introductory programming assignments. In PLDI.
Striewe and Goedicke. Using run time traces in automated programming tutoring. In ITiCSE.
Striewe and Goedicke. Trace alignment for automated tutoring. In CAA.
Tillmann and de Halleux. Pex: white box test generation for .NET. In TAP.
Tillmann, de Halleux, Xie, Gulwani, and Bishop. Teaching and learning programming and software engineering via interactive gaming. In ICSE.
Uno. Algorithms for enumerating all perfect, maximum and maximal matchings in bipartite graphs. In ISAAC.
Zuleger, Gulwani, Sinn, and Veith. Bound analysis of imperative programs with the size-change abstraction. In SAS.
nov bayesian identification for selecting influenza mitigation strategies pieter timothy diederik jelena kristof philippe and ann artificial intelligence lab department of computer science vrije universiteit brussel brussels belgium ku leuven university of leuven department of microbiology and immunology rega institute for medical research clinical and epidemiological virology leuven belgium november abstract pandemic influenza has the epidemic potential to kill millions of people while various preventive measures exist vaccination and school closures deciding on strategies that lead to their most effective and efficient use remains challenging to this end epidemiological models are essential to assist decision makers in determining the best strategy to curve epidemic spread however models are computationally intensive and therefore it is pivotal to identify the optimal strategy using a minimal amount of model evaluations additionally as epidemiological modeling experiments need to be planned a computational budget needs to be specified a priori consequently we present a new sampling method to optimize the evaluation of preventive strategies using fixed budget identification algorithms we use epidemiological modeling theory to derive knowledge about the reward distribution which we exploit using bayesian identification algorithms thompson sampling and bayesgap we evaluate these algorithms in a realistic experimental setting and demonstrate that it is possible to identify the optimal strategy using only a limited number of model evaluations times faster compared to the uniform sampling method the predominant technique used for epidemiological decision making in the literature finally we contribute and evaluate a statistic for thompson sampling to inform the decision makers about the confidence of an arm recommendation introduction the influenza virus is responsible for the deaths of half of a million people each year in addition seasonal influenza epidemics cause a significant economic burden while transmission is primarily local a newly emerging variant may spread to pandemic proportions in a naive fully susceptible host population pandemic influenza occurs less frequently than seasonal influenza but the outcome with respect to morbidity and mortality can be much more severe potentially killing millions of people worldwide consequently it is essential to study mitigation strategies to control influenza pandemics for influenza different preventive measures exist vaccination social measures school closures and travel restrictions and antiviral drugs however the efficiency of strategies greatly depends on the availability of preventive compounds as well as on the characteristics of the targeted epidemic furthermore governments typically have limited resources to implement such measures therefore it remains challenging to formulate public health strategies that make effective and efficient use of these preventive measures within the existing resource constraints epidemiological models compartment models and models are essential to study the effects of preventive measures in silico while models are usually associated with a greater model complexity and computational cost than compartment models they allow for a more accurate evaluation of preventive strategies to capitalize on these advantages and make it feasible to employ models it is essential to use the available computational resources as efficiently as possible in the literature a set of possible preventive strategies is typically 
evaluated by simulating each of the strategies an equal number of times however this approach is inefficient to identify the optimal preventive strategy as a large proportion of computational resources will be used to explore strategies furthermore a consensus on the required number of model evaluations per strategy is currently lacking moreover as we show in this paper this number depends on the hardness of the evaluation problem for this reason we propose to combine epidemiological models with bandits in a preliminary study the potential of bandits was explored in a regret minimization setting using default strategies and however in this work we recognize that epidemiological modeling experiments need to be planned and that a computational budget needs to be specified a priori within this constraint we aim to minimize the number of required model evaluations to determine the most promising preventive strategy therefore we present a novel approach formulating the evaluation of preventive strategies as a identification problem using a fixed budget of model evaluations as running an model is computationally intensive minutes to hours depending on the complexity of the model minimizing the number of required model evaluations reduces the total time required to evaluate a given set of preventive strategies this renders the use of models attainable in studies where it would otherwise not be computationally feasible additionally reducing the number of model evaluations can also free up computational resources in studies that already use models capacitating researchers to explore different model scenarios considering a wider range of scenarios increases the confidence about the overall utility of preventive strategies in our model an arm s reward distribution corresponds to the epidemic size distribution of the epidemiological model we employ epidemiological modeling theory to derive that this distribution is approximately gaussian and exploit this knowledge using bayesian identification algorithms in this paper we contribute a novel method to evaluate preventive strategies as a identification problem this method enables decision makers to obtain recommendations in a reduced number of model evaluations and supports their decision process by providing a confidence recommendation statistic in section we employ concepts from epidemiological model theory and we adapt bayesian identification algorithms to incorporate this knowledge in section we evaluate these algorithms in an experimental setting where we aim to find the best vaccine allocation strategy in a realistic simulation environment that models seattle s social network we repeat the experiment for a wide range of basic reproduction numbers the number of infections that is by average generated by one single infection that are typically used in the influenza literature the obtained experimental results show that our approach is able to identify the best preventive strategy times faster compared to uniform sampling the predominant technique used for epidemiological decision making in the literature furthermore we contribute section and evaluate section a statistic to inform the decision makers about the confidence of a particular recommendation background pandemic influenza and vaccine production the primary preventive strategy to mitigate seasonal influenza is to produce vaccine prior to the epidemic anticipating the virus strains that are expected to circulate this vaccine pool is used to inoculate the population before the start of the 
epidemic while seasonal influenza may have a restricted susceptible population due to vaccination and immunity a newly emerging strain can become pandemic by spreading rapidly among naive human hosts worldwide while it is possible to stockpile vaccines to prepare for seasonal influenza this is not the case for new variants of influenza viruses as the vaccine should be specifically tailored to the virus that is the source of the pandemic therefore before an appropriate vaccine can be produced the responsible virus needs to be identified hence vaccine will be available only in limited supply at the beginning of the pandemic in addition production problems can result in vaccine shortages when the number of vaccine doses is limited it is imperative to identify an optimal vaccine allocation strategy modeling influenza there is a long tradition to use models to study influenza epidemics as it allows for a more accurate evaluation of preventive strategies a model that has been the driver for many high impact research efforts is flute flute implements a contact model where the population is divided into communities of households the population is organized in a hierarchy of social mixing groups where the contact intensity is inversely proportional with the size of the group closer contact between members of a household than between colleagues additionally flute implements an individual disease progression model that associates different disease stages with different levels of infectiousness to support the evaluation of preventive strategies flute implements the simulation of therapeutic interventions vaccines antiviral compounds and interventions school closure case isolation household quarantine bandits and identification the bandit game concerns a bandit a slot machine with k levers where each arm ak returns a reward rk when it is pulled rk represents a sample from ak s reward distribution a common use of the bandit game is to pull a sequence of arms such that the cumulative regret is minimized to fulfill this goal the player needs to carefully balance between exploitation choose the arms with the highest expected reward and exploration explore the other arms to potentially identify even more promising arms in this paper the objective is to recommend the best arm the arm with the highest average reward after a fixed number of arm pulls this is referred to as the fixed budget identification problem an instance of the problem for a given budget t the objective is to minimize the simple regret where is the average reward of the recommended arm aj at time t simple regret is inversely proportional to the probability of recommending the correct arm related work in this work we recognize that a computational budget needs to be specified a priori to meet the realities associated with high performance computational infrastructure for this reason we consider the fixed budget identification setting in contrast to techniques that attempt to identify the best arm with a predefined confidence racing strategies strategies that exploit the confidence bound of the arms means and more recently fixed confidence identification algorithms while other algorithms exist to rank or select bandit arms bestarm identification is best approached using adaptive sampling methods as the ones we study in this paper moreover the use of identification methods clears the way for interesting future work with respect to evaluating preventive strategies while considering multiple objectives see section methods we formulate the 
evaluation of preventive strategies as a bandit game with the aim of identifying the best arm using a fixed budget of model evaluations the presented method is generic with respect to the type of epidemic that is modeled pathogen contact network preventive strategies the method is evaluated in the context of pandemic influenza in the next section preventive bandits definition a stochastic epidemiological model e is defined in terms of a model configuration c c and can be used to evaluate a preventive strategy p evaluating the model e results in a sample of the model s outcome distribution outcome e c p where c c and p p note that a model configuration c c describes the complete model environment both aspects inherent to the model flute s mixing model and options that the modeler can provide population statistics vaccine properties the result of a model evaluation is referred to as the model outcome prevalence proportion of symptomatic individuals morbidity mortality societal cost our objective is to find the optimal preventive strategy from a set of alternative strategies pk p for a particular configuration c of a stochastic epidemiological model where corresponds to the studied epidemic definition a preventive bandit has k pk arms pulling arm pk corresponds to evaluating pk by running a simulation in the epidemiological model e pk a preventive bandit is thus a bandit that has preventive strategies as arms with reward distributions corresponding to the outcome distribution of a stochastic epidemiological model e pk while the parameters of the reward distribution are known the parameters of the epidemiological model it is intractable to determine the optimal reward analytically from the epidemiological model hence we must learn about the outcome distribution via interaction with the epidemiological model outcome distribution as previously defined the reward distribution associated with a preventive bandit s arm corresponds to the outcome distribution of the epidemiological model that is evaluated when pulling that arm therefore employing insights from epidemiological modeling theory allows us to specify prior knowledge about the reward distribution it is well known that a disease outbreak has two possible outcomes either it is able to spread beyond a local context and becomes a fully established epidemic or it fades out most stochastic epidemiological models reflect this reality and hence its epidemic size distribution is bimodal when evaluating preventive strategies the objective is to determine the preventive strategy that is most suitable to mitigate an established epidemic as in practice we can only observe and act on established epidemics epidemics that faded out in simulation would bias this evaluation consequently it is necessary to focus on the mode of the distribution that is associated with the established epidemic therefore we censor discard the epidemic sizes that correspond to the faded epidemic the size distribution that remains the one that corresponds with the established epidemic is approximately gaussian in this study we consider a scaled epidemic size distribution the proportion of symptomatic infections hence we can assume bimodality of the full size distribution and an approximately gaussian size distribution of the established epidemic we verified experimentally that these assumptions hold for all the reward distributions that we observed in our experiments see section to censor the size distribution we use a threshold that represents the number of infectious 
individuals that are required to ensure an outbreak will only fade out with a low probability epidemic threshold for heterogeneous host populations a population with a significant variance among individual transmission rates as is the case for influenza epidemics the number of secondary infections can be accurately modeled using a negative binomial offspring distribution nb where is the basic reproductive number and is a dispersion parameter that specifies the extent of heterogeneity the probability of epidemic extinction pext can be computed by solving g s s where g s is the probability generating function pgf of the offspring distribution for an epidemic where individuals are targeted with preventive measures vaccination in our use case we obtain the following pgf s g s popc popc where popc signifies the random proportion of controlled individuals from pext we can compute a threshold to limit the probability of extinction to a cutoff identification with a fixed budget our objective is to identify the best preventive strategy the strategy that minimizes the expected outcome out of a set of preventive strategies for a particular configuration c using a fixed budget t of model evaluations successive rejects was the first algorithm to solve the identification in a fixed budget setting for a bandit successive rejects operates in k phases at the end of each phase the arm with the lowest average reward is discarded thus at the end of phase k only one arm survives and this arm is recommended at phase f k each arm that is still available is played mf mf times where with t mf k f log k k log k k successive rejects serves as a useful baseline however it has no support to incorporate any prior knowledge bayesian identification algorithms are able to take into account such knowledge by defining an appropriate prior and posterior on the arms reward distribution as we will show such prior knowledge can increase the identification accuracy additionally at the time an arm is recommended the posteriors contain valuable information that can be used to formulate a variety of statistics helpful to assist decision makers we consider two bayesian algorithms bayesgap and thompson sampling for thompson sampling we derive a statistic based on the posteriors to inform the decision makers about the confidence of an arm recommendation the probability of success as we established in the previous section each arm of our preventive bandit has a reward distribution that is approximately gaussian with unknown mean and variance to make our method generic for any type of preventive bandit problem we assume an uninformative jeffreys prior on honda and takemura demonstrate that this prior leads to the following posterior on at the nth k pull s xk nk xk nk sk nk tnk sk nk where xk nk is the reward mean sk nk is the sum of squares sk nk nk x rk m xk nk and tnk is the standard student with nk degrees of freedom bayesgap is a bayesian algorithm the algorithm requires that for each arm ak a upper bound uk t and lower bound lk t is defined on the posterior of at each time step using these bounds the gap quantity bk t max ul t lk t is defined for each arm ak bk t represents an upper bound on the simple regret as defined in section at each step t of the algorithm the arm j t that minimizes the gap quantity bk t is compared to the arm j t that maximizes the upper bound uk t from j t and j t the arm with the highest confidence diameter uk t lk t is pulled the reward that results from this pull is observed and used to update ak s 
posterior when the budget is consumed the arm j argmin bj t t is recommended this is the arm that minimizes the simple regret bound over all times t t in order to use bayesgap in the preventive bandit setting we contribute bounds given our posteriors equation we define uk t t t lk t t t where t and t are the mean and standard deviation of the posterior of arm ak at time step t and is the exploration coefficient the amount of exploration that is feasible given a particular bandit game is proportional to the available budget and inversely proportional to the game s complexity this complexity can be modeled taking into account the game s hardness and the variance of the rewards following hoffman et al we define a hardness quantity x hk k with hardness hk max max considering the budget t hardness and a generalized reward variance over all arms we define s t theorem in the supplementary formally proves that using these bounds results in a probability of simple regret that asymptotically reaches the exponential lower bound presented by hoffman et al supplementary information is available at the end of this manuscript as both and are unknown in order to compute these quantities need to be estimated firstly we estimate s upper bound by estimating as follows k max t t t t k as in hoffman et al secondly for we need a measure of variance that is representative for the reward distribution of all arms to this end when the arms are initialized we observe their sample variance and compute their average pk s k k as our bounds depend on the standard deviation t of the posterior each arm s posterior needs to be initialized times to ensure that t is defined this initialization also ensures proper posteriors thompson sampling is a reformulation of the thompson sampling algorithm such that it can be used in a context thompson sampling operates directly on the arms posterior of the mean at each time step thompson sampling obtains one sample for each arm s posterior the arm with the highest sample is pulled and its reward is subsequently used to update that arm s posterior while this approach has been proven highly successful to minimize cumulative regret as it balances the it is to identify the best arm to adapt thompson sampling to minimize simple regret thompson sampling increases the amount of exploration to this end an exploration probability needs to be specified at each time step one sample is obtained for each arm s posterior the arm atop with the highest sample is only pulled with probability with probability we repeat sampling from the posteriors until we find an arm that has the highest posterior sample and where atop when the arm is found it is pulled and the observed reward is used to update the posterior of the pulled arm when the available budget is consumed the arm with the highest average reward is recommended as thompson sampling only requires samples from the arms posteriors we can use the posteriors from equation as is to avoid improper posteriors each arm needs to be initialized times as specified in the previous subsection the reward distribution is censored we observe each reward but only consider it to update the arm s value when it exceeds the threshold when we receive a sample from the mode of the epidemic that represents the established epidemic probability of success the probability that an arm recommendation is correct presents a useful confidence statistic to support policy makers with their decisions as thompson sampling recommends the arm with the highest average reward the 
Probability of success. The probability that an arm recommendation is correct presents a useful confidence statistic to support policy makers in their decisions. As top-two Thompson sampling recommends the arm with the highest average reward, the probability of success is $P\big(\mu_I = \max_k \mu_k\big)$, where $\mu_I$ is the random variable that represents the mean of the recommended arm $a_I$. As we assume that the arms' reward distributions are independent, this probability can be computed using the recommended arm's posterior probability density function and the other arms' cumulative distribution functions:
$$P\Big(\mu_I = \max_k \mu_k\Big) = \int_{-\infty}^{\infty} p_{\mu_I}(x) \prod_{k \neq I} P(\mu_k < x)\, dx = \int_{-\infty}^{\infty} p_{\mu_I}(x) \prod_{k \neq I} F_{\mu_k}(x)\, dx.$$
As this integral cannot be computed analytically, we estimate it using Gaussian quadrature. It is important to notice that, while aiming for generality, we made some conservative assumptions: the reward distributions are approximated as Gaussian and the uninformative Jeffreys prior is used. These assumptions imply that the derived probability of success will be an underestimate of the actual recommendation success.
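As an illustration of this computation, the sketch below approximates the integral for independent Student-t posteriors. It substitutes SciPy's adaptive quadrature for the Gaussian quadrature used in the paper, and the per-arm posterior statistics in the example are hypothetical.

```python
import numpy as np
from scipy import stats, integrate

def success_probability(recommended, arm_stats):
    """Estimate P(mu_I = max_k mu_k) for independent Student-t posteriors.

    arm_stats: list of (mean, sum_of_squares, n_pulls) per arm, defining the
    non-standardized Student-t posterior on each arm's mean.
    """
    def posterior(k):
        mean, sum_sq, n = arm_stats[k]
        scale = np.sqrt(sum_sq / (n * (n - 1)))
        return stats.t(df=n - 1, loc=mean, scale=scale)

    rec = posterior(recommended)
    others = [posterior(k) for k in range(len(arm_stats)) if k != recommended]

    def integrand(x):
        val = rec.pdf(x)
        for post in others:
            val *= post.cdf(x)
        return val

    # Numerical integration over the real line; adaptive quadrature stands in
    # for the Gaussian quadrature used in the paper.
    prob, _ = integrate.quad(integrand, -np.inf, np.inf)
    return prob

if __name__ == "__main__":
    # Hypothetical posterior statistics: (mean reward, sum of squares, pulls).
    stats_per_arm = [(0.92, 0.004, 30), (0.90, 0.006, 30), (0.85, 0.010, 30)]
    print(success_probability(recommended=0, arm_stats=stats_per_arm))
```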
Experiments

We composed and performed an experiment in the context of pandemic influenza, where we analyse the mitigation strategy to vaccinate a population when only a limited number of vaccine doses is available (the rationale behind this scenario is discussed earlier). In our experiment, we extend the simulation environment presented in our earlier work on preventive bandits to accommodate a realistic setting to evaluate vaccine allocation: in contrast to that work, we consider a large and realistic social network (the city of Seattle) and a wide range of $R_0$ values. We consider the scenario where a pandemic is emerging in a particular geographical region and a vaccine becomes available, albeit in a limited number of doses. When the number of vaccine doses is limited, it is imperative to identify an optimal vaccine allocation strategy. In our experiment, we explore the allocation of vaccines over five different age groups (two groups of children, young adults, older adults, and the elderly), as presented by Chao et al., and we consider this experiment for a wide range of $R_0$ values.

Influenza model and configuration. The epidemiological model used in the experiments is the FluTE stochastic individual-based model. We consider the population of Seattle, United States; this population is realistic both with respect to the number of individuals and its community structure, and provides an adequate setting for the validation of vaccine strategies. On the first day of the simulated epidemic, randomly chosen individuals are seeded with an infection. The epidemic is simulated for a fixed number of days; during this time no more infections are seeded, thus all new infections established during the run time of the simulation result from the mixing between infectious and susceptible individuals. We assume no immunity towards the circulating virus variant. We choose the number of vaccine doses to allocate to be a fixed proportion of the population size. In this experiment we explore the efficacy of different vaccine allocation strategies, and we consider that only one vaccine variant is available. In the simulation environment, FluTE allows vaccine efficacy to be configured on three levels: efficacy to protect against infection when an individual is susceptible ($VE_{sus}$), efficacy to avoid an infected individual becoming infectious ($VE_{inf}$), and efficacy to avoid an infected individual becoming symptomatic ($VE_{sym}$); in our experiment we fix values for $VE_{sus}$, $VE_{inf}$ and $VE_{sym}$. The influenza vaccine only becomes fully effective a certain period after its administration, and the effectiveness increases gradually over this period; in our experiment we assume the vaccine effectiveness to build up exponentially over a period of some weeks. We perform our experiment for a set of $R_0$ values covering a range, in regular steps, that is considered representative for the epidemic potential of influenza pandemics; we refer to this set of values as the $R_0$ set. Note that the setting described in this subsection, in conjunction with a particular $R_0$ value, corresponds to a model configuration $c$ (as defined earlier). The computational complexity of FluTE simulations depends both on the size of the susceptible population and the proportion of the population that becomes infected; for the population of Seattle, a single simulation run took on the order of minutes (hardware details are in the supplementary information).

Formulating vaccine allocation strategies. We consider five age groups to which vaccine doses can be allocated: two groups of children, young adults, older adults, and the elderly. An allocation scheme can be encoded as a Boolean 5-tuple, where each position in the tuple corresponds to the respective age group. The Boolean value at a particular position in the tuple denotes whether vaccine should be allocated to the respective age group. When vaccine is to be allocated to a particular age group, this is done proportionally to the size of the population that is part of this age group. To decide on the best vaccine allocation strategy, we enumerate all possible combinations of this tuple; the tuple can be encoded as a binary number, and as such the different allocation strategies can be represented by integers.

An influenza preventive bandit. The influenza preventive bandit $B_{flu}$ has one arm per allocation strategy: each arm $a_k$ is associated with the allocation strategy whose integer encoding is $k$, given a model configuration $c$ (as defined earlier). When an arm $a_k$ of $B_{flu}$ is pulled, FluTE is invoked with the configuration $c$ and the vaccine allocation strategy $p_k \in P$ associated with the arm $a_k$. When FluTE finishes, it outputs the proportion of the population that experienced a symptomatic infection, $p_i$, from which the reward $r_k$ is computed.
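The following sketch shows how integer-encoded arms can be decoded into Boolean allocation tuples and turned into a pull of the preventive bandit. The simulator call is a placeholder for a FluTE run, and the reward definition (one minus the proportion of symptomatic infections) is an assumption made for illustration.

```python
from typing import Sequence

AGE_GROUPS = ["children_1", "children_2", "young_adults", "older_adults", "elderly"]

def decode_strategy(arm_index: int) -> tuple:
    """Map an integer-encoded arm to a Boolean allocation tuple (one flag per age group)."""
    return tuple(bool((arm_index >> i) & 1) for i in range(len(AGE_GROUPS)))

def allocate_doses(strategy: Sequence[bool], group_sizes: Sequence[int], total_doses: int):
    """Distribute the available doses over the selected age groups,
    proportionally to each selected group's population size."""
    selected = [size if flag else 0 for flag, size in zip(strategy, group_sizes)]
    total_selected = sum(selected)
    if total_selected == 0:
        return [0] * len(group_sizes)
    return [int(total_doses * size / total_selected) for size in selected]

def pull_arm(arm_index, group_sizes, total_doses, run_simulation):
    """One pull of the preventive bandit: run the simulator for the decoded
    allocation and return the reward (assumed here to be 1 - proportion symptomatic)."""
    strategy = decode_strategy(arm_index)
    doses = allocate_doses(strategy, group_sizes, total_doses)
    proportion_symptomatic = run_simulation(doses)  # placeholder for a FluTE invocation
    return 1.0 - proportion_symptomatic

# Example: enumerate all strategies for five age groups (2**5 integer-encoded arms).
all_arms = list(range(2 ** len(AGE_GROUPS)))
```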
Outcome distributions. To establish a proxy for the ground truth concerning the outcome distributions of the considered preventive strategies, all strategies were evaluated a large number of times for each of the $R_0$ values in the set. We use this ground truth as a reference to validate the correctness of the recommendations obtained throughout our experiments. This setting presents us with an interesting evaluation problem. To demonstrate this, we visualize the outcome distributions for two of the $R_0$ values in the figures below; the outcome distributions for the other $R_0$ values are shown in the supplementary information. Firstly, we observe that for different values of $R_0$ the distances between the top arms' means differ; additionally, the outcome distribution variances vary over the set of $R_0$ values. These differences produce distinct levels of evaluation hardness (see the hardness definition above) and demonstrate the setting's usefulness as a benchmark to evaluate preventive strategies. Secondly, we expect the outcome distribution to be bimodal; however, the probability to sample from the mode of the outcome distribution that represents the unestablished epidemic decreases as $R_0$ increases. This expectation is confirmed when we inspect the figures: the first shows a bimodal distribution for the lower $R_0$ value, while the second shows a unimodal outcome distribution for the higher $R_0$ value, as only samples from the established epidemic were obtained. Our analysis identified that the best vaccine allocation strategy is to allocate vaccine to school children, for all $R_0$ values in the set.

Figure: violin plots that depict the density of the outcome distribution (epidemic size) per vaccine allocation strategy, for two representative values of $R_0$.

Best-arm identification experiment. To assess the performance of the different best-arm identification algorithms (successive rejects, BayesGap and top-two Thompson sampling), we run each algorithm for all budgets in a fixed range. This evaluation is performed on the influenza bandit game that we defined earlier. For each budget, we run the algorithms repeatedly and report the recommendation success rate. In the previous section, the optimal vaccine allocation strategy was identified to be the school-children allocation for all $R_0$ values in the set; we thus consider a recommendation to be correct when it equals this vaccine allocation strategy. We evaluate the algorithms' performance with respect to each other and with respect to uniform sampling, the current state of the art to evaluate preventive strategies; the uniform sampling method pulls arm $a_u$ at each step $t$ of the given budget $T$, where the index $u$ is sampled from the uniform distribution over $\{1, \dots, K\}$. To consider different levels of hardness and to obtain insight into the effect of the unestablished outcome distribution, we perform this analysis for each $R_0$ value in the set. For the Bayesian best-arm identification algorithms, the prior specifications are detailed above. BayesGap requires an upper and a lower bound defined in terms of the used posteriors; in our experiments we use the upper bound $U_k(t)$ and lower bound $L_k(t)$ that were established earlier. Top-two Thompson sampling requires a parameter that modulates the amount of exploration; as it is important for best-arm identification algorithms to differentiate between the top two arms, we choose this parameter such that, in the limit, top-two Thompson sampling will explore the top two arms uniformly. We censor the reward distribution based on the outbreak threshold we defined earlier; this threshold depends on the basic reproductive number $R_0$ and the dispersion parameter, and is defined explicitly for each of our experimental settings. For the dispersion parameter we make a conservative choice according to the literature, and we fix the probability cutoff.

The figures below show the recommendation success rate for each of the identification algorithms for two representative $R_0$ values; the results for the other $R_0$ values are visualized in the supplementary information. The results for different values of $R_0$ clearly indicate that our selection of best-arm identification algorithms significantly outperforms the uniform sampling method: in our experiment, the uniform sampling method requires more than double the amount of evaluations to achieve a similar recommendation performance, and for the harder problem settings its recommendation uncertainty remains considerable even after consuming several times the budget required by top-two Thompson sampling. All identification algorithms require an initialization phase in order to output a recommendation: successive rejects needs to pull each arm at least once, while top-two Thompson sampling and BayesGap need to pull each arm a small number of times (details in the supplementary information); for this reason, these algorithms' performance can only be evaluated after this initialization phase. BayesGap's performance is on par with successive rejects, except for the hardest setting we studied. In comparison, top-two Thompson sampling consistently outperforms successive rejects after the initialization phase. Top-two Thompson sampling needs to initialize each arm's posterior with more pulls (double the amount required by uniform sampling and successive rejects); however, our experiments clearly show that none of the other algorithms reach any acceptable recommendation rate with so few pulls, thereby alleviating
concerns using a posterior success rate bg sr ttts uni budget figure in this figure we present the results for the experiment with each curve represents the rate of successful arm recommendations for a range of budgets a curve is shown for each of the considered algorithms bayesgap legend bg successive rejects legend sr thompson sampling legend ttts and uniform sampling legend uni in section we derived a statistic to express the probability of success ps concerning a recommendation made by thompson sampling we analyzed this probability for all the thompson sampling recommendations that were obtained in the experiment described above to provide some insights on how this statistic can be used to support policy makers we show the ps values of all thompson sampling recommendations for in figure figures for the other values in section of the supplementary information figure indicates that ps closely follows recommendation correctness and that success rate bg sr ttts uni budget figure in this figure we present the results for the experiment with each curve represents the rate of successful arm recommendations for a range of budgets a curve is shown for each of the considered algorithms bayesgap legend bg successive rejects legend sr thompson sampling legend ttts and uniform sampling legend uni the uncertainty of ps is inversely proportional to the size of the available budget additionally in figure figures for the other values in section of the supplementary information we confirm that ps underestimates recommendation correctness these observations indicate that ps has the potential to serve as a conservative statistic to inform policy makers about the confidence of a particular recommendation and thus can be used to define meaningful cutoffs to guide policy makers in their interpretation of the recommendation of preventive strategies conclusion we formulate the objective to select the best preventive strategy in an individualbased model as a fixed budget identification problem an experiment was set up to evaluate this setting in the context of pandemic influenza to assess the best arm recommendation performance of the preventive bandit we report a success rate over independent bandit runs probability of success success failure budget figure thompson sampling was run times for each budget for the experiment with for each of the recommendations ps was computed these ps values are shown as a scatter plot where each point s color reflects the correctness of the recommendation see legend we demonstrate that it is possible to efficiently identify the optimal preventive strategy using only a limited number of model evaluations even if there is a large number of preventive strategies to consider compared to uniform sampling our method is able to recommend the best preventive strategy reducing the number of required model evaluations times additionally we show that by using bayesian identification algorithms statistics can be defined to support policy makers with their decisions as such we are confident that our method has the potential to be used as a decision support tool for mitigating epidemics this will enable the use of models in studies where it would otherwise be computationally too prohibitive and allow researchers to explore a wider variety of model scenarios we identify two particular directions for future work firstly while our method is evaluated in the context of pandemic influenza it is important to stress that it can be used to evaluate preventive strategies for other infectious diseases 
since recently a dengue vaccine is available and the optimal allocation of this vaccine remains an important research topic we recognize that dengue epidemics are an interesting use case secondly in this paper our empirical success rate estimated probability of succes figure thompson sampling was run times for each budget for the experiment with for each of the recommendations ps was computed the ps values were binned to in steps of per bin we thus have a set of bernoulli trials for which we show the empirical success rate blue scatter and the confidence interval blue confidence bounds the orange reference line denotes perfect correlation between the empirical success rate and the estimated probability of success preventive bandits only learn with respect to a single model outcome the proportion of symptomatic infections however for many pathogens it is interesting to incorporate multiple objectives morbidity mortality cost in the future we aim to use bandits in contrast to the current preventive bandits with this approach we plan to learn a coverage set containing an optimal strategy for every possible preference profile the decision makers might have statement with respect to the reproducibility of our research if this manuscript is accepted all source code used in our experiments will be made publicly available acknowledgments pieter libin was supported by a phd grant of the fwo fonds wetenschappelijk onderzoek vlaanderen and the vub research council timothy verstraeten was supported by a phd grant of the fwo fonds wetenschappelijk onderzoek vlaanderen and the vub research council kristof theys was supported by a postdoctoral grant of the fwo fonds wetenschappelijk onderzoek vlaanderen diederik roijers was supported by a postdoctoral grant of the fwo fonds wetenschappelijk onderzoek vlaanderen the computational resources and services used in this work were provided by the hercules foundation and the flemish government department krediet aan navorsers theys supplementary information introduction in this supplementary information we provide a proof for bayesgap s simple regret bound section furthermore we provide additional figures that were omitted from the main manuscript figures for the outcome epidemic size distributions section figures for the experimental success rates section figures for the probabilities of success ps values per budget section and figures for the binned distribution over ps values section finally in section we describe the computational resources that were used to execute the simulations bayesgap simple regret bound for posteriors lemma consider a jeffrey s prior over the parameters of the gaussian reward distributions then the posterior mean of arm k has the following nonstandardized at pull nk p nk sk nk tnk nk sk nk k where nk is the number of pulls for arm k nk is the sample mean and sk nk is the sum of squares proof this lemma was presented and proved by honda et al lemma consider a random variable x with variance and the probability that x is within a radius from its mean can then be written as p where c c is the normalizing constant of a standard proof consider a random variable z and then the probability of z being greater than p z r z q is z dz dz z z q dz c z c z dz p c c z p c q the probability of z being greater than the lower bound is the integral over its probability density function starting from that lower bound in the integral we introduce a factor which is greater than for the considered values of z we then take note of the following derivative 
and use this result to analytically solve the integral d dx x finally we solve the primitive from to infinity next we apply a union bound to obtain q a lower bound on the probability that the magnitude of z is smaller than p r finally consider z p r p c p c lemma consider a bandit problem with budget t and k arms let uk t and lk t be upper and lower bounds that hold for all times t t and all arms k k with probability t finally let gk be a monotonically k p hk decreasing function such that uk t lk t gk nk t and t we can then bound the simple regret rt as p rt k x t x t proof first we define e as the event in which every mean is bounded by its associated bounds uk t and lk t for each time step e k t lk t uk t the probability of deviating from a single bound at time t is by definition t k p p t t when applying the union bound we obtain p e the probability of regret is equal to the probability of the event e occuring as proven in theorem consider a gaussian bandit problem with budget t and unknown variance let be a generalization of that variance over all arms and uk t and lk t respectively be the upper and lower bounds for each arm k at time t where uk t t t and lk t t t the simple regret is then bounded as p nk t nk t nk t c nk t p rt nk t nk t min nk t k t o min nk t k x t x k t where s t note that when min nk t the bound decreases exponentially in simik t lar to the problem setting presented in intuitively this result makes sense as for known variances a gaussian can be used to describe the posterior means and indeed as the number of pulls approaches infinity our converge to gaussians proof according to lemma the posterior over the average reward is a distribution with scaling factor t nk t p sk nk t therefore uk t lk t t p nk t nk t t q nk t nk t nk t sk nk t s sk nk t nk t nk t q nk t t gk nk t k t t for arm k at time t with the variance of a equals nkn t scaling factor t as described in lemma we denote the variance over rewards per arm as t and define gk nk t to be the upper bound expression as specified in lemma next we compute the inverse of gk n m t we generalize t to a variance representative for all approximating p the hardness of the problem as k hk where hk is the hardness defined in we obtain as follows k x t k s t finally as the conditions in lemma on the function gk are now satisfied the simple regret bound can be obtained using lemma and the probability that the true mean is out of the bounds uk t and lk t given in lemma in the main paper we choose to be the mean over all variances g g obtained after the initialization phase vaccine allocation strategy vaccine allocation strategy outcome epidemic size distributions b outcome distributions for vaccine allocation strategy d outcome distributions for vaccine allocation strategy a outcome distributions for c outcome distributions for f outcome distributions for vaccine allocation strategy vaccine allocation strategy e outcome distributions for epidemic size epidemic size epidemic size epidemic size epidemic size epidemic size bandit run success rates success rate success rate bg sr ttts uni budget bg sr ttts uni budget success rate bg sr ttts uni budget d bandit run results for budget bg sr ttts uni budget c bandit run results for b bandit run results for bg sr ttts uni success rate success rate a bandit run results for success rate bg sr ttts uni e bandit run results for budget f bandit run results for ps values for thompson sampling probability of success probability of success success failure budget success failure budget 
probability of success probability of success budget d ps values for success failure success failure budget budget c ps values for b ps values for probability of success probability of success a ps values for success failure success failure e ps values for budget f ps values for binned distribution of ps values for thompson sampling empirical success rate empirical success rate estimated probability of succes estimated probability of succes b binned distribution for empirical success rate empirical success rate a binned distribution for estimated probability of succes c binned distribution for estimated probability of succes d binned distribution for empirical success rate empirical success rate estimated probability of succes e binned distribution for estimated probability of succes f binned distribution for computational resources the simulations were run on a high performance cluster hpc on this hpc we used ivy bridge nodes more specifically nodes with two ivy bridge xeon cpus ghz mb level cache and gb of ram this infrastructure allowed us to run flute simulations per node references abul k abbas andrew h h lichtman and shiv pillai cellular and molecular immunology elsevier health sciences milton abramowitz and irene a stegun handbook of mathematical functions with formulas graphs and mathematical tables volume courier corporation shipra agrawal and navin goyal analysis of thompson sampling for the bandit problem in conference on learning theory pages maira aguiar and nico stollenwerk dengvaxia efficacy dependency on serostatus a closer look at more recent data clinical infectious diseases wenchi chiu david goldsman mi lim lee kwokleung tsui beate sander david n fisman and azhar nizam reactive strategies for containing developing outbreaks of pandemic influenza bmc public health audibert and bubeck best arm identification in bandits in conference on learning theory peter auer and paul fischer analysis of the multiarmed bandit problem machine learning issn nicole e basta dennis l chao m elizabeth halloran laura matrajt and ira m longini strategies for pandemic and seasonal influenza vaccination of schoolchildren in the united states american journal of epidemiology tom britton stochastic epidemic models a survey mathematical biosciences bubeck munos and gilles stoltz pure exploration in bandits problems in international conference on algorithmic learning theory pages springer bubeck munos and gilles stoltz pure exploration in and bandits theoretical computer science dennis l chao m elizabeth halloran valerie j obenchain and ira m longini flute a publicly available stochastic influenza epidemic simulation model plos computational biology dennis chao scott halstead elizabeth halloran and ira longini controlling dengue with vaccines in thailand plos neglected tropical diseases issn olivier chapelle and lihong li an empirical evaluation of thompson sampling in advances in neural information processing systems pages charles j clopper and egon s pearson the use of confidence or fiducial limits illustrated in the case of the binomial biometrika pages ilaria dorigatti simon cauchemez andrea pugliese and neil morris ferguson a new approach to characterising infectious disease transmission dynamics from sentinel surveillance application to the italian influenza pandemic epidemics madalina drugan and ann nowe designing multiarmed bandits algorithms a study in proceedings of the international joint conference on neural networks isbn martin enserink crisis underscores fragility of vaccine production 
system science eubank kumar marathe srinivasan and wang structure of social contact networks and their impact on epidemics dimacs series in discrete mathematics and theoretical computer science issn stephen eubank hasan guclu v s anil kumar madhav v marathe aravind srinivasan zoltan toroczkai and nan wang modelling disease outbreaks in realistic urban social networks nature eyal shie mannor and yishay mansour action elimination and stopping conditions for the bandit and reinforcement learning problems journal of machine learning research jun neil m ferguson derek a t cummings simon cauchemez christophe fraser and others strategies for containing an emerging influenza pandemic in southeast asia nature neil m ferguson isabel ilaria dorigatti luis daniel j laydon and derek a t cummings benefits and risks of the dengue vaccine modeling optimal deployment science issn christophe fraser derek a t cummings don klinkenberg donald s burke and neil m ferguson influenza transmission in households during the pandemic american journal of epidemiology laura fumanelli marco ajelli stefano merler neil ferguson and simon cauchemez comprehensive analysis of school closure policies for mitigating influenza epidemics and pandemics plos computational biology issn garivier and emilie kaufmann optimal best arm identification with fixed confidence in conference on learning theory pages germann kadau longini and macken mitigation strategies for pandemic influenza in the united states proceedings of the national academy of sciences issn sri rezeki hadinegoro jose luis maria rosario capeding carmen deseda tawee chotpitayasunondh reynaldo dietze hj muhammad ismail humberto reynales kriengsak limkittikul doris maribel huu ngoc tran alain bouckenooghe danaya chansinghakul margarita karen fanouillere remi forrat carina frago sophia gailhardou nicholas jackson fernando noriega eric plennevaux anh wartel betzana zambrano and melanie saville efficacy and safety of a dengue vaccine in regions of endemic disease new england journal of medicine issn m elizabeth halloran ira m longini azhar nizam and yang yang containing bioterrorist smallpox science new york issn m elizabeth halloran neil m ferguson stephen eubank ira m longini derek a t cummings bryan lewis shufu xu christophe fraser anil vullikanti timothy c germann and others modeling targeted layered containment of an influenza pandemic in the united states proceedings of the national academy of sciences matthew hartfield and samuel alizon introducing the outbreak threshold in epidemiology plos pathog robbins herbert some aspects of the sequential design of experiments bulletin of the american mathematical society issn matthew hoffman bobak shahriari and nando freitas on correlation and budget constraints in bandit optimization with application to automatic machine learning in artificial intelligence and statistics pages junya honda and akimichi takemura optimality of thompson sampling for gaussian bandits depends on priors in aistats pages kevin jamieson matthew malloy robert nowak and bubeck lil ucb an optimal exploration algorithm for bandits in conference on learning theory pages christopher jennison iain m johnstone and bruce w turnbull asymptotically optimal procedures for sequential adaptive selection of the best of several normal means statistical decision theory and related topics iii emilie kaufmann and shivaram kalyanakrishnan information complexity in bandit subset selection in conference on learning theory pages emilie kaufmann olivier and garivier on the 
complexity of best arm identification in bandit models journal of machine learning research pieter libin timothy verstraeten kristof theys diederik roijers peter vrancx and ann nowe efficient evaluation of influenza mitigation strategies using preventive bandits aamas visionary papers lecture notes in ai volume page in press james o sebastian j schreiber p ekkehard kopp and wayne m getz superspreading and the effect of individual variation on disease emergence nature ira m longini jr m elizabeth halloran azhar nizam and yang yang containing pandemic influenza with antiviral agents american journal of epidemiology huong mclean mark thompson maria sundaram burney kieke manjusha gaglani kempapura murthy pedro piedra richard zimmerman mary patricia nowalk jonathan raviotta michael jackson lisa jackson suzanne ohmit joshua petrie arnold monto jennifer meece swathi thaker jessie clippard sarah spencer alicia fry and edward belongia influenza vaccine effectiveness in the united states during variable protection by age and virus type journal of infectious diseases issn jan medlock and alison p galvani optimizing influenza vaccine distribution science issn lauren ancel meyers e j newman michael martin and stephanie schrag applying network theory to epidemics control measures for mycoplasma pneumoniae outbreaks emerging infectious diseases issn a m molinari ismael mark messonnier william thompson pascale wortley eric weintraub and carolyn bridges the annual impact of seasonal influenza in the us measuring disease burden and costs vaccine issn henry nicholls pandemic influenza the inside story plos biol issn k david patterson and gerald f pyle the geography and mortality of the influenza pandemic bulletin of the history of medicine catharine paules and kanta subbarao influenza the lancet pages issn warren b powell and ilya o ryzhov optimal learning volume john wiley sons diederik roijers peter vamplew shimon whiteson and richard dazeley a survey of sequential journal of artificial intelligence research issn daniel russo simple bayesian algorithms for best arm identification in conference on learning theory pages klaus influenza who cares the lancet infectious diseases r s sutton and a g barto reinforcement learning an introduction isbn william r thompson on the likelihood that one unknown probability exceeds another in view of the evidence of two samples biometrika duncan j watts roby muhamad daniel c medina and peter s dodds multiscale resurgent epidemics in a hierarchical metapopulation model proceedings of the national academy of sciences of the united states of america who who guidelines on the use of vaccines and antivirals during influenza pandemics lander willem sean stijven ekaterina vladislavleva jan broeckhove philippe beutels and niel hens active learning to understand infectious disease models and improve policy making plos comput biol joseph t wu steven riley christophe fraser and gabriel m leung reducing the impact of the next influenza pandemic using public health interventions plos medicine wan yang jonathan d sugimoto m elizabeth halloran nicole e basta dennis l chao laura matrajt gail potter eben kenah and ira m longini the transmissibility and control of pandemic influenza a virus science new york issn
convex and regularization methods for spatial point processes intensity estimation achmad and mar laboratory jean kuntzmann department of probability and statistics grenoble alpes france department of mathematics du uqam canada march abstract this paper deals with feature selection procedures for spatial point processes intensity estimation we consider regularized versions of estimating equations based on campbell theorem derived from two classical functions poisson likelihood and logistic regression likelihood we provide general conditions on the spatial point processes and on penalty functions which ensure consistency sparsity and asymptotic normality we discuss the numerical implementation and assess finite sample properties in a simulation study finally an application to tropical forestry datasets illustrates the use of the proposed methods introduction spatial point pattern data arise in many contexts where interest lies in describing the distribution of an event in space some examples include the locations of trees in a forest gold deposits mapped in a geological survey stars in a cluster star animal sightings locations of some specific cells in retina or road accidents see and waagepetersen illian et baddeley et interest in methods for analyzing spatial point pattern data is rapidly expanding accross many fields of science notably in ecology epidemiology biology geosciences astronomy and econometrics one of the main interests when analyzing spatial point pattern data is to estimate the intensity which characterizes the probability that a point or an event occurs in an infinitesimal ball around a given location in practice the intensity is often assumed to be a parametric function of some measured covariates waagepetersen guan and loh and waagepetersen waagepetersen waagepetersen and guan guan and shen coeurjolly and in this paper we assume that the intensity function is parameterized by a vector and has a specification u exp z u where z u u zp u are the p spatial covariates measured at location u and is a real parameter when the intensity is a function of many variables covariates selection becomes inevitable variable selection in regression has a number of purposes provide regularization for good estimation obtain good prediction and identify clearly the important variables fan and lv mazumder et identifying a set of relevant features from a list of many features is in general combinatorially hard and computationally intensive in this context convex relaxation techniques such as lasso tibshirani have been effectively used for variable selection and parameter estimation simultaneously the lasso procedure aims at minimizing log l where l is the likelihood function for some model of interest the penalty shrinks coefficients towards zero and can also set coefficients to be exactly zero in the context of variable selection the lasso is often thought of as a convex surrogate for the selection problem log l p the penalty i penalizes the number of nonzero coefficients in the model since lasso can be suboptimal in model selection for some cases fan and li zou zhang and huang many regularization methods then have been developped motivating to go beyond regime to more aggressive penalties which bridges the gap between and such as scad fan and li and zhang more recently there were several works on implementing variable selection for spatial point processes in order to reduce variance inflation from overfitting and bias from underfitting thurman and zhu focused on using adaptive lasso to select 
variables for inhomogeneous poisson point processes this study then later was extended to the clustered spatial point processes by thurman et al who established the asymptotic properties of the estimates in terms of consistency sparsity and normality distribution they also compared their results employing adaptive lasso to scad and adaptive elastic net in the simulation study and application using both regularized weighted and unweighted estimating equations derived from the poisson likelihood yue and loh considered modelling spatial point data with poisson pairwise interaction point processes and cluster models incorporated lasso adaptive lasso and elastic net regularization methods into generalized linear model framework for fitting these point models note that the study by yue and loh also used an estimating equation derived from the poisson likelihood however yue and loh did not provide the theoretical study in detail although in application many penalty functions have been employed to regularization methods for spatial point processes intensity estimation the theoretical study is still restricted to some specific penalty functions in this paper we propose regularized versions of estimating equations based on campbell formula derived from the poisson and the logistic regression likelihoods to estimate the intensity of the spatial point processes we consider both convex and penalty functions we provide general conditions on the penalty function to ensure an oracle property and a central limit theorem thus we extend the work by thurman et al and obtain the theoretical results for more general penalty functions and under less restrictive assumptions on the asymptotic covariance matrix see remark the logistic regression method proposed by baddeley et al is as easy to implement as the poisson likelihood method but is less biased since it does not require deterministic numerical approximation we prove that the estimates obtained by regularizing the logistic regression likelihood can also satisfy asymptotic properties see remark our procedure is straightforward to implement since we only need to combine the spatstat r package with the two r packages glmnet and ncvreg the remainder of the paper is organized as follows section gives backgrounds on spatial point processes section describes standard parameter estimation methods when there is no regularization while regularization methods are developed in section section develops numerical details induced by the methods introduced in sections asymptotic properties following the work by fan and li for generalized linear models are presented in section section investigates the properties of the proposed method in a simulation study followed by an application to tropical forestry datasets in section and finished by conclusion and discussion in section proofs of the main results are postponed to appendices spatial point processes let x be a spatial point process on rd let d rd be a compact set of lebesgue measure which will play the role of the observation domain we view x as a locally finite random subset of rd the random number of points of x in b n b is almost surely finite whenever b rd is a bounded region suppose x xm denotes a realization of x observed within a bounded region d where xi i m represent the locations of the observed points and m is the number of points note that m is random and m if m then x is the empty point pattern in for further background material on spatial point processes see for example and waagepetersen moments the first 
and properties of a point process are described by intensity measure and factorial moment measure properties of a point process indicate the spatial distribution events in domain of interest the intensity measure on rd is given by b en b b rd if the intensity measure can be written as z b u du b rd b where is a nonnegative function then is called the intensity function if is constant then x is said to be homogeneous or stationary with intensity otherwise it is said to be inhomogeneous we may interpret u du as the probability of occurence of a point in an infinitesimally small ball with centre u and volume du properties of a point process indicate the spatial coincidence of events in the domain of interest the factorial moment measure on rd rd is given by c e x i u v c c rd rd u where the over the summation sign means that the sum runs over all pairwise different points u v in x and i is the indicator function if the factorial moment measure can be written as z z c i u v c u v dudv c rd rd where is a nonnegative function then is called the product density intuitively u v dudv is the probability for observing a pair of points from x occuring jointly in each of two infinitesimally small balls with centres u v and volume du dv fore more detail description of moment measures of any order see appendix c in and waagepetersen suppose x has intensity function and product density campbell theorem see and waagepetersen states that for any function k rd or k rd rd z x e k u k u u du e x z z k u v k u v u v dudv u in order to study whether a point process deviates from independence poisson point process we often consider the pair correlation function given by g u v u v u v when both and exist with the convention for a poisson point process section we have u v u v so that g u v if for example g u v resp g u v this indicates that pair of points are more likely resp less likely to occur at locations u v than for a poisson point process with the same intensity function as x in the same spirit we can define k the order intensity function see and waagepetersen for more details if for any u v g u v depends only on u v the point process x is said to be reweighted stationary modelling the intensity function we discuss spatial point process models specified by deterministic or random intensity function particularly we consider two important model classes namely poisson and cox processes poisson point processes serve as a tractable model class for no interaction or complete spatial randomness cox processes form major classes for clustering or aggregation for conciseness we focus on the two later classes of models we could also have presented determinantal point processes lavancier et which constitute an interesting class of repulsive point patterns with explicit moments this has not been further investigated for sake of brevity in this paper we focus on models of the intensity function given by poisson point process a point process x on d is a poisson point process with intensity function assumed to be locally integrable if the following conditions are satisfied for any b d with b n b p oisson b conditionally on n b the points in x b are with joint density proportional to u u b a poisson point process with a intensity function is also called a modulated poisson point process and waagepetersen waagepetersen in particular for poisson point processes u v u v and g u v v cox processes a cox process is a natural extension of a poisson point process obtained by considering the intensity function of the poisson point 
process as a realization of a random field suppose that u u d is a nonnegative random field if the conditional distribution of x given is a poisson point process on d with intensity function then x is said to be a cox process driven by see and waagepetersen there are several types of cox processes here we consider two types of cox processes a point process and a log gaussian cox process point processes let c be a stationary poisson process mother process with intensity given c let xc c c be independent poisson processes offspring processes with intensity function u exp z u k u c where k is a probability density function determining the distribution of offspring points around the mother points parameterized by then x xc is a special case of an inhomogeneous point process with mothers c and offspring xc p c the point process x is a cox process driven by u exp z u k u c waagepetersen coeurjolly and and we can verify that the intensity function of x is indeed u exp z u one example of point process is the thomas process where k u exp is the density for nd id conditionally on a parent event at location c children events are normally distributed around smaller values of correspond to tighter clusters and smaller values of correspond to fewer number of parents the parameter vector is referred to as the interaction parameter as it modulates the spatial interaction or dependence among events log gaussian cox process suppose that log is a gaussian random field given the point process x follows poisson process then x is said to be a log gaussian cox process driven by and waagepetersen if the random intensity function can be written as log u z u u where is a stationary gaussian random field with covariance function c u v r v u which depends on parameter and waagepetersen coeurjolly and the intensity function of this log gaussian cox process is indeed given by u exp z u one example of correlation function is the exponential form waagepetersen and guan r v u exp for here constitutes the interaction parameter vector where is the variance and is the correlation scale parameter parametric intensity estimation one of the standard ways to fit models to data is by maximizing the likelihood of the model for the data while maximum likelihood method is feasible for parametric poisson point process models section computationally intensive markov chain monte carlo mcmc methods are needed otherwise and waagepetersen as mcmc methods are not yet straightforward to implement estimating equations based on campbell theorem have been developed see waagepetersen and waagepetersen waagepetersen guan and shen baddeley et we review the estimating equations derived from the poisson likelihood in section and from the logistic regression likelihood in section maximum likelihood estimation for an inhomogeneous poisson point process with intensity function parameterized by the likelihood function is y l u exp u du d and the function of is z x log u where we have omitted the constant term has form reduces to x u du d r as the intensity function d z z u exp z u du d rathbun and cressie showed that the maximum likelihood estimator is consistent asymptotically normal and asymptotically efficient as the sample region goes to rd poisson likelihood let be the true parameter vector by applying campbell theorem to the score function the gradient vector of denoted by we have z x e e z u z u exp z u du d z z z u exp z u du z u exp z u du d zd z u exp z u exp z u du d when so the score function of the poisson appears to be an unbiased 
estimating equation even though x is not a poisson point process the estimator maximizing is referred to as the poisson estimator the properties of the poisson estimator have been carefully studied schoenberg showed that the poisson estimator is still consistent for a class of point process models the asymptotic normality for a fixed observation domain was obtained by waagepetersen while guan and loh established asymptotic normality under an increasing domain assumption and for suitable mixing point processes regarding the parameter see section waagepetersen and guan studied a procedure to estimate both and and they proved that under certain mixing conditions the parameter estimates enjoy the properties of consistency and asymptotic normality weighted poisson likelihood although the estimating equation approach derived from the poisson likelihood is simpler and faster to implement than maximum likelihood estimation it potentially produces a less efficient estimate than that of maximum likelihood waagepetersen guan and shen because information about interaction of events is ignored to regain some lack of efficiency guan and shen proposed a weighted poisson function given by z x w u u du w w u log u d where w is a weight surface by regarding we see that a larger weight w u makes the observations in the infinitesimal region du more influent by campbell theorem w is still an unbiased estimating equation in addition guan and shen proved that under some conditions the parameter estimates are consistent and asymptotically normal guan and shen showed that a weight surface w that minimizes the trace of the asymptotic matrix of the estimates maximizing can result in more efficient estimates than poisson estimator in particular the proposed weight surface is w u u f u r where f u d g kv uk du and g is the pair correlation function for a poisson point process note that f u and hence w u which reduces to maximum likelihood estimation for general point processes the weight surface depends on both the intensity function and the pair correlation function thus incorporates information on both inhomogeneity and dependence of the spatial point processes when clustering is present so that g v u then f u and hence the weight decreases with u the weight surface can be achieved by setting u u u to get the estimate u is substituted by given by poisson estimates that is u u alternatively u can also be computed nonparametrically by kernel method furthermore guan and shen suggessted to approximate f u by k r where k is the ripley s estimated by r x i ku vk r u v d u guan et al extended the study by guan and shen and considered more complex estimating equations specifically w u z u is replaced by a function h u in the derivative of with respect to the procedure results in a slightly more efficient estimate than the one obtained from however the computational cost is more important and since we combine estimating equations and penalization methods see section we have not considered this extension logistic regression likelihood although the estimating equations discussed in section and are unbiased these methods do not in general produce unbiased estimator in practical implementations waagepetersen and baddeley et al proposed another estimating function which is indeed close to the score of the poisson but is able to obtain less biased estimator than poisson estimates in addition their proposed estimating equation is in fact the derivative of the logistic regression likelihood following baddeley et al we define the 
weighted logistic regression loglikelihood function by x u w w u log u u z u u w u u log du u d where u is a nonnegative function its role as well as an explanation of the name logistic method will be explained further in section note that the score of is an unbiased estimating equation waagepetersen showed asymptotic normality for poisson and certain clustered point processes for the estimator obtained from a similar procedure furthermore the methodology and results were studied by baddeley et al considering spatial gibbs point processes to determine the optimal weight surface w for logistic method we follow guan and shen who minimized the trace of the asymptotic covariance matrix of the estimates we obtain the weight surface defined by w u u u u u f u where u and f u can be estimated as in section regularization techniques this section discusses convex and regularization methods for spatial point process intensity estimation methodology regularization techniques were introduced as alternatives to stepwise selection for variable selection and parameter estimation in general a regularization method pp attempts to maximize the penalized function where is the function of is the number of observations and is a nonnegative penalty function parameterized by a real number let w be either the weighted poisson function or the weighted logistic regression function in a similar way we define the penalized weighted function given by p x q w w where is the volume of the observation domain which plays the same role as the number of observations in our setting is a nonnegative tuning parameter corresponding to for j p and is a penalty function described in details in the next section penalty functions and regularization methods for any we say that r is a penalty function if is a nonnegative function with examples of penalty function are the norm norm elastic net for if scad for any if if for any if if the first and second derivatives of the above functions are given by table it is to be noticed that is not differentiable at resp for scad resp for penalty table the first and the second derivatives of several penalty functions penalty elastic net if scad if if if if if if if if if as a first penalization technique to improve ordinary least squares ridge regression hoerl and kennard works by minimizing the residual sum of squares subject to a bound on the norm of the coefficients as a continuous shrinkage method ridge regression achieves its better prediction through a ridge can also be extended to fit generalized linear models however the ridge can not reduce model complexity since it always keeps all the predictors in the model then it was introduced a method called lasso tibshirani where it employs penalty to obtain variable selection and parameter estimation simultaneously despite lasso enjoys some attractive statistical properties it has some limitations in some senses fan and li zou and hastie zou zhang and huang zhang making huge possibilities to develop other methods in the scenario where there are high correlations among predictors zou and hastie proposed an elastic net technique which is a convex combination between and penalties this method is particularly useful when the number of predictors is much larger than the number of observations since it can select or eliminate the strongly correlated predictors together the lasso procedure suffers from nonnegligible bias and does not satisfy an oracle property asymptotically fan and li fan and li and zhang among others introduced penalties to get 
around these drawbacks the idea is to bridge the gap between and by trying to keep unbiased the estimates of nonzero coefficients and by shrinking the less important variables to be exactly zero the rationale behind the penalties such as scad and can also be understood by considering its first derivative see table they start by applying the similar rate of penalization as the lasso and then continuously relax that penalization until the rate of penalization drops to zero however employing penalties in regression analysis the main challenge is often in the minimization of the possible objective function when the of the penalty is no longer dominated by the convexity of the likelihood function this issue has been carefully studied fan and li proposed the local quadratic approximation lqa zou and li proposed a local linear approximation lla which yields an objective function that can be optimized using least angle regression lars algorithm efron et finally breheny and huang and mazumder et al investigated the application of coordinate descent algorithm to penalties table details of some regularization methods method pp ridge pp lasso pp enet pp al pp aenet pp scad if pp p with p if j if o pp i i enet al and aenet respectively stand for elastic net adaptive lasso and adaptive elastic net in it is worth emphasizing that we allow each direction to have a different regularization parameter by doing this the and elastic net penalty functions are extended to the adaptive lasso zou and adaptive elastic net zou and zhang table details the regularization methods considered in this study numerical methods we present numerical aspects in this section for nonregularized estimation there are two approaches that we consider weighted poisson regression is explained in section while logistic regression is reviewed in section penalized estimation procedure is done by employing coordinate descent algorithm section we separate the use of the convex and penalties in section and weighted poisson regression berman and turner developed a numerical quadrature method to approximate maximum likelihood estimation for an inhomogeneous poisson point process they approximated the likelihood by a finite sum that had the same analytical form as the weighted likelihood of generalized linear model with poisson response this method was then extended to gibbs point processes by baddeley and turner suppose we approximate the integral term in by riemann sum approximation z m x u du vi ui d i where ui i m are points in d consisting of the m data points p and m m dummy points the quadrature weights vi are such that i vi to implement this method the domain is firstly partitioned into m rectangular pixels of equal area denoted by a then one dummy point is placed in the center of the pixel let be an indicator whether the point is an event of point process or a dummy point without loss of generality let ui um be the observed events and um be the dummy points thus the poisson function can be approximated and rewritten as m x vi yi log ui ui where yi i equation corresponds to a quasi poisson function maximizing is equivalent to fitting a weighted poisson generalized linear model which can be performed using standard statistical software similarly we can approximate the weighted poisson function using numerical quadrature method by w m x wi vi yi log ui ui i where wi is the value of the weight surface at point i the estimate is obtained as suggested by guan and shen the similarity beetween and allows us to compute the estimates using 
software for generalized linear model as well this fact is in particular exploited in the ppm function in the spatstat r package baddeley and turner baddeley et with option mpl to make the presentation becomes more general the number of dummy points is denoted by for the next sections logistic regression to perform well the approximation often requires a quite large number of dummy points hence fitting such generalized linear models can be computationally intensive especially when dealing with a quite large number of points when the unbiased estimating equations are approximated using deterministic numerical approximation as in section it does not always produce unbiased estimator to achieve unbiased estimator we estimate by x x u u w u log w w u log u u u u where d is dummy point process independent of x and with intensity function the form is related to the estimating equation defined by baddeley et al eq besides that we consider this form since if we apply campbell theorem to the last term of we obtain z x u u u w u u log du e w u log u u u d which is exactly what we have in the last term of in addition conditional on is the weighted likelihood function for bernoulli trials y u u x for u x d with exp log u z u u p y u u u exp log u z u precisely is a weighted logistic regression with offset term log thus parameter estimates can be straightforwardly obtained using standard software for generalized linear models this approach is in fact provided in the spatstat package in r by calling the ppm function with option logi baddeley et in spatstat the dummy point process d generates points in average in d from a poisson binomial or stratified binomial point process baddeley et al suggested to choose u where m is the number of points so furthermore to determine this option can be considered as a starting point for a approach see baddeley et for further details coordinate descent algorithm lars algorithm efron et is a remarkably efficient method for computing an entire path of lasso solutions for linear models the computational cost is of order o m which is the same order as a least squares fit coordinate descent algorithm friedman et appears to be a more competitive algorithm for computing the regularization paths by costs o m p operations therefore we adopt cyclical coordinate descent methods which can work really fast on large datasets and can take advantage of sparsity coordinate descent algorithms optimize a target function with respect to a single parameter at a time iteratively cycling through all parameters until convergence criterion is reached we detail this for some convex and penalty functions in the next two sections here we only present the coordinate descent algorithm for fitting generalized weighted poisson regression a similar approach is used to fit penalized weighted logistic regression convex penalty functions since w given by is a concave function of the parameters the newtonraphson algorithm used to maximize the penalized function can be done using the iteratively reweighted least squares irls method if the current estimate of the parameters is we construct a quadratic approximation of the weighted poisson function using taylor s expansion m x w q w z i c i where c is a constant are the working response values and are the weights wi vi exp z i z i yi exp z i exp z i regularized poisson linear model works by firstly identifying a decreasing sequence of for which starting with minimum value of such that the entire vector for each value of an outer loop is created to compute q 
w at secondly a coordinate descent method is applied to solve a penalized weighted least squares problem minp minp q w p x the coordinate descent method is explained as follows suppose we have the estimate for l j l j the method consists in partially optimizing with respect to that is min friedman et al have provided the form of the update for penalized regression using several penalties such as nonnegative garrote breiman lasso elastic net fused lasso tibshirani et group lasso yuan and lin berhu penalty owen and wang et for instance the update for the elastic net which embraces the ridge and lasso regularization by setting respectively to or is m x j s zij yi m x j p where zil is the fitted value excluding the contribution from covariate zij and s z is the operator with value if z and s z sign z z if z and if the update is repeated for j p until convergence coordinate descent algorithm for several convex penalties is implemented in the r package glmnet friedman et for we can set to implement ridge and to lasso while we set to apply elastic net regularization for adaptive lasso we follow zou take and replace by where is an initial estimate say ols or ridge and is a positive tuning parameter to avoid the computational evaluation for choosing we follow zou section and wasserman and roeder who also considered so we choose ridge where ridge is the estimates obtained from ridge regression implementing adaptive elastic net follows along similar lines penalty functions breheny and huang have investigated the application of coordinate descent algorithm to fit penalized generalized linear model using scad and for which the penalty is mazumder et al also studied the coordinatewise optimization algorithm in linear models considering more general penalties mazumder et al concluded that for a known current estimate the univariate penalized least squares function qu should be convex to ensure that the procedure converges to a stationary point mazumder et al found that this turns out to be the case for scad and penalty but it can not be satisfied by bridge or power penalty and some cases of breheny and huang derived the solution of coordinate descent algorithm for scad and in generalized linear models cases and it is implemented in the ncvreg package of let l be a vector containing estimates for l j l j p and we wish to partially optimize with respect to if we pm p j and z y define m j j ij i i zij the update for scad is s if s if if for any maxj then for maxj and the same definition of and the update for is s j if if where s z is the operator given by selection of regularization or tuning parameter it is worth noticing that coordinate descent procedures and other computation procedures computing the penalized likelihood estimates rely on the tuning parameter so that the choice of is also becoming an important task the estimation using a large value of tends to have smaller variance but larger biases whereas the estimation using a small value of leads to have zero biases but larger variance the between the biases and the variances yields an optimal choice of fan and lv to select it is reasonable to identify a range of values extending from a maximum value of for which all penalized coefficients are zero to friedman et breheny and huang after that we select a value which optimizes some criterion by fixing a path of we select the tuning parameter which minimizes wqbic a weighted version of the bic criterion defined by wqbic w s log p where s i is the number of selected covariates with nonzero regression 
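To connect the coordinate-descent discussion to software, here is a minimal R sketch (not the authors' code) of the lasso and adaptive-lasso penalized Poisson likelihood fitted with glmnet on a Berman-Turner quadrature scheme; the choice nd = 40, the use of only the two bei covariates, and the ridge-based adaptive weights are illustrative assumptions:

```r
library(spatstat)
library(glmnet)

# Berman-Turner quadrature: data points of 'bei' plus dummy points with weights w_i
Q  <- quadscheme(bei, nd = 40)
U  <- union.quad(Q)                       # all quadrature points u_1, ..., u_M
wq <- w.quad(Q)                           # quadrature weights w_i
yq <- as.numeric(is.data(Q)) / wq         # responses: 1/w_i at data points, 0 at dummies

# design matrix from the two covariate images (the paper adds many extra covariates)
Z <- cbind(elev = bei.extra$elev[U], grad = bei.extra$grad[U])

# lasso path for the (unweighted) penalized Poisson likelihood; for the weighted
# version, the Guan-Shen estimation weights w(u_i) would simply multiply 'wq'
fit <- glmnet(Z, yq, family = "poisson", weights = wq, alpha = 1)

# adaptive lasso: penalty factors 1/|beta_ridge_j| from an initial ridge fit
ridge <- glmnet(Z, yq, family = "poisson", weights = wq, alpha = 0)
pf    <- 1 / abs(as.numeric(coef(ridge, s = min(ridge$lambda)))[-1])
afit  <- glmnet(Z, yq, family = "poisson", weights = wq, alpha = 1, penalty.factor = pf)
```

In the paper the tuning parameter is chosen with the WQBIC criterion rather than cross-validation, and the SCAD and MC+ fits require a modified ncvreg package instead of glmnet.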
coefficients and is the observation volume which represents the sample size for linear regression models y x wang et al proposed bictype criterion for choosing by bic log ky x log df where is the number of observations and df is the degree of freedom this criterion is consistent meaning that it selects the correct model with probability approaching in large samples when a set of candidate models contains the true model their findings is in line with the study of zhang et al for which the criterion was presented in more general way called generalized information criterion gic the criterion wqbic is the specific form of gic proposed by zhang et al the selection of for scad and is another task but we fix for scad and for following fan and li and breheny and huang respectively to avoid more complexities asymptotic theory in this section we present the asymptotic results for the regularized weighted poisson likelihood estimator when considering x as a point process observed over a sequence of observation domain d dn n which expands to rd as n the regularization parameters j for j p are now indexed by these asymptotic results also hold for the regularized unweighted poisson likelihood estimator for sake of conciseness we do not present the asymptotic results for the regularized logistic regression estimate the results are very similar the main difference is lying in the conditions and for which the matrices an bn and cn have a different expression see remark notation and conditions we recall the classical definition of strong mixing coefficients adapted to spatial point processes politis et for k l n and q define l q sup a b p a p b a f b f b rd b rd k l d q where f is the generated by x i d is the minimal distance between sets and and b rd denotes the class of borel sets in rd let denote the vector of true coefficient values where is the vector of nonzero coefficients and is the vector of zero coefficients we define the p p matrices an w bn w and cn w by z an w w u z u z u u du zdn w u z u z u u du and bn w zdn z cn w w u w v z u z v g u v u v dvdu dn dn consider the following conditions which are required to derive our asymptotic results where o denotes the origin of rd for every n dn ne ne e e where e rd is convex compact and contains o in its interior we assume that the intensity function has the specification given by where and is an open convex bounded set of rp the covariates z and the weight function w satisfy sup u and sup u there exists an integer t such that for k the product density k exists and satisfies k for the strong mixing coefficients we assume that there exists some d t such that q o q there exists a p p positive definite matrix such that for all sufficiently large n bn w cn w there exists a p p positive definite matrix such that for all sufficiently large n we have an w the penalty function is nonnegative on continuously differentiable on with derivative assumed to be a lipschitz function on furthermore given j for j s we assume that there exists j where j as n such that for n sufficiently large j is thrice continuously differentiable in the ball centered at with radius j and we assume that the third derivative is uniformly bounded under the condition we define the sequences an bn and cn by an max j s bn cn inf inf j p max j s for these sequences an bn and cn detailed in table for the different methods considered in this paper play a central role in our results even if this will be discussed later in section we specify right now that we require that an bn and cn table 
details of the sequences an bn and cn for a given regularization method method an ridge max lasso enet al aenet s bn cn n max max j s min j s p max j min j max j s p scad s if as n if as n main results we state our main results here proofs are relegated to appendices we first show in theorem that the penalized weighted poisson likelihood estimator converges in probability and exhibits its rate of convergence theorem assume the conditions hold and let an and cn be given by and if an o and cn o then there exists a local maximizer of q w such that k op an this implies that if an o and cn o the penalized weighted poisson likelihood estimator is consistent furthermore we demonstrate in theorem that such a consistent estimator ensures the sparsity of that is the estimate will correctly set to zero with probability tending to as n and is asymptotically normal theorem assume the conditions hold if an bn and cn as n the consistent local maximiz ers in theorem satisfy i sparsity p as n d ii asymptotic normality w n is where w w w w w s and where w resp w w is the s s corner of an w resp bn w cn w as a consequence w is the asymptotic covariance matrix of note that w is the inverse of w where w is any square matrix with w w w remark for lasso and adaptive lasso for other penalties since cn o then k o since w k o from conditions and k is asymptotically negligible with respect to w remark theorems and remain true for the regularized weighted logistic regression likelihood estimates if we extend the condition by replacing in the expression of the matrices an bn and cn w u by w u u u u u dn and by adding u remark we want to highlight here the main theoretical differences with the work by thurman et al first the methodology and results are available for the logistic regression likelihood second we consider very general penalty function while thurman et al only considered the adaptive lasso method third we do not assume as in thurman et al that mn m as n where mn is an bn or cn when m is a positive definite matrix instead we assume sharper condition assuming mn where mn is either an or bn and is the smallest eigenvalue of a positive definite matrix this makes the proofs a little bit more technical discussion of the conditions we adopt the conditions based on the paper from coeurjolly and in condition the assumption that e contains o in its interior can be made without loss of generality if instead u is an interior point of e then condition could be modified to that any ball with centre u and radius r is contained in dn ne for all sufficiently large condition is quite standard from conditions the matrices an w bn w and cn w are bounded by see coeurjolly and combination of conditions are used to establish a central limit theo rem for n w using a general central limit theorem for triangular arrays of nonstationary random fields obtained by which is an extension from bolthausen then later extended to nonstationary random fields by guyon as pointed out by coeurjolly and condition is a spatial average assumption like when establishing asymptotic normality of ordinary least square estimators for linear models this condition is also useful to make sure that the matrix bn w cn w is invertible conditions ensure that the matrix w is invertible for sufficiently large conditions are discussed in details for several models by coeurjolly and they are satisfied for a large class of intensity functions and a large class of models including poisson and cox processes discussed in section condition controls the higher order 
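For reference, here is a hedged reconstruction of the matrices appearing in the conditions above and of the covariance in Theorem 2(ii); the forms are consistent with standard results for weighted estimating equations of spatial point processes, with lambda(u; beta) = exp(beta' z(u)) and g the pair correlation function, but the precise notation is an assumption rather than a verbatim restatement:

```latex
A_n(w,\beta) = \int_{D_n} w(u)\,\mathbf z(u)\mathbf z(u)^{\top} \lambda(u;\beta)\,\mathrm du,
\qquad
B_n(w,\beta) = \int_{D_n} w(u)^{2}\,\mathbf z(u)\mathbf z(u)^{\top} \lambda(u;\beta)\,\mathrm du,

C_n(w,\beta) = \int_{D_n}\!\int_{D_n} w(u)\,w(v)\,\mathbf z(u)\mathbf z(v)^{\top}\,\{g(u,v)-1\}\,\lambda(u;\beta)\lambda(v;\beta)\,\mathrm du\,\mathrm dv,

\Sigma_n(w) = A_{n,11}(w)^{-1}\bigl\{B_{n,11}(w)+C_{n,11}(w)\bigr\}A_{n,11}(w)^{-1},
\qquad
\Sigma_n(w)^{-1/2}\bigl(\hat\beta_1-\beta_{01}\bigr) \xrightarrow{\ d\ } \mathcal N(0, I_s),
```

where the subscript 11 denotes the s by s top-left corner corresponding to the non-zero coefficients.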
terms in taylor expansion of the penalty function roughly speaking we ask the penalty function to be at least lipschitz and thrice differentiable in a neighborhood of the true parameter vector as it is the condition looks technical however it is obviously satisfied for ridge lasso elastic net and the adaptive versions according to the choice of it is satisfied for scad and when for j s is not equal to theorem requires the conditions an bn and cn as n simultaneously by requiring these assumptions the corresponding penalized weighted poisson likelihood estimators possess the oracle property and perform as well as weighted poisson likelihood estimator which estimates knowing the fact that for the ridge regularization method bn preventing from applying theorem for this penalty for lasso and elastic net an bn for some constant for lasso the two conditions an and bn as n can not be satisfied simultaneously this is different for the adaptive versions where a compromise can be found by adjusting the j s as well as the two penalties scad and for which can be adjusted for the regularization methods considered in this paper the condition cn is implied by the condition an as n simulation study we conduct a simulation study with three different scenarios described in section to compare the estimates of the regularized poisson likelihood pl and that of the regularized weighted poisson likelihood wpl we also want to explore the behaviour of the estimates using different regularization methods empirical findings are presented in section furthermore we compare in section the estimates of the regularized un weighted logistic likelihood and the ones of the regularized un weighted poisson likelihood simulation the setting is quite similar to that of waagepetersen and thurman et al the spatial domain is d we center and scale the pixel images of elevation and gradient of elevation contained in the bei datasets of spatstat library in r r core team and use them as two true covariates in addition we create three different scenarios to define extra covariates scenario we generate eighteen pixel images of covariates as standard gaussian white noise and denote them by we define z u x u u u as the covariates vector the regression coefficients for are set to zero scenario first we generate eighteen pixel images of covariates as in the scenario second we transform them together with and to have multicollinearity third we define z u v x u where x u u u more precisely v is such that v v and ij ji for i j except to preserve the correlation between and the regression coefficients for are set to zero scenario we consider a more complex situation we center and scale the soil nutrients covariates obtained from the study in tropical forest of barro colorado island bci in central panama see condit hubbell et and use them as the extra covariates together with and we keep the structure of the covariance matrix to preserve the complexity of the situation in this setting we have z u x u u u the regression coefficients for are set to zero the different maps of the covariates obtained from scenarios and are depicted in appendix except for which has high correlation with the extra covariates obtained from scenario tend to have a constant value figure this is completely different from the ones obtained from scenario figure the mean number of points over the domain d is chosen to be we set the true intensity function to be u u u where represents a relatively large effect of elevation reflects a relatively small effect of gradient and is 
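A minimal R sketch of this simulation design (white-noise covariate, log-linear intensity, Thomas process generated with rThomas) is given below; the regression coefficients, parent intensity kappa and cluster scale are placeholders rather than the values used in the paper:

```r
library(spatstat)
set.seed(1)

# centre and scale the two true covariates: elevation and its gradient
std <- function(Z) { m <- mean(Z); (Z - m) / sqrt(mean((Z - m)^2)) }
z1  <- std(bei.extra$elev)
z2  <- std(bei.extra$grad)

# a white-noise covariate in the spirit of scenario 1 (the paper generates eighteen)
z3  <- as.im(function(x, y) rnorm(length(x)), W = Window(bei))

# log-linear intensity: large elevation effect, small gradient effect (placeholder values)
beta <- c(-7, 2, 0.75)
lam  <- exp(beta[1] + beta[2] * z1 + beta[3] * z2)

# Thomas process with intensity lam: kappa = parent intensity, scale = cluster spread
kappa <- 5e-4
X <- rThomas(kappa = kappa, scale = 20, mu = lam / kappa, win = Window(bei))
npoints(X)
```

Smaller values of kappa give fewer parents and hence tighter, more clustered patterns, which is the effect compared across the tables.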
selected such that each realization has points in average furthermore we erode regularly the domain d such that with the same intensity function the mean number of points over the new domain d r becomes the erosion is used to observe the convergence of the procedure as the observation domain expands we consider the default number of dummy points for the poisson likelihood denoted by as suggested in the spatstat r package where m is the number of points with these scenarios we simulate spatial point patterns from a thomas point process using the rthomas function in the spatstat package we also consider two different parameters as different levels of spatial interaction and let for each of the four combinations of and we fit the intensity to the simulated point pattern realizations we also fit the oracle model which only uses the two true covariates all models are fitted using modified internal function in spatstat baddeley et glmnet friedman et and ncvreg breheny and huang a modification of the ncvreg r package is required to include the penalized weighted poisson and logistic likelihood methods simulation results to better understand the behaviour of thomas processes designed in this study figure shows the plot of the four realizations using different and the smaller value of the tighter the clusters since there are fewer parents when by considering the realizations observed on d r the mean number of points over the replications and standard deviation are and resp and when resp when the mean number of points and standard deviation are and resp and when resp figure realizations of a thomas process for row row column and column tables and present the selection properties of the estimates using the penalized pl and the penalized wpl methods similarly to and van de geer the indices we consider are the true positive rate tpr the false positive rate fpr and the positive predictive value ppv tpr corresponds to the ratio of the selected true covariates over the number of true covariates while fpr table empirical selection properties tpr fpr and ppv in based on replications of thomas processes on the domain d r for different values of and for the three different scenarios different penalty functions are considered as well as two estimating equations the regularized poisson likelihood pl and the regularized weighted poisson likelihood wpl method regularized pl regularized wpl regularized pl regularized wpl tpr fpr ppv tpr fpr ppv tpr fpr ppv tpr fpr ppv scenario ridge lasso enet al aenet scad scenario ridge enet al lasso aenet scad scenario ridge lasso enet al aenet scad approximate value corresponds to the ratio of the selected noisy covariates over the number of noisy covariates tpr explains how the model can correctly select both and finally fpr investigates how the model uncorrectly select among to zp p for scenarios and and p for scenario ppv corresponds to the ratio table empirical selection properties tpr fpr and ppv in based on replications of thomas processes on the domain d for different values of and for the three different scenarios different penalty functions are considered as well as two estimating equations the regularized poisson likelihood pl and the regularized weighted poisson likelihood wpl method regularized pl regularized wpl regularized pl regularized wpl tpr fpr ppv tpr fpr ppv tpr fpr ppv tpr fpr ppv scenario ridge lasso enet al aenet scad scenario ridge lasso enet al aenet scad scenario ridge lasso enet al aenet scad approximate value of the selected true covariates over 
the total number of selected covariates in the model ppv describes how the model can approximate the oracle model in terms of selection therefore we want to find the methods which have a tpr and a ppv close to and a fpr close to generally for both the penalized pl and the penalized wpl methods the best selection properties are obtained for a larger value of which shows weaker spatial dependence for a more clustered one indicated by a smaller value of it seems more difficult to select the true covariates as increases from table to table the tpr tends to improve so the model can select both and more frequently ridge lasso and elastic net are the regularization methods that can not satisfy our theorems it is firstly emphasized that all covariates are always selected by the ridge so that the rates are never changed whatever method used for the penalized pl with lasso and elastic net regularization it is shown that they tend to have quite large value of fpr meaning that they wrongly keep the noisy covariates more frequently when the penalized wpl is applied we gain smaller fpr but we suffer from smaller tpr at the same time this smaller tpr actually comes from the unselection of which has smaller coefficient than that of when we apply adaptive lasso adaptive elastic net scad and we achieve better performance especially for fpr which is closer to zero which automatically improves the ppv adaptive elastic net resp elastic net has slightly larger fpr than adaptive lasso resp lasso among all regularization methods considered in this paper adaptive lasso seems to outperform the other ones considering scenarios and we observe best selection properties for the penalized pl combined with adaptive lasso as the design is getting more complex for scenario applying the penalized pl suffers from much larger fpr indicating that this method may not be able to overcome the complicated situation however when we use the penalized wpl the properties seem to be more stable for the different designs of simulation study one more advantage when considering the penalized wpl is that we can remove almost all extra covariates it is worth noticing that we may suffer from smaller tpr when we apply the penalized wpl but we lose the only less informative covariates from tables and when we are faced with complex situation we would recommend the use of the penalized wpl method with adaptive lasso penalty if the focus is on selection properties otherwise the use of the penalized pl combined with adaptive lasso penalty is more preferable tables and give the prediction properties of the estimates in terms of biases standard deviations sd and square root of mean squared errors rmse some criterions we define by p p p x x x rmse bias sd where and are respectively the empirical mean and variance of the estimates for j p where p for scenarios and and p for scenario in general the properties improve with larger value of and due to weaker spatial dependence and larger sample size for the oracle model where the model contains only and the wpl estimates are more efficient than the pl estimates particularly in the more clustered case agreeing with the findings by guan and shen table empirical prediction properties bias sd and rmse based on replications of thomas processes on the domain d r for different values of and for the three different scenarios different penalty functions are considered as well as two estimating equations the regularized poisson likelihood pl and the regularized weighted poisson likelihood wpl method regularized pl 
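The selection and prediction summaries reported in the tables can be computed along the following lines; this is a hedged sketch in which the replications-by-coefficients matrix `B` and the aggregation of biases and variances across coefficients are assumptions consistent with the definitions above:

```r
# TPR / FPR / PPV for one estimated coefficient vector
selection_indices <- function(beta_hat, beta0) {
  truth    <- which(beta0 != 0)
  selected <- which(beta_hat != 0)
  tp <- length(intersect(selected, truth))
  c(TPR = tp / length(truth),
    FPR = length(setdiff(selected, truth)) / sum(beta0 == 0),
    PPV = if (length(selected)) tp / length(selected) else NA)
}

# bias / sd / RMSE aggregated over coefficients, B = replications x coefficients
prediction_summary <- function(B, beta0) {
  bias <- colMeans(B) - beta0
  sdev <- apply(B, 2, sd)
  c(bias = sqrt(sum(bias^2)), sd = sqrt(sum(sdev^2)),
    rmse = sqrt(sum(bias^2) + sum(sdev^2)))
}

selection_indices(beta_hat = c(1.9, 0.6, 0, 0, 0.03), beta0 = c(2, 0.75, 0, 0, 0))
```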
regularized wpl regularized pl regularized wpl bias sd rmse bias sd rmse bias sd rmse bias sd rmse scenario oracle ridge lasso enet al aenet scad scenario oracle ridge lasso enet al aenet scad scenario oracle ridge lasso enet al aenet scad table empirical prediction properties bias sd and rmse based on replications of thomas processes on the domain d for different values of and for the three different scenarios different penalty functions are considered as well as two estimating equations the regularized poisson likelihood pl and the regularized weighted poisson likelihood wpl method regularized pl regularized wpl regularized pl regularized wpl bias sd rmse bias sd rmse bias sd rmse bias sd rmse scenario oracle ridge lasso enet al aenet scad scenario oracle ridge lasso enet al aenet scad scenario oracle ridge lasso enet al aenet scad when the regularization methods are applied the bias increases in general especially when we consider the penalized wpl method the regularized wpl has a larger bias since this method does not select much more frequently furthermore weighted method seems to introduce extra bias even though the regularization is not considered as in the oracle model for a low clustered process the sd using the penalized wpl is similar to that of the penalized pl which may be because of the weaker dependence represented by larger making weight surface w closer to however a larger rmse is obtained from the penalized wpl when we observe the more clustered process we obtain smaller sd using the penalized wpl which explains why in some cases mainly scenario the rmse gets smaller for the ridge method the bias is closest to that of the oracle model but it has the largest sd among the regularization methods the adaptive lasso method has the best performance in terms of prediction considering scenarios and we obtain best properties when we apply the penalized pl with adaptive lasso penalty as the design is getting much more complex for scenario when we use the penalized pl with adaptive lasso the sd is doubled and even quadrupled due to the overselection of many unimportant covariates in particular for the more clustered process the better properties are even obtained by applying the regularized wpl combined with adaptive lasso from tables and when the focus is on prediction properties we would recommend to apply the penalized wpl combined with adaptive lasso penalty when the observed point pattern is very clustered and when covariates have a complex stucture of covariance matrix otherwise the use of the penalized pl combined with adaptive lasso penalty is more favorable our recommendations in terms of prediction support as what we recommend in terms of selection logistic regression our concern here is to compare the estimates of the penalized un weighted logistic likelihood to that of the penalized un weighted poisson likelihood with different number of dummy points we remind that the number of dummy points comes up when we discretize the integral terms in and in in the following to ease the presentation we use the term poisson estimates resp logistic estimates for parameter estimates obtained using the regularized poisson likelihood resp the regularized logistic regression likelihood we consider three different numbers of dummy points denoted by by these different numbers of dummy points we want to observe the properties with three different situations a m b m and c m where m is the number of points in the following m and and note that the choice by default from the poisson likelihood 
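A small R sketch of this dummy-point comparison follows, shown for plain (non-regularized) fits for brevity; the nd values are illustrative and `elev`, `grad` are the covariate images used in the earlier sketches:

```r
library(spatstat)
m <- npoints(bei)
cat("number of data points:", m, "\n")

for (k in c(20, 40, 80)) {                      # roughly k^2 dummy points: below, near, above m
  Q   <- quadscheme(bei, nd = k)
  fit <- ppm(Q ~ elev + grad)                   # Poisson (Berman-Turner) likelihood
  cat(sprintf("nd = %2d: %5d dummy points, coef(elev) = %6.3f\n",
              k, npoints(Q$dummy), coef(fit)[["elev"]]))
}

# logistic-likelihood counterpart: dummy points drawn from an independent dummy process
fit_logi <- ppm(bei ~ elev + grad, method = "logi")
```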
in spatstat corresponds to case c baddeley et al showed that for datasets with very large number of points and for very structured point processes the logistic likelihood method is clearly preferable as it requires a smaller number of dummy points to perform quickly and efficiently we want to investigate a similar comparison when these methods are regularized table empirical selection properties tpr fpr and ppv in based on replications of thomas processes on the domain d for for two different scenarios and for three different numbers of dummy points different estimating equations are considered the regularized un weighted poisson and un weighted logistic regression likelihoods employing adaptive lasso regularization method scenario method nd unweighted scenario weighted unweighted weighted tpr fpr ppv tpr fpr ppv tpr fpr ppv tpr fpr ppv poisson logistic approximate value we only repeat the results for and and for scenarios and we use the same selection and prediction indices examined in section and consider only the adaptive lasso method table presents selection properties for the poisson and logistic likelihoods with adaptive lasso regularization for unweighted versions of the procedure the regularized logistic method outperforms the regularized poisson method when nd when the number of dummy points is much smaller than the number of points when m or m the methods tend to have similar performances when we consider weighted versions of the regularized logistic and poisson likelihoods the results do not change that much with nd and the regularized poisson likelihood method slightly outperforms the regularized logistic likelihood method in addition for scenario which considers a more complex situation the methods tend to select the noisy covariates much more frequently empirical biases standard deviation and square root of mean squared errors are presented in table we include all empirical results for the standard poisson and logistic estimates no regularization is considered let us first consider the unweighted methods with no regularization the logistic method clearly has smaller bias especially when nd which explains why in most situations the rmse is smaller however for the weighted methods although the logistic method has smaller bias in general it produces much larger sd leading to larger rmse for all cases when we compare the weighted and the unweighted methods for logistic estimates in general not only do we fail to reduce the sd but we also have larger bias when the adaptive lasso regularization is considered combined with the unweighted methods we can preserve the bias in general and simultaneously improve the sd and hence improve the rmse the logistic likelihood method table empirical prediction properties bias sd and rmse based on replications of thomas processes on the domain d for for two different scenarios and for three different numbers of dummy points different estimating equations are considered the regularized un weighted poisson and un weighted logistic regression likelihoods employing adaptive lasso regularization method scenario method nd unweighted scenario weighted unweighted weighted bias sd rmse bias sd rmse bias sd rmse bias sd rmse no regularization poisson logistic adaptive lasso poisson logistic slightly outperforms the poisson likelihood method when the weighted methods are considered we obtain smaller sd but we have larger bias for weighted versions of the poisson and logistic likelihoods the results do not change that much with nd and the weighted poisson 
method slightly outperforms the weighted logistic method from tables and when the number of dummy points can be chosen as m or m we would recommend to apply the poisson likelihood method when the number of dummy points should be chosen as m the logistic likelihood method is more favorable our recommendations regarding whether weighted or unweighted methods follow the ones as in section application to forestry datasets in a region d of the tropical moist forest of barro colorado island bci in central panama censuses have been carried out where all woody stems at least mm diameter at breast height were identified tagged and mapped resulting in maps of over individual trees with more than species see condit hubbell et it is of interest to know how the very high number of different tree species continues to coexist profiting from different habitats determined by topography or soil properties see waagepetersen waagepetersen and guan in particular the selection of covariates among topological attributes and soil minerals as well as the estimation of their coefficients are becoming our most concern figure maps of locations of bpl trees top left elevation top right slope bottom left and concentration of phosporus bottom right we are particularly interested in analyzing the locations of beilschmiedia pendula lauraceae bpl tree stems we model the intensity of bpl trees as a loglinear function of two topological attributes and soil properties as the covariates figure contains maps of the locations of bpl trees elevation slope and concentration of phosporus bpl trees seem to appear in greater abundance in the areas of high elevation steep slope and low concentration of phosporus the covariates maps are depicted in figure we apply the regularized un weighted poisson and the logistic likelihoods combined with adaptive lasso regularization to select and estimate parameters since we do not deal with datasets which have very large number of points we can set the default number of dummy points for poisson likelihood as in the spatstat package the number of dummy points can be chosen to be larger than the number of points to perform quickly and efficiently it is worth emphasizing that we center and scale the covariates to observe which one has the largest effect on the intensity the results are presented in table covariates for the poisson likelihood and for the logistic method are selected out of the covariates using the unweighted methods while only covariates both for the poisson and logistic methods are selected using the weighted versions the unweighted methods tend to overfit the model by overselecting unimportant covariates the weighted methods tend to keep out the uninformative covariates both poisson and logistic estimates own similar selection and estimation results first table barro colorado island data analysis parameter estimates of the regression coefficients for beilschmiedia pendula lauraceae trees applying regularized un weighted poisson and logistic regression likelihoods with adaptive lasso regularization unweighted method weighted method poisson estimates logistic estimates poisson estimates logistic estimates elev slope al b ca cu fe k mg mn p zn n ph nb of cov we find some differences on estimation between the unweighted and the weighted methods especially for slope and manganese mn for which the weighted methods have approximately two times larger estimators second we may loose some nonzero covariates when we apply the weighted methods even though it is only for the covariates which 
have relatively small coefficient boron b has high correlation with many of the other covariates particularly with those which are not selected this is possibly why boron which is selected and may have nonnegligible coefficient in the unweighted methods is not chosen in the model this may explain why the weighted methods introduce extra biases however since the situation appears to be quite close to the scenario from the simulation study the weighted methods are more favorable in terms of both selection and prediction in this application we do not face any computational problem nevertheless if we have to model a species of trees with much more points the default value for nd will lead to numerical problems in such a case the logistic likelihood would be a good alternative these results suggest that bpl trees favor to live in areas of higher elevation and slope this result is different from the findings by waagepetersen and guan and loh which concluded based on standard error estimation that bpl trees do not really prefer either high or low altitudes however we have the same conclusion with the analysis by guan and shen and thurman et al that bpl trees prefer to live on higher altitudes further higher levels of manganese mn and lower levels of both phosporus p and zinc zn concentrations in soil are associated with higher appearance of bpl trees conclusion and discussion we develop regularized versions of estimating equations based on campbell theorem derived from the poisson and the logistic likelihoods our procedure is able to estimate intensity function of spatial point processes when the intensity is a function of many covariates and has a form furthermore our procedure is also generally easy to implement in r since we need to combine spatstat package with glmnet and ncvreg packages we study the asymptotic properties of both regularized weighted poisson and logistic estimates in terms of consistency sparsity and normality distribution we find that among the regularization methods considered in this paper adaptive lasso adaptive elastic net scad and are the methods that can satisfy our theorems we carry out some scenarios in the simulation study to observe selection and prediction properties of the estimates we compare the penalized poisson likelihood pl and the penalized weighted poisson likelihood wpl with different penalty functions from the results when we deal with covariates having a complex covariance matrix and when the point pattern looks quite clustered we recommend to apply the penalized wpl combined with adaptive lasso regularization otherwise the regularized pl with adaptive lasso is more preferable the further and more careful investigation to choose the tuning parameters may be needed to improve the selection properties we note the bias increases quite significantly when the regularized wpl is applied when the penalized wpl is considered a procedure may be needed to improve the prediction properties use the penalized wpl combined with adaptive lasso to chose the covariates then use the selected covariates to obtain the estimates this inference procedure has not been investigated in this paper we also compare the estimates obtained from the poisson and the logistic likelihoods when the number of dummy points can be chosen to be either similar to or larger than the number of points we recommend the use of the poisson likelihood method nevertheless when the number of dummy points should be chosen to be smaller than the number of points the logistic method is more favorable a 
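The two-step procedure mentioned above (select with the penalized WPL, then re-estimate the selected covariates without penalty) is explicitly left uninvestigated by the authors; purely as an illustration, and reusing the hypothetical objects Z, yq, wq and pf from the earlier glmnet sketch, it could look as follows (cross-validation is used here only for brevity, whereas the paper selects the tuning parameter with WQBIC):

```r
library(glmnet)

# step 1: selection with an adaptive-lasso penalized Poisson fit
cvfit <- cv.glmnet(Z, yq, family = "poisson", weights = wq,
                   alpha = 1, penalty.factor = pf)
sel   <- which(as.numeric(coef(cvfit, s = "lambda.min"))[-1] != 0)

# step 2: unpenalized refit on the selected covariates only
# (non-integer Berman-Turner responses trigger a harmless warning from glm)
refit <- glm(yq ~ Z[, sel, drop = FALSE], family = poisson(), weights = wq)
coef(refit)
```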
further work would consist in studying the situation when the number of the covariates is much larger than the sample size in such a situation the coordinate descent algorithm used in this paper may cause some numerical troubles the dantzig selector procedure introduced by candes and tao might be a good alternative as the implementaion for linear models and for generalized linear els results in a linear programming it would be interesting to bring this approach to spatial point process setting acknowledgements we thank thurman who kindly shared the r code used for the simulation study in thurman et al and breheny who kindly provided his code used in ncvreg r package we also thank drouilhet for technical help the research of coeurjolly is funded by project and persyvact the research of is funded by project the bci soils data sets were collected and analyzed by dalling john harms stallard and yavitt with support from nsf and oise and stri soils initiative and ctfs and assistance from segre and trani datasets are available at the center for tropical forest science website http references adrian baddeley and rolf turner practical maximum pseudolikelihood for spatial point patterns australian new zealand journal of statistics adrian baddeley and rolf turner spatstat an r package for analyzing spatial point pattens journal of statistical software adrian baddeley coeurjolly ege rubak and rasmus plenge waagepetersen logistic regression for spatial gibbs point processes biometrika adrian baddeley ege rubak and rolf turner spatial point patterns methodology and applications with crc press mark berman and rolf turner approximating point process likelihoods with glim applied statistics erwin bolthausen on the central limit theorem for stationary mixing random fields the annals of probability patrick breheny and jian huang coordinate descent algorithms for nonconvex penalized regression with applications to biological feature selection the annals of applied statistics leo breiman better subset regression using the nonnegative garrote technometrics peter and sara van de geer statistics for data methods theory and applications springer science business media emmanuel candes and terence tao the dantzig selector statistical estimation when p is much larger than the annals of statistics coeurjolly and jesper variational approach to estimate the intensity of spatial point processes bernoulli richard condit tropical forest census plots and landes company berlin germany and georgetown texas bradley efron trevor hastie iain johnstone robert tibshirani et al least angle regression the annals of statistics jianqing fan and runze li variable selection via nonconcave penalized likelihood and its oracle properties journal of the american statistical association jianqing fan and jinchi lv a selective overview of variable selection in high dimensional feature space statistica sinica jerome friedman trevor hastie holger robert tibshirani et al pathwise coordinate optimization the annals of applied statistics jerome friedman trevor hastie and rob tibshirani regularization paths for generalized linear models via coordinate descent journal of statistical software yongtao guan and ji meng loh a thinned block bootstrap variance estimation procedure for inhomogeneous spatial point patterns journal of the american statistical association yongtao guan and ye shen a weighted estimating equation approach for inhomogeneous spatial point processes biometrika yongtao guan abdollah jalilian and rasmus plenge waagepetersen 
quasilikelihood for spatial point processes journal of the royal statistical society series b statistical methodology xavier guyon random fields on a network modeling statistics and applications springer science business media arthur e hoerl and robert w kennard ridge regression encyclopedia of statistical sciences stephen p hubbell robin b foster sean t o brien ke harms richard condit b wechsler s joseph wright and s loo de lao disturbances recruitment limitation and tree diversity in a neotropical forest science stephen p hubbell richard condit and robin b foster barro colorado forest census plot data url http janine illian antti penttinen helga stoyan and dietrich stoyan statistical analysis and modelling of spatial point patterns volume john wiley sons zsolt a central limit theorem for mixing random fields miskolc mathematical notes lavancier jesper and ege rubak determinantal point process models and statistical inference journal of the royal statistical society series b statistical methodology rahul mazumder jerome h friedman and trevor hastie sparsenet coordinate descent with nonconvex penalties journal of the american statistical association jesper and rasmus plenge waagepetersen statistical inference and simulation for spatial point processes crc press jesper and rasmus plenge waagepetersen modern statistics for spatial point processes scandinavian journal of statistics art b owen a robust hybrid of lasso and ridge regression contemporary mathematics dimitris n politis efstathios paparoditis and joseph p romano large sample inference for irregularly spaced dependent observations based on subsampling the indian journal of statistics series a r core team r a language and environment for statistical computing r foundation for statistical computing vienna austria url https stephen l rathbun and noel cressie asymptotic properties of estimators for the parameters of spatial inhomogeneous poisson point processes advances in applied probability frederic paik schoenberg consistent parametric estimation of the intensity of a point process journal of statistical planning and inference andrew l thurman and jun zhu variable selection for spatial poisson point processes via a regularization method statistical methodology andrew l thurman rao fu yongtao guan and jun zhu regularized estimating equations for model selection of clustered spatial point processes statistica sinica robert tibshirani regression shrinkage and selection via the lasso journal of the royal statistical society series b statistical methodology robert tibshirani michael saunders saharon rosset ji zhu and keith knight sparsity and smoothness via the fused lasso journal of the royal statistical society series b statistical methodology rasmus plenge waagepetersen an estimating function approach to inference for inhomogeneous processes biometrics rasmus plenge waagepetersen estimating functions for inhomogeneous spatial point processes with incomplete covariate data biometrika rasmus plenge waagepetersen and yongtao guan estimation for inhomogeneous spatial point processes journal of the royal statistical society series b statistical methodology hansheng wang guodong li and guohua jiang robust regression shrinkage and consistent variable selection through the journal of business economic statistics hansheng wang runze li and tsai tuning parameter selectors for the smoothly clipped absolute deviation method biometrika larry wasserman and kathryn roeder variable selection the annals of statistics ming yuan and yi lin model 
selection and estimation in regression with grouped variables journal of the royal statistical society series b statistical methodology yu ryan yue and ji meng loh variable selection for inhomogeneous spatial point process models canadian journal of statistics zhang nearly unbiased variable selection under minimax concave penalty the annals of statistics zhang and jian huang the sparsity and bias of the lasso selection in linear regression the annals of statistics yiyun zhang runze li and tsai regularization parameter selections via generalized information criterion journal of the american statistical association hui zou the adaptive lasso and its oracle properties journal of the american statistical association hui zou and trevor hastie regularization and variable selection via the elastic net journal of the royal statistical society series b statistical methodology hui zou and runze li sparse estimates in nonconcave penalized likelihood models the annals of statistics hui zou and hao helen zhang on the adaptive with a diverging number of parameters the annals of statistics a auxiliary lemma the following result is used in the proof of theorems throughout the proofs the notation xx op xn or xb op xn for a random vector xn and a sequence of real numbers xn means that kxn k op xn and kxn k op xn in the same way for a vector vn or a squared matrix mn the notation vn o xn and mn o xn mean that kvn k o xn and kmn k o xn lemma under the conditions the following convergence holds in distribution as n d bn w cn w n ip n w moreover as n n w op proof let us first note that using campbell theorems var n w bn w cn w the proof of follows coeurjolly and let ci i d be the unit box centered at i zd and define in i zd ci dn set dn ci n where ci n ci dn we have n w x yi n where yi n x z w u z u exp z u du w u z u ci n n for any n and any i in yi n has zero mean and by condition sup sup e kyi n if we combine with conditions we can apply theorem a central limit theorem for triangular arrays of random fields to obtain which also implies that bn w cn w n w op as n the second result is deduced from condition which in particular implies that bn w cn w o b proof of theorem in the proof of this result and the following ones the notation stands for a generic constant which may vary from line to line in particular this constant is independent of n and proof let dn and k kp rp we remind the reader that the estimate of is defined as the maximum of the function q given by over an open convex bounded set of rp for any k such that kkk k dn k for n sufficiently large assume this is valid in the following to prove theorem we follow the main argument by fan and li and aim at proving that for any given there exists k such that for n sufficiently large p sup k where k q w dn k q w equation will imply that with probability at least there exists a local maximum in the ball dn k kkk k and therefore a local maximizer such that k op dn we decompose k as k where n w dn k n w p x j j dn kj since u is infinitely continuously differentiable and n w w then using a taylor expansion there exists t such that dn k n w dn k an w k dn k an w an w tdn k since is convex and bounded and since w and z are uniformly bounded by conditions there exists a nonnegative constant such that kan w an w tdn k k let m be the smallest eigenvalue of a squared matrix by condition k an w k lim inf an w lim inf hence dn k n w k kkk dn regarding the term s x j j dn kj since for any j the penalty function j is nonnegative and j for j s since dn o then by for n 
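A cleaner statement of the lemma sketched above, as a hedged reconstruction: U_n(w; beta) denotes the weighted score, i.e. the derivative of the weighted Poisson log-likelihood, and the variance identity follows from the Campbell theorems as indicated in the proof; the exact normalisation is an assumption:

```latex
U_n(w;\beta) \;=\; \sum_{u \in X \cap D_n} w(u)\,\mathbf z(u) \;-\; \int_{D_n} w(u)\,\mathbf z(u)\, e^{\beta^{\top}\mathbf z(u)}\,\mathrm du ,
\qquad
\operatorname{Var}\{U_n(w;\beta_0)\} \;=\; B_n(w,\beta_0)+C_n(w,\beta_0),

\{B_n(w,\beta_0)+C_n(w,\beta_0)\}^{-1/2}\, U_n(w;\beta_0) \;\xrightarrow{\ d\ }\; \mathcal N(0, I_p) \quad\text{as } n\to\infty .
```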
sufficiently large j is twice continuously differentiable for every tdn kj with t therefore using a taylor expansion there exist tj j s such that dn s x kj j sign j s x j dn s x dn j tj dn kj now by definition of an and cn and from condition we deduce that there exists such that an dn cn san dn cn from inequality since cn o dn o and an dn o then for n sufficiently large k dn k n w k kkk dn sdn we now return to for n sufficiently large p sup k p k n w k dn sdn since dn o by choosing k large enough there exists such that for n sufficiently large p sup k p k n w k for any given from c proof of theorem to prove theorem i we provide lemma as follows lemma assume the conditions and condition hold if an o and bn as n then with probability tending to for any satisfying k op and for any constant q w max q w proof it is sufficient to show that with probability tending to as n for any satisfying k op for some small and for j s p w w for and for first note that by we obtain k n w k op second by conditions there exists t such that p x n w n w n w t j op op op third let and bn the sequence given by by condition bn is and since by assumption bn in particular bn for n sufficiently large therefore for n sufficiently large w n w p j sign n w j n w n w bn p w as n since n w op and bn this proves we proceed similarly to prove proof we now focus on the proof of theorem since theorem i is proved by lemma we only need to prove theorem ii which is the asymptotic normality of as shown in theorem there is a consistent local maximizer of q w and it can be shown that there exists an estimator in theorem that is a consistent local maximizer of q w which is regarded as a function of and that satisfies w for j s and there exists t and t such that n w j sign s n w x n w j sign s s x n w x n w jl j j sign j where jl n w n w and j j sign j sign we decompose j as j where j i j and j i j and where j is the sequence defined in the condition under this condition the following taylor expansion can be derived for the term there exists t and t such that j i j j sign i j j i j op where the latter equation ensues from theorem and condition again from p theorem i j which implies that i j so j op op regarding the term since is a lipschitz function there exists such that i j by theorem op and i j op so op and we deduce that j j op op let w resp w be the first s components resp s s corner of n w resp n w let also be the s s matrix containing jl j l finally let the vector the vector and the s s matrix mn be sign s sign s and mn w w we rewrite both sides of as w w by definition of given by and from we obtain op op using this we deduce by premultiplying both sides of by mn that mn w w o kmn k op kmn k op kmn k op kmn k the condition implies that there exists an s s positive definite matrix such that for all sufficiently large n we have w w hence kmn k o now k op by conditions and by theorem and k op by theorem and by theorem i finally since by assumption an o we deduce that kmn k op op kmn k op kmn k o kmn k o an o therefore we have that mn w mn w op from theorem i and by slutsky s theorem we deduce that d w w w n is as n which can be rewritten in particular under as d w n is where w is given by d map of covariates figure maps of covariates designed in scenario the first two top left images are the elevation and the slope the other covariates are generated as standard gaussian white noise but transformed to get multicollinearity figure maps of covariates used in scenario and in application from left to right elevation slope aluminium 
boron, and calcium (first row); copper, iron, potassium, magnesium, and manganese (second row); phosphorus, zinc, nitrogen, nitrogen mineralisation, and pH (third row).
wireless network design for control systems a survey aug pangun park sinem coleri ergen carlo fischione chenyang lu and karl henrik johansson networked control systems wncs are composed of spatially distributed sensors actuators and controllers communicating through wireless networks instead of conventional wired connections due to their main benefits in the reduction of deployment and maintenance costs large flexibility and possible enhancement of safety wncs are becoming a fundamental infrastructure technology for critical control systems in automotive electrical systems avionics control systems building management systems and industrial automation systems the main challenge in wncs is to jointly design the communication and control systems considering their tight interaction to improve the control performance and the network lifetime in this survey we make an exhaustive review of the literature on wireless network design and optimization for wncs first we discuss what we call the critical interactive variables including sampling period message delay message dropout and network energy consumption the mutual effects of these communication and control variables motivate their joint tuning we discuss the effect of controllable wireless network parameters at all layers of the communication protocols on the probability distribution of these interactive variables we also review the current wireless network standardization for wncs and their corresponding methodology for adapting the network parameters moreover we discuss the analysis and design of control systems taking into account the effect of the interactive variables on the control system performance finally we present the wireless network design and optimization for wncs while highlighting the tradeoff between the achievable performance and complexity of various approaches we conclude the survey by highlighting major research issues and identifying future research directions index networked control systems wireless sensor and actuator networks joint design delay reliability sampling rate network lifetime optimization i ntroduction recent advances in wireless networking sensing computing and control are revolutionizing how control systems interact with information and physical processes such as systems cps internet of things iot and tactile internet in wireless networked control systems wncs sensor nodes attached to the physical plant sample and transmit their measurements to the controller over a wireless channel controllers compute control commands park is with the department of radio and information communications engineering chungnam national university korea email pgpark coleri ergen is with the department of electrical and electronics engineering koc university istanbul turkey email sergen lu is with the department of computer science and engineering washington university in louis louis usa lu fischione and johansson are with the access linnaeus center electrical engineering royal institute of technology stockholm sweden carlofi kallej park and coleri ergen contributed equally to this work based on these sensor data which are then forwarded to the actuators in order to influence the dynamics of the physical plant in particular wncs are strongly related to cps and tactile internet since these emerging techniques deal with the control of physical systems over the networks there is a strong technology push behind wncs through the rise of embedded computing wireless networks advanced control and cloud computing as well as a pull from 
emerging applications in automotive avionics building management and industrial automation for example wncs play a key role in industry the ease of installation and maintenance large flexibility and increased safety make wncs a fundamental infrastructure technology for control systems wncs applications have been backed up by several international organizations such as wireless avionics alliance zigbee alliance alliance international society of automation highway addressable remote transducer communication foundation and wireless industrial networking alliance wncs require novel design mechanisms to address the interaction between control and wireless systems for maximum overall system performance and efficiency conventional control system design is based on the assumption of instantaneous delivery of sensor data and control commands with extremely high reliabilities the usage of wireless networks in the data transmission introduces delay and message error probability at all times transmission failures or deadline misses may result in the degradation of the control system performance and even more serious economic losses or reduced human safety hence control system design needs to include mechanisms to tolerate message loss and delay on the other hand wireless network design needs to consider the strict delay and reliability constraints of control systems the data transmissions should be sufficiently reliable and deterministic with the latency on the order of seconds or even milliseconds depending on the time constraints of the closedloop system furthermore removing cables for the data communication of sensors and actuators motivates the removal of the power supply to these nodes to achieve full flexibility the limited stored battery or harvested energy of these components brings additional limitation on the energy consumption of the wireless network the interaction between wireless networks and control systems can be illustrated by an example a wncs connects sensors attached to a plant to a controller via the wireless networking protocol ieee fig shows the control cost of the wncs using the ieee protocol message loss probability sfrag replacements iii wireless networked control systems iv critical interactive system variables sampling period network delay message dropout network energy consumption allowable control cost network constraints wireless network sampling period ms a control cost for various sampling periods and message loss probabilities standardization wireless network parameters vi control system analysis and design sampling sampling message delay ms vii wireless network design techniques for control systems interactive design joint design sfrag replacements allowable control cost network constraints fig main section structure and relations message loss probability b control cost for various message delays and message loss probabilities fig control cost of a wncs using ieee protocol for various sampling periods message delays and message loss probabilities for different sampling periods message delays and message loss probabilities the quadratic control cost is defined as a sum of the deviations of the plant state from its desired setpoint and the magnitude of the control input the maximum allowable control cost is set to the transparent region indicates that the maximum allowable control cost or network requirements are not feasible for instance the control cost would be minimized when there is no message loss and no delay but this point is infeasible since these requirements 
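The quadratic control cost referred to here takes, in a standard form consistent with that description, the following shape (the weighting matrices Q and R, the horizon N and the setpoint x_ref are generic symbols, not taken from the survey):

```latex
J \;=\; \mathbb E\!\left[\sum_{k=0}^{N}\bigl(x_k - x_{\mathrm{ref}}\bigr)^{\!\top} Q\,\bigl(x_k - x_{\mathrm{ref}}\bigr) \;+\; u_k^{\top} R\, u_k\right],
```

where x_k is the plant state and u_k the control input; longer sampling periods, larger delays and higher loss probabilities inflate this cost, which is what the feasible regions in the figure capture.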
cannot be met by the IEEE 802.15.4 protocol. The control cost generally increases as the message loss probability, message delay, and sampling period increase. Since short sampling periods increase the traffic load, the message loss probability and the message delay are then closer to their critical values, above which the system is unstable. Hence, the area and shape of the feasible region depend significantly on the network performance. Determining the optimal parameters for minimum network cost while achieving feasibility is not trivial because of the complex interdependence of the control and communication systems.

Recently, low-power wide-area network (LPWAN) technologies, such as LoRaWAN/LoRa and narrowband IoT (NB-IoT), have been developed to enable IoT connections over distances of several kilometers. Even though some related works on WNCS are applicable to such control applications as smart grid, smart transportation, and remote healthcare, this survey focuses on wireless control systems based on low-power wireless personal area networks (LoWPANs) with IEEE 802.15.4 radios and their applications. Some recent excellent surveys exist on wireless networks, particularly for industrial automation: they discuss the general requirements and representative protocols of wireless sensor networks (WSNs) for industrial applications, compare popular industrial WSN standards in terms of architecture and design, and elaborate on scheduling algorithms, protocols, experimentation, and joint design approaches for industrial automation, mostly with a focus on WirelessHART networks and their control applications. This article, in contrast, provides a comprehensive survey of the design space of wireless networks for control systems and of the potential synergy and interaction between control and communication designs. Specifically, our survey touches on the importance of the interaction between recent advanced works on NCS and WSN, as well as on different approaches to wireless network design and optimization for various WNCS applications. The goal of this survey is to unveil and address the requirements and challenges associated with wireless network design for WNCS and to present a review of recent advances in novel design approaches, optimizations, algorithms, and protocols for effectively developing WNCS.

The section structure and relations are illustrated in Fig. Section II introduces some inspiring applications of WNCS in automotive electronics, avionics, building automation, and industrial automation. Section III describes WNCS where multiple plants are remotely controlled over a wireless network. Section IV presents the critical interactive variables of communication and control systems, including sampling period, message delay, message dropout, and energy consumption. Section V introduces basic wireless network standardization and the key network parameters at various protocol layers that are useful to tune the distribution of the critical interactive variables. Section VI then provides an overview of recent control design methods incorporating the interactive variables. Section VII presents various optimization techniques for wireless networks integrating the control systems. We classify the design approaches into two categories based on the degree of integration: interactive designs and joint designs. In the interactive design, the wireless network parameters are tuned to satisfy given requirements of the control system; in the joint design, the wireless network and control system parameters are jointly optimized considering the tradeoff between their performances. Section VIII describes three experimental testbeds of WNCS. We conclude this article by
highlighting promising research directions in Section IX.

II. Motivating Applications

This section explores some inspiring applications of WNCS.

Intra-vehicular wireless networks: Intra-vehicular wireless networks have been recently proposed with the goal of reducing the manufacturing and maintenance cost of the large amount of wiring harnesses within vehicles. The wiring harnesses used for the transmission of data and power delivery within the current vehicle architecture may comprise thousands of parts, weigh tens of kilograms, and contain kilometers of wiring. Eliminating these wires would additionally have the potential to improve fuel efficiency, reduce greenhouse gas emissions, and spur innovation by providing an open architecture that can accommodate new systems and applications. An intra-vehicular wireless network consists of a central control unit, a battery, electronic control units, wireless sensors, and wireless actuators. Wireless sensor nodes send their data to the corresponding electronic control unit while scavenging energy from either one of the electronic control units or energy scavenging devices attached directly to them. Actuators receive their commands from the corresponding electronic control unit, and their power from electronic control units or an energy scavenging device. The reason for incorporating energy scavenging into the envisioned architecture is to eliminate the lifetime limitation of fixed storage batteries. The applications that can exploit an intra-vehicular wireless architecture fall into one of three categories: powertrain, chassis, and body. Powertrain applications use automotive sensors in the engine, transmission, and onboard diagnostics for control of vehicle energy use, driveability, and performance. Chassis applications control vehicle handling and safety in steering, suspension, braking, and stability elements of the vehicle. Body applications include sensors mainly used for vehicle occupant needs, such as occupant safety, security, comfort, convenience, and information. The first intra-vehicular wireless network applications are the tire pressure monitoring system (TPMS) and the intelligent tire. TPMS is based on the wireless transmission of tire pressure data from the sensors to the vehicle body; it is currently being integrated into all new cars in both the US and Europe. The intelligent tire is based on the placement of wireless sensors inside the tire to transfer accelerometer data to the coordination nodes in the body of the car, with the goal of improving the performance of active safety systems. Since accelerometer data are generated at a much higher rate than the pressure data, and batteries cannot be placed within the tire, the intelligent tire contains an ultra-low-power wireless communication system powered by energy scavenging technology, which is now being commercialized by Pirelli.

Wireless avionics: Wireless avionics intra-communications (WAIC) have a tremendous potential to improve an aircraft's performance through more efficient flight operations, reduction in overall weight and maintenance costs, and enhancement of safety. Currently, the cable harness provides the connection between sensors and their corresponding control units, to sample and process sensor information, and then among multiple control units over a backbone network for the safety-critical flight control. Due to the high demands on safety and efficiency, the modern aircraft relies on large wired sensor and actuator networks consisting of a very large number of devices. The wiring harness represents a non-negligible share of an aircraft's weight; for instance, the wiring harness of the Airbus A380 weighs several tonnes. The WAIC alliance considers wireless sensors of avionics located at various locations both within and outside the aircraft.
The sensors are used to monitor the health of the aircraft structure (smoke sensors and ice detectors) and of its critical systems (engine sensors and landing gear sensors). The sensor information is communicated to a central onboard entity. Potential WAIC applications are categorized into two broad classes according to their application data rate requirements: low and high data rate applications, with data rates below and above a given threshold, respectively. At the World Radio Conference, the International Telecommunication Union voted to grant the 4.2-4.4 GHz frequency band for WAIC systems, to allow the replacement of the heavy wiring used in aircraft. The WAIC alliance is dedicating efforts to the performance analysis of the assigned frequency band and to the design of the wireless networks for avionics control systems. Space shuttles and international space stations have already been using commercially available wireless solutions, such as EWB, MicroTAU, and UltraWIS of Invocon.

Building automation: Wireless-network-based building automation provides significant savings in installation cost, allowing a large retrofit market to be addressed as well as new constructions. Building automation aims to achieve an optimal level of occupant comfort while minimizing energy usage. These control systems are the integrative component for fans, pumps, equipment, dampers, and thermostats. Modern building control systems require a wide variety of sensing capabilities in order to control temperature, pressure, humidity, and flow rates. The European Environment Agency shows that buildings account for large shares of the total electricity and water consumption. An ON World survey reports that a large fraction of early adopters across five continents are interested in new technologies that will help them better manage their energy consumption, and are willing to pay for energy management equipment if it yields sufficient savings on their energy bill for smart energy home applications. An example of energy management systems using WSNs is the intelligent building ventilation control described in the literature: an underfloor air distribution indoor climate regulation process is set up with the injection of a fresh airflow from the floor and an exhaust located at the ceiling level, and the considered system is composed of ventilated rooms, fans, plenums, and wireless sensors. Underfloor air distribution systems can reduce the energy consumption of buildings while improving thermal comfort, ventilation efficiency, and indoor air quality by using the WSNs.

Industrial automation: A wireless sensor and actuator network (WSAN) is an effective smart infrastructure for process control and factory automation. Emerson Process Management estimates that WSNs enable substantial cost savings compared to the deployment cost of wired field devices in the industrial automation domain. In industrial process control, the product is processed in a continuous manner (oil, gas, chemicals); in factory automation, or discrete manufacturing, the products are instead processed in discrete steps with individual elements (cars, drugs, food). Industrial wireless sensors typically report the state of a fuse, heating, ventilation, or vibration levels on pumps. Since the discrete products of factory automation require sophisticated operations of robots and belt conveyors at high speed, the sampling rates and the associated requirements are often stricter than those of process automation. Furthermore, many industrial automation applications might in the future require networks of hundreds of sensors and actuators communicating with access points. According to
Technavio, WSN solutions in industrial control applications are one of the major emerging industrial trends. Many wireless networking standards have been proposed for industrial processes, such as WirelessHART by ABB, Emerson, and Siemens, and ISA100.11a by Honeywell. Some industrial wireless solutions are also commercially available and deployed, such as Tropos of ABB and Smart Wireless of Emerson.

III. Wireless Networked Control Systems

Fig. depicts the generalized diagram of WNCS, where multiple plants are remotely controlled over a wireless network.

[Figure: overview of the considered NCS setup — multiple plants are controlled by multiple controllers; a wireless network closes the loop from sensor to controller and from controller to actuator; the network includes not only nodes attached to the plant or controller but also relay nodes.]

The wireless network includes sensors and actuators attached to the plants, controllers, and relay nodes. A plant is a physical system to be controlled; the inputs and outputs of the plant are continuous-time signals. The outputs of plant i are sampled at periodic or aperiodic intervals by the wireless sensors. Each packet associated with the state of the plant is transmitted to the controller over the wireless network. When the controller receives the measurements, it computes the control command. The control commands are then sent to the actuator attached to the plant. Hence, the system contains both a sensor-to-controller and a controller-to-actuator component. Since both sensor-to-controller and controller-to-actuator channels use a wireless network, general WNCS of this form are also called two-channel feedback NCS. The system scenario is quite general, as it applies to any interconnection between a plant and a controller.

A. Control Systems

The objective of the feedback control system is to ensure that the system has desirable dynamic and steady-state response characteristics and that it is able to efficiently attenuate disturbances and handle network delays and losses. Generally, the system should satisfy various design objectives: stability, fast and smooth responses to setpoint changes, elimination of steady-state errors, avoidance of excessive control actions, and a satisfactory degree of robustness to process variations and model uncertainty. In particular, the stability of a control system is an extremely important requirement. Most NCS design methods consider subsets of these requirements to synthesize the estimator and the controller. In this subsection, we briefly introduce some fundamental aspects of modeling, stability, control cost, and controller and estimator design for NCSs.

NCS modeling: NCSs can be modeled using three main approaches, namely the discrete-time approach, the sampled-data approach, and the continuous-time (emulation) approach, depending on the controller and the plant. The discrete-time approach considers discrete-time controllers and a discrete-time plant model. The discrete-time representation often leads to an uncertain system in which the uncertainties appear in the matrix exponential form due to discretization; typically, this approach is applied to NCS with linear plants and controllers, since in that case exact discrete-time models can be derived. Secondly, the sampled-data approach considers discrete-time controllers but for a continuous-time model that describes the NCS dynamics without exploiting any form of discretization; delay differential equations can be used to model the dynamics, and this approach is able to deal simultaneously with delays and sampling intervals. Finally, the emulation approach designs a continuous-time controller to stabilize a continuous-time plant model; the controller then needs to be approximated by a discrete-time representation suitable for computer implementation, whereas typical WNCS consider the
discrete-time controller. We will discuss more details of the analysis and design of WNCS to deal with the network effects in Section VI.

Stability: Stability is a basic requirement for controller design. We briefly describe two fundamental notions of stability, namely bounded-input bounded-output (BIBO) stability and internal stability. While BIBO stability is the ability of the system to produce a bounded output for any bounded input, internal stability is the system's ability to return to equilibrium after a perturbation. For linear systems these two notions are closely related, but for nonlinear systems they are not the same. BIBO stability concerns the forced response of the system for a bounded input: a system is defined to be BIBO stable if every bounded input to the system results in a bounded output; if the output is not bounded for some bounded input, the system is said to be unstable. Internal stability is based on the magnitude of the system response in the steady state: if the response is unbounded, the system is said to be unstable. A system is said to be asymptotically stable if its response to any initial conditions decays to zero asymptotically in the steady state; it is defined to be exponentially stable if the response, in addition, decays exponentially towards zero. Faster convergence often means better performance; in fact, many NCS research works analyze exponential stability conditions. Furthermore, if the response due to the initial conditions remains bounded but does not decay to zero, the system is said to be marginally stable; hence, a system cannot be both asymptotically stable and marginally stable. If a linear system is asymptotically stable, then it is BIBO stable; however, BIBO stability does not generally imply internal stability. Internal stability is stronger in some sense, because BIBO stability can hide unstable internal behaviors that do not appear in the output.

Control cost: Besides stability guarantees, a certain control performance is typically desired. The closed-loop performance of a control system can be quantified by a control cost as a function of the plant state and the control inputs. A general regulation control goal is to keep the state error from the setpoint close to zero while minimizing the control actions; hence, the control cost often consists of two terms, namely the deviations of the plant state from its desired setpoint and the magnitude of the control input. A common controller design approach is via a linear quadratic control formulation: for linear systems and a quadratic cost function, the quadratic control cost is defined as a sum of quadratic functions of the state deviation and the control effort. In such a formulation, the optimal control policy that minimizes the cost function can be explicitly computed from a Riccati equation.

Controller design: The controller should ensure that the system has desirable dynamic and steady-state response characteristics. For NCS, the network delay and loss may degrade the control performance and even destabilize the system. Some surveys present controller design for NCSs; for a historical review, see the available surveys. We briefly describe three representative controllers, namely the PID controller, linear quadratic regulator (LQR) control, and model predictive control (MPC). PID control is almost a century old and has remained the most widely used controller in process control until today. One of the main reasons for this controller being so widely used is that it can be designed without precise knowledge of the plant model. A PID controller calculates an error value as the difference between a desired setpoint and
a measured plant state. The control signal is a sum of three terms: the P-term, which is proportional to the error; the I-term, which is proportional to the integral of the error; and the D-term, which is proportional to the derivative of the error. The controller parameters are the proportional gain, the integral time, and the derivative time. The integral, proportional, and derivative parts can be interpreted as control actions based on the past, the present, and the future of the plant state, respectively. Several parameter tuning methods for PID controllers exist; historically, PID tuning methods require a trial-and-error process in order to achieve a desired stability and control performance.

The linear quadratic problem is one of the most fundamental optimal control problems, where the objective is to minimize a quadratic cost function subject to plant dynamics described by a set of linear differential equations. The quadratic cost is a sum of the plant state cost, the final state cost, and the control input cost. The optimal controller is a linear feedback controller, and the LQR algorithm is basically an automated way to find it. Furthermore, LQR is an important subproblem of the general linear quadratic Gaussian (LQG) problem, which deals with uncertain linear systems disturbed by additive Gaussian noise: while the LQR problem assumes no noise and full state observation, the LQG problem considers input and measurement noise and partial state observation.

Finally, MPC solves an optimal linear quadratic control problem over a receding horizon. Hence, the optimization problem is similar to the controller design problem of LQR, but it is solved over a moving horizon in order to handle model uncertainties. In contrast to controllers such as a PID or an LQR controller, which compute the current control action as a function of the current plant state using information about the plant from the past, predictive controllers compute the control based on the system's predicted future behaviour. MPC tries to optimize the system behaviour in a receding-horizon fashion: it takes control commands and sensing measurements to estimate the current and future state of the plant based on the control system model, and the control command is optimized to obtain the desired plant state based on a quadratic cost. In practice, there are often hard constraints imposed on the state and the control input. Compared to PID and LQR control, the MPC framework efficiently handles such constraints; moreover, MPC can handle disordered or missing measurements or control commands, which can appear in an NCS setting.

Estimator design: Due to network uncertainties, state estimation is a crucial and significant research field of NCSs. An estimator is used to predict the plant state by using the partially received plant measurements; moreover, the estimator typically compensates measurement noise, network delays, and packet losses. This predicted state is sometimes used in the calculation of the control command. The Kalman filter is one of the most popular approaches to obtain the estimated plant states for NCS, and modified Kalman filters have been proposed to deal with different models of the network delay and loss. The state estimation problem is often formulated by probabilistically modeling the uncertainties occurring between the sensor and the controller; however, an alternative approach exploiting the received measurement packets has also been proposed. In LQG control, a Kalman filter is used to estimate the state from the plant output, and the optimal state estimator and the optimal state feedback controller are combined for the LQG problem; the controller is the linear feedback controller of LQR. The optimal LQG estimator and controller can be designed separately if the communication protocol supports the acknowledgement of the packet transmissions of both the sensor-to-controller and the controller-to-actuator channels. In sharp contrast, the separation principle between estimator and controller does not hold if the acknowledgement is not supported. Hence, the underlying network operation is critical in the design of the overall estimator and controller.

B. Wireless Networks

For the vast majority of control applications, most of the traffic over the wireless network consists of sensor data from the sensor nodes towards one or more controllers. The controller either sits on the backbone or is reachable via one or more backbone access points. Therefore, data flows between sensor nodes and controllers are not necessarily symmetric in WNCS; in particular, asymmetrical link costs and unidirectional routes are common for most of the sensor traffic. Furthermore, multiple sensors attached to a single plant may independently transmit their measurements to the controller. In some other process automation environments, multicast may be used to deliver data to multiple nodes that may be functionally similar, such as the delivery of alerts to multiple nodes in an automation control room.

Wireless sensors and actuators in control environments can be powered by battery, energy scavenging, or power cable. Battery storage provides a fixed amount of energy and requires replacement once the energy is consumed; therefore, efficient usage of energy is vital in achieving a high network lifetime. Energy harvesting techniques, on the other hand, may rely on natural sources such as solar, indoor lighting, vibrational, and thermal energy, inductive and magnetic resonant coupling, and radio frequency; efficient usage of energy harvesting may attain an infinite lifetime for the sensor and actuator nodes. In most situations, the actuators need to be powered separately, because a significant amount of energy is required for the actuation commands (e.g., opening a valve).

[Figure: timing diagram for control over a wireless network with sampling period, message delay, and message dropouts.]

IV. Critical Interactive System Variables

The critical system variables creating interactions between the WNCS control and communication systems are the sampling period, the message delay, and the message dropout. Fig. illustrates the timing diagram of control over a wireless network with sampling period, message delay, and message dropouts. We distinguish messages of the control application layer from packets of the communication layer. The control system generates messages, such as the sensor samples of the sensor-to-controller channel or the control commands of the controller-to-actuator channel, and the control system generally determines the sampling period. The communication protocols then convert the message to the packet format and transmit the packet to the destination. Since the wireless channel is lossy, the transmitter may perform multiple packet retransmissions associated with one message, depending on the communication protocol. If all the packet transmissions of the message fail, due to a bursty channel, then the message is considered to be lost. The message delay is the time delay between when the message was generated by the control system at a sensor or a controller and when it is received at the destination; hence, the message delay of a successfully received message depends on the number of packet retransmissions. Furthermore, since the routing path or network congestion affects the message delay, the message arrivals are possibly disordered, as shown in Fig.

The design of the wireless network at multiple protocol layers determines the probability distributions of the message delay and the message dropout. These variables, together with the sampling period, influence the stability of the NCS and the energy consumption of the network. Fig. presents the dependences between the critical system variables. Since WNCS design requires an understanding of the interplay between communication and control, we discuss the effect of these system variables on both control and communication system performance.

[Figure: complex interactions between the critical system variables (sampling period, message delay, message dropout/discard/loss, packet delay, packet loss, energy consumption) and their communication-layer causes (transmission, medium access, queueing, shadow fading, multipath fading, Doppler shift, interference); the arrows represent some of the explicit relationships.]

A. Sampling Period

Control system aspect: The signals of the plant need to be sampled before they are transmitted through a wireless network. It is important to note that the choice of the sampling period should be related to the desired properties of the system, such as the response to reference signals, the influence of disturbances, the network traffic, and the computational load. There are two methods to sample continuous-time signals in WNCS: time-triggered and event-triggered sampling. In time-triggered sampling, the next sampling instant occurs after the elapse of a fixed time interval, regardless of the plant state. Periodic sampling is widely used in digital control systems due to the simple analysis and design of such systems. Based on experience and simulations, a common rule for the selection of the sampling period is to make sure that the product of the desired natural frequency of the system and the sampling period h lies in the range of roughly 0.2 to 0.6; this typically implies that we are sampling on the order of ten to thirty samples per period of the dominating mode of the system. In a traditional digital control system based on wired connections, the smaller the sampling period, the better the performance achieved by the control system. In wireless networks, however, a decrease in the sampling period increases the network traffic, which in turn increases the message loss probability and the message delay; therefore, decreasing the sampling period eventually degrades the control performance, as illustrated in Fig. Recently, control schemes such as event-triggered and self-triggered control systems have been proposed, where sensing and actuating are performed when the system needs attention; hence, the traffic pattern of event-triggered and self-triggered control systems is asynchronous rather than periodic. In event-triggered control, the execution of control tasks is determined by the occurrence of an event rather than by the elapse of a fixed time period as in time-triggered control: events are triggered only when stability or a certain control performance is about to be lost. Event-triggered control can significantly reduce the traffic load of the network with no or minor control performance degradation, since traffic is generated only if the signal changes by a specified amount. However, since most trigger conditions depend on the instantaneous state, the plant state is required to be monitored continuously. Self-triggered control has been proposed to prevent such monitoring: in self-triggered control, an estimation of
the next event time instant is made. The online detection of plant disturbances and the corresponding control actions cannot be generated with self-triggered control; a combination of event-triggered and self-triggered control is therefore often desirable.

Communication system aspect: The choice between time-triggered and event-triggered sampling in the control system determines the pattern of message generation in the wireless network. Time-triggered sampling results in regular, periodic message generation at a predetermined rate. If a random medium access mechanism is used, the increase in network load results in worse performance of the other critical interactive system variables: message delay, message dropout, and energy consumption. The increase in control system performance with higher sampling rates therefore does not hold, due to these network effects. On the other hand, the predetermined nature of packet transmissions in time-triggered sampling allows explicit scheduling of the sensor node transmissions, reducing the message loss and delay caused by random medium access. A scheduled access mechanism can predetermine the transmission times of all the components such that additional nodes have minimal effect on the transmissions of the existing nodes: when the transmissions of the periodically transmitting nodes are distributed uniformly over time, rather than being allocated immediately as they arrive, additional nodes may be allocated without causing any jitter in the existing periodic allocation.

The optimal choice of the medium access control mechanism is not trivial for event-triggered control; the overall performance of event-triggered control systems significantly depends on the plant dynamics and on the number of control loops. The random access mechanism is a good alternative if a large number of slow dynamical plants share the wireless network: in this case, a scheduled access mechanism may result in a significant delay between the triggering of an event and the transmission in its assigned slot, due to the large number of control loops, while most time slots are left unused since the traffic load of slow plants is low. On the other hand, the scheduled access mechanism performs well when a small number of fast plants is controlled by the event-triggered control algorithm: random access generally degrades the reliability and delay performance under the high traffic load of fast plants, and when there are packet losses in the random access scheme, the event-triggered control further increases the traffic load, which may eventually incur stability problems. The prediction capability of self-triggered control alleviates the high network load problem of time-triggered sampling and the random message generation nature of event-triggered sampling: by predicting the evolution of the triggering threshold crossings of the plant state, the prediction allows the explicit scheduling of sensor node transmissions, eliminating the high message delays and losses of random medium access. Most existing works on event-triggered and self-triggered control assume that message dropouts and message disorders do not occur. This assumption is not practical when the packets of the messages are transmitted through a wireless network; dealing with message dropouts and message disorders in these control schemes is challenging for both the wireless network and the control system.

B. Message Delay

Control system aspect: There are mainly two kinds of message delays in NCSs, the sensor-to-controller delay and the controller-to-actuator delay, as illustrated in Fig. The sensor-to-controller delay represents the time interval from the instant when the physical plant is sampled to the instant when the controller receives the sampled message, and the controller-to-actuator delay indicates the time duration from the generation of the control message at the controller until
its reception at the actuator. The increase in both delays prevents the timely delivery of the control feedback, which degrades the system performance, as exemplified in Fig. In control theory, these delays cause phase shifts that limit the control bandwidth and affect stability. Since delays are especially pernicious for closed-loop systems, some form of modeling and prediction is essential to overcome their effects; techniques proposed to overcome delays use predictive filters, including the Kalman filter. In practice, the message delay can be estimated from time-stamped data if the receiving node is synchronized through the wireless network, and the control algorithm compensates the measured or predicted delay unless it is too large. Such compensation is generally impossible for controller-to-actuator delays; hence, actuator delays are more critical than the sensor-to-controller delays. The packet delay variation is another interesting metric, since it significantly affects the control performance and causes possible instability even when the mean delay is small. In particular, a heavy tail of the delay distribution significantly degrades the stability of the system; the amount of degradation depends on the dynamics of the process and on the distribution of the delay variations. One way to eliminate delay variations is to use a buffer, trading delay for its variation.

Communication system aspect: The message delay in a multi-hop wireless network consists of the transmission delay, access delay, and queueing delay at each hop in the path from the source to the destination. Transmission delay is defined as the time required for the transmission of the packet. It depends on the amount of data to be transmitted to the destination and on the transmission rate, which in turn depends on the transmit power of the node itself and of its simultaneously active neighboring nodes. As the transmit power of a node increases, its own transmission rate increases, decreasing its own transmission delay, while causing more interference to simultaneously transmitting nodes and thus increasing their delay; the optimization of transmission power and rate should take this tradeoff into account. Medium access delay is defined as the time required to start the actual transmission of the packet. Access delay depends on the choice of the medium access control (MAC) protocol. If a random access mechanism is used, this delay depends on the network load, on the mechanism used in the transmitter and receiver, and on the random access control protocol; as the network load increases, the access delay increases due to the increase in either busy sensed channels or failed transmissions. The receiver decoding capability determines the number of simultaneously active neighboring transmitters: the decoding technique may be based on interference avoidance, in which only one packet can be received at a time; self-interference cancellation, where the node can transmit another packet while receiving; or successive interference cancellation, where the node may receive multiple packets simultaneously and eliminate the interference. Similarly, a transmitter may have the capability to transmit multiple packets simultaneously. The execution of the random access algorithm, together with its parameters, also affects the message delay. On the other hand, if scheduled access is used, the access delay in general increases as the network load increases; however, this effect may be minimized by designing efficient scheduling algorithms that adopt a uniform distribution of transmissions by exploiting the periodic transmission pattern of time-triggered control. Similar to random access, more advanced capabilities of the nodes may further decrease this access delay.
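To make the interplay among these delay components concrete, the following minimal sketch (not taken from the survey; the per-hop independence assumption and all numbers are hypothetical) estimates the delivery probability and the expected end-to-end delay of a message that traverses several hops, where each hop adds fixed per-attempt transmission and access delays plus a queueing delay, and retransmits up to a fixed limit:

```python
# Minimal sketch: expected end-to-end message delay over a multi-hop path,
# assuming independent per-hop packet loss, a fixed retransmission limit,
# and constant per-attempt transmission/access delays (all values hypothetical).

def expected_attempts(p_loss: float, max_tx: int) -> float:
    """Expected number of transmission attempts per hop, counting the
    final attempt even when all max_tx attempts are lost."""
    # Sum over k = 1..max_tx of k * P(success exactly on attempt k),
    # plus max_tx * P(all attempts fail).
    exp = sum(k * (p_loss ** (k - 1)) * (1 - p_loss) for k in range(1, max_tx + 1))
    return exp + max_tx * (p_loss ** max_tx)

def message_delivery(p_loss, max_tx, t_tx, t_access, t_queue, n_hops):
    """Return (delivery probability, expected delay in ms) over the whole path."""
    p_hop = 1 - p_loss ** max_tx              # hop succeeds within max_tx tries
    p_e2e = p_hop ** n_hops                   # message survives every hop
    per_hop = expected_attempts(p_loss, max_tx) * (t_tx + t_access) + t_queue
    return p_e2e, n_hops * per_hop

if __name__ == "__main__":
    # Hypothetical numbers: 10% per-hop loss, 3 tries, 4 ms transmission delay,
    # 6 ms average access delay, 5 ms queueing delay, 3 hops.
    prob, delay = message_delivery(0.1, 3, 4.0, 6.0, 5.0, 3)
    print(f"delivery probability ~ {prob:.3f}, expected delay ~ {delay:.1f} ms")
```

Even such a back-of-the-envelope model reflects the trade-offs discussed above: lowering the per-hop loss probability (e.g., through scheduled access) reduces both the expected delay and its variation, while allowing more retransmissions improves reliability at the cost of a heavier delay tail.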
Moreover, packet losses over the channel may require retransmissions, necessitating the repetition of the medium access and transmission delays over time; this further increases the message delay, as illustrated in Fig. Queueing delay depends on the message generation rate at the nodes and on the amount of data they are relaying along the multi-hop routing path. The message generation and forwarding rates at the nodes should be kept at an acceptable level so as not to allow packets to build up in the queue. Moreover, scheduling algorithms should consider the multi-hop forwarding in order to minimize the delay from the source to the destination. The destination may observe disordered messages, since the packets associated with the messages travel several hops over multiple routing paths or experience network congestion.

C. Message Dropout

Control system aspect: Generally, there are two main reasons for message dropouts, namely message discard due to the control algorithm and message loss due to the wireless network itself. The logical zero-order hold (ZOH) mechanism is one of the most popular and simplest approaches to discard disordered messages: the latest message is kept and older messages are discarded based on the time stamps of the messages. However, some alternatives have also been proposed to utilize the disordered messages in a filter bank. A message is considered to be lost if all packet transmissions associated with the message have eventually failed. The effect of message dropouts is more critical than that of message delay, since a dropout increases the updating interval by a multiple of the sampling period. There are mainly two types of dropouts: sensor-to-controller message dropouts and controller-to-actuator message dropouts. The controller estimates the plant state to compensate possible message dropouts of the sensor-to-controller channel; recall that Kalman filtering is one of the most popular approaches to estimate the plant state and works well if there is no significant message loss. Since the control command directly affects the plant, controller-to-actuator dropouts are more critical than sensor-to-controller dropouts. Many practical NCSs have several sensor-to-controller channels whereas the controllers are collocated with the actuators, e.g., heating, ventilation, and air conditioning control systems. The NCS literature often models the message dropout as a stochastic variable based on different assumptions on the maximum number of consecutive message dropouts. In particular, significant work has been devoted to deriving upper bounds on the updating interval for which stability can be guaranteed; these upper bounds could be used as the update deadline over the network, as we will discuss in more detail in Section VI. Bursty message dropouts are very critical for control systems, since they directly affect the upper bounds on the updating interval.

Communication system aspect: Data packets may be lost during their transmission due to the susceptibility of the wireless channel to blockage, multipath fading, Doppler shift, and interference. Obstructions between transmitter and receiver, and their variation over time, cause random variations in the received signal, called shadow fading; the probabilistic distribution of the shadow fading depends on the number, size, and material of the obstructions in the environment. Multipath fading, mainly caused by the multipath components of the transmitted signal reflected, diffracted, or scattered by surrounding objects, occurs over shorter time periods or distances than shadow fading; the multipath components arriving at the receiver cause constructive and destructive interference, changing rapidly over distance. Doppler shift, due to the relative motion between the transmitter and the receiver, may cause the
signal to decorrelate over time or impose a lower bound on the channel error rate. Furthermore, unintentional interference from the simultaneous transmissions of neighboring nodes and intentional interference in the form of jamming can disturb the successful reception of packets as well.

D. Network Energy Consumption

A truly wireless solution for WNCS requires removing the power cables in addition to the data cables, to provide full flexibility of installation and maintenance. Therefore, the nodes need to rely on either battery storage or energy harvesting techniques. Limiting the energy consumption in the wireless network prolongs the lifetime of the nodes; if enough energy scavenging can be extracted from natural sources, inductive or magnetic resonant coupling, or radio frequency, then an infinite lifetime may be achieved. Decreasing the sampling period, message delay, and message dropout improves the performance of the control system, but at the cost of higher energy consumption in the communication system. The higher the sampling rate, the greater the number of packets to be transmitted over the channel, which increases the energy consumption of the nodes. Moreover, decreasing the message delay requires increasing the transmission rate or the data handling capability of the transceivers; this again comes at the cost of increased energy consumption. Finally, decreasing the message dropout requires either increasing the transmit power to combat fading and interference or increasing the data handling capabilities; this again translates into energy consumption.

V. Wireless Network

A. Standardization

The most frequently adopted communication standards for WNCS are IEEE 802.15.4 and IEEE 802.11 with some enhancements; in particular, WirelessHART, ISA100.11a, and IEEE 802.15.4e are all based on IEEE 802.15.4. Furthermore, some recent works of the IETF consider Internet Protocol version 6 (IPv6) over low-power and lossy networks, such as 6LoWPAN, the routing protocol for low-power and lossy networks (RPL), and 6TiSCH, which are all compatible with IEEE 802.15.4.

IEEE 802.15.4 was originally developed for low-rate, low-power personal area networks (PANs) without any concern for delay and reliability. Standards such as WirelessHART, ISA100.11a, and IEEE 802.15.4e are built on top of the physical layer of IEEE 802.15.4 with additional time division multiple access (TDMA), frequency hopping, and multiple path features to provide delay and reliable packet transmission guarantees while further lowering the energy consumption. In this subsection, we first introduce IEEE 802.15.4 and then discuss WirelessHART, ISA100.11a, IEEE 802.15.4e, and the higher layers of IETF activities such as 6LoWPAN, RPL, and 6TiSCH. On the other hand, although the key intentions of the IEEE 802.11 family of wireless local area network (WLAN) standards are to provide high throughput and a continuous network connection, several extensions have been proposed to support QoS for wireless industrial communications; in particular, the IEEE 802.11e specification amendment introduces significant enhancements to support soft real-time applications. In this subsection, we will also describe the fundamental operations of basic IEEE 802.11 and IEEE 802.11e. The standards are summarized in Table I.

[Table I: comparison of the wireless standards discussed in this section (IEEE 802.15.4, WirelessHART, ISA100.11a, IEEE 802.15.4e, 6LoWPAN/RPL/6TiSCH, and IEEE 802.11/802.11e) in terms of physical layer (e.g., DSSS or OFDM), medium access control / data link layer (e.g., CSMA/CA with GTS allocation, TDMA with channel hopping and channel blacklisting, TSCH/DSME/LLDN, compaction and fragmentation, schedule management with resource allocation and performance monitoring, DCF/PCF/EDCA/HCCA), and routing (source routing, graph routing, or distance-vector routing).]

IEEE 802.15.4: The IEEE 802.15.4 standard defines the physical and MAC layers of
the protocol stack. A PAN consists of a PAN coordinator, which is responsible for managing the network, and many associated nodes. The standard supports both the star topology, in which all the associated nodes directly communicate with the PAN coordinator, and the peer-to-peer topology, where the nodes can communicate with any neighbouring node while still being managed by the PAN coordinator. The physical layer adopts direct sequence spread spectrum (DSSS), which is based on spreading the transmitted signal over a large bandwidth to enable greater resistance to interference. A single channel between 868 and 868.6 MHz, 10 channels between 902 and 928 MHz, and 16 channels between 2.4 and 2.4835 GHz are used; the transmission data rate is 250 kbps in the 2.4 GHz band, 40 kbps in the 915 MHz band, and 20 kbps in the 868 MHz band. The standard defines two channel access modalities: the beacon-enabled modality, which uses a slotted CSMA/CA and the optional guaranteed time slot (GTS) allocation mechanism, and a simpler non-beacon-enabled modality with unslotted CSMA/CA and without beacons. The communication is organized in temporal windows denoted as superframes. Fig. shows the superframe structure of the beacon-enabled mode.

[Figure: superframe structure of IEEE 802.15.4 — beacons, contention access period, guaranteed time slots, and inactive period; slot length aBaseSlotDuration * 2^SO and superframe duration aBaseSuperframeDuration * 2^SO.]

In the following we focus on the beacon-enabled modality. The network coordinator periodically sends beacon frames every beacon interval T_BI to identify its PAN and to synchronize the nodes that communicate with it. The coordinator and the nodes can communicate during the active period, called the superframe duration T_SD, and enter a low-power mode during the inactive period. The structure of the superframe is defined by two parameters, the beacon order BO and the superframe order SO, which determine the length of the superframe and of its active period, given by T_BI = aBaseSuperframeDuration * 2^BO and T_SD = aBaseSuperframeDuration * 2^SO symbols, respectively, where 0 <= SO <= BO <= 14 and aBaseSuperframeDuration = 960 is the number of symbols forming a superframe when SO is equal to 0. In addition, the superframe is divided into 16 equally sized superframe slots of length aBaseSlotDuration * 2^SO. Each active period can be further divided into a contention access period (CAP) and an optional contention free period (CFP) composed of GTSs. A slotted CSMA/CA mechanism is used to access the channel for non-time-critical data frames and GTS requests during the CAP; in the CFP, the dedicated bandwidth is used for time-critical data frames. Fig. illustrates the data transfer mechanisms of the beacon-enabled mode for the CAP and the CFP.

[Figure: data transfers of the beacon-enabled mode during the CAP and CFP — (a) non-time-critical data packet or GTS request transmission; (b) GTS data packet transmission.]

In the following we describe the data transmission mechanisms for both CAP and CFP.

CSMA/CA mechanism of the CAP: CSMA/CA is used both during the CAP in beacon-enabled mode and all the time in non-beacon-enabled mode. In the CAP, the nodes access the network by using slotted CSMA/CA, as described in Fig. The major difference of CSMA/CA in the different channel access modes is that the backoff timer starts at the beginning of the next backoff slot in beacon-enabled mode and immediately in non-beacon-enabled mode. Upon the request of the transmission of a packet, the following steps of the algorithm are performed: 1) the channel access variables are initialized: the contention window size, denoted by CW, is initialized to 2 for the slotted CSMA/CA; the backoff exponent, called
BE, and the number of backoff stages, denoted by NB, are set to macMinBE and 0, respectively; 2) a backoff time is chosen randomly from the interval [0, 2^BE - 1], and the node waits for this backoff time in units of backoff period slots; 3) when the backoff timer expires, the clear channel assessment (CCA) is performed: a) if the channel is free in non-beacon-enabled mode, the packet is transmitted; b) if the channel is free in beacon-enabled mode, CW is updated by subtracting 1; if CW reaches 0, the packet is transmitted, otherwise a second channel assessment is performed; c) if the channel is busy, the variables are updated as follows: NB = NB + 1, BE = min(BE + 1, macMaxBE), and CW = 2. The algorithm continues with step 2 if NB does not exceed macMaxCSMABackoffs; otherwise, the packet is discarded.

GTS allocation of the CFP: The coordinator is responsible for the GTS allocation and determines the length of the CFP in a superframe. To request the allocation of a new GTS, the node sends the GTS request command to the coordinator, and the coordinator confirms its receipt by sending an ACK frame within the CAP. Upon receiving a GTS allocation request, the coordinator checks whether there are sufficient resources and, if possible, allocates the requested GTS; we recall that Fig. (b) illustrates the GTS allocation mechanism. The CFP length depends on the GTS requests and on the currently available capacity in the superframe. If there is sufficient bandwidth in the next superframe, the coordinator determines a node list for GTS allocation based on a first-come-first-served policy; then the coordinator transmits the beacon including the GTS descriptor to announce the node list of the GTS allocation information. Note that, on receipt of the ACK to the GTS request command, the node continues to track beacons and waits for at most aGTSDescPersistenceTime superframes. A node uses the dedicated bandwidth to transmit the packet within the CFP.

WirelessHART: WirelessHART was released in September 2008 as the first wireless communication standard for process control applications. The standard adopts the IEEE 802.15.4 physical layer on 15 channels in the 2.4 GHz band. TDMA is used to allow the nodes to put their radio to sleep when they are not scheduled to transmit or receive a packet, for better energy efficiency, and to eliminate collisions, for better reliability; the slot size of the TDMA is fixed at 10 ms.

[Figure: slotted CSMA/CA algorithm of IEEE 802.15.4 in beacon-enabled mode.]

To increase the robustness to interference in harsh industrial environments, channel hopping and channel blacklisting mechanisms are incorporated into the direct sequence spread spectrum technique adopted from the IEEE 802.15.4 standard. Frequency hopping spread spectrum is used to alternate the channel of transmission on a packet level; the channel does not change during the packet transmission. The frequency hopping pattern is not explicitly defined in the standard but needs to be determined by the network manager and distributed to the nodes. Channel blacklisting may also be used to eliminate the channels containing high interference levels; the network manager performs the blacklisting based on the quality of reception on the different channels in the network. WirelessHART defines two primary routing approaches for multi-hop networks: source routing and graph routing. Source routing provides a single route for each flow, while graph routing provides multiple redundant routes. Since the source routing approach only establishes a fixed single path between source and destination, any link or node failure disturbs the communication; for this reason, source routing is mostly used for network diagnostics purposes, to test the
connection. The multiple redundant routes of graph routing provide a significant improvement over source routing in terms of routing reliability. The routing paths are determined by the network manager based on the periodic reports received from the nodes, including the historical and instantaneous quality of the wireless links.

ISA100.11a: The ISA100.11a standard was released in September 2009 with many features similar to WirelessHART, but providing more flexibility and adaptivity. Similar to WirelessHART, the standard adopts the IEEE 802.15.4 physical layer on 15 channels in the 2.4 GHz band, but with the optional additional usage of channel 26. TDMA is again used for better energy consumption and reliability performance, but with a slot size that is configurable on a superframe basis. ISA100.11a adopts channel hopping and blacklisting mechanisms to improve the communication robustness, similar to WirelessHART but with more flexibility: the standard adopts three channel hopping mechanisms, namely slotted hopping, slow hopping, and hybrid hopping. In slotted hopping, the channel is varied in each slot, as in WirelessHART. In slow hopping, the node stays on the same channel for a configurable number of consecutive time slots; slow hopping facilitates the communication of nodes with imprecise synchronization, the join process of new nodes, and the transmission of packets, and transmissions in a slow hopping period are performed by using CSMA/CA. This mechanism decreases the delay of such packets while increasing the energy consumption due to unscheduled transmission and reception times. In hybrid hopping, slotted hopping is combined with slow hopping by accommodating slotted hopping for periodic messages and slow hopping for less predictable or new messages. There are five predetermined channel hopping patterns in this standard, in contrast to WirelessHART, which does not explicitly define hopping patterns.

IEEE 802.15.4e: This standard has been released in 2012 with the goal of introducing new access modes to address the delay and reliability constraints of industrial applications. IEEE 802.15.4e defines three major MAC modes, namely time slotted channel hopping (TSCH), deterministic and synchronous multichannel extension (DSME), and low latency deterministic network (LLDN).

Time slotted channel hopping: TSCH is a medium access protocol based on the IEEE 802.15.4 standard for industrial automation and process control. The main idea of TSCH is to combine the benefits of time slotted access with multichannel and channel hopping capabilities. Time slotted access increases the network throughput by scheduling the links to meet the traffic demands of all nodes, while multichannel operation allows more nodes to exchange their packets at the same time by using different channel offsets. Since TSCH is based on the joint scheduling of TDMA slots and FDMA channel offsets, the delay is deterministically bounded, depending on the time-frequency pattern. Furthermore, packet-based frequency hopping is supported to achieve a high robustness against interference and other channel impairments. TSCH also supports various network topologies, including star, tree, and mesh. The TSCH mode exhibits many similarities to WirelessHART and ISA100.11a, including slotted access, multichannel communication, and frequency hopping for mesh networks; in fact, it defines more details of the MAC operation with respect to WirelessHART and ISA100.11a. In the TSCH mode, nodes synchronize on a periodic slotframe consisting of a number of time slots. Each node obtains synchronization, channel hopping, time slot, and slotframe information from enhanced beacons (EBs) that are periodically sent by other nodes in order to advertise the network. The slots may be dedicated to one link or shared among links.
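Such a dedicated slot is defined in the next paragraph as the pair of a time slot offset within the slotframe and a channel offset. As a small illustration (not taken from the survey; the slotframe length and hopping sequence below are hypothetical rather than values mandated by any standard), the following sketch applies the standard TSCH rule that maps such a pair to a physical channel through the absolute slot number (ASN):

```python
# Minimal sketch of TSCH cell-to-channel mapping (illustrative values only).
# A cell is the pair (slot offset within the slotframe, channel offset); the
# physical channel used at a given absolute slot number (ASN) follows the
# standard TSCH rule: channel = sequence[(ASN + channelOffset) mod sequence length].

SLOTFRAME_LENGTH = 101                           # slots per slotframe (hypothetical)
HOPPING_SEQUENCE = [15, 25, 26, 20, 11, 16, 21]  # hypothetical subset of 2.4 GHz channels

def cell_is_active(asn: int, slot_offset: int) -> bool:
    """A cell fires whenever the ASN falls on its slot offset within the slotframe."""
    return asn % SLOTFRAME_LENGTH == slot_offset

def cell_channel(asn: int, channel_offset: int) -> int:
    """Map (ASN, channel offset) to a physical channel via the hopping sequence."""
    return HOPPING_SEQUENCE[(asn + channel_offset) % len(HOPPING_SEQUENCE)]

if __name__ == "__main__":
    # Example: a dedicated sensor-to-controller cell at slot offset 3, channel offset 2.
    for asn in range(3 * SLOTFRAME_LENGTH):
        if cell_is_active(asn, slot_offset=3):
            print(f"ASN {asn}: transmit on channel {cell_channel(asn, channel_offset=2)}")
```

Because the hypothetical slotframe length (101) and hopping-sequence length (7) are coprime, the same cell is served on a different channel in successive slotframes, which is what gives TSCH its robustness to narrowband interference, as discussed above.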
a dedicated link is defined as the pairwise assignment of a directed communication between nodes in a given time slot on a given channel offset hence a link between communicating nodes can be represented by a pair specifying the time slot in the slotframe and the channel offset used by the nodes in that time slot however the tsch standard does not specify how to derive an appropriate link schedule since collisions may occur in shared slots the exponential backoff algorithm is used to retransmit the packet in the case of a transmission failure to avoid repeated collisions differently from the original ieee algorithm the backoff mechanism is activated only after a collision is experienced rather than waiting for a random backoff time before the transmission deterministic and synchronous multichannel extension dsme is designed to support stringent timeliness and reliability requirements of factory automation home automation smart metering smart buildings and patient monitoring dsme extends the beacon enabled mode of the ieee standard relying on the superframe structure consisting of caps and cfps by increasing the number of gts time slots and frequency channels used the channel access of dsme relies on a specific structure called each consists of a collection of superframes defined in ieee the beacon transmission interval is a multiple number of without inactive period by adopting a structure dsme tries to support both periodic and aperiodic or traffic even in large multihop networks in a dsme network some coordinators periodically transmit an eb used to keep all the nodes synchronized and allow new nodes to join the network the distributed beacon and gts scheduling algorithms of dsme allow to quickly react to traffic and changes in the network topology specifically dsme allows to establish dedicated links between any two nodes of the network for the multihop mesh networks with deterministic delay dsme is scalable and does not suffer from a single point of failure because beacon scheduling and slot allocation are performed in a distributed manner this is the major difference with tsch which relies on a central entity given the large variety of options and features dsme turns out to be one of the most complex modes of the ieee standard due to the major complexity issue dsme still lacks a complete implementation moreover all the current studies on dsme are limited to or networks and do not investigate the potentialities of mesh topologies low latency deterministic network lldn is designed for very low latency applications of the industrial automation where a large number of devices sense and actuate the factory production in a specific location differently from tsch and dsme lldn is designed only for star topologies where a number of nodes need to periodically send data to a central sink using just one channel frequency specifically the design target of lldn is to support the data transmissions from sensor nodes every ms since the former ieee standard does not fulfill this constraint the lldn mode defines a fine granular deterministic tdma access similarly to ieee each lldn device can obtain the exclusive access for a time slot in the superframe to send data to the pan coordinator the number of time slots in a superframe determines how many nodes can access the channel if many nodes need to send their packets the pan coordinator needs to equip with multiple transceivers so as to allow simultaneous communications on different channels in lldn short mac frames with just a mac header are used to 
accelerate frame processing and reduce transmission time moreover a node can omit the address fields in the header since all packets are destined to the pan coordinator compared with tsch lldn nodes do not need to wait after the beginning of the time slot in order to start transmitting moreover lldn provides a group ack feature hence time slots can be much shorter than the one of tsch since it is not necessary to accommodate waiting times and ack frames provides a compaction and fragmentation mechanism to efficiently transport packets in ieee frames the header is compressed by the removal of the fields that are not needed or always have the same contents and inferring addresses from link layer addresses moreover fragmentation rules are defined so that multiple ieee frames can form one packet allows devices to communicate by using ip rpl rpl is an routing protocol for and lossy networks llns proposed to meet the delay reliability and high availability requirements of critical applications in industrial and environmental monitoring rpl is a distance vector and source routing protocol it can operate on top of any link layer mechanism including ieee phy and mac rpl adopts destination oriented directed acyclic graphs dodags where most popular destination nodes act as the roots of the directed acyclic graphs directed acyclic graphs are structures that allow the nodes to associate with multiple parent nodes the selection of the stable set of parents for each node is based on the objective function the objective function determines the translation of routing metrics such as delay link quality and connectivity into ranks where the rank is defined as an integer strictly decreasing in the downlink direction from the root rpl left the routing metric open to the implementation integrates an upper stack including rpl and ieee tsch link layer this integration allows achieving industrial performance in terms of reliability and power consumption while providing an upper stack operation sublayer is used to manage tsch schedule by allocating and deallocating resources within the schedule monitor performance and collect statistics uses either centralized or distributed scheduling in centralized scheduling an entity in the network collects topology and traffic requirements of the nodes in the network computes the schedule and then sends the schedule to the nodes in the network in distributed scheduling nodes communicate with each other to compute their own schedule based on the local topology information labels the scheduled cells as either hard or soft depending on their dynamic reallocation capability a hard cell is scheduled by the centralized entity and can be moved or deleted inside the tsch schedule only by that entity maintains statistics about the network performance in the scheduled cells this information is then used by the centralized scheduling entity to update the schedule as needed moreover this information can be used in the objective function of rpl on the other hand a soft cell is typically scheduled by a distributed scheduling entity if a cell performs significantly worse than other cells scheduled to the same neighbor it is reallocated providing an interference avoidance mechanism in the network the distributed scheduling policy called scheduling specifies the structure and interfaces of the scheduling if the outgoing packet queue of a node fills up the scheduling negotiates additional time slots with the corresponding neighbors if the queue is empty it negotiates the removal of the time 
slots ieee the basic mac layer uses the distributed coordination function dcf with a simple and flexible exponential backoff based and optional for medium sharing if the medium is sensed idle the transmitting node transmits its frame otherwise it postpones its transmission until the medium is sensed free for a time interval equal to the sum of an arbitration interframe spacing aifs and a random backoff interval dcf experiences a random and unpredictable backoff delay as a result the periodic ncs packets may miss their deadlines due to the long backoff delay particularly under congested network conditions to enforce a timeliness behavior for wlans the original mac defines another coordination function called the point coordination function pcf this is available only in infrastructure mode where nodes are connected to the network through an access point ap aps send beacon frames at regular intervals between these beacon frames pcf defines two periods the contention free period cfp and the contention period cp while dcf is used for the cp in the cfp the ap sends packets to give them the right to send a packet hence each node has an opportunity to transmit frames during the cfp in pcf data exchange is based on a periodically repeated cycle superframe within which time slots are defined and exclusively assigned to nodes for transmission pcf does not provide differentiation between traffic types and thus does not fulfill the deadline requirements for the control systems furthermore this mode is optional and is not widely implemented in wlan devices ieee as an extension of the basic dcf mechanism of the enhances the dcf and the pcf by using a new coordination function called the hybrid coordination function hcf similar to those defined in the legacy mac there are two methods of channel accesses namely enhanced distributed channel access edca and hcf controlled channel access hcca within the hcf both edca and hcca define traffic categories to support various qos requirements the ieee edca provides differentiated access to individual traffic known as access categories acs at the mac layer each node with high priority traffic basically waits a little less before it sends its packet than a node with low priority traffic this is accomplished through the variation of using a shorter aifs and contention window range for higher priority packets considering the requirements of ncss the periodic ncs traffic should be defined as an ac with a high priority and saturation must be avoided for high priority acs hcca extends pcf by supporting parametric traffic and comes close to actual transmission scheduling both pcf and hcca enable access to support collisionfree and transmissions in contrast to pcf the hcca allows for cfps being initiated at almost anytime to support qos differentiation the coordinator drives the data exchanges at runtime according to specific rules depending on the qos of the traffic demands although hcca is quite appealing like pcf hcca is also not widely implemented in network equipment hence some researches adapt the dcf and edca mechanisms for practical control applications wireless network parameters to fulfill the control system requirements the bandwidth of the wireless networks needs to be allocated to high priority data for sensing and actuating with specific deadline requirements however existing wireless standards do not explicitly consider the deadline requirements and thus lead to unpredictable performance of wncs the wireless network parameters determine the probability 
distribution of the critical interactive system variables some design parameters of different layers are the transmission power and rate of the nodes the decoding capability of the receiver at the physical layer the protocol for channel access and energy saving mechanism at the mac layer and the protocol for packet forwarding at the routing layer physical layer the physical layer parameters that determine the values of the critical interactive system variables are the transmit power and rate of the network nodes the decoding capability of the receiver depends on the ratio sinr at the receiver and sinr criteria sinr is obviously the ratio of the signal power to the total power of noise and interference while sinr criteria is determined by the transmission rate and decoding capability of the receiver the increase in the transmit power of the transmitter increases sinr at the receiver however the increase in the transmit power at the neighboring nodes causes a decrease at the sinr due to the increase in interference optimizing the transmit power of neighboring nodes is therefore critical in achieving sinr requirements at the receivers the transmit rate determines the sinr threshold at the receivers as the transmit rate increases the required sinr threshold increases moreover depending on the decoding capability of the receiver there may be multiple sinr criteria for instance in successive interference cancellation multiple packets can be received simultaneously based on the extraction of multiple signals from the received composite signal through successive decoding ieee allows the adjustment of both transmit power and rate however wirelesshart and use fixed power and rate operating at the suboptimal region medium access control mac protocols fall into one of three categories access access and hybrid access protocols access protocol random access protocols used in wncs mostly adopt the mechanism of ieee the values of the parameters that determine the probability distribution of delay message loss probability and energy consumption include the minimum and maximum value of backoff exponent denoted by macm inbe and macm axbe respectively and maximum number of backoff stages called macm axcsm abackof f similarly to ieee the corresponding parameters for ieee mac include the ifs time contention window size number of tries to sense the clean channel and retransmission limits due to missing acks the energy consumption of has been shown to be mostly dominated by the constant listening to the channel therefore various energy conservation mechanisms adopting low operation have later been proposed in low operation the nodes periodically cycle between a sleep and listening state with the corresponding durations of sleep time and listen time respectively low protocols may be synchronous or asynchronous in synchronous protocols the listen and sleep time of neighboring nodes are aligned in time however this requires an extra overhead for synchronization and exchange of schedules in asynchronous protocols on the other hand the transmitting node sends a long preamble or multiple short preambles to guarantee the wakeup of the receiver node the parameters sleep time and listen time significantly affect the delay message loss probability and energy consumption of the network using a larger sleep time reduces the energy consumption in idle listening at the receiver while increasing the energy consumption at the transmitter due to the transmission of longer preamble moreover the increase in sleep time significantly 
degrades the performance of message delay and reliability due to the high contention in the medium with increasing traffic access protocol protocols are based on assigning time slots of possibly variable length and frequency bands to a subset of nodes for concurrent transmission since the nodes know when to transmit or receive a packet they can put their radio in sleep mode when they are not scheduled for any activity the scheduling algorithms can be classified into two categories fixed priority scheduling and dynamic priority scheduling in fixed priority scheduling each flow is assigned a fixed priority as a function of its periodicity parameters including sampling period and delay constraint for instance in rate monotonic and deadline monotonic scheduling the flows are assigned priorities as a function of their sampling periods and deadlines respectively the shorter the sampling period and deadline the higher the priority fixed priority scheduling algorithms are preferred due to their simplicity and lower scheduling overhead but are typically since they do not take the urgency of transmissions into account on the other hand in dynamic priority scheduling algorithms the priority of the flow changes over time depending on the execution of the schedule for instance in earliest deadline first edf scheduling the transmission closest to the deadline will be given highest priority so scheduled next whereas in least laxity first algorithm the priority is assigned based on the slack time which is defined as the amount of time left after the transmission if the transmission started now although dynamic priority scheduling algorithms have higher scheduling overhead they perform much better due to the dynamic adjustment of priorities over time hybrid access protocol hybrid protocols aim to combine the advantages of random access and protocols random access eliminates the overhead of scheduling and synchronization whereas scheduled access provides message delay and reliability guarantees by eliminating collisions ieee already provides such a hybrid architecture for flexible usage depending on the application requirements network routing on the network layer the routing protocol plays an extremely important role in achieving high reliability and forwarding together with energy efficiency for large scale wncs such as aircraft avionics and industrial automation various routing protocols are proposed to achieve energy efficiency for traditional wsn applications however to deal with much harsher and noisier environments the routing protocol must additionally provide reliable transmissions multipath routing has been extensively studied in wireless networks for overcoming wireless errors and improving routing reliability most of previous works focus on identifying multiple paths to guarantee energy efficiency and robustness against node failures isa and wirelesshart employ a simple and reliable routing mechanism called graph routing to enhance network reliability through multiple routing paths when using graph routing the network manager builds multiple graphs of each flow each graph includes some device numbers and forwarding list with unique graph identification based on these graphs the manager generates the corresponding subroutes for each node and transmits to every node hence all nodes on the path to the destination are with graph information that specifies the neighbors to which the packets may be forwarded for example if the link of the is broken then the node forwards the packet to another neighbor 
of the other graphs corresponding to the same flow. There has been an increasing interest in developing new approaches for graph routing with different routing costs dependent on reliability, delay, and energy consumption. RPL employs the objective function to specify the selection of the routes in meeting the QoS requirements of the applications. Various routing metrics have been proposed in the objective function to compute the rank value of the nodes in the network; the rank represents the virtual coordinate of the node, i.e., its distance to the DODAG root with respect to a given metric. Some approaches propose the usage of a single metric, including link expected transmission count, node remaining energy, link delay, MAC-based metrics considering packet losses due to contention, and queue utilization. Another approach proposes two methods, namely simple combination and lexical combination, for combining two routing metrics among the hop count, expected transmission count, remaining energy, and received signal strength indicator. In simple combination, the rank of the node is determined by using a composition function as the weighted sum of the ranks of the two selected metrics. In lexical combination, the node selects the neighbor with the lower value of the first selected metric, and, if they are equal in the first metric, the node selects the one with the lower value of the second composition metric. Finally, a further approach combines a set of these metrics in order to provide a configurable routing decision depending on the application requirements, based on fuzzy parameters.

VI. CONTROL SYSTEM ANALYSIS AND DESIGN

This section provides a brief overview of the analysis and design of control systems to deal with the critical interactive system variables resulting from the wireless network. The presence of an imperfect wireless network degrades the performance of the control loop and can even lead to instability; therefore, it is important to understand how these interactive system variables influence the closed-loop performance in a quantitative manner. An accompanying figure illustrates the section structure and relations.

[Figure: subsection structure of Section VI — time-triggered sampling (hard sampling period with unbounded or bounded consecutive message dropout; soft sampling period), event-triggered sampling, and a comparison between event- and time-triggered sampling.]

Control system analysis has two main usages here: requirement definition for the network design, and the actual control algorithm design. First, since the control cost depends on the network performance, such as message loss and delay, the explicit set of requirements for the wireless network design are determined to meet a certain control performance. This allows the optimization of the network design to meet the given constraints imposed by the control system, instead of just improving the reliability, delay, or energy efficiency. Second, based on the control system analysis, the controller is designed to guarantee the control performance under imperfect network operation. Despite the interdependence between the three critical interactive variables of sampling period, message delay, and message dropout, as we have discussed in Section IV, much of the available literature on NCS considers only a subset of these variables due to the high complexity of the problem. Since any practical wireless network incurs imperfect network performance, the WNCS designers must carefully consider the performance feasibility and tradeoffs. Previous studies in the literature analyze the stability of control systems by considering either only wireless
channel or both and hybrid system and markov jump linear system have been applied for the modeling and control of ncs under message dropout and message delay the hybrid or switched system approach refers to dynamics with isolated discrete switching events mathematically these components are usually described by a collection of indexed differential or difference equations for ncs a control system can be modelled as the continuous dynamics and network effects such as message dropouts and message delays are modelled as the discrete dynamics compared to switched systems in markov jump linear system the mode switches are governed by a stochastic process that is statistically independent from the state values markov systems may provide less conservative requirements than switched systems however the network performance must support the independent transitions between states in other words this technique is effective if the network performance is statistically independent or modelled as a simple markov model the above theoretical approaches can be used to derive network requirements as a function of the sampling period message dropout and message delay some network requirements are explicitly related to the message dropout and message delay such as maximum allowable message dropout probability number of consecutive message dropouts and message delay furthermore since various analytical tools only provide sufficient conditions for stability their requirements might be too conservative in fact many existing results are shown to be conservative in simulation studies and finding tighter bounds on the network is an area of great interest to highlight the importance of the sampling mechanism we classify ncs analysis and design methods into sampling and sampling sampling ncss can be classified into two categories based on the relationship between sampling period and message delay hard sampling period and soft sampling period the message delay of hard sampling period is smaller than the sampling period the network discards the message if is not successfully transmitted within its sampling period and tries to transmit the latest sampled message for the hard sampling period on the other hand the node of the soft sampling period continues to transmit the outdated messages even after its sampling period the wireless network design must take into account which sampling method is implemented hard sampling period the message dropouts of ncss are generally modelled as stochastic variables with and without limited number of consecutive message dropouts hence we classify hard sampling period into unbounded consecutive message dropout and bounded consecutive message dropout unbounded consecutive message dropout when the controller is collocated with the actuators a markov jump linear system can be used to analyze the effect of the message dropout in the message dropout is modelled as a bernoulli random process with dropout probability p under the bernoulli dropout model the system model of the augmented state is a special case of a markov jump linear system the matrix theory is used to show exponential stability of the ncs with dropout probability the stability condition interpreted as a linear matrix inequality is a useful tool to design the output feedback controller as well as requirement derivation of the maximum allowable probability of message dropouts for the network design however the main results of are hard to apply for wireless network design since they ignore the message delay for a fixed sampling 
period furthermore the link reliability of wireless networks does not follow a bernoulli random process since wireless links are highly correlated over time and space in practice while the communication is considered without any delays in the and channels are modelled as two switches indicating whether the corresponding message is dropped or not in a switched system is used to model the ncs with message dropouts when the message delay and sampling period are fixed by using switched system theory sufficient conditions for exponential stability are presented in terms of nonlinear matrix inequalities the proposed methods provide an explicit relation between the message dropout rate and the stability of the ncs such a quantitative relation enables the design of a state feedback controller guaranteeing the stability of the ncs under a certain message dropout rate the network may assign a fixed time slot for a single packet associated to the message to guarantee the constant message delay however since this does not allow any retransmissions it will significantly degrade the message dropout rate another way to achieve constant message delay may be to buffer the received packet at the sink however this will again degrade the control performance with higher average delay in order to apply the results of the wireless network needs to monitor the message dropout probability and adapt its operation in order to meet the maximum allowable probability of message dropouts these results can further be used to save network resources while preserving the stability of the ncs by dropping messages at a certain rate in fact most ncs research focuses on the stability analysis and design of the control algorithm rather than explicit derivation of network requirements useful for the wireless network design since the joint design of controller and wireless networks necessitates the derivation of the required message dropout probability and message delay to achieve the desired control cost provides the formulation of the control cost function as a function of the sampling period message dropout probability and message delay most ncs researches use the linear quadratic cost function as the control objective the model combines the stochastic models of the message dropout and the message delay furthermore the estimator and controller are obtained by extending the results of the optimal stochastic estimator and controller of given a control cost numerical methods are used to derive a set of the network requirements imposed on the sampling period message dropout and message delay one of the major drawbacks is the high computation complexity to quantify the control cost in order to find the feasible region of the network requirements bounded consecutive message dropout some ncs literatures assume limited number of consecutive message dropouts such hard requirements are unreasonable for wireless networks where the packet loss probability is greater than zero at any point in time hence some other approaches set stochastic constraints on the maximum allowable number of consecutive message dropouts control theory provides deterministic bounds on the maximum allowable number of consecutive message dropouts in a switched linear system is used to model ncss with constant message delay and arbitrary but finite message dropout over the channel the message dropout is said to be arbitrary if the sampling sequence of the successfully applied actuation is an arbitrary variable within the maximum number of consecutive message dropouts 
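As a rough numerical illustration of this bounded-dropout model, one can test the periodic worst-case pattern in which every successful transmission is followed by the maximum number of consecutive dropouts; this is only a necessary check on one particular pattern, not the sufficient LMI conditions used in the surveyed works, and the plant, input matrix, and feedback gain below are arbitrary assumptions for the sketch, with a zero-input policy applied whenever the control packet is lost.

import numpy as np

# Assumed toy plant and stabilizing gain (not taken from the surveyed works).
A = np.array([[1.10, 0.20],
              [0.00, 1.05]])            # open-loop unstable plant, x+ = A x + B u
B = np.array([[0.0],
              [1.0]])
K = np.array([[3.80, 1.65]])            # static state feedback, u = -K x
A_cl = A - B @ K                        # dynamics in a slot where the control packet arrives
A_drop = A                              # zero-input policy: u = 0 when the packet is dropped

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# Periodic worst case: one successful transmission followed by d consecutive dropouts.
# Stability of this particular pattern requires rho(A_drop^d @ A_cl) < 1; this is only
# a necessary condition for stability under arbitrary dropout sequences bounded by d.
for d in range(0, 8):
    rho = spectral_radius(np.linalg.matrix_power(A_drop, d) @ A_cl)
    print(f"{d} consecutive dropouts: spectral radius = {rho:.3f} "
          f"({'ok' if rho < 1 else 'unstable pattern'})")

For these assumed matrices the periodic pattern already diverges at three consecutive dropouts, so any guarantee of the kind discussed here would have to bound the number of consecutive dropouts by two or fewer.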
based on the stability criterion of the switched system a linear matrix inequality is used to analyze sufficient conditions for stability then the maximum allowable bound of consecutive message dropouts and the feedback controllers are derived via the feasible solution of a linear matrix inequality a characterization of stability is provided and explicit bounds on the maximum allowable transfer interval mati and the maximally allowable delay mad are derived to guarantee the control stability of ncss by considering sampling period and message delays in if there are message dropouts for the sampling its effect is modelled as a timevarying sampling period from receiver mati is the upper bound on the transmission interval for which stability can be guaranteed if the network performance exceeds the given mati or mad then the stability of the overall system could not be guaranteed the developed results lead to tradeoff curves between mati and mad these tradeoff curves provide effective quantitative information to the network designer when selecting the requirements to guarantee stability and a desirable level of control performance many control applications such as wireless industrial automation air transportation systems and autonomous vehicular systems set a stochastic mati constraint in the form of keeping the time interval between subsequent state vector reports above the mati value with a predefined probability to guarantee the stability of control systems stochastic mati constraint is an efficient abstraction of the performance of the control systems since it is directly related to the deadline of the scheduling of the network design soft sampling period sometimes it is reasonable to relax the strict assumption on the message delay being smaller than the sampling period some works assume the eventual successful transmission of all messages with various types of deterministic or stochastic message delays since the packet retransmission corresponding to the message is allowed beyond its sampling period one can consider the packet loss as a message delay while the actuating signal is updated after the message delay of each sampling period if the delay is smaller than its sampling period the delays longer than one sampling period may result in more than one or none arriving during a single sampling period it makes the derivation of recursive formulas of the augmented matrix of system harder compared to the hard sampling period case to avoid high computation complexity an alternative approach defines slightly different augmented state to use the stability results of switched systems in even though the stability criterion defines the mati and mad requirements there are fundamental limits of this approach to apply for wireless networks the stability results hold if there is no message dropout for the fixed sampling period and constant message delay since the augmented matrix consiered is a function of the fixed sampling period with the constant message delay hence the mati and mad requirements are only used to set the fixed sampling period and message delay deadline on the other hand the ncs of uses the sampling and varying message delay to take into account the message dropout and stochastic message delay hence the mati and mad requirements of are more practical control constraints than the ones of to apply to wireless network design in a stochastic optimal controller is proposed to compensate long message delays of the channel for fixed sampling period the stochastic delay is assumed to be 
bounded, with a known probability density function; hence, the network manager needs to provide the stochastic delay model by analyzing delay measurements. In both works, the NCSs assume the eventual successful transmission of all messages. This approach is only reasonable if the MATI is large enough compared to the sampling period to guarantee the eventual successful transmission of messages with high probability; however, it is not applicable for fast dynamical systems with a small MATI requirement. While the works above do not explicitly consider message dropouts, another work jointly considers the message dropout and a message delay longer than the fixed sampling period over the channel. From the derived stability criteria, the controller is designed and the MAD requirement is determined under a fixed message dropout rate by solving a set of matrix inequalities. Even though the message dropout and message delay are considered, the tradeoff between performance measures is not explicitly derived; however, it is still possible to obtain tradeoff curves by using numerical methods. The network is allowed to transmit the packet associated to the message within the MAD. The network also monitors the message dropout rate; stability is guaranteed if the message dropout rate is lower than its maximum allowable rate. Furthermore, the network may discard outdated messages to efficiently utilize the network resources as long as the message dropout rate requirement is satisfied.

Event-triggered sampling. Event-triggered control is reactive, since it generates sensor measurements and control commands when the plant state deviates more than a certain threshold from a desired value. On the other hand, self-triggered control is proactive, since it computes the next sampling or actuation instance ahead of the current time. Event-triggered and self-triggered control have been demonstrated to significantly reduce the network traffic load. Motivated by those advantages, a systematic design of event-based implementations of stabilizing feedback control laws was performed in early works. Event-triggered control systems consist of two elements, namely a feedback controller that computes the control command and a triggering mechanism that determines when the control input has to be updated again. The triggering mechanism directly affects the traffic load, and there are many proposals for the triggering rule in the literature. Suppose that the state x(t) of the physical plant is available. One of the traditional objectives of event-triggered control is to maintain the condition ‖x(t) − x(t_k)‖ ≤ ε, where t_k denotes the time instant when the last control task is executed (the last event time) and ε is a threshold. The next event time instant is then defined as t_{k+1} = inf{ t > t_k : ‖x(t) − x(t_k)‖ ≥ ε }. The sensor of the control loop continuously monitors the current plant state and evaluates the triggering condition; network traffic is generated only if the plant state deviates by more than the threshold. The network design problem is particularly challenging because the wireless network must support the randomly generated traffic. Furthermore, event-triggered control does not provide high energy efficiency, since the node must continuously activate the sensing part of the hardware platform. Self-triggered control determines its next execution time based on the previously received data and the triggering rule. Self-triggered control is basically an emulation of an event-triggered rule, where one considers the model of the plant and controller to compute the next triggering time; hence, it is predictive sampling based on the plant models and controller rules. This approach is generally more conservative than the event-triggered approach, since it is based on approximate models and predicted events. The explicit allocation of network resources based on these predictions improves the performance and energy efficiency of the wireless network.
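As a toy illustration of the triggering rule above (the scalar plant, gain, noise level, and threshold are arbitrary assumptions, not taken from any of the surveyed works), the following sketch counts how many transmissions the rule generates compared with transmitting at every sampling instant:

import numpy as np

rng = np.random.default_rng(0)
a, b = 1.05, 1.0          # assumed scalar plant: x+ = a x + b u + w
K, eps = 0.55, 0.1        # assumed feedback gain and event threshold
N = 2000

x, x_last = 0.0, 0.0      # x_last = state at the last transmission instant t_k
events = 0
for k in range(N):
    if abs(x - x_last) >= eps:      # triggering rule: ||x(t) - x(t_k)|| >= eps
        x_last = x                  # transmit the current state over the network
        events += 1
    u = -K * x_last                 # controller acts on the most recently received state
    x = a * x + b * u + 0.01 * rng.standard_normal()

print(f"event-triggered transmissions: {events} of {N} slots "
      f"({100 * events / N:.1f}% of the periodic traffic)")

The point of such a sketch is only qualitative: the event-triggered loop transmits in a fraction of the slots, but the transmission instants are random, which is exactly what makes the network design problem discussed above harder.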
However, since event-triggered and self-triggered control generate fewer messages, the message loss and message delay might seem to be more critical than for time-triggered control.

Comparison between event- and time-triggered sampling. One of the fundamental issues is to compare the performance of event-triggered and time-triggered sampling approaches by using various channel access mechanisms. In fact, many event-triggered control researches show a performance improvement, since event-triggering often reduces the network utilization. However, recent works on event-triggered control using random access show control performance limitations in the case when there are a large number of control loops. One line of work considers a control system where a number of event-triggered or time-triggered control loops are closed over a shared communication network. This research is one of the inspiring works of the WNCS codesign problem, where both the control policy and the network scheduling policy have been taken into account. The overall target of the framework is to minimize the sum of the stationary state variance of the control loops, and a Dirac pulse is applied as the control law to achieve the minimum plant state variance. The sampling can be either time-triggered or event-triggered depending on the MAC schemes, such as the traditional TDMA, FDMA, and CSMA schemes. Intuitively, TDMA is used for time-triggered sampling, while event-triggered sampling is applied for CSMA. Based on the previous work, the event-triggered approach is also used for FDMA, since event-triggered sampling with a minimum event interval T performs better than time-triggered sampling with the same time interval T. The authors assume that, once the MAC protocol gains the network resource, the network is busy for a specific delay from sensor to actuator, after which the control command is applied to the plant. The simulation results show that event-triggered control using CSMA gives the best performance. Even though the main tradeoffs and conclusions of the paper are interesting, some assumptions are not realistic in practice. The Dirac pulse controls are unrealistic due to the capability limit of actuators. For simplicity, the authors assume that the contention resolution time of CSMA is negligible compared to the transmission time; this assumption is not realistic for general wireless channel access schemes such as those of the IEEE standards. Furthermore, the total bandwidth resource of FDMA is assumed to scale in proportion to the number of plants, such that the transmission delay from sensor to actuator is inversely proportional to the number of plants. These assumptions are not practical, since the frequency spectrum is a limited resource for general wireless networks; thus, further studies are needed. While most previous works on event-triggered control consider a single control loop or a small number of control loops, another study compares event-triggered control and time-triggered control for an NCS consisting of a large number of plants. The pure ALOHA protocol is used for the event-triggered control of the NCSs. The authors show that packet losses due to collisions drastically reduce the performance of event-triggered control if packets are transmitted whenever the controller generates an event; remark that the instability of the ALOHA network itself is a well-known problem in communications. It turns out that, in this setup, time-triggered control is superior to event-triggered control. The same authors also analyze the tradeoff between delay and loss for event-triggered control with slotted ALOHA. They show that slotted ALOHA significantly improves the control cost of the state variance with respect to that of pure ALOHA; however, the time-triggered control still performs better. Therefore, it is hard to generalize the performance comparison between event-triggered and time-triggered sampling approaches, since it really depends on the network protocol and topology.
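A back-of-the-envelope calculation (with an assumed per-slot event probability; none of these numbers come from the surveyed papers) already shows why contention among many event-triggered loops is problematic over slotted ALOHA:

# Probability that a transmission survives a slotted-ALOHA slot when each of
# N event-triggered loops independently transmits in that slot with probability q.
def success_prob(N: int, q: float) -> float:
    return (1.0 - q) ** (N - 1)

q = 0.05          # assumed per-slot event probability of a single loop
for N in (5, 20, 50, 100):
    ps = success_prob(N, q)
    throughput = N * q * ps      # expected number of successful deliveries per slot
    print(f"N={N:3d}: P(success)={ps:.2f}, successful deliveries/slot={throughput:.2f}")

With the per-loop event probability held fixed, the per-transmission success probability decays roughly as e^(−Nq), which is consistent with the observation that scheduled, time-triggered operation eventually wins as the number of loops grows.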
VII. WIRELESS NETWORK DESIGN TECHNIQUES FOR CONTROL SYSTEMS

This section presents various design and optimization techniques of wireless networks for WNCS. We distinguish the interactive design approach and the joint design approach. In the interactive design approach, the wireless network parameters are tuned to satisfy given constraints on the critical interactive system variables, possibly enforced by the required control system performance. In the joint design approach, the wireless network and control system parameters are jointly optimized considering their interaction through the critical system variables. An accompanying figure illustrates the section structure related to the previous Sections V and VI.

[Figure: subsection structure of Section VII — interactive design approach (medium access control, physical layer extension, network resource schedule, network routing, traffic generation control) and joint design approach (time-triggered and event-triggered sampling), related to the previous Sections V and VI.]

In Table II we summarize the characteristics of the related works; in the table we have indicated whether requirements as well as communication and control parameters have been included in the network design or optimization for WNCS. Table III classifies previous design approaches of WNCS based on control and communication aspects. Furthermore, Table IV categorizes previous works based on the wireless standards described in Section V.

A. Interactive design approach

In the interactive design approach, wireless network parameters are tuned to satisfy the given requirements of the control system. Most of the interactive design approaches assume control systems in which sensor samples are generated periodically at predetermined rates. They generally assume that the requirements of the control systems are given in the form of upper bounds on the message delay or message dropout with a fixed sampling period. The adoption of wireless communication technologies for supporting control applications heavily depends on the ability to guarantee bounded service times for messages, at least from a probabilistic point of view. This aspect is particularly important in control systems, where this requirement is considered much more significant than other performance metrics, such as throughput, that are usually important in other application areas. Note that the performance of wireless networks heavily depends on the message delay and message dropout; hence, we mainly discuss the MAC protocols of the IEEE standards. Different analytical techniques can provide the explicit requirements of control systems for wireless networks, as we discussed in Section VI. The focus of previous research is mainly on the design and optimization of the MAC, network resource scheduling, and routing layers, with limited efforts additionally considering physical layer parameters.

Medium access control. Research on these networks can be classified into two groups. The first group of solutions, called contention-based access, includes adaptive MAC protocols for QoS differentiation; they adapt the parameters of the backoff mechanism, retransmissions, and related mechanisms depending on the given constraints. The second group, called schedule-based access, relies on the contention-free scheduling of the network.
Contention-based access. Random access protocols for WNCS aim to tune the parameters of the CSMA mechanisms of the IEEE standards and to improve the delay, packet loss probability, and energy consumption performance. The adaptive tuning algorithms are either model-based or measurement-based. The measurement-based adaptation techniques do not require any network model, but rather depend on local measurements of the packet delivery characteristics. Early works on IEEE 802.15.4 propose adaptive algorithms to dynamically change the value of only a single parameter: they adaptively determine the minimum backoff exponent, denoted by macMinBE, to decrease the delay and packet loss probability of the nodes and increase the overall throughput. Later references extend these studies to autonomously adjust all the parameters. The ADAPT protocol adapts the parameter values with the goal of minimizing energy consumption while meeting a packet delivery probability, based on local estimates. However, ADAPT tends to oscillate between two or more parameter sets, which results in high energy consumption. A subsequent work solves this oscillation problem by triggering the adaptation mechanism only upon the detection of a change in the operating conditions. Furthermore, another approach aims to optimize the parameters based on a linear decrease rule, depending on the comparison of the successfully received packet rate with its target value, while minimizing the energy consumption.

[Table II: comparison of related works — for each work, whether requirements (loss, delay, sampling period), system parameters (control cost), communication parameters (power, rate, schedule, routing, contention), and control parameters (sampling period, control algorithm) are explicitly considered or only evaluated, together with the scenario (single-hop or multihop) and the evaluation method (theory, simulation, or experiment).]

Model-based parameter optimization mainly uses theoretical or simulation-based derivations of the probability distribution of delay, the packet error probability, and the energy consumption. In one line of work, a Markov model per node of IEEE 802.15.4 is used to capture the state of each node at each moment in time; these individual Markov chains are then coupled by the memory introduced by the fixed-duration two-slot clear channel assessment. The proposed Markov model is used to derive an analytical formulation of both throughput and energy consumption in such networks. The extension of this work leads to the derivation of the reliability, delay, and energy consumption as a function of all the protocol parameters. For IEEE 802.11, analytical models of delay, reliability, and energy consumption are provided as a function of the parameters, by considering their effects on the random backoff before successful transmissions. These models are then used to minimize energy consumption given constraints on delay and reliability.
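The kind of constrained parameter selection described above can be sketched as follows; the closed-form expressions used here are deliberately crude stand-ins for the analytical models of the cited works (the busy-channel probability, collision probability, timing, and energy constants are all assumptions), and only the structure of the search — minimize energy subject to delay and reliability constraints — mirrors the text:

import itertools

# Rough stand-in for the analytical delay/reliability/energy models discussed above;
# ALPHA (busy-channel probability), P_COLL, and all timing/energy constants are
# illustrative assumptions, not values from the surveyed papers.
ALPHA, P_COLL = 0.3, 0.1
T_SLOT_MS, T_TX_MS = 0.32, 4.0
E_CCA, E_TX = 1.0, 8.0            # arbitrary energy units per CCA / per transmission

def attempt_model(min_be, max_be, max_backoffs):
    # mean backoff (in slots) accumulated over the CSMA/CA backoff stages
    backoff_slots = sum((2 ** min(min_be + i, max_be) - 1) / 2 for i in range(max_backoffs + 1))
    p_access_fail = ALPHA ** (max_backoffs + 1)          # every CCA found the channel busy
    p_fail = p_access_fail + (1 - p_access_fail) * P_COLL
    delay_ms = backoff_slots * T_SLOT_MS + T_TX_MS
    energy = (max_backoffs + 1) * E_CCA + E_TX
    return p_fail, delay_ms, energy

def evaluate(min_be, max_be, max_backoffs, max_retries):
    p_fail, d_attempt, e_attempt = attempt_model(min_be, max_be, max_backoffs)
    reliability = 1 - p_fail ** (max_retries + 1)
    # pessimistic: assume every allowed attempt is used
    delay_ms = (max_retries + 1) * d_attempt
    energy = (max_retries + 1) * e_attempt
    return reliability, delay_ms, energy

D_MAX_MS, R_MIN = 40.0, 0.95       # assumed requirements handed down by the control design
best = None
for min_be, max_be, nb, nr in itertools.product(range(1, 6), range(3, 9), range(1, 6), range(0, 6)):
    if max_be < min_be:
        continue
    r, d, e = evaluate(min_be, max_be, nb, nr)
    if r >= R_MIN and d <= D_MAX_MS and (best is None or e < best[0]):
        best = (e, min_be, max_be, nb, nr, r, d)

print("cheapest feasible (energy, macMinBE, macMaxBE, macMaxCSMABackoffs, maxRetries, rel, delay):")
print(best)

In practice the toy expressions would be replaced by the Markov-chain or experiment-based models discussed in this subsection, and the brute-force search by a proper optimization or online adaptation.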
On the other hand, another work derives experiment-based models by using curve-fitting techniques and validation through extensive experiments. An adaptive algorithm was also proposed to adjust the coefficients of these models by introducing a learning phase, without any explicit information about data traffic, network topology, and MAC parameters.

[Table III: classification of WNCS design techniques — interactive design approach (medium access control: contention-based and schedule-based access; physical layer extension; network resource schedule: scheduling algorithm and robustness enhancement; network routing: disjoint path, graph routing, controlled flooding; traffic generation control) versus joint design approach (time-triggered and event-triggered sampling).]

[Table IV: classification of WNCS design techniques based on the wireless standards, distinguishing physical layer, contention-based, hybrid, and schedule-based (e.g., WirelessHART) access as well as routing for the surveyed works.]

By considering the IEEE MAC, a protocol with QoS differentiation is presented for soft real-time NCSs. It handles periodic traffic by using two specific mechanisms, namely a backoff mechanism and a retry limit assignment mechanism. The backoff algorithm offers bounded backoff delays, whereas the retry limit assignment mechanism differentiates the retry limits for periodic traffic in terms of their respective deadline requirements. A Markov chain model is established to describe the proposed MAC protocol and evaluate its performance in terms of throughput, delay, and reliability under the critical traffic condition. Another study provides experimental measurements and analysis of the network to better understand the statistical distribution of delay for industrial applications. The statistical distribution of the network delay is first evaluated experimentally when the traffic patterns resemble realistic industrial scenarios under varying background traffic. The experimental results have then been validated by means of a theoretical analysis for the unsaturated traffic condition, which is a quite common condition in industrial communication systems. The performance evaluation shows that delays are generally bounded if the traffic on the industrial WLAN is light; if the traffic grows higher, the QoS mechanism provided by EDCA is used to achieve predictable behavior and bounded delays for selected high-priority messages.

Schedule-based access. The explicit scheduling of transmissions allows meeting the strict delay and reliability constraints of the nodes by giving priority to the nodes with tighter constraints. To support soft real-time industrial applications, one approach combines a number of mechanisms of the IEEE standard, such as transmission and retransmission scheduling, seamless channel redundancy, and basic bandwidth management, to improve the deterministic network performance. The proposed protocol relies on centralized transmission scheduling by a coordinator according to the EDF strategy. Furthermore, the coordinator takes care of the number of retransmissions to achieve both delay and reliability over lossy links. In addition to scheduling, the seamless channel redundancy concurrently transmits copies of each frame on multiple distinct radio channels; this mechanism is appealing for real-time systems, since it improves the reliability without affecting timeliness. Moreover, the bandwidth manager reallocates the unused bandwidth of failed data transmissions to additional attempts of other data transmissions within their deadlines. Another work presents the design and implementation of a wireless communication protocol to support control systems which typically require high sampling rates; it is a TDMA data link layer protocol based on the IEEE physical layer.
It provides deterministic timing performance on packet delivery. Since different control applications have different communication requirements on data delivery, it provides a configurable platform to adjust the design tradeoffs, including sampling rate, delay variance, and reliability. The middleware proposed in another work uses a method on top of CSMA to assign specific time slots to each node to send its traffic; a polling-based scheduling using the EDF policy on top of the MAC is incorporated with a feedback mechanism to adjust the maximum number of transmission attempts. Moreover, a further work implements a communication architecture based on the standard and on the networking framework RTnet; a wireless Ralink chipset supported by RTnet is used to meet the strict network scheduling requirements of real-time systems. The performance indicators, such as packet loss ratio and delay, are experimentally evaluated by varying the protocol parameters for a star topology, and the experimental results show that a proper tuning of the system parameters can support robust network performance.

Physical layer extension. Since the IEEE WLAN standard encompasses several enhancements at both the PHY and MAC layers, one study analyzes network performance indicators, such as service time and reliability, for industrial communication systems. Furthermore, the authors present both a theoretical analysis and its validation through a set of experiments. The experimental analysis shows the possibility to select the parameters of the standard so as to ensure deterministic behavior for the applications; in particular, it is shown that a good MIMO configuration of the standard enhances the communication reliability while sacrificing the network throughput.

Network resource schedule. Several scheduling algorithms are proposed to efficiently assign the time slots and the channels of multihop networks in order to meet the strict delay and reliability requirements.

Scheduling algorithm. Some scheduling algorithms focus on meeting a common deadline for all the packets generated within a sampling period. One work formulates the delay minimization of the packet transmissions from the sensor nodes to the common access point; the optimization problem has been shown to be NP-hard. The proposed scheduling algorithms provide upper bounds on the packet delivery time by considering the transmission characteristics. The formulation and scheduling algorithms, however, do not take packet losses into account; novel adaptive procedures have therefore been introduced to provide reliability in case of packet failures. Another work proposes an optimal schedule increment strategy based on the repetition of the most suitable slot until the common deadline. The objective of the optimization problem is to maximize reliability while providing end-to-end transmission delay guarantees; the physical nodes have been reorganized into logical nodes for improved scheduling flexibility. Two scheduling algorithms have been evaluated, dedicated scheduling and shared scheduling: in dedicated scheduling the packets are only transmitted in their scheduled time slots, whereas in shared scheduling the packets share scheduled time slots for better reliability. A faster scheduling algorithm has also been proposed for the same problem; the algorithm is based on gradually increasing a network model from one to multiple transmitted packets as a function of the given link qualities to guarantee reliability, and these scheduling algorithms can be combined with multiple-path routing algorithms. The authors assume a Bernoulli distribution for the arrival success of the packets over each link; moreover, they do not consider the transmission power, rate, and packet length as variables, assigning exactly one time slot to each transmission.

A related line of work proposes a priority assignment and scheduling algorithm as a function of the sampling periods and transmission deadlines to provide the maximum level of adaptivity to accommodate the packet losses of time-triggered nodes and the transmissions of event-triggered nodes. The adaptivity metric is illustrated using the following example: consider a network of sensor nodes with given packet generation periods and transmission times.

[Figure: illustrative example of two schedulers — (a) robust schedule; (b) EDF schedule.]
[Figure: comparison of the maximum delay experienced by event-triggered components for the SSF, EDF, least laxity first, and optimal scheduling algorithms.]

Panels (a) and (b) of the first figure show a robust schedule, where the time slots are uniformly distributed over time, and the EDF schedule, respectively. The schedule given in panel (a) is more robust to packet losses than the EDF schedule given in panel (b): indeed, suppose that the data packet of one sensor in its first period is not successfully transmitted; in panel (a), the robust schedule includes enough unallocated intervals for the retransmission of that sensor, whereas the EDF schedule does not. Furthermore, the robust scheduler can accommodate event-triggered traffic with smaller delay than the EDF schedule, as shown in the second figure. To witness this, suppose that an additional packet is generated by an event-triggered sensor node at the beginning of the scheduling frame; then the event-triggered packet transmission can be allocated with a smaller delay in the robust schedule than in the EDF schedule. This uniform distribution paradigm is quantified as minimizing the maximum total active length of all subframes, where the subframe length is the minimum packet generation period among the time-triggered components and the total active length of a subframe is the sum of the transmission times of the components allocated to that subframe. The proposed smallest-period-into-shortest-subframe-first (SSF) algorithm has been demonstrated to significantly decrease the maximum delay experienced by the packet of an event-triggered component compared to the EDF schedule. Moreover, when time diversity in the form of the retransmission of the lost packets is included in this framework, the proposed framework significantly decreases, compared to the EDF schedule, the average number of missed deadlines per unit time, which is defined as the average number of packets that cannot be successfully transmitted within their delay constraint.

The scheduling algorithms that consider the variation of the sampling periods and deadlines of the nodes over the network fall into one of two categories: fixed priority and dynamic priority. The delay analysis of periodic flows from sensors to actuators in a WirelessHART network under a fixed priority scheduling policy has been performed by mapping their scheduling to a classical real-time scheduling problem and then exploiting its response time analysis to obtain an upper bound on the delay of the periodic flows. Both the channel contention and the transmission conflict delay due to higher priority flows have been considered: channel contention happens when all channels are assigned to higher priority flows in a transmission slot, whereas a transmission conflict occurs when there exists a common node with a transmission of a higher priority flow.
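To make the fixed-priority versus dynamic-priority distinction concrete, the following single-channel sketch schedules a few periodic flows slot by slot under a deadline-monotonic fixed priority and under EDF, and counts deadline misses; the flow set (periods, deadlines, transmission times in slots) is an arbitrary assumption, and the model ignores channel contention, transmission conflicts, and losses:

from dataclasses import dataclass, field

@dataclass
class Flow:
    period: int        # packet release period, in slots
    deadline: int      # relative deadline, in slots
    slots: int         # transmission time, in slots
    pending: list = field(default_factory=list)   # active jobs: [absolute_deadline, remaining_slots]

def simulate(flows, horizon, policy):
    misses = 0
    for t in range(horizon):
        for f in flows:
            if t % f.period == 0:
                f.pending.append([t + f.deadline, f.slots])
            kept = []
            for dl, rem in f.pending:
                if rem == 0:
                    continue                     # finished job, drop it
                if dl <= t:
                    misses += 1                  # deadline passed with work left
                else:
                    kept.append([dl, rem])
            f.pending = kept
        # one transmission per slot on a single channel
        candidates = [(f, job) for f in flows for job in f.pending]
        if candidates:
            if policy == "EDF":                  # dynamic priority: earliest absolute deadline
                _, job = min(candidates, key=lambda c: c[1][0])
            else:                                # fixed priority: deadline monotonic
                _, job = min(candidates, key=lambda c: (c[0].deadline, c[1][0]))
            job[1] -= 1
    return misses

def flow_set():
    return [Flow(5, 5, 2), Flow(7, 7, 2), Flow(10, 10, 3)]   # utilization close to 1

for policy in ("fixed priority", "EDF"):
    print(policy, "-> deadline misses over 700 slots:", simulate(flow_set(), 700, policy))

With this nearly fully utilized flow set, the fixed-priority schedule repeatedly drops the longest-period flow while EDF meets every deadline, which is the kind of gap that the fixed- versus dynamic-priority analyses for WirelessHART-like networks quantify, at the price of the extra bookkeeping that dynamic priorities require.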
This study has later been extended for reliable graph routing to handle transmission failures through retransmissions and route diversity. Similarly, both worst-case and probabilistic delay bounds have been derived by considering channel contention and transmission conflicts. These analyses consider multihop multichannel networks with fixed time slots, without incorporating any transmit power or rate adjustment mechanism. The dynamic priority scheduling of periodic flows in a WirelessHART network has been shown to be NP-hard. Upon determining a necessary condition for schedulability, an optimal scheduling algorithm is proposed, effectively discarding infeasible branches in the search space. Moreover, a faster heuristic least laxity first algorithm is developed by assigning priorities to the nodes based on the criticality of their transmission; the laxity is defined as the slack remaining after discarding time slots that can be wasted while waiting to avoid transmission conflicts, and the lower the laxity, the higher the transmission criticality. The algorithm, however, does not provide any guarantee on timely packet delivery. A further work provides the delay analysis of periodic real-time flows from sensors to actuators under the EDF policy; the delay is bounded by considering the channel contention and transmission conflict delays, and EDF has been shown to outperform fixed priority scheduling in terms of performance.

Robustness enhancement. The predetermined nature of transmissions allows the incorporation of various retransmission mechanisms in case of packet losses at random time instants. Although explicit scheduling is used to prevent various types of conflict and contention, transmission failures may still occur due to multipath fading and external interference in harsh and unstable environments. Some of the retransmission mechanisms have been introduced at the link layer. Since the schedule is known a priori by the nodes in the network, the retransmissions can be minimized by exploiting the determinism in the packet headers to recover the unknown bytes of the header. Moreover, various efficient retransmission procedures can be used to minimize the number of bits in the retransmissions: one approach uses symbol decoding confidence, whereas another uses received signal strength variations, to determine the parts of the packet received in error that should be retransmitted. The retransmission mechanisms at the network layer aim to determine the best timing and quantity of shared and separate time slots given the link quality statistics. One work combines the retransmissions with real-time scheduling analysis: the number of possible retransmissions of a packet is limited considering the corresponding deadline and the already guaranteed delay bounds of other packets. Another work proposes a scheduling algorithm that provides delay guarantees for the periodic flows considering both link bursts and interference: a new metric, called maximum burst length, is defined as the maximum length of an error burst estimated by using empirical data, and the algorithm then provides a reliability guarantee by allocating to each link one plus its corresponding maximum burst length time slots. A novel algorithm is used in conjunction with this scheduling algorithm to minimize the sum of the worst-case burst lengths over all links in the route. Similarly, another approach increases the spacing between the actual transmission and the first retransmission for maximum reliability instead of allocating all the time slots in between. A further mechanism improves the retransmission efficiency by using a limited number of shared slots efficiently, through fast slot competition
and segmented slot assignment shared resources are allocated for retransmission due to its unpredictability fast slot competition is introduced by embedding more than one clear channel assessment at the beginning of the shared slots to reduce the rate of collision on the other hand segmented slot assignment provides the retransmission chances for a routing hop before its following hop arrives network routing there has been increasing interest in developing efficient multipath routing to improve the network reliability and energy efficiency of wireless networks previous works of the multipath routings are classified into four categories based on the underlying key ideas of the routing metric and the operation disjoint path routing graph routing controlled flooding and routing disjoint path routing most of previous works focus on identifying multiple disjoint paths from source to destination to guarantee the routing reliability against node or link failures since multiple paths may fail independently the disjoint paths have two types and while paths do not have any relay node in common paths do not have any common link but may have common nodes provides the and braided multipath schemes to provide the resilience against node failures multipath distance vector aomdv is a multipath extension of a single path routing protocol of distance vector aodv graph routing graph routing of isa and wirelesshart leads to significant improvement over a single path in terms of reliability due to the usage of multiple paths since the standards do not explicitly define the mechanism to build these multiple paths it is possible to use the existing algorithms of the disjoint path multiple routing paths from each node to the destination are formed by generating the subgraphs containing all the shortest paths for each source and destination pair link quality estimation is integrated into the generation of subgraphs for better reliability in proposes an algorithm to construct three types of reliable routing graphs namely uplink graph downlink graph and broadcast graph for different communication purposes while the uplink graph is a graph that connects all nodes upward to the gateway the downlink graph of the gateway is a graph to send unicast messages to each node of the network the broadcast graph connects gateway to all nodes of the network for the transmission of operational control commands three algorithms are proposed to build these graphs based on the concepts of k m where k and m are the minimum required number of incoming and outgoing edges of all nodes excluding the gateway respectively the communication schedule is constructed based on the traffic load requirements and the hop sequence of the routing paths recently the graph routing problem has been formulated as an optimization problem where the objective function is to maximize network lifetime namely the time interval before the first node exhausts its battery for a given connectivity graph and battery capacity of nodes this optimization problem has been shown to be a suboptimal algorithm based on integer programming and a greedy heuristic algorithm have been proposed for the optimization problem the proposed algorithm shows significant improvement in the network lifetime while guaranteeing the high reliability of graph routing controlled flooding previous approaches of disjoint routing and graph routing focus on how to build the routing paths and distribute the traffic load over the network some control applications may define more stringent 
requirements on the routing reliability in the harsher and noisier environments to address the major reliability issue a reliable routing protocol realflow is proposed for industrial applications realflow controls the flooding mechanism to further improve the multipath diversity while reducing the overhead each node transmits the received packet to the corresponding multiple routing paths instead of all feasible outgoing links furthermore it discards the duplicated packets and outdated packets to reduce the overhead for both uplink and downlink transmissions the same packets are forwarded according to the related node lists in all relay nodes due to redundant paths and flooding mechanism realflow can be tolerant to network topology changes furthermore since related node lists are distributively generated the workloads of the gateway are greatly reduced the flooding schedule is also extended by using the received signal strength in routing even though some multipath routings such as disjoint path graph routing and controlled flooding lead to significant reliability improvement they also increase the cost of the energy consumption routing jointly considers the application requirements and energy consumption of the network several energybalanced routing strategies are proposed to maximize the network lifetime while meeting the strict requirements for industrial applications breath is proposed to ensure a desired packet delivery and delay probabilities while minimizing the energy consumption of the network the protocol is based on randomized routing mac and jointly optimized for energy efficiency the design approach relies on a constrained optimization problem whereby the objective function is the energy consumption and the constraints are the packet reliability and delay the optimal working point of the protocol is achieved by a simple algorithm which adapts to traffic variations and channel conditions with negligible overhead earq is another energy aware routing protocol for reliable and communications for industrial applications earq is a proactive routing protocol which maintains an ongoing routing table updated through the exchange of beacon messages among neighboring nodes a beacon message contains expected values such as energy cost residual energy of a node reliability and message delay once a node gets a new path to the destination it will broadcast a beacon message to its neighbors when a node wants to send a packet to the destination next hop selections are based on the estimations of energy consumption reliability and deadlines if the packet chooses a path with low reliability the source will forward a redundant packet via other paths proposes the minimum transmission power cooperative routing algorithm reducing the energy consumption of a single route while guaranteeing certain throughput however the algorithm ignores the residual energy and communication load of neighboring nodes which result in unbalanced energy consumption among nodes in addition in a loadbalanced routing algorithm is proposed where each node always chooses the based on the communication load of neighboring nodes however the algorithm has heavy computation complexity and the communication load is high propose a routing protocol aiming at enhancing performance with energy efficiency the routing decision in is based on the integration of the velocity information of neighbors with energy balancing mechanism whereas the routing decision in is based on the number of hops from source to destination and information of 
the velocity

b joint design approach

in the joint design approach the wireless network and control system parameters are jointly optimized considering the tradeoff between their performances these parameters include the sampling period for timetriggered control and the level crossings for eventtriggered control in the control system and the transmission power and rate at the physical layer the access parameters and algorithm of the mac protocol and the routing paths in the communication system the high complexity of the problem has led to different abstractions of control and communication systems many of which consider only a subset of these parameters

timetriggered sampling the joint design approaches for timetriggered control are classified into three categories based on the communication layers involved contentionbased access schedulebased access and routing and traffic generation control

contentionbased access the usage of contentionbased protocols in the joint optimization of control and communication systems requires modeling the probabilistic distribution of delay and packet loss probability in the wireless network and its effect on the control system a general framework for the optimization of the sampling period together with link layer parameters has been first proposed in the objective of the optimization problem is to maximize control system performance given the delay distribution and the packet error probability constraints the linear quadratic cost function is used as the control performance measure simplified models of packet loss and delay are assumed for the random access mechanism without considering spatial reuse the solution strategy is based on an iterative numerical method due to the complexity of the control cost used as the objective function of the optimization problem aims to minimize the error of the state estimation subject to the delay and packet loss probability induced by the random access the error of the estimator is derived as a function of the sampling period and the delay distribution under a bernoulli random process of the packet losses discusses several fundamental tradeoffs of wncs over ieee networks

fig quadratic control cost of control systems and throughput of ieee wireless networks over different sampling periods

fig shows the quadratic control cost and the communication throughput over different sampling periods in the figure the two cost curves refer to the control cost bound using an ideal network no packet loss and no delay and a realistic lossy network of ieee respectively due to the absence of packet delays and losses the control cost obtained with the ideal network increases monotonically as the sampling period increases however when using a realistic network a shorter sampling period does not minimize the control cost because of the higher packet loss probability and delay when the traffic load is high in addition the two control cost curves coincide for longer sampling periods meaning that when the sampling period is large the sampling period itself is the dominant factor in the control cost compared to the packet loss probability and delay in fig if we consider a desired maximum control cost jreq greater than the minimum value of the control cost then we have a feasible range of sampling periods over which the control cost stays below jreq however the performance of the wireless network is still heavily affected by the operating point of the sampling period let us consider two feasible sampling periods s and l and suppose that the longer period l is chosen
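The sampling-period tradeoff discussed above can be made concrete with a toy numerical sketch. The cost expressions below are invented stand-ins, not taken from the cited works; they only reproduce the qualitative shapes of the ideal and lossy cost curves and the resulting feasible range for a required cost jreq.

```python
# Toy illustration of the sampling-period tradeoff: an ideal-network cost
# that grows with the sampling period, and a lossy-network cost inflated by
# congestion-induced packet loss at short periods. All models are made up.
import numpy as np

h = np.linspace(0.02, 1.0, 200)                 # candidate sampling periods [s]
j_ideal = 1.0 + 4.0 * h                          # cost grows with slower sampling
loss = np.clip(0.95 * np.exp(-8.0 * h), 0.0, 1.0)  # heavy loss at short periods
j_real = j_ideal / (1.0 - loss)                  # lossy network inflates the cost

jreq = 3.0                                       # desired maximum control cost
feasible = h[j_real <= jreq]
if feasible.size:
    print("feasible sampling periods: %.2f s to %.2f s" % (feasible.min(), feasible.max()))
    print("cost-minimizing period: %.2f s" % h[np.argmin(j_real)])
```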
the throughput of the network is stabilized the control cost is also stabilized with respect to small perturbations of the network operation furthermore the longer sampling period l leads to lower network energy consumption than the one of the shorter sampling period based on these observations an adaptation of the wncs is proposed by considering a constrained optimization problem the objective is to minimize the total energy consumption of the network subject to a desired control cost the variables of the problem include both sampling period and mac parameters of ieee the network manager predicts the energy consumption corresponding to each feasible network requirement the optimal network requirements are obtained to minimize the energy consumption of the network out of the feasible set of network requirements proposes an interesting approach to the design of wncs by decomposing the overall concerns into two design spaces in the control layer a passive control structure of is used to guarantee the stability of ncss the overall ncs performance is then optimized by adjusting the retransmission limits of the ieee standard at the control layer the authors leverage their architecture to handle the message delay and message loss the authors consider a passive controller which produces a trajectory of the plant to track and define the control performance as its absolute tracking error through extensive simulation results a convex relationship between the retransmission limit of ieee and the control performance is shown based on this observation a mac parameter controller is introduced to dynamically adjust the retransmission limit to track the optimal tradeoff between packet losses and delays and thus to optimize the overall control system performance simulation results show that the mac adaptation can converge to a proper retransmission limit which optimizes the performance of the control system even though the proposed approach is interesting the fundamental tradeoff relationships between communication parameters and control performance are not trivial to derive in practice presents a ncs and its implementation over wireless relay networks of ieee and cooperative mac protocol the proposed approach deals with the problem from the control perspective it basically employs a mpc an actuator state and an adaptive ieee mac to reduce unbounded packet delay and improve the tolerance against the packet loss furthermore the cooperative mac protocol is used to improve the control performance by enabling reliable and timely data transmission under harsh wireless channel conditions access a novel framework for the joint optimization is proposed encompassing efficient abstraction of control system in the form of stochastic mati and mad constraints we should remember that mati and mad are defined as the maximum allowed time interval between subsequent state vector reports and the maximum allowed packet delay for the transmission respectively as we have discussed in section vi since such hard guarantees can not be satisfied by a wireless network with packet loss probability stochastic mati is introduced with the goal of keeping the time interval between subsequent state vector reports above the mati value with a predefined probability to guarantee the stability of control systems further a novel schedulability constraint in the form of forcing an adaptive upper bound on the sum of the utilization of the nodes defined as the ratio of their delay to their sampling periods is included to guarantee the 
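A minimal numerical check of the stochastic MATI idea introduced above might look as follows, assuming i.i.d. packet losses with probability p and one transmission attempt per sampling period h. The constraint form is an illustrative reading of the concept, not the exact formulation of the cited work.

```python
# Stochastic MATI check sketch: with i.i.d. losses of probability p and one
# report attempt every h seconds, the interval between delivered reports
# exceeds mati only after enough consecutive losses. Illustrative only.
import math

def stochastic_mati_ok(p_loss, h, mati, prob_target):
    """True if the interval between successfully delivered state reports
    stays within `mati` with probability at least `prob_target`."""
    m = math.floor(mati / h)          # consecutive losses needed to violate MATI
    if m < 1:
        return False                  # even a single loss would violate MATI
    p_violate = p_loss ** m           # probability of m consecutive losses
    return 1.0 - p_violate >= prob_target

print(stochastic_mati_ok(p_loss=0.1, h=0.05, mati=0.2, prob_target=0.999))
```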
schedulability of transmission under variable transmission rate and sampling period values the objective of the optimization is to minimize the total energy consumption of the network while guaranteeing mati and mad requirements of the control system and maximum transmit power and schedulability constraints of the wireless communication system the solution for the specific case of quadrature amplitude modulation and edf scheduling is based on the reduction of the resulting programming problem into an integer programming problem based on the analysis of the optimality conditions and relaxation of this reduced problem the formulation is also extended for any nondecreasing function of the power consumption of the nodes as the objective any modulation scheme and any scheduling algorithm in first an exact solution method based on the analysis of the optimality conditions and smart enumeration techniques is introduced then two polynomialtime heuristic algorithms adopting intelligent search space reduction and smart searching techniques are proposed the energy saving has been demonstrated to increase up to for a network containing up to nodes studies utility maximization problem subject to wireless network capacity and delay requirement of control system the utility function is defined as the ratio of of the system to that of the counterpart this utility function has been demonstrated to be a strictly concave function of the sampling period and inversely proportional to tracking error induced by discretization based on the assumption that the plants follow the reference trajectories provided by the controllers the wireless network capacity is derived by adopting slotted time transmission over a conflict graph where each vertex represents a wireless link and there is an edge between two vertices if their corresponding links interfere with each other the sampling period is used as the multihop delay bound the solution methodology is based on approach in the inner loop a relaxed problem with fixed delay bound independent of sampling period is solved via dual decomposition the outer loop then determines optimal delay bounds based on the sampling period as an output of the inner loop proposes a mathematical framework for modeling and analyzing multihop ncss the authors present the formal syntax and semantics for the dynamics of the composed system providing an explicit translation of multihop control networks to switched systems the proposed method jointly considers control system network topology routing resource scheduling and communication error the formal models are applied to analyze the robustness of ncss where data packet is exchanged through a multihop communication network subject to disruptions the authors consider two communication models namely permanent error model and transient error model dependent on the length of the communication disruptions the authors address the robustness of the multihop ncs in the case by worst case analysis of scheduling routing and packet losses and in the stochastic case by the stability analysis of node fault probability and packet loss probability the joint optimization of the sampling period of sensors packet forwarding policy and control law for computing actuator command is addressed in for a multihop wirelesshart network the objective of the optimization problem is to minimize the control cost subject to the energy and delay constraints of the nodes the linear quadratic cost function is used as the control cost similar to the one in the solution 
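The interplay between the utilization-based schedulability constraint and energy minimization described above can be sketched with a toy enumeration over per-node transmit options, where longer airtime usually means lower power. Exhaustive search stands in for the smart enumeration and heuristics of the cited works, and all numbers are invented.

```python
# Toy schedulability-constrained energy minimization: each node picks one
# transmit option (airtime, energy per packet), subject to an EDF-style
# utilization bound sum(airtime_i / period_i) <= u_max. Illustrative only.
from itertools import product

options = {   # per-node candidates: (airtime per packet [s], energy per packet [mJ])
    "n1": [(0.004, 1.2), (0.008, 0.7), (0.016, 0.5)],
    "n2": [(0.004, 1.0), (0.008, 0.6)],
}
period = {"n1": 0.10, "n2": 0.05}     # sampling periods [s]
u_max = 1.0                            # utilization bound for schedulability

best = None
for combo in product(*(options[n] for n in options)):
    util = sum(t / period[n] for (t, _), n in zip(combo, options))
    power = sum(e / period[n] for (_, e), n in zip(combo, options))  # average mW
    if util <= u_max and (best is None or power < best[0]):
        best = (power, combo)
print("min average power %.2f mW with options %s" % (best[0], best[1]))
```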
methodology is based on the separation of joint design problem for the fixed sampling rate transmission scheduling for maximizing the deadlineconstrained reliability subject to a total energy budget and optimal control under packet loss the optimal solution for transmission scheduling is based on dynamic programming which allows nodes to find their optimal forwarding policy based on the statistics of their outgoing links in a distributed fashion the bounds on the control loss function are derived for optimal kalman filter estimator and static linear feedback control law the joint optimal solution is then found by a search over the sampling period some recent researches of wncs investigate fault detection and fault tolerant issues develops a design framework of ncss for industrial automation applications the framework relies on an integrated design and parametrization of the tdma mac protocols the controller and the fault diagnosis algorithms in a multilayer system the main objective is to determine the data transmission of wireless networks and reduce the traffic load while meeting the requirements of the control and the fault detection and identification performance by considering the distributed control groups the hierarchical wncs configuration is considered while the lower layer tightly integrates with sensors actuators and microprocessors of local feedback control loops and its tdma resource the higher layer implements a control in the context of resource management the tdma mac protocol is modeled as a scheduler whose design and parameterization are achieved with the development of the control and the fault detection and identification algorithms at the different functional layers in a similar way investigates the fault estimation problem based on the deterministic model of the tdma mechanism the discrete periodic model of control systems is integrated with periodic information scheduling model without packet collisions by adopting the linearity of state equations the fault estimator is proposed for the periodic system model with arbitrary sensor inputs the fault estimation is obtained after solving a deterministic quadratic minimization problem of control systems by means of recursive calculation however the scheduler of the wireless network does not consider any realistic message delays and losses routing and traffic generation control in the optimized control cloc protocol is proposed for minimizing the performance loss of multiple control systems cloc is designed for a general wireless sensor and actuator network where both and connections are over a multihop mesh network the design approach relies on a constrained maxmin optimization problem where the objective is to maximize the minimum resource redundancy of the network and the constraints are the stability of the control systems and the schedulability of the communication resources the stability condition of the control system has been formulated in the form of stochastic mati constraint the optimal operation point of the protocol is automatically set in terms of the sampling period slot scheduling and routing and is achieved by solving a linear programming problem which adapts to system requirements and link conditions the performance analysis shows that cloc ensures control stability and fulfills communication constraints while maximizing the system performance presents a case study on a wireless process control system that integrates the control design and the wireless routing of the wirelesshart standard the network 
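The dynamic-programming flavor of the deadline-constrained reliability maximization mentioned above can be illustrated as follows: retransmission attempts are allocated to the hops of a route to maximize end-to-end delivery probability within a slot deadline and an energy budget. Link qualities, costs and budgets are invented, and the recursion is a simplified stand-in for the distributed policy of the cited work.

```python
# Simplified DP: allocate per-hop transmission attempts along a route to
# maximize end-to-end success probability under slot and energy budgets.
from functools import lru_cache

p = [0.8, 0.7, 0.9]          # per-attempt success probability of each hop
e = [1, 1, 2]                # energy units per attempt on each hop
DEADLINE = 6                 # total transmission slots before the deadline
ENERGY = 8                   # total energy budget

@lru_cache(maxsize=None)
def best(hop, slots, energy):
    """Maximum end-to-end success probability from `hop` onward."""
    if hop == len(p):
        return 1.0
    result = 0.0
    # try giving this hop n attempts (at least 1, within both budgets)
    for n in range(1, slots + 1):
        if n * e[hop] > energy:
            break
        succ = 1.0 - (1.0 - p[hop]) ** n
        result = max(result, succ * best(hop + 1, slots - n, energy - n * e[hop]))
    return result

print("best end-to-end reliability: %.3f" % best(0, DEADLINE, ENERGY))
```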
supports two routing strategies namely source routing and graph routing remind that the graph routing of the wirelesshart standard reduces packet loss through path diversity at the cost of additional overhead and energy consumption to mitigate the effect of packet loss in the wncs the control design integrates an observer based on an extended kalman filter with a mpc and an actuator buffer of recent control inputs the experimental results show that sensing and actuation can have different levels of robustness to packet loss under this design approach specifically while the plant state observer is highly effective in mitigating the effects of packet loss from the sensors to the controller the control performance is more sensitive to packet loss from the controller to the actuators despite the buffered control inputs based on this observation the paper proposes an asymmetric routing configuration for sensing and actuation source routing for sensing and graph routing for actuation to improve control performance addresses the sampling period optimization with the goal of minimizing overall control cost while ensuring delay constraints for a multihop wirelesshart network the linear quadratic cost function is used as the control performance measure which is a function of the sampling period the optimization problem relies on the multihop problem formulation of the delay bound in due to the difficulty of the resulting optimization problem the solution methodologies based on a subgradient method simulated penalty method greedy heuristic method and approximated convex optimization method are proposed the tradeoff between execution time and achieved control cost is analyzed for these methods sampling the communication system design for sampling has mostly focused on the mac layer in particular most researches focus on contentionbased random access since it is suitable for these control systems due to the unpredictability of the message generation time access the tradeoff between the level threshold crossings in the control system and the packet losses in the communication system have been analyzed in studies the eventtriggered control under lossy communication the information is generated and sent at the level crossings of the plant output the packet losses are assumed to have a bernoulli distribution independent over each link the dependence between the stochastic control criterion on the level crossings and the message loss probability is derived for a class of integrator plants this allows the generation of a design guideline on the assignment of the levels for the optimal usage of communication resources provides an extension to by considering a markov chain model of the attempted and successful transmissions over lossy channel in particular a algorithm is used to transmit the control command from the controller to the actuator by combining the communication model of the retransmissions with an analytical model of the performance a theoretical framework is proposed to analyze the tradeoff between the communication cost and the control performance and it is used to adapt an event threshold however the proposed markov chain only considers the packet loss as a bernoulli process and it does not capture the contention between multiple nodes on the other hand access in which the nodes are assigned fixed time slots independent of their message generation times is considered as an alternative to random access for control however this introduces extra delay between the triggering of an event and a 
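A greedy heuristic of the kind mentioned above for the sampling-period optimization can be sketched as follows: starting from the fastest rates, the loop whose cost grows least is slowed down until a network utilization bound is met. The linear cost model and all parameters are illustrative assumptions, not the formulation of the cited work.

```python
# Greedy sampling-period assignment sketch for multiple control loops under
# a shared-network utilization bound; cost model J_i(h) = a_i + b_i * h is
# a stand-in for the LQ cost curves.
def greedy_periods(loops, slot_time, u_max):
    """loops: dict name -> (a, b, h_min, h_max); returns feasible periods or None."""
    h = {name: v[2] for name, v in loops.items()}       # start at fastest rate
    step = 0.01
    def utilization():
        return sum(slot_time / hi for hi in h.values())
    while utilization() > u_max:
        candidates = [n for n in h if h[n] + step <= loops[n][3]]
        if not candidates:
            return None                                  # infeasible
        # slow down the loop with the smallest cost slope b_i
        n = min(candidates, key=lambda name: loops[name][1])
        h[n] += step
    return h

loops = {"loop1": (1.0, 4.0, 0.05, 0.5), "loop2": (1.0, 1.0, 0.05, 0.5)}
print(greedy_periods(loops, slot_time=0.01, u_max=0.3))
```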
transmission in its assigned slot analyzes the ncs consisting of multiple linear control systems over a multichannel slotted aloha protocol the multichannel slotted aloha system is considered as the random access model of the long term evolution the authors separate the resource allocation problem of the multichannel slotted aloha system into two problems namely the transmission attempt problem and the channel selection problem given a time slot each control loop decides locally whether to attempt a transmission based on some error thresholds a local algorithm is used to adapt the error thresholds based on the knowledge of the network resource when the control loop decides to transmit then it selects one of the available channels in uniform random fashion given plant and controller dynamics proposes random access policies to address the coupling between control loops over the shared wireless channel in particular the authors derive a sufficient mathematical condition for the random access policy of each sensor so that it does not violate the stability criterion of other control loops the authors only assume the packet loss due to the interference between simultaneous transmissions of the network they propose a mathematical condition decoupling the control loops based on this condition a random access policy is proposed by adapting to the physical plant states measured by the sensors online however it is still computationally challenging to verify the condition some sampling appproaches use the csma protocol to share the network resource analyzes the performance of the ncss with the csma protocol to access the shared network the authors present a markov model that captures the joint interactions of the policy and a contention resolution mechanism of csma the proposed markov model basically extends bianchi s analysis of ieee by decoupling interactions between multiple systems of the network investigates the data scheduling of multiple loop control systems communicating over a shared lossy network the proposed scheduling scheme combines deterministic and probabilistic approaches this scheduling policy deterministically blocks transmission requests with lower errors not exceeding predefined thresholds subsequently the medium access is granted to the remaining transmission requests in a probabilistic manner the message error is modeled as a homogeneous markov chain the analytical uniform performance bounds for the error variance is derived under the proposed scheduling policy numerical results show a performance improvement in terms of error level with respect to the one with periodic and random scheduling policies proposes a distributed adaptation algorithm for an control system where each system adjusts its communication parameter and control gain to meet the global control cost each stochastic linear system is coupled by the csma model that allows to close only a limited number of feedback loops in every time instant the backoff intervals of csma are assumed to be exponentially distributed with homogeneous backoff exponents furthermore the data packets are discarded after the limited number of retransmission trials the individual cost function is defined as the linear quadratic cost function the design objective is to find the optimal control laws and optimal eventtriggering threshold that minimize the control cost the design problem is formulated as an average cost markov decision process mdp problem with unknown global system eters that are to be estimated during execution techniques from 
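The threshold-based random access rule described above can be sketched as follows; the error values, thresholds and collision handling are simplified assumptions rather than the exact policies of the cited works.

```python
# Sketch of error-threshold random access over multiple channels: a loop
# transmits only when its error exceeds a local threshold and then picks one
# of the channels uniformly at random. Collision handling is simplified.
import random

def attempt(error, threshold, num_channels):
    """Return the chosen channel index, or None if the loop stays silent."""
    if abs(error) <= threshold:
        return None
    return random.randrange(num_channels)

# two loops contending for 2 channels in one slot
errors = {"loop1": 0.8, "loop2": 0.2}
thresholds = {"loop1": 0.5, "loop2": 0.5}
choices = {n: attempt(errors[n], thresholds[n], 2) for n in errors}
# a transmission succeeds if no other loop picked the same channel
for n, ch in choices.items():
    if ch is None:
        continue
    collided = any(m != n and c == ch for m, c in choices.items())
    print(n, "channel", ch, "collided" if collided else "delivered")
```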
distributed optimization and adaptive mdps are used to develop distributed that adapt their request rate to accommodate a global resource constraint in particular the dual price mechanism forces each system to adjust their thresholds according to the total transmission rate control and mixed approach sampling allows to save energy consumption and reduce the contention delay by predicting the level crossings in the future so explicitly scheduling the corresponding transmissions the sensor nodes are set to sleep mode until the predicted level crossing proposes a new approach to ensure the stability of the controlled processes over a shared ieee network by control the selftriggered sampler selects the next sampling time as a function of current and previous measurements measurement time delay and estimated disturbance the superframe duration and transmission scheduling in the contention free period of ieee are adapted to minimize the energy consumption while meeting the deadlines the joint selection of the sampling time of processes protocol parameters and scheduling allows to address the tradeoff between system performance and network energy consumption however the drawback of this sampling methodology is the lack of its robustness to uncertainties and disturbances due to the predetermined control and communication models the explicit scheduling for sampling is therefore recently extended to include additional time slots in the communication schedule not assigned apriori to any nodes in the case of the presence of disturbance these extra slots are used in an fashion the random access is used in these slots due to the unpredictability of the transmissions in a joint optimization framework is presented where the objective is a function of process state cost of the actuations and energy consumption to transmit control commands subject to communication constraints limited capabilities of the actuators and control requirements while the control is adopted with the controller dynamically determining the next task execution time of the actuator including command broadcasting and changing of action the sensors are assumed to perform sampling periodically a simulated annealing based algorithm is used for online optimization which optimizes the sampling intervals in addition the authors propose a mechanism for estimating and predicting the system states which may not be known exactly due to packet losses and measurement noise proposes a joint design approach of control and adaptive sampling for multiple control loops the proposed method computes the optimal control signal to be applied as well as the optimal time to wait before taking the next sample the basic idea is to combine the concept of the sampling with mpc where the cost function penalizes the plant state and control effort as well as the time interval until the next sample is taken the latter is considered to generate an adaptive sampling scheme for the overall system such that the sampling time increases as the system state error goes to zero in the multiple loop case the authors also present a transmission scheduling algorithm to avoid the conflicts proposes a mixed sampling and eventtriggered sampling scheme to ensure the control stability of ncss while improving the energy efficiency of the ieee wireless networks the basic idea of the mixed approach is to combine the sampling and the sampling schemes the sampling scheme first predicts the next activation time of the eventtriggered sampler when the controller receives the sensing 
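A minimal selftriggered sampling rule in the spirit of the approaches above is sketched below: the next sampling interval shrinks when the measured error or estimated disturbance is large and is clipped to protocol-imposed bounds. The specific formula and constants are illustrative choices, not those of the cited works.

```python
# Self-triggered sampling sketch: more plant "activity" means the next sample
# is scheduled sooner; the interval is clipped to [h_min, h_max].
def next_sampling_interval(state_error, disturbance_est,
                           h_min=0.01, h_max=1.0, alpha=0.2):
    """Return the time to wait before taking the next sample [s]."""
    activity = abs(state_error) + abs(disturbance_est)
    h = alpha / (activity + 1e-6)       # more activity -> sample sooner
    return min(max(h, h_min), h_max)

for err in (1.0, 0.2, 0.01):
    print(err, "->", round(next_sampling_interval(err, 0.0), 3))
```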
information the sampler then begins to monitor the predefined triggering condition and computes the next sampling instance compared to typical eventtriggered sampling the sensor does not continuously check the eventtriggered condition since the selftriggered component of the proposed mixed scheme estimates the next sampling instant a priori furthermore compared with using selftriggered sampling alone the conservativeness is reduced since the eventtriggered component extends the sampling interval by coupling the eventtriggered and selftriggered sampling in a unified framework the proposed scheme extends the inactive period of the wireless network and reduces the conservativeness induced by the selftriggered sampling guaranteeing high energy efficiency while preserving the desired control performance

viii experimental testbeds

in contrast to previous surveys of wsn testbeds we introduce some of our representative wncs testbeds existing wncs research often relies on experiments however these usually suffer from limited size and can not capture the delays and losses of realistic large wireless networks several simulation tools have been developed to investigate ncs research unfortunately simulation tools for control systems often lack realistic models of wireless networks that exhibit complex and stochastic behavior in real environments in this section we describe three wncs testbeds namely the simulator and wsn testbed the building automation testbed and the industrial process testbed

a simulator and wsn testbed

the wireless cyberphysical simulator wcps is designed to provide a realistic simulation of wncs wcps employs a federated architecture that integrates simulink for simulating the physical system dynamics and controllers and tossim for simulating wireless networks simulink is commonly used by control engineers to design and study control systems while tossim has been widely used in the sensor network community to simulate wsns based on realistic wireless link models wcps provides an opensource middleware to orchestrate simulations in simulink and in tossim following the software architecture of wcps the sensor data generated by simulink is fed into the wsn simulated using tossim tossim then returns the packet delays and losses according to the behavior of the network which are then fed to the controller of simulink controller commands are then fed again into tossim which delays or drops the packets and sends the outputs to the actuators furthermore it is also possible to use the experimental wireless traces of a wsn testbed as inputs to the tossim simulator

the laboratory of washington university in st louis has developed an experimental wsn testbed to study and evaluate wsn protocols the system comprises a network manager on a server and a network protocol stack implementation on tinyos and telosb nodes each node is equipped with a ti microcontroller and a ti radio compatible with the ieee standard fig shows the deployment of the nodes in the campus building

fig wsn testbed in bryan hall and jolley hall of washington university in st louis

the testbed consists of nodes placed throughout several office areas the testbed architecture is hierarchical in nature consisting of three different levels of deployment sensor nodes microservers and a desktop class machine at the lowest tier sensor nodes are placed throughout the physical environment in order to take sensor readings and perform actuation they are connected to microservers at the second tier through a usb infrastructure consisting of usb compliant hubs messages can be exchanged between sensor nodes and microservers over this interface in both directions in the testbed two nodes are connected to each microserver typically with one microserver per room the final tier includes a dedicated server that connects to all of the microservers over an ethernet backbone the server machine is used to host among other things a database containing information about the different sensor nodes and the microservers they are connected to

b building automation testbed

heating ventilation and air conditioning hvac systems guarantee indoor air quality and thermal comfort levels in buildings at the price of high energy consumption to reduce the energy required by hvac systems researchers have been trying to efficiently use the thermal storage capacities of buildings by proposing advanced estimation and control schemes using wireless sensor nodes an example hvac testbed currently comprises the second floor of the electrical engineering building of the kth campus and is depicted in fig

fig hvac testbed at the second floor of the electrical engineering building at kth each of the five rooms considered contains sensors and actuators used for hvac control additional sensors are located in the corridor and outside of the building

this floor houses four laboratories an office room a lecture hall one storage room and a boiler room each room of the testbed is considered to be a thermal zone and has a set of wireless sensors and actuators that can be individually controlled the wsn testbed is implemented on tinyos and telosb nodes the testbed consists of wireless sensors measuring indoor and outdoor temperature humidity co2 concentrations light intensity occupancy levels and events in several rooms note that the nodes are equipped with humidity temperature and light sensors and with external sensors such as co2 sensors by using an analog to digital converter channel on the telosb expansion area furthermore one laboratory includes a people counter to measure the occupancy of the laboratory the collection tree protocol is used to collect the sensor measurements through the multihop network the actuators are the flow valve of the heating radiator the flow valve for the air conditioning system the air vent for fresh air flow at constant temperature and the air vent for air exhaust to the corridor an overview of the testbed architecture is shown in fig

fig hvac system architecture

the hvac testbed is developed in labview and is comprised of two separate components the experimental application and a server system the database is responsible for logging the data from all hvac components the experimental application on the other hand is developed by each user and interacts with the supervisory control module in the testbed server which connects to the programmable logic controller this component allows for sensing computation and actuation even though the application is developed in labview matlab code is integrated in the application through a mathscript zone users are able to design experiments through a labview application and remotely connect to the hvac testbed additionally through a web browser any user can download experimental data from the testbed database

c industrial process testbed

fig coupled tank system setup and its diagram showing the upper and lower tanks the pump the tap and the wireless sensor and actuator nodes

the control of liquid levels in tanks and flows between tanks are basic problems in the process industry liquids need to be processed by chemical or mixed treatment in tanks while the levels of the tanks must be controlled and the flows between tanks must be regulated fig depicts the experimental apparatus and a diagram of the
physical system used in the coupled tank system consists of a pump a water basin and two tanks of uniform cross sections the system is simple yet representative testbed of dynamics of water tanks used in practice the water in the lower tank flows to the water basin a pump is responsible for pumping water from the basin to the upper tank which then flows to the lower tank the holes in each of the tanks have the same diameter the controller regulates the level of water in the upper or lower tank the sensing of the water levels is performed by pressure sensors placed under each tank the process control testbed is built on multiple control systems of quanser coupled tanks with a wireless network consisting of telosb nodes the control loops are regulating two coupled tank processes where the tanks are collocated with the sensors and actuators and communicate wirelessly with a controller node a wireless node interfaces the sensors with an converter in order to sample the sensors for both tanks the actuation is implemented through the converter of the wireless actuator node connected to an amplification circuit that will convert the output voltage of the pump motor ix o pen c hallenges and f uture r esearch d irections although a large number of results on wsn and ncss are reported in the literature there are still a number of challenging problems to be solved out some of them are presented as follows tradeoff of joint design the joint design of communication and control layers is essential to guarantee the robustness and resilience of the overall wncs several different approaches of wncs design are categorized dependent on the degree of the interaction increasing the interaction may improve the control performance but at the risk of high complexity of the design problem and thus eventually leading to the fundamental scalability and tractability issues hence it is critical to quantify the benefit of the control performance and cost of the complexity depending on the design approaches the benefit of the adaptation of the design parameters significantly depends on the dynamics of control systems most researches of control and communication focus on the design of the controller or the network protocol with certain optimization problems for the fixed sampling period some ncs researches propose possible alternatives to set the sampling periods based on the stability analysis however they do not consider the fundamental tradeoff between qos and sampling period of wireless networks while the adaptive sampling period might provide control performance improvement it results in the complex stability problem of the control systems and requires the adaptation of wireless networks adaptation of the sampling period might be needed for the fast dynamical system on the other hand it may just increase the complexity and implementation overhead for slow control systems hence it is critical to quantify the benefit and cost of the joint design approach for control and communication systems b control system requirement various technical approaches such as hybrid system markov jump linear system and system are used to analyze the stability of ncss for different network assumptions the wireless network designers must carefully consider the detailed assumptions of ncs before using their results in wireless network design similarly control system designers need to consider wireless network imperfections encompassing both message dropout and message delay in their framework while some assumptions of control system design 
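As a side illustration of the coupled tank process used in the industrial process testbed described above, the sketch below simulates a two-tank cascade with the standard Torricelli outflow model under a simple proportional level controller; all parameters are illustrative and are not taken from the Quanser apparatus.

```python
# Toy simulation of a coupled two-tank cascade (Torricelli outflow) with a
# saturated proportional controller regulating the lower tank level.
import math

A = 15.5e-4        # tank cross section [m^2] (illustrative)
a = 0.18e-4        # outlet hole cross section [m^2] (illustrative)
k_pump = 3.3e-6    # pump gain [m^3/s per V] (illustrative)
g = 9.81
dt = 0.05

def step(h_upper, h_lower, u_volt):
    """One Euler step of the two-tank cascade; u_volt is the pump voltage."""
    q_in = k_pump * u_volt
    q_12 = a * math.sqrt(2 * g * max(h_upper, 0.0))   # upper -> lower tank
    q_out = a * math.sqrt(2 * g * max(h_lower, 0.0))  # lower tank -> basin
    h_upper += dt * (q_in - q_12) / A
    h_lower += dt * (q_12 - q_out) / A
    return max(h_upper, 0.0), max(h_lower, 0.0)

h1, h2, ref = 0.0, 0.0, 0.10
for _ in range(2000):                                 # simulate 100 s
    u = min(max(400.0 * (ref - h2), 0.0), 10.0)       # saturated pump voltage [0, 10] V
    h1, h2 = step(h1, h2, u)
print("lower tank level after 100 s: %.3f m" % h2)
```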
affect the protocol operation other assumptions may be infeasible to meet for overall network for instance the protocol operation should consider the sampling period to check whether it is allowed to retransmit the outdated messages over the sampling period on the other hand if the ncs design requires a strict bound on the maximum allowable number of consecutive packet losses this can not be achieved by the wireless system in which the packet error probability is at all times numerical methods are mostly used to derive feasible sets of wireless network requirements in terms of message loss probability and delay to achieve a certain control system performance even though all these feasible requirements meet the control cost it may give significantly different network costs such as energy consumption and robustness and thus eventually affect the overall control systems there are two ways to solve these problems the first one is to provide efficient tools quantifying feasible sets and corresponding network costs previous researches of wncs still lack of the comparison of different network requirements and their effect on the network design and cost the second one is to provide efficient abstractions of both control and communication systems enabling the usage of methods for instance the usage of stochastic mati and mad constraints for the control system in enables the generation of efficient solution methodologies for the joint optimization of these systems communication system abstraction efficient abstractions of communication systems need to be included to achieve the benefit of joint design while reducing complexity for wncs both interactive and joint design approaches mostly focus on the usage of constant transmit power and rate at the physical layer to simplify the problem however variable transmit power and rate have already been supported by network devices the integration of the variability of time slots with variable transmit power and rate has been demonstrated to improve the communication energy consumption significantly this work should be extended to integrate power and rate variability into the wncs design approaches bernouilli distribution has been commonly used as a packet loss model to analyze the control stability for simplicity however most wireless links are highly correlated over time and space in practice the time dependence of packet loss distribution can significantly affect the control system performance due to the effect of consecutive packet losses on the control system performance the packet loss dependencies should be efficiently integrated into the interactive and joint design approaches network lifetime control systems must continuously operate the process without any interruptions such as oil refining chemicals power plants and avionics the continuous operation requires infrequent maintenance such as semiannual or annual since its effects of the downtime losses may range from production inefficiency and equipment destruction to irreparable financial and environmental damages on the other hand energy constraints are widely regarded as a fundamental limitation of wireless devices the limited lifetime due to the battery constraint is particularly challenging for wncs because the are attached to the main physical process or equipment in fact the battery replacement may require the maintenance since it may be not possible to replace while the control process is operating recently two major technologies of energy harvesting and wireless power transfer have emerged 
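The point made above, that a hard bound on the number of consecutive packet losses cannot be guaranteed over a lossy link, can be quantified with a short calculation: under i.i.d. losses with probability p, the chance of a run of N consecutive losses within a horizon of T samples is small but never zero. The recursion below counts such runs exactly; the numbers are illustrative.

```python
# Probability of at least one run of `run_len` consecutive losses in
# `horizon` samples, for i.i.d. losses with probability p.
def prob_run_of_losses(p, run_len, horizon):
    # q[k] = probability of having seen no forbidden run so far,
    #        currently ending with k consecutive losses (k < run_len)
    q = [1.0] + [0.0] * (run_len - 1)
    for _ in range(horizon):
        new = [0.0] * run_len
        new[0] = (1 - p) * sum(q)          # a success resets the run
        for k in range(1, run_len):
            new[k] = p * q[k - 1]          # one more loss extends the run
        q = new
    return 1.0 - sum(q)

print(prob_run_of_losses(p=0.05, run_len=4, horizon=10000))
```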
as a promising technology to address lifetime bottlenecks of wireless networks some of these solutions are also commercially available and deployed such as abb wisa based on the wireless power transfer for the industrial automation and enocean based on the energy harvesting for the building automation wncs using these energy efficient technologies encounters new challenges at all layers of the network design as well as the overall joint design approach in particular the joint design approach must balance the control cost and the network lifetime while considering the additional constraint on the arrival of energy harvesting the timing and amount of energy harvesting may be random for the generation of energy from natural sources such as solar vibration or controlled for the rf inductive and magnetic resonant coupling latency communication recently communication with and latency requirements has attracted much interest in the research community due to many control related applications in industrial automation autonomous driving healthcare and virtual and augmented reality in particular the tactile internet requires the extremely low latency in combination with high availability reliability and security of the network to deliver the control and physical sensing information remotely diversity techniques which have been previously proposed to maximize total data rate of the users are now being adapted to achieve reliability corresponding to packet error probability on the order of within latency down to a millisecond or less the latency requirement may prohibit the sole usage of time diversity in the form of arq where the transmitter resends the packet in the case of packet losses or hybrid arq where the transmitter sends incremental redundancy rather than the whole packet assuming the processing of all the information available at the receiver therefore have investigated the usage of space diversity in the form of multiple antennas at the transmitter and receiver and transmission from multiple base stations to the user over cellular networks these schemes however mostly focus on the reliability of a single user multiple users in a interference scenario or multiple users to meet a single deadline for all nodes extended these works to consider the separate packet generation times and individual packet transmission deadlines of multiple users in the high reliability communication the previous work on wncs only investigated the time and path diversity to achieve very high reliability and very low latency communication requirements of corresponding applications as explained in detailed above the time diversity mechanisms either adopt efficient retransmission mechanisms to minimize the number of bits in the retransmissions at the link layer or determine the best timing and quantity of time slots given the link quality statistics on the other hand path diversity is based on the identification of multiple disjoint paths from source to destination to guarantee the routing reliability against node and link failures the extension of these techniques to include other diversity mechanisms such as space and frequency in the context of ultra low latency communication requires reformulation of the joint design balancing control cost and network lifetime and addressing new challenges at all layers of the network design networks one of the major issues for large scale smart grid smart transportation and industry is to allow communications of sensors and actuators using very levels recently several lpwan 
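A back-of-the-envelope sketch of the diversity argument above: with a per-transmission error probability p, k independent copies over time, space or frequency reduce the residual error to roughly p to the power k, but time diversity alone may not fit many attempts into a millisecond latency budget. All numbers below are illustrative.

```python
# Residual error probability with k independent diversity branches, and the
# number of retransmissions that fit into a tight latency budget.
def residual_error(p, branches):
    return p ** branches

p = 1e-2                      # single-transmission error probability (assumed)
latency_budget = 1e-3         # 1 ms end-to-end target
tx_time = 0.5e-3              # time per (re)transmission attempt (assumed)

max_retx = int(latency_budget // tx_time)          # attempts that fit in time
print("time diversity only     :", residual_error(p, max_retx))
for k in (2, 4):
    print("space/frequency, %d copies:" % k, residual_error(p, k))
```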
protocols such as lora sigfox and ltem are proposed to provide the low data rate communications of battery operated devices and use a licensed spectrum supported by generation partnership project standardization on the other hand lora and sigfox rely on an unlicensed spectrum the wireless channel behavior of lpwans is significantly different from the behavior of the wireless channel commonly used in wncs standards such as wirelesshart bluetooth and due to different fading characteristics and spectrum usage thus the design of the physical and link layers is completely different moreover the protocol design needs to consider the effect of the interoperation of different protocols of lpwans on the overall message delay hence the control system engineers must validate the feasibility of the traditional assumptions of wireless networks for wncs based on lpwans furthermore the network architecture of lpwan must carefully adapt its operation in order to support the requirements and control message priority of large scale control systems x c onclusions wireless networked control systems are the fundamental technology of the control systems in many areas including automotive electronics avionics building automation and industrial automation this article provided a tutorial and reviewed recent advances of wireless network design and optimization for wireless networked control systems we discussed the critical interactive variables of communication and control systems including sampling period message delay message dropout and energy consumption we then discussed the effect of wireless network parameters at all protocol layers on the probability distribution of these interactive variables moreover we reviewed the analysis and design of control systems that consider the effect of various subsets of interactive variables on the control system performance by considering the degree of interactions between control and communication systems we discussed two design approaches interactive design and joint design we also describe some practical testbeds of wncs finally we highlighted major existing research issues and identified possible future research directions in the analysis of the tradeoff between the benefit of the control performance and cost of the complexity in the joint design efficient abstractions of control and communication systems for their usage in the joint design inclusion of energy harvesting and diversity techniques in the joint design and extension of the joint design to wireless networked control systems r eferences sztipanovits koutsoukos karsai kottenstette antsaklis gupta goodwine baras and wang toward a science of system integration proceedings of the ieee vol no pp bello and zeadally intelligent communication in the internet of things ieee systems journal vol no pp fettweis the tactile internet applications and challenges ieee vehicular technology magazine vol no pp sadi ergen and park minimum energy data transmission for wireless networked control systems ieee transactions on wireless communications vol no pp and chen design considerations for wireless networked control systems ieee transactions on industrial electronics vol no pp sadi and ergen optimal power control rate adaptation and scheduling for intravehicular wireless sensor networks ieee transactions on vehicular technology vol no pp demir and ergen arima based time variation model for beneath the chassis uwb channel eurasip journal on wireless communications and networking accepted technical characteristics and spectrum 
requirements of wireless avionics systems to support their safe operation itur tech witrant marco park and briat limitations and performances of robust control over wsn ufad control in intelligent buildings ima journal of mathematical control and information vol no pp willig recent and emerging topics in wireless industrial communication ieee transactions on industrial informatics vol no pp gungor and hancke industrial wireless sensor networks challenges design principles and technical approaches ieee transactions on industrial electronics vol no pp kagermann wahlster and helbig recommendations for implementing the strategic initiative industrie forschungsunion acatech tech zigbee crosses the chasm a market dynamics report on ieee and zigbee on world http technical basics alliance http wireless systems for industrial automation process control and related applications isa wirelesshart overview hart communication foundation http steigman and endresen introduction to wisa and wps interface for sensors and actuators and proximity switches white paper http park fischione bonivento johansson and breath an adaptive protocol for industrial control applications using wireless sensor networks ieee transactions on mobile computing vol no pp chang and tassiulas maximum lifetime routing in wireless sensor networks transactions on networking vol no pp ploennigs vasyutynskyy and kabitzsch comparative study of sampling approaches for wireless control networks ieee transactions on industrial informatics vol no pp park modeling analysis and design of wireless sensor network protocols dissertation kth royal institute of technology schenato sinopoli franceschetti poola and sastry foundations of control and estimation over lossy networks proceedings of the ieee vol no pp a technical overview of lora and lorawan lora alliance tech low power wide area technologies gsma tech yu and xue smart grids a systems perspective proceedings of the ieee vol no pp zhang wang wang lin xu and chen datadriven intelligent transportation systems a survey ieee transactions on intelligent transportation systems vol no pp marescaux leroy gagner rubino mutter vix butner and smith transatlantic telesurgery nature vol no pp a kumar ovsthus and m an industrial perspective on wireless sensor networks a survey of requirements protocols and challenges ieee communications surveys tutorials vol no pp wang and jiang comparative examination on architecture and protocol of industrial wireless sensor network standards ieee communications surveys tutorials vol no pp lu saifullah li sha gonzalez gunatilaka wu nie and chen wireless networks for industrial systems proceedings of the ieee vol no pp velupillai and guvenc tire pressure monitoring applications of control ieee control systems vol pp ergen x sun tebano alalusi audisio and sabatini the tire as an intelligent sensor ieee transactions on design of integrated circuits and systems vol no pp july pirelli cyber tyre the intelligent tyre that speaks to the car pirelli tech software considerations in airborne systems and equipment certification rtca elgezabal fbwss benefits risks and technical challenges in caneus workshop technical characteristics and operational objectives for wireless avionics waic tech world radiocommunication conference international telecommunication union tech invocon enhanced accelerometer unit white paper aswani master taneja culler and tomlin reducing transient and steady state electricity consumption in hvac using model predictive control proceedings of the ieee vol no 
pp final electricity consumption by sector european environment agency http use of freshwater resources european environment agency http smart energy homes a market dynamics report on world http chen cao cheng xiao and y sun distributed collaborative control for industrial automation with wireless sensor and actuator networks ieee transactions on industrial electronics vol no pp pister thubert systems dwars and phinney industrial routing requirements in and lossy networks ietf blaney wireless proves its value power engineering global industrial automation control market technavio http petersen and carlsen wirelesshart versus the format war hits the factory floor ieee industrial electronics magazine vol no pp hespanha naghshtabrizi and xu a survey of recent results in networked control systems proceedings of the ieee vol no pp and systems theory and design bemporad heemels and johansson networked control systems springer chen and b francis optimal control systems london dorf and bishop modern control systems pearson education zhang branicky and phillips stability of networked control systems ieee control systems vol no pp walsh ye and bushnell stability analysis of networked control systems ieee transactions on control systems technology vol no pp athans the role and use of the stochastic problem in control system design ieee transactions on automatic control vol no pp tipsuwan and chow control methodologies in networked control systems control engineering practice vol no pp and kumar control automatica vol no pp and pid controllers theory design and tuning isa qin and badgwell a survey of industrial model predictive control technology control engineering practice vol no pp barbosa j machado and ferreira tuning of pid controllers based on bode s ideal transfer function nonlinear dynamics vol no pp garcia prett and morari model predictive control theory and survey automatica vol no pp henriksson quevedo peters sandberg and johansson model predictive control for network scheduling and control ieee transactions on control systems technology vol no pp li ma westenbroek wu gonzalez and lu wireless routing and control a case study in iccps sinopoli schenato franceschetti poolla jordan and sastry kalman filtering with intermittent observations ieee transactions on automatic control vol no pp sahebsara chen and shah optimal filtering in networked control systems with multiple packet dropout ieee transactions on automatic control vol no pp demirel zou soldati and johansson modular design of jointly optimal controllers and forwarding policies for wireless control ieee transactions on automatic control vol no pp schenato optimal estimation in networked control systems subject to random delay and packet drop ieee transactions on automatic control vol no pp moayedi foo and soh filtering for networked control systems with measurement packets subject to multiplestep measurement delays and multiple packet dropouts international journal of systems science vol no rabi ramesh and johansson separated design of encoder and controller for networked linear quadratic optimal control siam journal on control and optimization vol no pp matveev and savkin the problem of state estimation via asynchronous communication channels with irregular transmission times ieee transactions on automatic control vol no pp jiang polastre and culler perpetual environmentally powered sensor networks in ipsn xie shi hou and lou wireless power transfer and applications to sensor networks ieee wireless communications no pp lou wang niyato 
3
mar notes on finitely generated flat modules abolfazl tarizadeh abstract in this article the projectivity of a finitely generated flat module of a commutative ring is studied through its exterior powers and invariant factors consequently the related results of endo vasconcelos wiegand and on the projectivity of flat modules are generalized introduction the main purpose of the present article is to investigate the projectivity of finitely generated flat modules of a commutative ring it is worthy to mention that this has been the main topic of many articles in the literature over the years and it is still of current interest see note that in general there are flat modules which are not necessarily projective see example also see tag for another example we use in place of finitely generated in this paper the projectivity of a finitely generated flat module of a commutative ring is studied through its exterior powers and invariant factors the important outcome of this study is that some major results in the literature on the projectivity of flat modules are directly without using the homological methods and at the same time most of them are vastly generalized in particular theorem vastly generalizes theorem theorem generalizes theorem and theorem generalizes theorem proposition it also generalizes proposition and corollary in the commutative case in fact theorem can be viewed as a generalization of all of the above mentioned results the main motivation to investigate the projectivity of flat modules essentially originates from the fact that every flat module over a local ring is free in this article we also prove a more general result theorem this result in particular implies the above fact mathematics subject classification key words and phrases flat module invariant factor projectivity abolfazl tarizadeh see corollary for reading the present article having a reasonable knowledge from the exterior powers of a module is necessary in this article all of the rings are commutative preliminaries lemma let m be a let i annr m and let s be a multiplicative subset of then s i anns r s m proof easy it is that if r s is a ring map and m is an then m s as is canonically isomorphic to m s it is also that if m is a projective resp flat then for each natural number n m is a projective resp flat finally if m is a then m is a we shall use these facts freely throughout this article main results lemma the annihilator of a projective module is generated by an idempotent element proof let m be a let i annr m and let j be the ideal of r which is generated by the elements f m where f m r is a map and m clearly ij consider a free f with basis ei and an onto map f for each i there is a map hi f r such that hi ej j if m is then there exists a map m f such that is the identity fi hi for all i then for each m m we may write m fi m ei where fi m for i all but a finite number of indices i this implies that jm if moreover m is a then by tag we may find an element b j such that b i let a b then clearly a and i ra notes on finitely generated flat modules remark in a flat module both of the scalars and vectors involved in a linear relation have very peculiar properties more precisely let n p ai xi in m m be a and consider a linear relation where ai r and xi m for all i let i an and consider the map rn i which maps each rn of rn n p into ri ai then we get the following exact sequence of i m where k ker if morern m n p over m is then xi ker where k k m for all i because by the flatness of m i m is canonically isomorphic to im 
therefore there exist a natural number m and also elements sj j rn j k and yj m with j m such that n m p p xi now by applying the canonical isomor i sj yj phism rn m m n which maps each pure tensor rn x m p into x rn x we obtain that xi ri j yj for all i moreover for each j n p ri j ai since sj under the light of remark the following result is obtained theorem let r m be a local ring and let m be a flat let s be a subset of m such that its image under the canonical map m is linearly independent over k then s is linearly independent over proof suppose n p ai xi where ai r and xn to prove the assertion we shall use an induction argument on if n then by remark there are elements zd m and d p rd r such that rj zj and rj for all j by the hypotheses mm therefore rj m for some j it follows that now let n again by remark there are elements m n p p ri j yj and ri j ai ym m and ri j r such that xi for all i j there is some j such that rn j m since xn mm it abolfazl tarizadeh follows that an p rn j ri j ai then we get p ai xi rn j ri j xn let ci rn j ri j note that the image of xi ci xn i n under the canonical map m is linearly independent because if xn is a linearly independent subset of a module then xi ri xn i n is also a linearly independent subset where the ri are arbitrary scalars therefore by the induction hypothesis ai for all i n this also implies that an corollary let r m be a local ring and let m be a flat then there is a free f of m such that m f mm in particular if either is finitely generated or the maximal ideal is nilpotent then m is a free proof every vector space has a basis so let xi mm p i i rxi be a of where k by theorem f is a free clearly m f mm if is finitely generated then by the nakayama lemma m f if m is nilpotent then there is a natural number n such that mn it follows that mn as an immediate consequence of the above corollary we obtain the following result which plays a major role in this article corollary every flat module over a local ring is free as a first application of corollary we obtain lemma the annihilator of a flat module is an idempotent ideal proof let m be a flat module over a ring let i annr m let p be a prime ideal of by lemma ip annrp mp by corollary mp is a free rp therefore ip is either the whole localization or the zero ideal if ip then i p since i i but if ip rp then i is not contained in thus we may choose some a i clearly i p and so i p rp therefore i i notes on finitely generated flat modules if m is a then the invariant factor of m denoted by in m is defined as the annihilator of the exterior power of n therefore in m annr m we have lemma the invariant factors of a flat module are idempotent ideals proof if m is a flat then m is as well thus by lemma in m is an idempotent ideal remark let m be a flat then corollary leads us to a function spec r n which is defined as p rankrp mp is called the rank map of it is also easy to see that supp m p spec r rankrp mp n theorem let m be a flat then the following conditions are equivalent i m is ii the invariant factors of m are ideals iii the rank map of m is locally constant proof i ii it is that m is a projective and so by lemma in m is a principal ideal ii iii it suffices to show that the rank map of m is zariski continuous by lemma and tag there exists some a in m such that a in m clearly a and in m ra by remark n supp n spec r supp n where n m and n m but supp n d a moreover supp n v m since n is a therefore n is an open subset of spec iii i apply corollary and tag the following result vastly generalizes 
theorem theorem let a b be an extension of rings and let m be a flat if m b is then m is proof first we shall prove that i anna m is a principal ideal let l annb n where n m b we claim that ib let abolfazl tarizadeh q be a prime ideal of b clearly n is a thus by corollary lq is either the whole localization or the zero ideal if lq then ib q since ib but if lq bq then l is not contained in q and so nq again by corollary ip is either the whole localization or the zero ideal where p a q if ip then mp and so by corollary mp bq but mp bq is isomorphic to nq this is a contradiction therefore ip ap it follows that ib q bq this establishes the claim by lemma there is an idempotent e b such that ib be let j b clearly ij we have i j a if not then there exists a prime ideal p of a such that i j thus by corollary ip therefore the extension of ib under the canonical map b b ap is zero thus there exists an element s a p such that se and so s s e hence s j but this is a contradiction therefore i j a it follows that there is an element c i such that c and i ac now let n but m is a flat moreover m b is since it is canonically isomorphic to m b thus by what we have proved above in m is a principal ideal hence by theorem m is lemma let m be a flat and let j be an ideal of let i annr m and l annr then l i j proof clearly i j let p be a prime ideal of by lemma ip annrp mp thus by corollary ip is either the whole localization or the null ideal for all primes if ip rp then i j p lp rp since i i j but if ip then mp and so annrp mp mp jp recall that if f is a nonzero free then annr j on the other hand by lemma lp annrp mp mp thus i j p jp lp hence l i j the following result generalizes theorem theorem let m be a flat and let j be an ideal of r which is contained in the radical jacobson of if is then m is proof first we shall prove that i annr m is a principal ideal by lemma l i j also by lemma notes on finitely generated flat modules is a principal ideal this implies that i rx i j for some x r since i is canonically isomorphic to i j but i rx because let m be a maximal ideal of by corollary im is either the whole localization or the zero ideal if im then rx m since rx i but if im rm then i is not contained in thus rx is also not contained in m since i j j hence rx m rm therefore i rx now let n and let n m then is because as is isomorphic to and is but n is a flat therefore by what we have proved above in m annr n is a principal ideal thus the invariant factors of m are ideals and so by theorem m is a ring r is called an s refers to sakhajev if every flat is theorem let a b be a ring map whose kernel is contained in the radical jacobson of a if m is a flat such that m b is then m is in particular if b is an then a is as well proof clearly is a flat and b m b is where j ker moreover can be viewed as a subring of b via therefore by theorem is then apply theorem finally assume that b is an if m is a flat then m b is a flat and so by the hypothesis it is therefore m is remark let s be a subset of a ring the polynomial ring r xs s s modulo i is denoted by s r where the ideal i is generated by elements of the form xs and xs s with s we call s r the pointwise localization of r with respect to amongst them the pointwise localization of r with respect to itself namely r r has more interesting properties for further information please b instead consult with note that wiegand utilizes the notation r of r clearly s s xs i and xs i s xs i where r s r is the canonical map and the pair s r satisfies in the following universal 
property for each such pair a where r a is a ring map and for each s s there is some c a abolfazl tarizadeh such that s s c and c s then there exists a unique ring map s r a such that now let p be a prime ideal of r and consider the canonical map r p where p is the residue field of r at by the above universal property there is a unique ring map s r p such that thus induces a surjection between the corresponding spectra this in particular implies that the kernel of is contained in the of using this then the following result vastly generalizes theorem corollary let m be a flat if there exists a subset s of r such that m s r is s then m is proof it is an immediate consequence of theorem proposition let i be an ideal of a ring then is if and only if annr f i r for all f i proof first assume that is suppose there is some f i such that annr f i thus there exists a prime p of r such that annr f i therefore by corollary ip and so there is an element s r p such that sf but this is a contradiction and we win conversely let m n be an injective map to prove the assertion it suffices to show that the induced map given by m im m in n p is injective if m in then we may write m fi xi where fi i and xi n for all i by the hypothesis there are elements bi annr fi and ci i such that bi ci it follows that bn cn b c where b bn and c i thus m m m cm therefore m cm im as a final result in the following we give an example of a flat module which is not projective note that finding explicit examples of flat modules but not projective is not as easy as one may think at first example let r q a be an infinite direct product of copies of l a ring a and let i a which is an ideal of let f fi notes on finitely generated flat modules be an element of i there exists a finite subset d of such that fi for all i now consider the sequences g gi and h hi of elements of r with gi and hi for all i d and gi and hi for all i clearly g annr f h i and g thus by proposition is suppose is then by lemma there is a sequence e ei r such that i re thus there exists a finite subset e of such that ei for all i clearly e pick some k there is some r ri r such that k re where k is the kronecker delta in particular rk ek rk this is a contradiction therefore is not references cox and pendleton rings for which certain flat modules are projective trans amer math soc endo on flat modules over commutative rings j math soc japan aise johan de jong et al stacks project see http jondrup on finitely generated flat modules math scand olivier anneaux absolument plats universels et buts samuel commutative tomme puninski and rothmaler when every finitely generated flat module is projective journal of algebra vasconcelos on finitely generated flat modules trans amer math soc wiegand golobalization theorems for locally finitely generated modules pacific journal of math vol no department of mathematics faculty of basic sciences university of maragheh o box maragheh iran address
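Restating the closing example in standard notation may help the reader; the LaTeX block below is a reconstruction from the text above (the choice of symbols A, R, I follows the example as written, but the exact wording and the proof layout are not the author's).

```latex
\paragraph{Example (restated).}
Let $A$ be a nonzero commutative ring, let $R=\prod_{n\in\mathbb{N}} A$ be a
countably infinite product of copies of $A$, and let
$I=\bigoplus_{n\in\mathbb{N}} A$, which is an ideal of $R$. Then the cyclic
(hence finitely generated) $R$-module $R/I$ is flat but not projective.

\paragraph{Sketch.}
\emph{Flatness.} Given $f\in I$, its support $D$ is finite; define $g,h\in R$
by $g_i=0,\ h_i=1$ for $i\in D$ and $g_i=1,\ h_i=0$ otherwise. Then $gf=0$,
$h\in I$ and $g+h=1$, so $\operatorname{ann}_R(f)+I=R$ for every $f\in I$,
and $R/I$ is flat by the proposition above.

\emph{Non-projectivity.} If $R/I$ were projective, its annihilator
$I=\operatorname{ann}_R(R/I)$ would be generated by an idempotent $e\in I$
(by the lemma above), whose support $E$ is finite. Picking $k\notin E$ and
writing the $k$-th standard basis element $\delta_k\in I$ as $\delta_k=re$
gives $1=r_k e_k=0$ in $A$, a contradiction. Hence $R/I$ is not projective.
```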
0
intphys a framework and benchmark for visual intuitive physics reasoning ronan riochet ecole normale superieure inria mario ynocente castro ycmario mar mathieu bernard ecole normale superieure inria adam lerer facebook ai research rob fergus facebook ai research alerer robfergus izard paris descartes cnrs emmanuel dupoux coml research university abstract to predict how they interact in the physical world experimental evidence shows that very young infants and many animals do have an intuitive grasp of how objects interact in the world and that they exploit this intuitive physics to make predictions about future outcomes and plan their actions at months infants are able to parse visual inputs in terms of permanent solid and spatiotemporally continuous objects at months they understand the notion of stability support and causality between and months they grasp the notions of gravity inertia and conservation of momentum in collision between and months shape constancy and so on such tacit knowledge is intuitive and nonverbal as opposed to formal knowledge as taught in physics classes and follows a developmental path parallel to early language acquisition both occur quickly spontaneously and without explicit training by caregivers in living organisms intuitive physics is a latent construct it can only be observed and measured indirectly through its effects on specific tasks like planning problem solving or in humans verbal descriptions and explanations it can also be revealed through the measurement of surprise reactions to magic tricks physically impossible events such as objects disappearing or appearing out of nowhere passing through each other or defying gravity for a review the latent nature of intuitive physics raises two difficult challenges for vision systems an evaluation challenge and an engineering challenge the evaluation challenge can be formulated as follows given an artificial vision system define a measure which quantifies how much this system understands about in order to reach human performance on complex visual tasks artificial systems need to incorporate a significant amount of understanding of the world in terms of macroscopic objects movements forces etc inspired by work on intuitive physics in infants we propose an evaluation framework which diagnoses how much a given system understands about physics by testing whether it can tell apart well matched videos of possible versus impossible events the test requires systems to compute a physical plausibility score over an entire video it is free of bias and can test a range of specific physical reasoning skills we then describe the first release of a benchmark dataset aimed at learning intuitive physics in an unsupervised way using videos constructed with a game engine we describe two deep neural network baseline systems trained with a future frame prediction objective and tested on the possible versus impossible discrimination task the analysis of their results compared to human data gives novel insights in the potentials and limitations of next frame prediction architectures introduction despite impressive progress in machine vision on many tasks face recognition object recognition object segmentation etc artificial systems are still far from understanding of complex scenes scene understanding involves not only segmenting and tracking of objects across time but also representing the spatial and temporal relationships between these objects and being able figure popular applications involving scene understanding and proposed 
evaluation method based on physical plausibility judgments itive physics one possible answer would be to measure intuitive physics through applications like visual question answering vqa object tracking or action planning see figure however this runs into two risks a dataset bias b noisy measure the first risk also known as the clever hans problem is that real life application datasets often contain inherent statistical biases which make it sometimes possible to achieve good performance with only minimal involvement in solving the problem at hand the second risk is that the overall performance of a system is a complicated function of the performance of its parts therefore if a vqa system has a better performance than another one it could be not because it better understands physics but because it has a better language model we propose a framework which we call the physical plausibility test which directly evaluates intuitive physics in a and fashion this framework is inspired by research on infant and animal intuitive physics it recasts physical reasoning as a simple classification problem presented with the video of a simple scene the question is whether the depicted event is physically possible or not the trick is in preparing matched sets of videos where the physical violation introduces minimal differences between the frames of possible vs impossible movies by varying the nature of the physical violation one can probe different types of reasoning laws regarding objects and their properties laws regarding objects movement and interactions given that our method involves videos of events that could not arise spontaneously in nature it should be taken as a diagnostic test and in no way as a practical method for training physical reasoning systems yet it s advantage is that it can be applied to a variety of systems that have been engineered or trained on some other task so long as these systems have the minimal requirement to compute a global scalar number for a given scene which we can interpret as a plausibility score any system based on probability or reconstruction error can easily derive such a score the engineering challenge can be formulated as follows construct a system which incorporates as much intuitive physics as possible at least as much as infants have for a start we already discarded the use of impossible movies to train such a system on the grounds of practicality another approach using supervised learning with high level annotations physical entities laws or relations etc would also be impractical first a system could have a good physical understanding of a scene without performing full reconstruction second as shown by infants it is possible to learn intuitive physics without being fed with any high level tag or label in fact they only experience positive physical instances physically possible events additionally infants get useful feedback from their environment as they become more competent at motor control although such feedback only consists in possible events one way to address the challenge would be therefore to construct an unsupervised or weakly supervised system that learns the laws of physics using the same type of data available to infants abundant amount of observational sensory data limited but informative environmental feedback only positive instances here we propose intphys an intuitive physics benchmark which aims at getting a first stab at both the engineering and the evaluation challenges it consists of synthetic videos constructed with a python 
interfaced game engine unrealengine enabling both realistic physics and precise control the training set consists only of positive cases possible movies as seen from a perspective by an immobile agent this is probably a more difficult task than the one faced by infants because they can explore and interact with their environment it is however interesting to establish how far one can get with such simplified inputs which are easy to gather in abundant amounts in the real world with video cameras in addition this enables an easier comparison of models because they all get the same training data the test set is constructed according to our evaluation framework it requires the system to output a plausibility score and is evaluated on its ability to separate possible from impossible movies the test set can also be used as a standalone diagnostic evaluation of systems trained in other ways real videos interactive training in a virtual environment see the structure of this paper is as follows in section we review related work in high level vision evaluation and models in section we detail our intuitive physics evaluation framework in section we present the first release of intphys benchmark which addresses the most basic component of intuitive physics namely object permanence in section we present two baseline systems trained with a frame prediction cost on this dataset only and analyze their results compared to human performance related work most of the previous work relevant for intuitive physics has been conducted in the context of particular applications we distinguish three broad classes of applications depending on the type of data they use the first class includes tasks at the interface where a model s ability to reason about an image is assessed through a language task generating a caption or answering a question about the image the second class includes tasks which only use images or videos as input such as predicting future events in a video or tracking objects through time the third class involves interface with applications in robotics in that case systems require vision reasoning to control actions and predict their outcome interface going beyond the standard object classification tasks some of the more recent work in the visionlanguage interface have focused on classifying relations between objects this requires in principle some understanding of the underlying physics the distinction between hanging and supporting two tasks are currently receiving a lot of attention captioning and vqa scene captioning consists in generating a sentence that describes an image or matching an image with one or several captions this task requires not only to recognize objects in an image but also understand spatial relations and interactions between these objects visual question answering requires to provide a verbal answer to a verbal question about an image alternatively one is only required to rank several potential answers like in image captioning this task requires to understand spatial relations between objects but in addition it has to understand the question and extract the right information to answer it some datasets use videos as input instead of static images because they go beyond a closed set of predefined outputs these two tasks raise evaluation difficulties as shown in and vqa systems can cheat and obtain good performance by exploiting statistical biases in the dataset for example if the question what covers the ground is highly correlated with images of ground a statistical learner 
would perform well on that question by always answering snow whatever the image these biases make it harder to understand models weaknesses and strengths in the clevr dataset authors focus on testing visual reasoning ability while minimizing questionconditional biases they provide representations for both images and questions as well as detailed annotations describing the kind of reasoning each question requires in a similar spirit we aim to propose a diagnostic tool for visual reasoning systems providing systematically constructed tasks minimizing statistical biases our proposed task gets rid of language altogether and directly taps into understanding of objects and their interactions high level vision many research projects in vision define tasks which aim at recovering high level structure from low level pixel information one example is the recovery of structure from static or moving images two tasks have been proposed to tackle the temporal dynamics of objects in videos object tracking and future prediction classic formulations of object tracking focus on matching instance labels through video frames for a given video it results in a collection of object instances with their location at every time step this is a problem with its own literature and challenges contrary to learning systems tracking models use priors regarding objects motion assuming for instance that objects have constant speed when they are occluded or that their appearance only have small changes through time intuitive physics and forward modelling prediction can be seen as a more general task than object tracking given an image or a stack of images around time t the task is to produce a predicted image some time in the future recent studies have investigated models for predicting the stability and forward modeling the dynamics of towers of blocks proposes a model based on an intuitive physics engine and follow a supervised approach using convolutional neural networks cnns makes a comparison between models and models improves the predictions of a cnn model by providing it with a prediction of a generative model in authors propose different feature learning strategies architecture adversarial training method image gradient difference loss function to predict future frames in raw videos even though next frame prediction tasks could be used to learned aspects of intuitive physics through observed regularities of object motion current forward models still struggle to predict outcomes beyond a few frames other models use more structured representation of objects to derive predictions in and authors learn objects dynamics by modelling their pairwise interactions and predicting the resulting objects states representation position velocity object intrinsic properties in and authors combine factored latent object representations object centric dynamic models and visual encoders each frame is parsed into a set of object state representations which are used as input of a dynamic model in and authors use a visual decoder to reconstruct the future frames allowing the model to learn from raw though synthetic videos interface other studies have focused on interface with potential applications in robotics the main tasks are of two kinds i predicting the outcome of an action in a visual environment forward modelling and ii predicting the optimal action to make in order to reach the desired outcome action planning forward modelling has been studied in where authors train a model to predict the outcome of interactions between a robot and 
an object conditioned with an input image the state and the action of the robot their model is trained to predict the resulting image after the action in they use this forward model to train the robot to execute a given action on an object in authors construct indoor scenes in a physics simulator and apply forces to various objects in these scenes they train a deep neural network to predict effects of forces on objects the ground truth being simulated by the physics simulator using properties like mass friction gravity and solidity in they train a model to predict the dynamics of object from static images other studies focused on learning control from visual inputs in and authors train deep neural networks to coordinate robotic vision and action on the specific task of object grasping learning visual control policies with reinforcement learning has been investigated either in simulation or in the real world some systems use a prediction model in the purpose of action planning in authors train a model to predict future frames in videos using these predictions to train a for playing atari games integrating a model of the dynamic of the external world to an agent was also done in to plan novel actions by running multiple internal simulations finally robots pushing poking objects was investigated in where the model does not predict the image directly but rather a latent representation of it this latent representation and the forward model are learned jointly with an inverse model that predicts how to move the object to a desired position so that the latent representation has to keep information about object location being able to manipulate objects or predict dynamics in billiard game seems to require notions like solidity mass collisions and causality even though the proposed framework for testing intuitive physics involves vision only integrating vision and action during training may help to learn these notions a diagnostic test for intuitive physics as we just saw there is a great diversity of systems and applications that rely in some way or other on physical understanding we propose a single diagnostic test which can be run provided minor modifications on any of these systems captioning and vqa systems systems performing reconstruction tracking planning etc be they engineered by hand or trained using statistical learning the main idea is to draw from work in developmental or comparative psychology infants or animals to construct a well controlled test avoiding potential statistical biases and cheap tricks and to obtain relatively pure tests measuring different types of physical reasoning abilities intuitive physics is best described at the latent body of knowledge which allow organisms to predict events and plan actions and when applicable describe them this body of knowledge may be incomplete not totally coherent and not used in all situations due to variations in attention memory etc here we will take the view that it is a rudimentary version of newtonian physics in so far as it deals with solid macroscopic objects existing in a world with their intrinsic properties mass shape position velocities we illustrate our diagnostic test on object permanence one of the most basic principle of intuitive physics which states that an object continue to exist even when not seen we present the design features of our test minimal sets parametric task difficulty and evaluation metric we then show how this can be extended to a wide range of intuitive physics reasoning problems minimal sets design an 
important design principle of our evaluation framework relates to the organization of the possible and impossible movies in extremely well matched sets to avoid the clever hans problem this is illustrated in figure for object permanence we constructed matched sets comprising four movies which contain an initial scene at time either one or two objects and a final scene at time either one or two objects separated by a potential occlusion by a screen which is raised and then lowered for a variable amount of time at its maximal height the screen completely occludes the objects so that it is impossible to know in this frame how many objects are behind the occluder the four movies are constructed by combining the two possible beginnings with the two possible endings giving rise to two possible and and two impossible and movies importantly across these movies the possible and impossible ones are made of the exact same frames the only factor distinguishing them being the temporal coherence of these frames such a design is intended to make it difficult for algorithms to use cheap tricks to distinguish possible from impossible movies by focusing on low level details but rather requires models to focus on higher level temporal dependencies between frames parametric manipulation of task complexity our second design principle is that in each block we will vary the stimulus complexity in a parametric fashion in the a hierarchy of intuitive physics problems figure illustration of the minimal sets design with object permanence schematic description of a static condition with one two objects and one occluder in the two possible movies green arrows the number of objects remains constant despite the occlusion in the two impossible movies red arrows the number of objects changes goes from to or from to case of the object permanence block for instance stimulus complexity can vary according to three dimensions the first dimension is whether the change in number of objects occurs in plain view visible or hidden behind an occluder occluded a change in plain view is evidently easier to detect whereas a hidden change requires an element of short term memory in order to keep a trace of the object s through time the second dimension is the complexity of the object s motion tracking an immobile object is easier than if the object has a complicated motion the third dimension is the number of objects involved in the scene this tests for the attentional capacity of the system as defined by the number of objects it can track simultaneously manipulating stimulus complexity is important to establish the limit of what a vision system can do and where it will fail for instance humans are well known to fail when the number of objects to track simultaneously is greater than four the physical possibility metrics our evaluation metrics depend on the system s ability to compute a plausibility score p x given a movie x because the test movies are structured in n matched in figure k of positive and negative movies n p p oski impki we derive two different metrics the relative error rate lr computes a score within each set it requires only that within a set the positive movies are more plausible than the negative movies x p lr j p p osji pj p impji n i the absolute error rate la requires that globally the score of the positive movies is greater than the score of the negative movies it is computed as la au c i j p p osji i j p impji where au c is the area under the roc curve which plots the true positive rate against the false positive 
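The two metric formulas above did not survive extraction; the sketch below shows one way to compute them from per-movie plausibility scores. The within-set aggregation (summing the scores of the possible movies and of the impossible movies before comparing), the use of scikit-learn's AUC, and taking the absolute error rate as one minus the AUC so that lower is better, are assumptions of this sketch rather than statements recovered from the text.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def relative_error_rate(matched_sets):
    """matched_sets: list of (pos_scores, imp_scores) pairs, one per matched
    set of movies (two possible and two impossible in the object-permanence
    block). A set counts as an error when its possible movies are not, in
    aggregate, scored as more plausible than its impossible movies."""
    errors = [np.sum(pos) <= np.sum(imp) for pos, imp in matched_sets]
    return float(np.mean(errors))

def absolute_error_rate(matched_sets):
    """Pools every score and asks how well a single global threshold
    separates possible from impossible movies (area under the ROC curve)."""
    scores, labels = [], []
    for pos, imp in matched_sets:
        scores += list(pos); labels += [1] * len(pos)
        scores += list(imp); labels += [0] * len(imp)
    return 1.0 - roc_auc_score(labels, scores)
```

With perfect scores both error rates are zero; a system whose scores order movies correctly within each matched set but are not comparable across sets will do well on the relative metric and poorly on the absolute one, which is the intended distinction between the two.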
rate at various threshold settings we now explain how the design principles we presented above can be applied to study physical reasoning with progressively more complicated problems taking advantage of behavioral work on intuitive studies we organize the tests into levels and blocks each one corresponding to a core principle of intuitive physics and each raising its particular machine vision challenge our typology of problems is presented in table it is organized in two levels at the level the problems only deal with properties and movement characteristics of single objects at the level the problems involve interactions between objects in the first level we define blocks as follows the first two are related to the conservation through time of intrinsic properties of objects object permanence already discussed corresponds to the fact that objects continuously exist through time and do not pop in or out of existence this turns into the computational challenge of tracking objects through occlusion the second block shape constancy describes the tendancy of rigid objects to preserve their shape through time this principle is more challenging than the preceding one because even rigid objects undergo a change in appearance due to other factors illumination distance viewpoint partial occlusion the other three blocks relate to object s movement through time and the conservation laws which govern these movements for rigid inanimate macroscopic objects these principles map into progressively more challenging problems of trajectory prediction regarding the interactions between objects level we also define blocs in increasing order of complexity the first two test for very basic principles solidity states that two objects can not occupy the same physical space causality states that object s interactions can only occur through physical contact this is of course not true for gravitational and electromagnetic forces but still the principle of contact is deeply entrenched in human perception further forces acting at a distance would have limited practical value for many applications another reason to leave them for further extensions the last three blocks correspond to modes of interactions between objects through contact elastic collision support and containment all of these blocks raise difficult challenges for action planning and some of them also for correctly describing a scene or an event using language causality support containment the intphys benchmark we present the first release of intphys a benchmark designed to address both the engineering and evaluation challenges for intuitive physics in vision systems this first release is focused on unsupervised learning and tests only the first block of our hierarchy of problems table list of the conceptual blocks of the intuitive physics framework block name object permanence shape constancy continuity energy momentum gravity solidity causality physical principles computational challenge objects objects attributes objects don t pop in and out of existence object tracking objects keep their shapes object tracking trajectories of objects are continuous object trajectories constant kinetic energy and momentum object trajectories downwards gravitational field predicting objects trajectories relations interactions two objects can not occupy the same space predicting objects mechanical interactions require description mass elastic collisions temporal proximity objects keep their mass conservation of system s momentum planning support containment gravity 
solidity polygon of support solidity continuity event description solving the shell ject permanence future releases will include more of the blocks of table the benchmark consists of three components a training set containing only physically possible events involving simple inanimate objects moving and interacting in a virtual environment a dev and test set containing both physically possible and physically impossible videos carefully matched in tuples as described above an evaluation software we describe these three components as well as the results of humans plausibility judgments on the test set which can serve as a reference for algorithms modeling human perception the training set the training set has been constructed using unreal engine it contains a large variety of objects interacting one with another occluders textures etc see figure for some examples it is composed of videos of possible events around seconds each at totalling hours of videos each video is delivered as stacks of raw image x pixels totalling of uncompressed data we also release the source code for data generation allowing users to generate a larger training set if desired even though the spirit of intphys is the unsupervised learning of intuitive physics we do provide additional information which may help the learner the first one is the depth field for each image this is not unreasonable given that in infants stereo vision and motion cues could provide an approximation of this information we also deliver object instance segmentation masks given that this information it is probably not available as such in infants we provide it only in the training set not in the test set for pretraining purposes figure examples of frames from the training set the dev and test sets this section describes the dev and test sets for block object permanence the design of these dev and test sets follow the general structure of matched sets described in section as for parametric complexity we vary the number of objects or the presence and absence of occluder s and the complexity of the movement static dynamic and dynamic in the static case the objects do not move in the dynamic case they bounce or roll from left to right or right to left in both these types of events one occluder may be present on the scene and objects may sometimes pop into existence a event or disappear suddenly these impossible events occur behind the occluder when it is present or in full view otherwise in dynamic events illustrated in supplementary material figure two occluders are present and the existence of objects may change twice for example one object may be present on the scene at first then disappear after going behind the first occluder later reappearing behind the ond occluder dynamic events were designed to prevent systems from detecting inconsistencies merely by comparing the number of objects visible at the beginning and at the end of the movie matched sets contain four videos two possible events and and two impossible events and in total the block test set contains types of movies objects occlusions and types of movements the dev set is instantiated by different renderings of these scenarios objects positions shapes trajectories resulting in movies the test set is instantiated by different renderings of these scenarios for a total of movies and uses different objects textures motions etc all of the objects and textures of the dev and test sets are present in the training set the purpose of the dev set released in intphys is to help in the selection of 
an appropriate plausibility score and in the comparison of various architectures hyperparameters but it should not serve to train the model s parameters this should be done only with the training set this is why the dev set is kept intentionally small the test set has more statistical power and enables a fine grained evaluation of the results across the different movie subtypes evaluation software for each movie the model should issue a scalar plausibility score this number together with the movie id is then fed to the evaluation software which outputs two tables of results one for the absolute score and the other for the relative score the evaluation software is provided for the dev set but not the test set for evaluating on the test set participants are invited to submit their system and results see and their results will be registered and on the website leaderboard human judgments we presented the videos from the test set block to human participants using amazon mechanical turk the experiment and human judgements results are detailed in supplementary section baseline systems in this section we present two baseline systems which attempt to learn intuitive physics in an unsupervised setting using only the possible movies of the training set the two baselines consist in training deep neural networks with a future frame prediction objective based on the literature on next frame prediction we propose two neural network models predicting a future frame given a set of current frames our first model has a cnn structure and the second is a conditional generative adversarial network gan with a similar structure as dcgan for both model architectures we investigate two different training procedures in the first we train models to predict images with a prediction span of frames in the second we predict images with a prediction span of frames preliminary work with predictions at the pixel level revealed that our models failed at predicting convincing object motions especially for small objects on a rich background for this reason we switched to computing predictions at a higher level using object masks we use the metadata provided in the benchmark training set to train a semantic mask deep neural network dnn this dnn uses a pretrained on imagenet to extract features from the image from which a deconvolution network is trained to predict the semantic mask which only distinguished three types of entities background occluders and objects we then use this mask as input to a prediction component which predicts future masks based on past ones to evaluate these models on our benchmark our system needs to output a plausibility score for each movie for this we compute the prediction loss along the movie given past frames a plausibility score for the frame ft can be derived by comparing ft with the prediction like in we use the analogy with an agent running an internal simulation visual imagination here we assimilate a greater distance between prediction and observation with a lower plausibility in subsection we detail how we aggregate the scores of all frames into a plausibility score for the whole video models through out the movie our models take as input two frames and predict a future frame ftarget the prediction span is independent from the model s architecture and depends only on the triplets ftarget provided during the training phase our two architectures are trained either on a short term prediction task frames in the future or a long term prediction task frames intuitively prediction will be more 
robust but prediction will allow the model to grasp dependencies and deal with long occlusions cnn we use a pretrained on imagenet to extract features from input frames a deconvolution network is trained to predict the semantic mask of future frame ftarget conditioned to these features using a loss generative adversarial network as a second baseline we propose a conditional generative adversarial network gan that takes as input predicted semantic masks from frames and predicts the semantic mask of future frame ftarget in this setup the discriminator has to distinguish between a mask predicted from ftarget directly real and a mask predicted from past frames like in our model combines a conditional approach with a similar structure as of dcgan at test time we derive a plausibility score by computing the conditioned discriminator s score for every conditioned frame this is a novel approach based on the observation that the optimal discriminator d computes a score for x of pdata x d x pg x pdata x for events pdata therefore as long as pg d should be for events and d x for physical events x note that this is a strong assumption as there is no guarantee that the generator will ever have support at the part of the distribution corresponding to impossible videos all our models architectures as well as training procedures and samples of predicted semantic masks can be found in supplementary material tables and figure the code will be made available video plausibility score from forward models presented above we can compute a plausibility score for every frame ftarget conditioned to previous frames however because the temporal positions of impossible events are not given we must decide of a score for a video given the scores of all its conditioned frames an impossible event can be characterized by the presence of one or more impossible frame s conditioned to previous frames hence a natural approach to compute a video plausibility score is to take the minimum of all conditioned frames scores plaus v min ftarget plaus ftarget where v is the video and ftarget are all the frame triplets in v as given in the training phase results prediction the first training procedure is a prediction task it takes as input frames ft and predicts which we note ft in the following we train the two architectures presented above on prediction task and evaluate them on the test set for the relative classification task cnn encoderdecoder has an error rate of when impossible events are visible and when they are occluded the gan has an error rate of when visible and when occluded for the absolute classification task cnn has a la see eq of when impossible events are visible and when they are occluded the gan has a la of when visible and when occluded results are detailed in supplementary material tables we observe that our prediction models show good performances when the impossible events are visible especially on the relative classifications task however they perform poorly when the impossible events are occluded this is easily explained by the fact that they have a prediction span of frames which is usually lower than the occlusion time hence these models don t have enough memory to catch occluded impossible events prediction the second training procedure consists in a prediction task ft for the relative classification task cnn has an error rate of when impossible events are visible and when they are occluded the gan has an error rate of when visible and when occluded for the absolute classification task cnn has a la of when 
impossible events are visible and when they are occluded the gan has a la of when visible and when occluded results are detailed in supplementary material tables as expected models perform better than shortterm models on occluded impossible events moreover results on absolute classification task confirm that it is way more challenging than the relative classification task because some movies are more complex than others the average score of each quadruplet of movies may vary a lot it results in cases where one model returns a higher plausibility score to an impossible movie m imp easy from an easy quadruplet than to a possible movie m pos complex from a complex quadruplet aggregated model to grasp short and dependencies we aggregate the scores of and longterm models pagg v v v for the relative classification task cnn has an error rate of when impossible events are visible and when they are occluded the gan has an error rate of when visible and when occluded for the absolute classification task cnn has a la of when impossible events are visible and when they are occluded the gan has a la of when visible and when occluded results are detailed in supplementary material tables figure discussion we defined a general framework for measuring intuitive physics in artificial systems inspired by research on conceptual development in infants in this framework a system is asked to return a plausibility score for a video sequence showing physical interaction between objects the system s performance is assessed by measuring its ability to discriminate possible from impossible videos illustrating several types of physical principles in addition we present intphys a benchmark designed to test for the unsupervised learning of intuitive physics learning from positive examples only on the first release of this benchmark dedicated to object permanence we provide both human performance and proof of principle baseline systems humans show a generally good performance although some attentional limitations start to appear when using occlusion and several objects to track simultaneously the computational system shows that it is possible to obtain above chance performance using a mask prediction task although occlusion presents a particularly strong challenge the relative success of the mask prediction system compared to what can be expected from systems indicates that operating at a more abstract level is a worth pursuing strategy as new blocks of the benchmark are released see table the prediction task will become more and more difficult and progressively reach the level of scene comprehension achieved by humans references agrawal batra and parikh analyzing the behavior of visual question answering models arxiv preprint agrawal nair abbeel malik and levine learning to poke by poking experiential learning of intuitive physics corr antol agrawal lu mitchell batra lawrence zitnick and parikh vqa visual question answering in proceedings of the ieee international conference on computer vision pages baillargeon and carey core cognition and beyond in pauen editor early childhood development and later outcome chapter pages cambridge university press new york baillargeon and is the top object adequately supported by the bottom object young infants understanding of support relations cognitive development baillargeon needham and devos the development of young infants intuitions about support infant and child development battaglia pascanu lai jimenez rezende and kavukcuoglu interaction networks for learning about objects 
relations and physics in lee sugiyama luxburg guyon and garnett editors advances in neural information processing systems pages curran associates battaglia j hamrick and j tenenbaum simulation as an engine of physical scene understanding proceedings of the national academy of sciences of the united states of america bertinetto valmadre henriques vedaldi and torr siamese networks for object tracking in european conference on computer vision pages springer brockman cheung pettersson schneider schulman tang and zaremba openai gym corr carey the origin of concepts oxford series in cognitive development oxford university press oxford new york chang funkhouser guibas hanrahan huang li savarese savva song su et al shapenet an model repository arxiv preprint chang ullman torralba and j tenenbaum a compositional approach to learning physical dynamics arxiv preprint chen kuznetsova warren and choi imagecaptions a corpus of expressive descriptions in repetition in pages choy xu gwak chen and savarese a unified approach for single and object reconstruction arxiv preprint denton gross and fergus learning with generative adversarial networks corr ehrhardt monszpart mitra and vedaldi learning a physical predictor arxiv preprint farhadi hejrati sadeghi young rashtchian hockenmaier and forsyth every picture tells a story generating sentences from images computer pages finn goodfellow and levine unsupervised learning for physical interaction through video prediction in advances in neural information processing systems pages finn and levine deep visual foresight for planning robot motion in robotics and automation icra ieee international conference on pages ieee fraccaro kamronn paquet and winther a disentangled recognition and nonlinear dynamics model for unsupervised learning advances in neural information processing systems nips fragkiadaki agrawal levine and malik learning visual predictive models of physics for playing billiards iclr gao mao zhou huang wang and xu are you talking to a machine dataset and methods for multilingual image question in advances in neural information processing systems pages goodfellow mirza xu ozair courville and bengio generative adversarial nets in advances in neural information processing systems pages j hamrick pascanu vinyals ballard heess and battaglia decision making with physical models in deep neural networks he zhang ren and j sun deep residual learning for image recognition in proceedings of the ieee ference on computer vision and pattern recognition pages jiang liu zamir toderici laptev shah and sukthankar thumos challenge action recognition with a large number of classes johnson hariharan van der maaten zitnick and girshick clevr a diagnostic dataset for compositional language and elementary visual reasoning arxiv preprint kellman and spelke perception of partly occluded objects in infancy cognitive psychology kingma and ba adam a method for stochastic optimization corr krishna zhu groth johnson hata kravitz chen kalantidis li shamma et al visual genome connecting language and vision using crowdsourced dense image annotations international journal of computer vision kristan leonardis matas felsberg pflugfelder vojir and fernandez the visual object tracking challenge results springer oct krizhevsky sutskever and hinton imagenet classification with deep convolutional neural networks in pereira burges bottou and weinberger editors advances in neural information processing systems pages curran associates milan reid roth and schindler motchallenge towards a benchmark 
for multitarget tracking cs apr arxiv lerer gross and fergus learning physical intuition of block towers by example international conference on machine learning icml leslie and keeble do infants perceive causality cognition levine finn darrell and abbeel training of deep visuomotor policies journal of machine learning research levine pastor krizhevsky ibarz and quillen learning coordination for robotic grasping with deep learning and data collection the international journal of robotics research page li leonardis and fritz to fall or not to fall a visual approach to physical stability prediction arxiv preprint li leonardis and fritz visual stability prediction and its application to manipulation arxiv preprint lillicrap j hunt pritzel heess erez tassa silver and wierstra continuous control with deep reinforcement learning arxiv preprint lin maire belongie hays perona ramanan and zitnick microsoft coco mon objects in context in european conference on computer vision pages springer malinowski and fritz a approach to question answering about scenes based on uncertain input in advances in neural information processing systems pages manohar soundararajan raju goldgof kasturi and garofolo performance evaluation of object detection and tracking in video pages springer berlin heidelberg berlin heidelberg mathieu couprie and lecun deep video prediction beyond mean square error arxiv preprint mirza courville and bengio generalizable features from unsupervised learning iclr workshop submission mirza and osindero conditional generative adversarial nets corr mottaghi bagherinezhad rastegari and farhadi newtonian scene understanding unfolding the dynamics of objects in static images in proceedings of the ieee conference on computer vision and pattern recognition pages mottaghi rastegari gupta and farhadi what happens if learning to predict the effect of forces in images eccv oh guo lee lewis and singh actionconditional video prediction using deep networks in atari games in advances in neural information processing systems pages papadourakis and argyros multiple objects tracking in the presence of occlusions computer vision and image understanding pinheiro collobert and dollar learning to segment object candidates in advances in neural information processing systems pages pinker the language instinct harper pinto and gupta supersizing learning to grasp from tries and robot hours in robotics and automation icra ieee international conference on pages ieee pirsiavash ramanan and fowlkes globallyoptimal greedy algorithms for tracking a variable number of objects in computer vision and pattern recognition cvpr ieee conference on pages ieee pylyshyn and storm tracking multiple independent targets evidence for a parallel tracking mechanism spatial vision radford metz and chintala unsupervised representation learning with deep convolutional generative adversarial networks corr real shlens mazzocchi pan and vanhoucke a large data set for object detection in video arxiv preprint ren kiros and zemel exploring models and data for image question answering in advances in neural information processing systems pages rezende eslami mohamed battaglia jaderberg and heess unsupervised learning of structure from images in advances in neural information processing systems pages rolfs dambacher and cavanagh visual adaptation of the perception of causality current biology russakovsky deng su krause satheesh ma huang karpathy khosla bernstein berg and imagenet large scale visual recognition challenge international journal of 
computer vision saxe and carey the perception of causality in infancy acta psychologica saxena sun and ng learning scene structure from a single still image ieee transactions on pattern analysis and machine intelligence spelke kestenbaum simons and wein spatiotemporal continuity smoothness of motion and object identity in infancy british journal of developmental psychology tapaswi zhu stiefelhagen torralba urtasun and fidler movieqa understanding stories in movies through in proceedings of the ieee conference on computer vision and pattern recognition pages watters tacchetti weber pascanu battaglia and zoran visual interaction networks arxiv june wright yang ganesh sastry and ma robust face recognition via sparse representation ieee transactions on pattern analysis and machine intelligence wu song khosla yu zhang tang and xiao shapenets a deep representation for volumetric shapes in proceedings of the ieee conference on computer vision and pattern recognition pages xu and carey infants metaphysics the case of numerical identity cognitive psychology young lai hodosh and hockenmaier from image descriptions to visual denotations new similarity metrics for semantic inference over event descriptions transactions of the association for computational linguistics yu park berg and berg visual madlibs fill in the blank description generation and question answering in proceedings of the ieee international conference on computer vision pages zhang goyal batra and parikh yin and yang balancing and answering binary visual questions in proceedings of the ieee conference on computer vision and pattern recognition pages zhang wu zhang freeman and j tenenbaum a comparative evaluation of approximate probabilistic simulation and deep neural networks as accounts of human physical scene understanding cogsci zitnick and parikh bringing semantics into focus using visual abstraction in proceedings of the ieee conference on computer vision and pattern recognition pages supplementary material framework details ties was reversed presumably because participants started using heuristics such as checking that the number of objects at the beginning is the same as at the end and therefore missed the intermediate disappearance of an object these results suggest that human participants are not responding according to the gold standard laws of physics due to limitations in attentional capacity and this even though the number of objects to track is below the theoretical limit of objects the performance of human observers can thus serve as a reference besides ground truth especially for systems intended to model human perception models and training procedure figure illustration of the dynamic condition in the two possible movies green arrows the number of objects remains constant despite the occlusion in the two impossible movies red arrows the number of objects changes temporarily goes from to to or from to to detailed models see tables for models architectures and figure for samples of predicted semantic masks the code will be made available on https training procedure human judgement experiment we presented the videos from the test set block to human participants using amazon mechanical turk participants were first presented examples of possible scenes from the training set some simple some more complex they were told that some of the test movies were incorrect or corrupted in that they showed events that could not possibly take place in the real world without specifying how participants were each presented with randomly 
selected videos and labeled them as possible or impossible they completed the task in about minutes and were paid a response was counted as an error when a possible movie was classified as impossible or vice versa a total of persons participated but for of them the data were discarded because they failed to respond correctly to the easiest condition static one object visible a mock sample of the amt test is available in http physics experiment the average error rates were computed across condition number of objects and visibility for each remaining participant and are shown in table the overall error rate was rather low but in general observers missed violations more often when the scene was occluded there was an increase in error going from static to dynamic and from dynamic to dynamic but this pattern was only consistently observed in the occluded condition for visible scenario the dynamic appeared more difficult than the dynamic this was probably due to the fact that when objects are visible the dynamic impossible scenarios contain two local discontinuities and are therefore easier to spot than when one discontinuity only is present when the discontinuities occurred behind the occluder the pattern of we separate of the training dataset to control the overfitting of our forward predictions all our models are trained using adam for the cnn we use adam s default parameters and stop the training after one epoch for the gan we use the same parameters as in we set the generator s learning rate to and discriminator s learning rate to on the prediction task we train the gan for epoch on the longterm prediction task we train it for epochs learning rate decays are set to and is set to for both generator and discriminator detailed baseline results table average error rate on plausibility judgments collected in humans using mturk for the intphys block test set this datapoint has been forced to be zero by our inclusion criterion visible occluded type of scene obj obj obj avg obj obj obj avg static dynamic violation dynamic violations avg figure results of our baselines in cases where the impossible event occurs in the open visible or behind an occluder occluded represents the losses lr see equation for the relative performance and la see equation for the absolute performance table mask predictor parameters bn stands for input frame x x first layers of pretrained frozen weights reshape x fc fc reshape x x upsamplingnearest x conv bn relu upsamplingnearest x conv bn relu upsamplingnearest x conv bn relu sigmoid target mask table cnn for forward prediction parameters bn stands for input frames x x x first layers of pretrained frozen weights applied to each frame reshape x fc fc reshape x x upsamplingnearest x conv bn relu upsamplingnearest x conv bn relu upsamplingnearest x conv bn relu sigmoid target mask table generator g parameters sfconv stands for spatial full convolution and bn stands for batchnormalization input masks x x x x conv bn relu x conv bn relu noise x conv bn relu unif x conv bn relu x conv bn relu stack input and noise x sfconv bn relu x sfconv bn relu x sfconv bn relu x sfconv bn relu x sfconv bn relu sigmoid target mask table discriminator d parameters bn stands for history input x x x x x reshape x x x x convolution strides bn leakyrelu x convolution strides bn leakyrelu x convolution strides bn leakyrelu x convolution strides bn leakyrelu x convolution strides bn leakyrelu layer sigmoid figure output examples of our semantic mask predictor from left to right input image ground 
truth semantic mask predicted semantic mask table detailed relative classification scores for the cnn with prediction span of visible occluded type of scene obj obj obj total obj obj obj total static dynamic violation dynamic violations total table detailed absolute classification scores for the cnn with prediction span of visible occluded type of scene obj obj obj total obj obj obj total static dynamic violation dynamic violations total table detailed relative classification scores for the gan with prediction span of visible occluded type of scene obj obj obj total obj obj obj total static dynamic violation dynamic violations total table detailed absolute classification scores for the gan with prediction span of visible occluded type of scene obj obj obj total obj obj obj total static dynamic violation dynamic violations total table detailed relative classification scores for the cnn with prediction span of visible occluded type of scene obj obj obj total obj obj obj total static dynamic violation dynamic violations total table detailed absolute classification scores for the cnn with prediction span of visible occluded type of scene obj obj obj total obj obj obj total static dynamic violation dynamic violations total table detailed relative classification scores for the gan with prediction span of visible occluded type of scene obj obj obj total obj obj obj total static dynamic violation dynamic violations total table detailed absolute classification scores for the gan with prediction span of visible occluded type of scene obj obj obj total obj obj obj total static dynamic violation dynamic violations total table detailed relative classification scores for the aggregation of cnn models with prediction spans of and visible occluded type of scene obj obj obj total obj obj obj total static dynamic violation dynamic violations total table detailed absolute classification scores for the aggregation of cnn models with prediction spans of and visible occluded type of scene obj obj obj total obj obj obj total static dynamic violation dynamic violations total table detailed relative classification scores for the aggregation of gan models with prediction spans of and visible occluded type of scene obj obj obj total obj obj obj total static dynamic violation dynamic violations total table detailed absolute classification scores for the aggregation of gan models with prediction spans of and visible occluded type of scene obj obj obj total obj obj obj total static dynamic violation dynamic violations total
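The relative and absolute scores reported in the tables above are produced by the evaluation software from one scalar plausibility value per movie, with movies grouped into matched quadruplets of two possible and two impossible events. The exact loss formulas L_r and L_a are given by equations not reproduced here, so the sketch below is only one plausible reading of those metrics, not the benchmark's official evaluation code: the quadruplet comparison rule, the 0.5 threshold for the absolute task, and the function names are all assumptions made for illustration. The per-frame-minimum aggregation, by contrast, is the rule described for the forward-prediction baselines.

```python
import numpy as np

def video_plausibility(frame_scores):
    # aggregate per-frame plausibility into a single video score by taking the
    # minimum over conditioned frames, as described for the prediction baselines
    return min(frame_scores)

def relative_error(quadruplets):
    # quadruplets: list of (possible_scores, impossible_scores), each a pair of
    # plausibility values from one matched set of two possible and two impossible
    # movies; a quadruplet counts as an error when the impossible movies are not
    # jointly rated less plausible (assumed comparison rule, not the official L_r)
    errors = sum(1 for possible, impossible in quadruplets
                 if sum(possible) <= sum(impossible))
    return errors / len(quadruplets)

def absolute_error(scores, labels, threshold=0.5):
    # scores: plausibility values; labels: 1 = possible, 0 = impossible;
    # a movie is classified possible when its score exceeds the threshold
    # (assumed decision rule, not the official L_a)
    preds = (np.asarray(scores) > threshold).astype(int)
    return float(np.mean(preds != np.asarray(labels)))
```

Under this reading, the relative task only asks for a correct ordering within each matched quadruplet, whereas the absolute task requires calibrated scores across movies of very different difficulty, which is consistent with the absolute errors in the tables being markedly higher.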
2
published as a conference paper at iclr w hen is l earn a c onvolutional f ilter e asy simon du carnegie mellon university ssdu jason lee university of southern california jasonlee to yuandong tian facebook ai research yuandong feb a bstract we analyze the convergence of stochastic gradient descent algorithm for learning a convolutional filter with rectified linear unit relu activation function our analysis does not rely on any specific form of the input distribution and our proofs only use the definition of relu in contrast with previous works that are restricted to standard gaussian input we show that stochastic gradient descent with random initialization can learn the convolutional filter in polynomial time and the convergence rate depends on the smoothness of the input distribution and the closeness of patches to the best of our knowledge this is the first recovery guarantee of algorithms for convolutional filter on input distributions our theory also justifies the learning rate strategy in deep neural networks while our focus is theoretical we also present experiments that justify our theoretical findings i ntroduction deep convolutional neural networks cnn have achieved the performance in many applications such as computer vision krizhevsky et natural language processing dauphin et and reinforcement learning applied in classic games like go silver et despite the highly nature of the objective function simple algorithms like stochastic gradient descent and its variants often train such networks successfully on the other hand the success of convolutional neural network remains elusive from an optimization perspective when the input distribution is not constrained existing results are mostly negative such as hardness of learning a neural network blum rivest or a convolutional filter brutzkus globerson recently shamir showed learning a simple fully connected neural network is hard for some specific input distributions these negative results suggest that in order to explain the empirical success of sgd for learning neural networks stronger assumptions on the input distribution are needed recently a line of research tian brutzkus globerson li yuan soltanolkotabi zhong et assumed the input distribution be standard gaussian n i and showed stochastic gradient descent is able to recover neural networks with relu activation in polynomial time one major issue of these analysis is that they rely on specialized analytic properties of the gaussian distribution section and thus can not be generalized to the case in which distributions fall into for general input distributions new techniques are needed in this paper we consider a simple architecture a convolution layer followed by a relu activation function and then average pooling formally we let x rd be an input sample an image we generate k patches from x each with size p z where the column is the patch generated by some known function zi zi x for a filter with size and stride zi x is the and i pixels since for convolutional filters we only need to focus on the patches instead of the input in the following definitions and theorems we will refer z as input and let z as the distribution of z x max x is the relu activation function published as a conference paper at iclr a b c relu label estimate input figure a architecture of the network we are considering given input x we extract its patches zi and send them to a shared weight vector the outputs are then sent to relu and then summed to yield the final label and its estimation b c two conditions we 
proposed for convergence we want the data to be b highly correlated and c concentrated more on the direction aligned with the ground truth vector f w z k w zi k see figure a for a graphical illustration such architectures have been used as the first layer of many works in computer vision lin et milletari et we address the realizable case where training data are generated from with some unknown teacher parameter under input distribution z consider the loss w z f w z f z we learn by stochastic gradient descent wt g wt where is the step size which may change over time and g wt is a random function where its expectation equals to the population gradient e g w w z the goal of our analysis is to understand the conditions where w if w is optimized under stochastic gradient descent in this setup our main contributions are as follows learnability of filters we show if the input patches are highly correlated section zi zj for some small then gradient descent and stochastic gradient descent with random initialization recovers the filter in polynomial furthermore strong correlations imply faster convergence to the best of our knowledge this is the first recovery guarantee of randomly initialized algorithms for learning filters even for the simplest network on input distribution answering an open problem in tian convergence rate we formally establish the connection between the smoothness of the input distribution and the convergence rate for filter weights recovery where the smoothness in our paper is defined as the ratio between the largest and the least eigenvalues of the second moment of the activation region section we show that a smoother input distribution leads to faster convergence and gaussian distribution is a special case that leads to the tightest bound this theoretical finding also justifies the twostage learning rate strategy proposed by he et szegedy et if the step size is allowed to change over time r elated w orks in recent years theorists have tried to explain the success of deep learning from different perspectives from optimization point of view optimizing neural network is a optimization note since in this paper we focus on continuous distribution over z our results do not conflict with previous negative results blum rivest brutzkus globerson whose constructions rely on discrete distributions published as a conference paper at iclr problem pioneered by ge et al a class of optimization problems that satisfy strict saddle property can be optimized by perturbed stochastic gradient descent in polynomial time jin et this motivates the research of studying the landscape of neural networks soltanolkotabi et kawaguchi choromanska et hardt ma haeffele vidal mei et freeman bruna safran shamir zhou feng nguyen hein however these results can not be directly applied to analyzing the convergence of methods for relu activated neural networks from learning theory point of view it is well known that training a neural network is hard in the worst cases blum rivest livni et et b and recently shamir showed either niceness of the target function or of the input distribution alone is sufficient for optimization algorithms used in practice to succeed with some additional assumptions many works tried to design algorithms that provably learn a neural network with polynomial time and sample complexity goel et zhang et sedghi anandkumar janzamin et gautier et goel klivans however these algorithms are tailored for certain architecture and can not explain why stochastic gradient based optimization algorithms 
work well in practice focusing on algorithms a line of research analyzed the behavior of stochastic gradient descent for gaussian input distribution tian showed population gradient descent is able to find the true weight vector with random initialization for model brutzkus globerson showed population gradient descent recovers the true weights of a convolution filter with input in polynomial time li yuan showed sgd can recover the true weights of a resnet model with relu activation under the assumption that the spectral norm of the true weights is bounded by a small constant all the methods use explicit formulas for gaussian input which enable them to apply trigonometric inequalities to derive the convergence with the same gaussian assumption soltanolkotabi shows that the true weights can be exactly recovered by projected gradient descent with enough samples in linear time if the number of inputs is less than the dimension of the weights other approaches combine tensor approaches with assumptions of input distribution zhong et al proved that with sufficiently good initialization which can be implemented by tensor method gradient descent can find the true weights of a fully connected neural network however their approach works with known input distributions soltanolkotabi used gaussian width definition of soltanolkotabi for concentrations and his approach can not be directly extended to learning a convolutional filter in this paper we adopt a different approach that only relies on the definition of relu we show as long as the input distribution satisfies weak smoothness assumptions we are able to find the true weights by sgd in polynomial time using our conclusions we can justify the effectiveness of large amounts of data which may eliminate saddle points and adaptive learning rates used by he et al szegedy et al etc o rganization this paper is organized as follows in section we analyze the simplest model where we state our key observation and establish the connection between smoothness and convergence rate in section we discuss the performance of stochastic gradient descent for learning a convolutional filter we provide empirical illustrations in section and conclude in section we place most of our detailed proofs in the appendix n otations let denote the euclidean norm of a vector for a matrix a we use a to denote its largest singular value and a its smallest singular value note if a is a positive semidefinite matrix a and a represent the largest and smallest eigenvalues of a respectively let o and denote the standard and notations that hide absolute gradient descent is not guaranteed to converge to a local minima in polynomial time du et lee et published as a conference paper at iclr s w b a l l s w s w s w figure a the four regions considered in our analysis b illustration of l and defined in definition and assumption constants we assume the gradient function is uniformly bounded there exists b such that kg w b this condition is satisfied as long as patches w and noise are all bounded warm u p a nalyzing o ne ayer o ne euron m odel before diving into the convolutional filter we first analyze the special case for k which is equivalent to the architecture the analysis in this simple case will give us insights for the fully general case for the ease of presentation we define following two events and corresponding second moments s w z w z z s w z w z z aw e zz i s w aw e zz i s w where i is the indicator function intuitively s w is the joint activation region of w and and s w is the joint 
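Before stating those quantities, a minimal simulation sketch of the one-layer one-neuron dynamics analyzed above may help fix ideas. This is not the authors' code: the Gaussian input, the step size, and the sign-flip used to satisfy the initialization requirement (an angle to the teacher below pi/2) are illustrative choices, and the teacher-generated labels follow the realizable setting of the model.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
d = 20
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)          # unit-norm teacher neuron

# random initialization; flipping the sign keeps the initial angle to the
# teacher below pi/2 (illustrative way to meet the initialization condition)
w = rng.standard_normal(d)
if w @ w_star < 0:
    w = -w

eta = 0.05
for t in range(20000):
    z = rng.standard_normal(d)            # smooth, non-degenerate input distribution
    y = relu(w_star @ z)                  # realizable label from the teacher
    # stochastic gradient of 0.5 * (relu(w.z) - y)^2, with ReLU derivative 1{w.z > 0}
    g = (relu(w @ z) - y) * float(w @ z > 0) * z
    w = w - eta * g

cos_angle = np.clip(w @ w_star / np.linalg.norm(w), -1.0, 1.0)
print(f"angle to teacher after training: {np.arccos(cos_angle):.4f} rad")
```

With a smooth input distribution the angle shrinks steadily toward zero, while a degenerate input (for example, patches supported on a low-dimensional subspace) can stall the iterates, matching the discussion above.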
activation region of w and see figure a for the graphical illustration with some simple algebra we can derive the population gradient e w z aw w aw one key observation is we can write the inner product w w i as the sum of two terms lemma this observation directly leads to the following theorem theorem suppose for any with e zz i s w and the initialization satisfies then gradient descent algorithm recovers the first assumption is about the of input distribution for one case that the assumption fails is that the input distribution is supported on a space or degenerated the second assumption on the initialization is to ensure that gradient descent does not converge to w at which the gradient is undefined this is a general convergence theorem that holds for a wide class of input distribution and initialization points in particular it includes theorem of tian as a special case if the input distribution is degenerate there are holes in the input space the gradient descent may stuck around saddle points and we believe more data are needed to facilitate the optimization procedure this is also consistent with empirical evidence in which more data are helpful for optimization c onvergence r ate of o ne ayer o ne euron m odel in the previous section we showed if the distribution is regular and the weights are initialized appropriately gradient descent recovers the true weights when it converges in practice we also want to know how many iterations are needed to characterize the convergence rate we need some quantitative assumptions we note that different set of assumptions will lead to a different rate and ours is only one possible choice in this paper we use the following quantities published as a conference paper at iclr definition the eigenvalue values of the second moment on intersection of two half spaces for define min aw l max aw w w these two conditions quantitatively characterize the angular smoothness of the input distribution for a given angle if the difference between and l is large then there is one direction has large probability mass and one direction has small probability mass meaning the input distribution is not smooth on the other hand if and l are close then all directions have similar probability mass which means the input distribution is smooth the smoothest input distributions are rotationally invariant distributions standard gaussian which have l for analogy we can think of l as lipschitz constant of the gradient and as the strong convexity parameter in the optimization literature but here we also allow they change with the angle also observe that when l because the intersection has measure and both and l are monotonically decreasing our next assumption is on the growth of aw note that when w then aw because the intersection between w and has measure also aw grows as the angle between w and becomes larger in the following we assume the operator norm of aw increases smoothly with respect to the angle the intuition is that as long as input distribution bounded probability density with respect to the angle the operator norm of aw is bounded we show in theorem that for rotational invariant distribution and in theorem that p for standard gaussian distribution assumption we assume there exists that for maxw w aw now we are ready to state the convergence rate theorem the initialization satisfies denote suppose kw t arcsin t then if step size is set as l we have for kwt note l increases as decreases so we can choose a constant step size this theorem implies that we can find the 
solution of in l l o iterations it also suggests a direct relation between the smoothness of the log distribution and the convergence rate for smooth distribution where and l are close and is small then l is relatively small and we need fewer iterations on the other hand if l or is much larger than we will need more iterations we verify this intuition in section if we are able to choose the step sizes adaptively l like using t proposed by lin xiao we may improve the computational complexity to l o log this justifies the use of learning rate strategy proposed by he et al szegedy et al where at the beginning we need to choose learning to be small because l is small and later we can choose a large learning rate because as the angle between wt and becomes smaller l becomes bigger the theorem requires the initialization satisfying which can be achieved by random initialization with constant success probability see section for a detailed discussion m ain r esults for l earning a c onvolutional f ilter in this section we generalize ideas from the previous section to analyze the convolutional filter first for given w and we define four events that divide the input space of each patch zi each event published as a conference paper at iclr corresponds to a different activation region induced by w and similar to s w i zi w zi zi s w i zi w zi zi s i zi w zi zi s i zi w zi zi please check figure a again for illustration for the ease of presentation we also define the average over all patches in each region zs w k k zi i s w i zs w zi i s w i k k zs k zi i s i k next we generalize the smoothness conditions analogue to definition and assumption here the smoothness is defined over the average of patches assumption for define h e zs w z s w w w h l max e zs w z s w min w w h we assume for all maxw w e zs w z for s w some the main difference between the simple network and the convolution filter is two patches may appear in different regions for a given sample there may exists patch zi and zj such that zi s w i and zj s w j and their interaction plays an important role in the convergence of stochastic gradient descent here we assume the second moment of this interaction also grows smoothly with respect to the angle assumption we assume there exists lcross such that h h max e zs w z e zs w z s w s w w h e zs w z lcross s first note if then zs w and zs has measure and this assumption models the growth of next note this lcross represents the closeness of patches if zi and zj are very similar then the joint probability density of zi s w i and zj s w j is small which implies lcross is small in the extreme setting zk we have lcross because in this case the events zi s w i zj s w j zi s w i zj s j and zi s w i zj s j all have measure now we are ready to present our result on learning a convolutional filter by gradient descent theorem the initialization satisfies kwt which satisfies arcsin k l cross and denote then if we choose kwt we have for t and arcsin k kwt our theorem suggests if the initialization satisfies we obtain linear convergence rate in section we give a concrete example showing closeness of patches implies large and small lcross similar to theorem if the step size is chosen so that published as a conference paper at iclr ls w log in o ls w w iterations we can find the solution of and the proof is also similar to that of theorem in practice we never get a true population gradient but only stochastic gradient g w equation the following theorem shows sgd also recovers the underlying filter theorem let 
denote for sufficiently small if arcsin and b k then we have in t o iterations with probability at log k kw cross least we have kwt k unlike the vanilla gradient descent case here the convergence rate depends on instead of this is because of the randomness in sgd and we need a more robust initialization we choose to be the average of and for the ease of presentation as will be apparent in the proof we only require not very close to the proof relies on constructing a martingale and use inequality and this idea has been previously used by ge et al w hat distribution is easy for sgd to learn a convolutional filter different from model here we also requires the lipschitz constant for closeness lcross to be relatively small and to be relatively large a natural question is what input distributions satisfy this condition here we give an example we show if patches are close to each other the input distribution has small probability mass around the decision boundary then the assumption in theorem is satisfied see figure b c for the graphical illustrations pk theorem denote zavg zi suppose all patches have unit norm and for all for all i zi zavg further assume there exists l such that for any and for all zi h ii h ii p zi p zi then we have cos and lcross where e zz i z z analogue to definition several comments are in sequel we view as a quantitative measure of the closeness between different patches small means they are similar this decreasing as a function of and note when h bound is monotonically e zs w zs w which recovers definition for the upper bond on lcross represents the bound of density around the decision boundary for example if p zi in a small neighborhood around say radius we have p zi this assumption is usually satisfied in real world examples like images because the image patches are not usually close to the decision boundary for example in computer vision the local image patches often form clusters and is not evenly distributed over the appearance space therefore if we use linear classifier to separate their cluster centers from the rest of the clusters near the decision boundary the probability mass should be very low t he p ower of r andom i nitialization for model we need initialization and for the convolution filter we need a stronger initialization cos the following theorem this is condition can be relaxed to the norm and the angle of each patch are independent and the norm of each pair is independent of others published as a conference paper at iclr shows with uniformly random initialization we have constant probability to obtain a good initialization note with this theorem at hand we can boost the success probability to arbitrary close to by random restarts the proof is similar to tian theorem a ball with radius k so that q if we uniformly sample from p then with probability at least we have to apply this general initialization theorem to our convolution filter case we can choose cos therefore with some simple algebra we have the following corollary corollary suppose cos then if is uniformly sampled from a ball with center p and radius k cos we have with probability at least cos the assumption of this corollary is satisfied if the patches are close to each other as discussed in the previous section e xperiments in this section we use simulations to verify our theoretical findings we first test how the smoothness affect the convergence rate in model described in section to construct input distribution with different l and definition and assumption we fix the patch to have unit 
norm and use a mixture of truncated gaussian distribution to model on the angle around and around the specifically the probability density of is sampled from n i n i note by definitions of l and if the probability mass is centered around so the distribution is very spiky and l and will be large on the other hand if then input distribution is close to the rotation invariant distribution and l and will be small figure verifies our prediction where we fix the initialization and step size next we test how the closeness of patches affect the convergence rate in the convolution setting we e using the above model with then generate each unit norm first generate a single patch z e z e is sampled from z e n i figure shows as zi whose angle with z variance between patches becomes smaller we obtain faster convergence rate which coincides with theorem we also test whether sgd can learn a filter on real world data here we choose mnist data and generate labels using two filters one is random filter where each entry is sampled from a standard gaussian distribution figure and the other is a gabor filter figure figure and figure show convergence rates of sgd with different initializations here better initializations give faster rates which coincides our theory note that here we report the relative loss logarithm of squared error divided by the square of mean of data points instead of the difference between learned filter and true filter because we found sgd often can not converge to the exact filter but rather a filter with near zero loss we believe this is because the data are approximately lying in a low dimensional manifold in which the learned filter and the true filter are equivalent to justify this conjecture we try to interpolate the learned filter and the true filter linearly and the result filter has similar low loss figure lastly we visualize the true filters and the learned filters in figure and we can see that the they have similar patterns c onclusions and f uture w orks in this paper we provide the first recovery guarantee of stochastic gradient descent algorithm with random initialization for learning a convolution filter when the input distribution is not gaussian our analyses only used the definition of relu and some mild structural assumptions on the input distribution here we list some future directions one possibility is to extend our result to deeper and wider architectures even for fullyconnected network the convergence of stochastic gradient descent with random initialization is not known existing results either requires sufficiently good initialization zhong et or log log published as a conference paper at iclr epochs epochs figure convergence rates of sgd a with different smoothness where larger is smoother b with different closeness of patches where smaller is closer c for a learning a random filter with different initialization on mnist data d for a learning a gabor filter with different initialization on mnist data a random generated target filters b gabor filters figure visualization of true and learned filters for each pair the left one is the underlying truth and the right is the filter learned by sgd published as a conference paper at iclr relies on special architecture li yuan however we believe the insights from this paper is helpful to understand the behaviors of algorithms in these settings another direction is to consider the agnostic setting where the label is not equal to the output of a neural network this will lead to different dynamics of stochastic gradient 
descent and we may need to analyze the robustness of the optimization procedures this problem is also related to the expressiveness of the neural network raghu et where if the underlying function is not equal bot is close to a neural network we believe our analysis can be extend to this setting acknowledgment the authors would like to thank hanzhang hu tengyu ma yuanzhi li jialei wang and kai zhong for useful discussions r eferences avrim blum and ronald l rivest training a neural network is in advances in neural information processing systems pp alon brutzkus and amir globerson globally optimal gradient descent for a convnet with gaussian inputs arxiv preprint anna choromanska mikael henaff michael mathieu ben arous and yann lecun the loss surfaces of multilayer networks in artificial intelligence and statistics pp yann n dauphin angela fan michael auli and david grangier language modeling with gated convolutional networks arxiv preprint simon s du chi jin jason d lee michael i jordan barnabas poczos and aarti singh gradient descent can take exponential time to escape saddle points arxiv preprint c daniel freeman and joan bruna topology and geometry of network optimization arxiv preprint antoine gautier quynh n nguyen and matthias hein globally optimal training of generalized polynomial neural networks with nonlinear spectral methods in advances in neural information processing systems pp rong ge furong huang chi jin and yang yuan escaping from saddle pointsonline stochastic gradient for tensor decomposition in proceedings of the conference on learning theory pp surbhi goel and adam klivans learning neural networks in polynomial time arxiv preprint surbhi goel varun kanade adam klivans and justin thaler reliably learning the relu in polynomial time arxiv preprint benjamin d haeffele and vidal global optimality in tensor factorization deep learning and beyond arxiv preprint moritz hardt and tengyu ma identity matters in deep learning arxiv preprint kaiming he xiangyu zhang shaoqing ren and jian sun deep residual learning for image recognition in proceedings of the ieee conference on computer vision and pattern recognition pp majid janzamin hanie sedghi and anima anandkumar beating the perils of guaranteed training of neural networks using tensor methods arxiv preprint chi jin rong ge praneeth netrapalli sham m kakade and michael i jordan how to escape saddle points efficiently arxiv preprint published as a conference paper at iclr kenji kawaguchi deep learning without poor local minima in advances in neural information processing systems pp alex krizhevsky ilya sutskever and geoffrey e hinton imagenet classification with deep convolutional neural networks in advances in neural information processing systems pp jason d lee max simchowitz michael i jordan and benjamin recht gradient descent only converges to minimizers in conference on learning theory pp yuanzhi li and yang yuan convergence analysis of neural networks with relu activation arxiv preprint min lin qiang chen and shuicheng yan network in network arxiv preprint qihang lin and lin xiao an adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization in international conference on machine learning pp roi livni shai and ohad shamir on the computational efficiency of training neural networks in advances in neural information processing systems pp song mei yu bai and andrea montanari the landscape of empirical risk for losses arxiv preprint fausto milletari nassir navab and ahmadi fully 
convolutional neural networks for volumetric medical image segmentation in vision fourth international conference on pp ieee quynh nguyen and matthias hein the loss surface of deep and wide neural networks arxiv preprint maithra raghu ben poole jon kleinberg surya ganguli and jascha on the expressive power of deep neural networks arxiv preprint itay safran and ohad shamir on the quality of the initial basin in overspecified neural networks in international conference on machine learning pp hanie sedghi and anima anandkumar provable methods for training neural networks with sparse connectivity arxiv preprint shai ohad shamir and shaked shammah failures of deep learning in international conference on machine learning pp shai ohad shamir and shaked shammah weight sharing is crucial to succesful optimization arxiv preprint ohad shamir hardness of learning neural networks arxiv preprint david silver aja huang chris j maddison arthur guez laurent sifre george van den driessche julian schrittwieser ioannis antonoglou veda panneershelvam marc lanctot et al mastering the game of go with deep neural networks and tree search nature training a single sigmoidal neuron is hard neural computation mahdi soltanolkotabi learning relus via gradient descent arxiv preprint mahdi soltanolkotabi adel javanmard and jason d lee theoretical insights into the optimization landscape of shallow neural networks arxiv preprint published as a conference paper at iclr christian szegedy sergey ioffe vincent vanhoucke and alexander a alemi and the impact of residual connections on learning in aaai pp yuandong tian an analytical formula of population gradient for relu network and its applications in convergence and critical point analysis arxiv preprint yuchen zhang jason d lee martin j wainwright and michael i jordan learning halfspaces and neural networks with random initialization arxiv preprint yuchen zhang jason d lee and michael i jordan neural networks are improperly learnable in polynomial time in international conference on machine learning pp kai zhong zhao song prateek jain peter l bartlett and inderjit s dhillon recovery guarantees for neural networks arxiv preprint pan zhou and jiashi feng the landscape of deep learning algorithms arxiv preprint published as a conference paper at iclr a p roofs and a dditional t heorems p roofs of the t heorem in s ection lemma w w i w aw w w aw and both terms are proof since aw and aw both the first term and one part of the second term w aw w are the other part of the second term is aw w z w z i w z z proof of theorem the assumption on the input distribution ensures when w aw and when w aw now when gradient descent converges we have w we have the following theorem by assumption since w and gradient descent only decreases function value we will not converge to w note that at any critical points w w i from lemma we have w aw w w aw w suppose we are converging to a critical point w there are two cases if w then we have w aw w which contradicts with eqn if w without loss of generality let w for some by the assumption we know aw now the second equation becomes w aw w aw which contradicts with eqn therefore we have w proof of theorem our proof relies on the following simple but crucial observation if kw then kw w arcsin we denote wt and by the observation we have recall the gradient descent dynamics wt wt wt e zz i wt z z wt e w z z wt consider the squared distance to the optimal weight kwt wt e zz i wt z z wt e w z z wt e zz i wt z z wt e w z z wt by our analysis in the previous 
section the second term is smaller than wt e zz i wt z z wt kwt published as a conference paper at iclr where we have used our assumption on the angle for the third term we expand it as e zz i wt z z wt e w z z wt e zz i wt z z wt e zz i wt z z wt e w z z wt e w z z wt kwt kwt k kwt kwt kwt kwt kwt k kwt kwt k kw therefore in summary l kwt kwt kwt where the first inequality is by our assumption of the step size and second is because and is monotonically decreasing theorem rotational invariant distribution for any unit norm rotational invariant input distribution we have proof of theorem without loss of generality we only need to focus on the plane spanned by w and and suppose then z sin cos cos e zz i s w cos sin sin sin cos it has two eigenvalues sin sin and therefore maxw w aw for theorem if z n i then p proof note in previous theorem we can integrate h angle i and radius separately then multiply them together for gaussian distribution we have e the result follows p roofs of t heorems in s ection proof of theorem the proof is very similar to theorem for two events we use as a shorthand for and as a shorthand for denote wt first note with some routine algebra we can write the gradient as wt d d x i j n o w zi z j i s w i s w j published as a conference paper at iclr d d x i j d d x i j d d x i j n o zi z j i s w i s w j s w i s w j n o zi z j i s w i s w j n o zi z j i s w i s j s w i s w j s w i s j we first examine the inner product between the gradient and w w w i d d w e x d d w e x d d w e x i j i j i j d d w e x d d w e x d d w e x i j i j i j n o zi zj i s w i s w j w n o zi zj i s w i s w j s w i s w j s w i s w j w n o zi zj i s w i s j s w i s w j s w i s j n o zi zj i s w i s w j w n o zi zj i s w i s w j w n o zi zj i s w i s j s w i s w j s w i s j kw d d kw e x i j n o zi zj i s w i s w j d d kw e x op i j d d x i j kw n o zi zj i s w i s j op n o zi zj i s w i s w j op d d kw e x i j d d kw e x d d x i j n o zi zj i s w i s j op n o zi zj i s w i s j op op n o zi zj i s w i s w j i j published as a conference paper at iclr d d x i j kwt n o zi zj i s w i s w j d d x op i j kwt kwt kwt kwt kwt where the first inequality we used the definitions of the regions the second inequality we used the definition of operator norm the third inequality we used the fact kwt the fourth inequality we used the definition of lcross and the fifth inequality we used sin for any next we can upper bound the norm of the gradient using similar argument wt kwt kwt k kwt l kwt therefore using the dynamics of gradient descent putting the above two bounds together we have l kwt kwt kwt where the last step we have used our choice of and the proof of theorem consists of two parts first we show if is chosen properly and t is not to big then for all t t with high probability the iterates stat in a neighborhood of next conditioning on this we derive the rate lemma denote sin given kw sin number then if the step size l t log t l b with l then with probability at least for all t t we have kwt k proof of lemma let g wt e wt we denote ft the sigmaalgebra generated by and define the event consider ct t k h i e ict h i kwt wt ict l kwt b ict where the inequality follows by our analysis of gradient descent together with definition of ct and e define gt kwt op of iterations t and failure probability denote arcsin satisfies n o zi zj i s w i s j published as a conference paper at iclr by our analysis above we have e ict gt ict gt where the last inequality is because ct is a subset of therefore gt is a and we may 
apply inequality before that we need to bound the difference between gt ict and its expectation note h i gt e gt kwt e kwt h i wt wt e l kwt b l b dt therefore for all t t t x t x l b l b l b where the first inequality we used the second we used t t and the third we used our assumption on let us bound at t step the iterate goes out of the region h n oi p ct ct h n oi ct t ct ct exp t where the second inequality we used inequality the last one we used our assumption of therefore for all t t we have with probability at least ct happens now we can derive the rate lemma denote sin given kw sin number of iterations t and failure probability denote arcsin k then if the step size satisfies l t log t l b l log published as a conference paper at iclr l with l then we have with probability kwt proof of lemma we use the same notations in the proof of lemma by the analysis of lemma we know h i e ict kwt b ict therefore we have t e kwt ict now we can bound the failure probability p kwt kwt c ct kwt ict kwt ict i h e kwt ict t the first inequality we used the last assumption the second inequality we used the probability of an event is upper bound by any superset of this event the third one we used lemma and the union bound the fourth one we used markov s inequality now we can specify the t and and derive the convergence rate of sgd for learning a convolution filter proof of theorem with the choice of and t it is straightforward to check they satisfies conditions in lemma proof of theorem we first prove the lower bound of k k x x zi i s w i zi i s w i k x zi i s w i z kz e zz ke k x ke k x k x zi i s w i z zi i s w i z zi i s w i z z published as a conference paper at iclr k x zi i s w i i s w k x zi i s w i i s w k x k e zz ke zi i s w i z ke k x zi i s w i z z note because zi s have unit norm and by law of cosines kz zi i s w i z kop cos therefore d d x x zi i s w i zi i s w i k cos now we prove the upper bound of lcross notice that h n oi h n oi kzi kzj i s w i s w j e zi z j i s w i s w j z z dp zi dp s w i s w j if then by our assumption we have z z z dp zi dp s w j s w i s w j dp zj on the other hand if let be the angle between and zj we have z z z z dp zi dp dp zi dp s w j s w i s w i h therefore e zs w z using similar arguments we can show s w e zs w zs and e zs w zs proof of theorem we use the same argument by tian let rinit be the initialization radius the failure probability is lower bounded cos rinit rinit rinit vk rinit therefore rinit cos maximizes this lower bound plugging this optimizer in and using formula for the volume of the euclidean ball the failure probability is lower bounded by r cos cos where we used gautschi s inequality for the last step b a dditional e xperimental r esults figure show the loss of linear interpolation between the learned filter w and ground truth filter our interpolation has the form winter where is the interpolation ratio note that for all interpolation ratios the loss remains very low published as a conference paper at iclr log relative loss log relative loss gabor filter random filter interpolation ratio interpolation ratio figure loss of linear interpolation between learned filter and the true filter
2
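The appendix above analyzes gradient descent and SGD for learning a single ReLU unit / convolutional filter under rotationally invariant (e.g. Gaussian) inputs, tracking the distance and angle between the iterate and the true filter. The following Python snippet is a minimal empirical sketch of that setting, not the paper's exact construction: the dimension, step size, initialization, and iteration count are illustrative choices, and the plain single-ReLU loss stands in for the paper's patch-averaged convolutional objective.

```python
import numpy as np

# Minimal empirical sketch of SGD for one ReLU unit with Gaussian input:
#   loss(w) = E_z[ (relu(w.z) - relu(w_star.z))^2 / 2 ],  z ~ N(0, I_d).
# Dimension, step size, initialization, and iteration count are illustrative
# choices, not the constants appearing in the theorems above.

rng = np.random.default_rng(0)
d = 20
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)

# start at an acute angle to w_star, as the convergence analysis assumes
w = w_star + 0.5 * rng.normal(size=d)

relu = lambda t: np.maximum(t, 0.0)
eta = 0.05

for t in range(20001):
    z = rng.normal(size=d)                     # one fresh sample -> stochastic gradient
    err = relu(w @ z) - relu(w_star @ z)
    grad = err * float(w @ z > 0) * z          # gradient of (1/2) err^2 with respect to w
    w -= eta * grad
    if t % 5000 == 0:
        cos = np.clip(w @ w_star / (np.linalg.norm(w) * np.linalg.norm(w_star)), -1.0, 1.0)
        print(f"iter {t:5d}  dist {np.linalg.norm(w - w_star):.4f}  angle {np.arccos(cos):.4f}")
```

Printing the distance and angle along the trajectory mirrors the quantities bounded in the proofs above; with a fresh sample per step the iterate is only expected to contract to a noise-dominated neighborhood of the true filter.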
ieee transactions on circuits and regular papers optimal tracking performance limitation of networked control systems with limited bandwidth and additive colored white gaussian noise feb guan chen gang feng and tao li paper studies optimal tracking performance issues for linear systems under networked control with limited bandwidth and additive colored white gaussian noise channel the tracking performance is measured by control input energy and the energy of the error signal between the output of the system and the reference signal with respect to a brownian motion random process this paper focuses on two kinds of network parameters the basic network and the additive colored white gaussian noise and studies the tracking performance limitation problem the best attainable tracking performance is obtained and the impact of limited bandwidth and additive colored white gaussian noise of the communication channel on the attainable tracking performance is revealed it is shown that the optimal tracking performance depends on nonminimum phase zeros gain at all frequencies and their directions unitary vector of the given plant as well as the limited bandwidth and additive colored white gaussian noise of the communication channel the simulation results are finally given to illustrate the theoretical results index control systems bandwidth additive colored white gaussian noise performance limitation i ntroduction m ore and more researchers are interested in networked control systems in the past decade please see for example and references therein most works focus on analysis and synthesis of networked control systems with quantization effects time delays bandwidth constraint data rate constraint data packet dropout in spite of the significant progress in those studies the more inspiring and challenging issues of control performance limitation under such network environment remain largely open guan and chen are with the department of control science and engineering huazhong university of science and technology wuhan china gang feng is with the department of mechanical and biomedical engineering city university of hong kong kowloon hong kong sar china tao li is with the college of electronics and information yangtze university jingzhou china corresponding author zhguan guan this work was supported in part by the national natural science foundation of china under grants and the doctoral foundation of ministry of education of china under grant and by a grant from the research grants council of the hong kong special administrative region china project no cityu performance limitations resulting from nonminimum phase nmp zeros and unstable poles of given systems have been known for a long time the issue has been attracting a growing amount of interest in the control community see for example the tracking performance achievable via feedback was studied in with respect to siso stable systems the result was extended to mimo unstable systems in and it was found that the minimal tracking error depends not only on the location of the system nonminimum phase zeros but also on how the input signal may interact with those zeros the angles between the input and zero directions optimal tracking and regulation control problems were studied in where objective functions of tracking error and regulated response defined by integral square measures are minimized jointly with the control effort and the latter is measured by the system input energy in the optimal tracking control problem was studied with both the forward and 
feedback channel disturbances the authors of investigated the regulation performance limitations of unstable phase simo and systems respectively however all these mentioned works have not taken into account the effects of networks which would make the study of the optimal performance limitation much more challenging networked control systems are ubiquitous in industry more and more control systems are operating over a network in recent years the research on the performance limitation of networked control systems attracts some attention for example the authors in studied the tracking performance of siso networked feedback systems by modeling the quantization error as a white noise the tracking performance of mimo systems with the additive white gaussian noise awgn was studied through and control schemes in the result was further generalized to other noisy channels with bandwidth limitation in where the optimal tracking performance is measured by the achievable minimal tracking error however it was showed in that in the optimal tracking problem in order to attain the minimal tracking error the control input of systems is often required to have an infinite energy this requirement can not be met in general in practice thus the control input energy of systems should be considered in the performance index to address this issue in this paper ieee transactions on circuits and regular papers we consider the optimal tracking problem in terms of both the tracking error energy and the control input energy and meanwhile we consider communication link over bandwidthlimited additive colored gaussian noise acgn channels which are more realistic models of communication link than those in in this paper we study optimal tracking performance issues pertaining to mimo feedback control systems the objective is to minimize the tracking error between the output and the reference signals of a feedback system under the constraint of control input energy the optimal tracking performance is attained by stabilizing compensators under a structure the tracking error is defined in an square error sense and the reference signals are considered as a brownian motion which can be roughly considered as the integral of a standard white noise the tracking performance index is given by the weighted sum between the power of the tracking error energy and the system input energy the rest of the the paper is organized as follows the problem formulation and preliminaries are given in section ii in section iii the main results of this paper are presented results of extensive simulation studies and discussions are shown to validate the theoretical results in section iv concluding remarks are made in section associated with s and for such a zero it is always true that h n s for some unitary vector on the other hand a complex number is said to be a pole of p s if p p if p is an unstable pole of p s p then equivalent statement is that p for some unitary vector in order to facilitate the subsequent proof we introduce two specific factorization for n s n s l s nm s s s and s q zand allpass factor q l s z have the form l s li s s s and h zi z i i li s ui uih i h s i ii p reliminaries consider the class of functions in f f f s analytic in kf lemma and can be found in lemma let f s f and denote f suppose that f s is conjugate symmetric f s f then f lemma consider a conjugate symmetric function f s suppose that f s is analytic and has no zero in and that log f s then provided that f r log f lemma let l and li be defined p by then for any z x 
the equality s n x zi zi zi li zi lnz zi holds for some s proof we assume that a is an allpass factor from lemma in for some y we have y pnz a z zi zi y zi nz i zi ai p nz h h h then we have y a y zi zi h z let l a s zi ai zi i nz p nz h x y then xl x zi zi zi i zi lnz zi therefore the proof is completed we begin by summarizing briefly the notations used throughout this paper for any complex number s we denote its complex conjugate by sh the expectation operator is denoted by e respectively for any vector u we denote its conjugate transpose by uh and its euclidean norm by kuk for a matrix a we denote its conjugate transpose by ah all the vectors and matrices involved in the sequel are assumed to have compatible dimensions and for simplicity their dimensions will be omitted let the open plane be denoted by s re s the open plane by s re s and the imaginary axis by define f f s measurable in kf kf kf then r is a hilbert space with h an inner product hf gi tr f g next define as a subspace of functions in with functions f s analytic inr f f s analytic in c kf and the kf orthogonal complement of in ras f f s analyticin c kf kf kf thus for any f and g hf gi we use the same notation to denote the corresponding norm finally we denote by the class of all stable proper rational transfer function matrices we introduce a factorization formula for phase systems for the rational transfer function matrix p let its right and left coprime factorizations be given by p n m where n m a complex number s c is said to be a zero of p s if h p s for some unitary vector where is called an output direction vector where are unitary vectors obtained by factorizing the zeros one at a time and ui are matrices which together with form a unitary matrix similarly and have same definition and nature likewise has the allpass factorization s s where s is an allpass factor and s is the minimum phase part s one particular allpass factor is qnof p s and given by s h i s i iii t racking performance limitations consider the control feedback loop shown in where the plant model p is a rational transfer function matrix the channel model is the acgn channel where n nl with ni i l being a zeromean stationary white gaussian noise process and spectral density when l note the reference signal r is a vector of the step signal generated by passing a standard white noise through an integrator which can ieee transactions on circuits and regular papers be roughly considered as a brownian motion process and emulate the step signal in the deterministic setting therefore the formulation resembles the tracking of a deterministic step signal for the channel i we denote the spectral density of wi by when l note it is assumed that the system reference inputs in different channels are independent and that the reference input and the noise are uncorrelated denotes the fig feedback control over bandwidth limited acgn channels the optimal performance attainable by all possible stabilizing controllers is j inf theorem let and n be uncorrelated white gaussian signals suppose that p s po s for some integer n such that po s is proper and has no zero at s p is supposed to be unstable nmp and invertible including right invertible and left invertible denote the nmp zeros of p s and f s by zi i nz nf and assume also that these zeros are distinct define f s tr u t nm s t o s nm u and factorize f s as f s ns s s s s fm s where si are the i i i i nonminimum phase zeros of f s and fm s is minimum phase it is noted that f s fm s f fm pl i then with the controller 
given in nz l x re zi x j ej i compensators the communication channel is characterized by three parameters the awgn n the channel transfer functions f and the channel transfer function f s modeling the bandwidth limitation is assumed to be stable and nmp then f s diag s s fl s where s s fl s f s has nf distinct nmp zeros the channel transfer function h s colors the additive white gaussian noise the performance index of the system is defined as j e r t y t t r t y t t uc t where the parameter is and can be used to weigh the relative importance of tracking objective and the plant control input energy constraint for the transfer function matrices p and p f let their right and left coprime factorizations respectively be given by p f n m p m where n m and satisfy the double bezout identity m y i n x then the set of all stabilizing two parameter compensators is characterized by z ns l x x log res i nz x zi re zj dir zj dirh zi z i j i h dil zi v h oh zi o zj v djl zj proof from we have j tr ryn rucr rucn where t ryn t rucr t and rucn t are the autocorrelation functions of the random processes t yn t ucr t and ucn t respectively denote the spectral densities of r and n as sr and sn respectively then we have j ks k k q q r according to we may rewrite the performance index j as j e kr t yr t k kyn t k t k where yr t and yn t are the outputs in response to r and n respectively and the tracking error is given by t r t yr t tr sr tt z tr tyn sn tt yn tr tucr sr tt ucr z tr tucn sn tt ucn t yn v u s tucr u tucn v s ieee transactions on circuits and regular papers i i p f p f u s i p f i f p f u s from the following equation can be obtained p hv t nm nm cft m cf m from one can define the following matrix function with its module equal to i f p hv i n q u m q s m hv so according to the property of the matrix norm becomes m inf q u m s jv inf q u s u s where where h o nm i h o o nm nm h m o o nm i u diag v diag evidently we have j inf j inf ju inf jv firstly for ju using the allpass factorization we have i n q ju inf u m q s l i inf u s qu u m s s we then obtain x re zi u inf i nz m x re zi x ej i similar to we may invoke lemma and obtains f in light of lemma one also obtains n z l s x x resi log thus we have where cf m is the minimum phase part of f m is the direction vector associated with the zero of p f ej is unitary a column vector whose element is and the remaining elements and inf q u m s q u inf m s furthermore we perform an factorization given in such that m where is an inner matrix function and is an outer according to the definition of an inner matrix function we have i h u o nm u s s nz h o nm nz l x re zi x ej i z ns l x x resi log secondly for jv we have m hv jv hv where nom is the minimum phase part of no similar to the equation we perform an factorization such that ieee transactions on circuits and regular papers in addition similar to we factorize hv cd where c is the minimum phase part d is an allpass factor which can be formed as nz y d s di s di s wi i wih fig feedback control over awgn channels hence in light of lemma we have jv rcd rc nz x o zi v dil zi dir zi rc where and o zi zi zi h zi zi m zi h zi dil zi zi zi zi q ns and factorize f s si s si s fm s where si are the nonminimum phase zeros of f s and fm s is minimum phase that f s fm s pitl is noted f fm then with the twoparameter controller given in n m z x re zi x ej j i z ns l x x resi log nz x zi re zj h r zj zi z i j i dir zi zi zi zi z since is right invertible and c left invertible we have zi zj v zj nz inf x nz nz dir zi 
o zi v zi dil zi dir zi s zi zi v h proof similar to the proof of theorem we have the performance index x o zi v dil zi h x zi re zj dir zj dirh zi z i j i h dil zi v h oh zi o zj v djl zj the proof is thus completed remark when there is no network channel because the brownian motion random process is different from the step signal vector with deterministic direction this result can not be degraded to the results in literature corollary if the system p s are siso in theorem then the optimal tracking performance can be written as nzx ns resi re zi x j nz z x log zj i i where zi m zi h zi i corollary consider the simple channel case with f i and h i under the assumptions in theorem t fine f s tr u t nm s o s nm u j tr ryn rucr rucn u k ktyn v k s ktucr u k ktucn v k s i n q u q s v jv h h let we can obtain nz n x re zi x ej z ns l x x res log i for jv we have jv v v in addition we factorize v where is the minimum phase part is an allpass factor which can be formed ieee transactions on circuits and regular papers as s nz y s s i similar to the proof of theorem we can obtain where p is the predetermined input power threshold with the performance index for the system to be stabilizable and obtain the optimal tracking performance the channel snr must satisfy np x rj p pad pj i i where nz x zi re zj h r zj zi z i j i h np x pad nom pi n the proof is thus completed if there is no channel noise in the configuration of the feedback control system depicted in then the following result can be immediately obtained corollary consider the case of and suppose that the channel is under the same assumptions described in theorem we have nz l re zi x ej j i z n l s x resf i x log i remark if we do not consider the impact of the system control input setting and j l from the expression in corollary it can be observed that for a feedback control system with a compensators when the tracking target is the brownian motion the performance limitation depends on the nonminimum phase zeros the plant gain at all frequencies and their directions unitary vectors in what follows we will discuss the relationship between stabilizability the performance limits and channel characteristics under simplified conditions we do some appropriate simplifications and assumptions consider the siso system p s in and the simplified performance j as j e kr t yr t kyn t the relationship between the stabilizability tracking performance and the channel ratio snr can be summarized as shown in the following theorem theorem consider the feedback control system of suppose that p s is a scalar transfer function under the assumptions in theorem the system p s is stabilizable only if the admissible channel snr satisfies np x rj p pj i i x nom zi m pi nz y zi h zi zi zi m zi k zi s nm zi zi zi zi np y nz zi zi zi z pi h pi zi v h zi zj v zj where zi and the optimal tracking performance is given as nz j nz x re zi x zj i where ri nom pi n pi h pi nom zi m zi h zi i proof using the equation similar to the proof of the theorem we have j ryn u s n q u s ty n v p m x rn hv nm q u s l nom x rnm hv u s nm q u s nom x rnm hv nz x re zi nom x rnm hv based on the allpass factorization and lemma we can write nz nom xh s x nom zi x zi h zi zi zi i ieee transactions on circuits and regular papers where s then nom x rnm hv nz x nom rnm h nom zi x zi nom rnm h where zi zi x np y ku t kpow nz i zi pi nom rh s zi zi zi t kpow nz nz x re zi x zi re zj j zj i h zi zi zi zi zi zi nz nz x re zi x zi zi i i nz nz zi zk zk zj x re zi x zj i where is the residue of pi s at 
s zi in addition suppose that the input r t the channel input is required to satisfy the power constraint kukpow p for some predetermined input power level p ku t kpow ut t u t tr run z tr tun sn jw tutn dw np y nom pi y pi h pi pi np y sl np x np x i rj pj pi nom pi y pi h pi pi nom rh ku t kpow s pad np x i rj pj where pad is given by equation then p t kpow s pad the proof is now completed remark theorem shows that for a system to achieve for a best tracking performance in addition to stabilization its ratio must be greater than that required only for stabilization iv s imulation studies consider the plant p s k s s the lti filters used to model the finite bandwidth f s and colored noise h s of the communication link are both chosen to be butterworth filters of order f s f s f v kno hv knom y h rh np np y x nom pi y pi h pi this is the result of however in many cases not only the stabilizability needs to be considered but also the system tracking performance in this case via noting equations and we have from equations and we have nz y np x rj p pj i i zi nom zi m zi h zi zi re zj zj where ri nom pi n pi h pi is the residue of nom pi n pi h pi at s pi therefore for the feedback system to be stabilizable the channel snr must satisfy by using the bezout identity xm y n zi can be written as np x nz s nom pi y pi h pi zi nom zi x zi h zi zi zi zi zi zi zi zi zi x np x when only the stabilizability is considered regardless of the tracking performance we have h zi zi zi h s s h where k and f h clearly p s is of minimum phase shows the optimal performances plotted for different values of two observations can be obtained from where the optimal performance is plotted with respect to bandwidth of both f s and ieee transactions on circuits and regular papers h s first the system tracking performance becomes better as the available bandwidth of the communication channel decreases secondly if the noise is colored by a low pass filter the decrease of its cutoff frequency would lead to the better tracking performance shows that the reference signal and acgn will deteriorate tracking performance j c onclusions k fig j with respect to k for different f h j h bandwidth f bandwidth j with respect to f and k fig in this paper we have investigated the best attainable tracking performance of networked mimo control systems in tracking the brownian motion over a limited bandwidth and additive colored white gaussian noise channel we have derived explicit expressions of the best performance in terms of the tracking error and the control input energy it has been shown that due to the existence of the network the best achievable tracking performance will be adversely affected by several factors such as the nonminimum phase zeros and their directions of the plant the colored additive white gaussian noise the basic network parameters such as bandwidth finally some simulation results are given to illustrate the obtained results furthermore one possible future work is to consider more realistic constraints such as and dropout issues which is much more challenging when the networked control system contains the nondeterministic or hybrid switching the issue of tracking performance also deserves further study r eferences j fig j with respect to and k f h braslavsky middleton and freudenberg feedback stabilization over ratio constrained channels ieee transactions on automatic control vol no pp li tuncel and chen optimal tracking over an additive white gaussian noise channel proceedings of the american control conference mo 
usa pp zhan guan xiao and wang performance limitations in tracking of linear system with measurement noise proceedings of the ieee conference on chinese control conference beijing china pp rojas braslavsky and middleton fundamental limitations in control over a communication channel automatica vol no pp xiao and xie feedback stabilization over stochastic multiplicative input channels case proceedings of the international conference control automation robotics and vision singapore pp menon and edwards static output feedback stabilisation and synchronisation of complex networks with performance international journal of robust and nonlinear control vol no pp guan zhan and feng optimal tracking performance of mimo systems with communication constraints international journal of robust and nonlinear control wiley online library zhang yan yang and chen quantized control design for impulsive fuzzy networked systems ieee transactions on fuzzy systems vol no pp si azuma and sugie dynamic quantization of nonlinear control systems ieee transactions on automatic control vol no pp qi and su optimal tracking and tracking performance constraints from quantization proceedings of the asian control conference china pp you su fu and xie optimality of the logarithmic quantizer for stabilization of linear systems achieving the minimum data rate proceedings of the ieee conference on decision and control and chinese control conference china pp ieee transactions on circuits and regular papers xiao xie and fu stabilization of markov jump linear systems using quantized state feedback automatica vol no pp luan shi and liu stabilization of networked control systems with random delays ieee transactions on industrial electronics vol no pp wei wang x he and shu filtering for networked stochastic systems with sector nonlinearity ieee transactions on circuits and systems ii express briefs vol no pp liu predictive controller design of networked systems with communication delays and data loss ieee transactions on circuits and systems ii express briefs vol no pp rojas braslavsky and middleton output feedback stabilisation over bandwidth limited signal to noise ratio constrained communication channels proceedings of the american control conference minnesota usa pp trivellato and benvenuto state control in networked control systems under packet drops and limited transmission bandwidth ieee transactions on communications vol no pp wu and chen design of networked control systems with packet dropouts ieee transactions on automatic control vol no pp wang liu zhu and z du a survey of networked control systems with delay and packet dropout proceedings of chinese control and decision conference ccdc pp you fu and xie mean square stability for kalman filtering with markovian packet losses automatics toker chen and qiu tracking performance limitations in lti multivariable systems ieee transactions on circuits and systems i fundamental theory and applications vol no pp chen hara and chen best tracking and regulation performance under control energy constraint ieee transactions on automatic control vol no pp bakhtiar and hara regulation performance limitations for simo linear feedback control systems automatica vol no pp wang guan and yuan optimal tracking and twochannel disturbance rejection under control energy constraint automatica vol no pp morari and zafiriou robust process control englewood cliffs nj chen qiu and toker limitations on maximal tracking accuracy ieee transactions on automatic control vol no pp ding wang guan and chen 
tracking under additive white gaussian noise effect iet control theory and application vol no pp qiu ren and chen fundamental performance limitations in estimation problems communications in information and systems vol no pp zhan guan liao yuan optimal performance in tracking stochastic signal under disturbance rejection asian journal of control doi wang ding guan and chen limitations on minimum tracking energy for siso plants proceedings of the control and decision conference china pp li tuncel and chen optimal tracking and power allocation over an additive white noise channel proceedings of the ieee international conference on control and automation new zealand pp francis a course in control theory ser lecture notes in control and information science berlin germany guan hill and shen on hybrid impulsive and switching systems and application to nonlinear control ieee transactions on automatic and control vol no pp zhang cui liu and zhao asynchronous filtering of switched linear systems with average dwell time ieee transactions on circuits and systems i regular papers vol no pp zhang and james necessary and sufficient conditions for analysis and synthesis of markov jump linear systems with incomplete transition descriptions ieee transactions on automatic and control vol no pp guan received phd degree in automatic control theory and applications from the south china university of technology guangzhou china in he was a full professor of mathematics and automatic control with the jianghan petroleum institute jingzhou china in since december he has been full professor of the department of control science and engineering executive associate director of the centre for nonlinear and complex systems and director of the control and information technology in the huazhong university of science and technology hust wuhan china since he has held visiting positions at the harvard university usa the central queensland university australia the loughborough university uk the national university of singapore the university of hong kong and the city university of hong kong currently he is the associate editor of the journal of control theory and applications the international journal of nonlinear systems and application and severs as a member of the committee of control theory of the chinese association of automation executive committee member and also director of the control theory committee of the hubei province association of automation his research interests include complex systems and complex networks impulsive and hybrid control systems networked control systems systems chen was born in hunan china he graduated in mathematics from hunan university of science and technology xiangtan china in and received the degree at department of mathematics in guangxi teachers education university currently he is working towards the degree at the department of control science and engineering huazhong university of science and technology wuhan china his research interests include networked control systems complex dynamical networks impulsive and hybrid control systems gang feng received the and degrees in automatic control from nanjing aeronautical institute china in and in respectively and the degree in electrical engineering from the university of melbourne australia in he has been with city university of hong kong since where he is at present a chair professor he is a changjiang chair professor at nanjing university of science and technology awarded by ministry of education china he was lecturer at school of 
electrical engineering university of new south wales australia he was awarded an alexander von humboldt fellowship in and the ieee transactions on fuzzy systems outstanding paper award in his current research interests include piecewise linear systems and intelligent systems control feng is an ieee fellow an associate editor of ieee trans on fuzzy systems and was an associate editor of ieee trans on systems man cybernetics part c journal of control theory and applications and the conference editorial board of ieee control system society tao li received the degree in huazhong university of science and technology wuhan china in he is also currently an associate professor in the college of electronics and information yangtze university jingzhou china his current research interests include nonlinearity complex network systems complex network theory application complex networks spreading dynamics
3
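The preceding paper expresses the best attainable tracking performance in terms of the plant's nonminimum-phase zeros and unstable poles together with the channel's bandwidth and noise filters. As a hedged illustration of how the plant-side quantities enter such expressions, the snippet below evaluates two classical special-case terms from works cited there (Chen/Qiu/Toker for step tracking, and Braslavsky/Middleton/Freudenberg for SNR-limited stabilization): the nonminimum-phase-zero contribution sum_i 2 Re(z_i)/|z_i|^2 and the stabilization SNR lower bound 2 sum_k Re(p_k). The zero and pole locations are made up for illustration, and the bandwidth-limited ACGN corrections derived in the paper itself are not reproduced.

```python
import numpy as np

# Illustrative sketch, not the paper's general formula: in the classical SISO
# results cited above, nonminimum-phase (right half-plane) zeros z_i of the
# plant contribute sum_i 2*Re(z_i)/|z_i|^2 to the minimal step-tracking error,
# and unstable poles p_k contribute 2*sum_k Re(p_k) to the minimal channel SNR
# needed for stabilization over an AWGN channel.  The zero/pole values below
# are invented for the example.

zeros = np.array([1.0 + 0.5j, 1.0 - 0.5j, 3.0])   # right half-plane zeros
poles = np.array([0.5, 2.0])                       # unstable poles

nmp_term = float(np.sum(2 * zeros.real / np.abs(zeros) ** 2))
snr_lower_bound = float(2 * np.sum(poles.real))

print(f"NMP-zero contribution to minimal tracking error: {nmp_term:.4f}")
print(f"classical SNR lower bound for stabilization:     {snr_lower_bound:.4f}")
```

In the paper's setting these plant-dependent terms are further inflated by the channel filters F and H and the noise variances, which is exactly the degradation the simulation section illustrates as the available bandwidth shrinks.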
comparing powers of edge ideals sep mike janssen thomas kamp and jason vander woude a bstract given a nontrivial homogeneous ideal i k x d a problem of great recent interest has been the comparison of the rth ordinary power of i and the mth symbolic power i m this comparison has been undertaken directly via an exploration of which exponents m and r guarantee the subset containment i m i r and asymptotically via a computation of the resurgence i a number for which any i guarantees i m i r recently a third quantity the symbolic defect was introduced as i t i t the symbolic defect is the minimal number of generators required to add to i t in order to get i t we consider these various means of comparison when i is the edge ideal of certain graphs by describing an ideal j for which i t i t j when i is the edge ideal of an odd cycle our description of the structure of i t yields solutions to both the direct and asymptotic containment questions as well as a partial computation of the sequence of symbolic defects i ntroduction let k be an algebraically closed field and i a nonzero proper homogeneous ideal in r k x n recall that the mth symbolic power of i is the ideal i m r p i i m rp over the last years the structure of i m has been an object of ongoing study see the recent survey one avenue for this study has been the examination of the relationship between i m and the algebraic structure of i r the rth ordinary power of i the naive context in which to examine this relationship is via subset containments for which m and r s and t do we have i s i t and i m i r in fact this line of inquiry has been extremely productive it is straightforward to see that i s i t if and only if s t but determining which r and m give i m i r is more delicate seminal results of and established that for such ideals i m i r if additional information about the ideal under consideration generally leads to tighter results see this phenomenon led to bocci and harbourne s introduction of a quantity known as the o resurgence of i denoted n i it is the least upper bound of the set t i m i r thus if i we have i m i r recently galetto geramita shin and van tuyl introduced a new measure of the difference between i m and i m known as the symbolic defect since i m i m the quotient mike janssen thomas kamp and jason vander woude i m i m is a finite thus we let sdefect i m denote the number of minimal generators of i m i m as an this is known as the symbolic defect and the symbolic defect sequence is the sequence sdefect i m in the authors study the symbolic defect sequences of star configurations in pnk and homogeneous ideals of points in our work considers all these questions in the context of a class of edge ideals let g v e be a simple graph on the vertex set v xn with edge set the edge ideal of g introduced in is the ideal i g r k xn given by i g xi x j xi x j e that is i g is generated by the products of pairs of those variables between which are edges in in the authors establish that for an edge ideal i i g we have i m i m for all m if and only if g is bipartite a natural question then is to explore the relationship between i g m and i g r when g is not bipartite which is equivalent to g containing an odd cycle thus sought to explore this relationship when g is a cycle on vertices we continue the problem of exploring the structure of the symbolic power i g t for certain classes of graphs g with a focus on when g is an odd cycle the main results of this work are theorem and corollary which together describe a decomposition of 
the form i t i t j where j is a ideal we are then able to use this decomposition to resolve a conjecture of compute i in theorem and establish a partial symbolic defect sequence in theorem we close by showing that our ideas in theorem apply for complete graphs and graphs which consist of an odd cycle plus an additional vertex and edge remark as preparation of this manuscript was concluding in summer dao et al posted the preprint in particular their theorem bears a striking resemblance to our corollary while these similarities are worth noting in part as evidence that interest in symbolic powers is high it is also worth noting that the aims of these two works are distinct and complementary the aim of the relevant sections of is to investigate the packing property for edge ideals while ours is to more directly describe the difference between the ordinary and symbolic powers by investigating the structure of a set of minimal generators for i t we then use information about these generators to compute invariants related to the containment i m i r acknowledgements this work was supported by dordt college s summer undergraduate research program in the summer of all three authors wish to express deep gratitude to the dordt college office of research and scholarship for the opportunity to undertake this project b ackground results edge ideals are an important class of examples of squarefree monomial ideals an ideal generated by elements of the form xnan where ai for all i when comparing powers of edge ideals i is squarefree monomial it is that the minimal primary decomposition is of the form i pr with pj x x js for j j when i i g is an edge ideal the variables in the pj s are precisely the vertices in the minimal vertex covers of recall that given a graph g v e a vertex cover of g is a subset v v such that for all e e e v a minimal vertex cover is a vertex cover minimal with respect to inclusion the minimal vertex covers will be especially useful to us as they describe the variables needed to decompose an edge ideal into its minimal primes see corollary lemma let g be a graph on the vertices xn i i g k xn be the edge ideal of g and vr the minimal vertex covers of let pj be the monomial prime ideal generated by the variables in v j then i pr and i m prm symbolic powers of squarefree monomial ideals and more specifically edge ideals have enjoyed a great deal of recent interest see in a linear programming approach is used to compute invariants related to the containment question we adapt this technique in lemma for the edge ideals under consideration in this paper one result of which will be of use is the following which reduces the problem of determining whether a given monomial is in i m to a problem of checking certain linear constraints on the exponents of the variables lemma let i r be a squarefree monomial ideal with minimal primary decomposition i pr with pj x x js for j then x x an i m if and j only if a a js m for j j remark throughout this work we will be exploring questions related to ideals in r k xn related to graphs on the vertex set xn we will use the xi s interchangeably to represent both vertices and variables the specific use should be clear from context and we see this as an opportunity to emphasize the close connection between the graph and the ideal factoring monomials along odd cycles in this section we introduce the main ideas of our approach to studying symbolic powers of edge ideals we begin by defining a means of writing a monomial in a power of an edge ideal with respect to 
the minimal vertex covers of the graph we then study this factorization and describe a situation in which it can be improved in what follows let r k and let i i be the edge ideal of the odd cycle i mike janssen thomas kamp and jason vander woude definition let m k be a monomial let e j denote the degree two monomial representing the jth edge in the cycle e j x j x for j and we may then write a b m where b m b j is as large as possible observe m deg m and ai when m is written in this way we will call this an optimal factorization of m or say that m a is expressed in optimal form in addition each xi i with ai in this form will be called an ancillary factor of the optimal factorization or just an ancillary for short observe that the optimal form representation of m is not unique in the sense that different edges may appear as factors of m for example in k if m we may write m a b lemma let m be an optimal factorization where m i then any will also be an optimal factorization if ai and b j for all i and proof let such that for all i j ai and b j since each variable s exponent from is less than or equal to the corresponding exponent from m we know that divides thus there must exist some b b b b b a such that m suppose that is not in optimal form then there must exist some other way of expressing such that the sum of the exponents of edge factors will be greater in this new c d expression that is we can as such that a c b b d i i i ei i i i m will have an edge exponent sum of di as m xi bi di bi di as di it must be true that this edge exponent sum is greater than bi this contradicts the premise that m was expressed in optimal form and thus is an optimal factorization of the next lemma describes a process that will be critical in the proof of the main result intuitively it says that if a monomial is factored as a product of an odd number of consecutive edges with ancillaries on both ends of this path of edges the monomial is not written in optimal form it can be rewritten as a product of strictly more edges a j b j b b a lemma let m x j e j e e x where a j a if it is the case that b for all h k then m is not in optimal form a j b j b b a proof let m x j e j e e x and notice that m is a string of adjacent edges with ancillaries on either end our goal is to rewrite m in a more optimal form for clarity and without loss of generality let j and suppose that bi for all evenly indexed edge exponents comparing powers of edge ideals let p and note that by lemma p must be in optimal form as m is expressed optimally however p since p was not initially expressed in optimal form we know that m could not have been an optimal factorization example let g be a cycle with vertices and consider m i g with edge factorization m where ei xi xi note that in this factorization there is an ancillary at and we will show that m is not in optimal form we can graphically represent m by drawing an edge between xi and xi for each ei in m and creating a bold outline for each ancillary as shown below using the method outlined in lemma we will break each of the red bolded edges back into standard xi notation so that we create new ancillaries at every vertex note that if we define a new monomial p based on this graphical representation where p it will still be true that m p because we are merely changing the factorization of the monomial not its value as one can see there are now consecutive ancillaries which we can pair up in a new way as shown below new edges are highlighted in green bolded in the second line now we have a third 
possible representation q of this monomial note that q and it is still true that q p as you can see this monomial representation has one more edge than our original representation which means that m is not optimal mike janssen thomas kamp and jason vander woude example let g be a cycle with vertices in it and consider the following edge factorization of m m again our goal is to determine whether or not m is optimal note that m is equal to the monomial q from example except that there are now ancillaries at and again we will create a graphical representation of m shown below however now it is impossible to remove the right combination of edges so that we create an ancillary at every vertex because no edge exists between and therefore we can not use lemma conclude that m is not optimal in fact we have no conclusive way to determine whether m is an optimal factorization at this point despite that this example has not been without value note the nonexistence of an edge between and and the fact we would have been able to prove that m was not optimal if not for the nonexistence of at least one of the following this will be useful for the latter stages of the proof of theorem p owers of edge ideals and their structures we will now turn to a decomposition of i t in terms of i t and another ideal j so that i t i t j our approach has numerous strengths including the ability to easily compute the symbolic defect of i for certain powers as well as determining which additional elements are needed to generate i t from i t although we will primarily focus on odd cycles in this section we go on to show that the same underlying principles can be extended to edge ideals of other types of graphs see section for more definition let v v g be a set of vertices for a monomial x a k with exponent vector a define the vertex weight wv x a to be w v x a ai x i we will usually be interested in the case when v is a minimal vertex cover using the language of vertex weights the definition of the symbolic power of an edge ideal given in lemma becomes i t x a for all minimal vertex covers v wv x a t now define sets l t x a deg x a and for all minimal vertex covers v wv x a t and d t x a deg x a and for all minimal vertex covers v wv x a t comparing powers of edge ideals and generate ideals l t and d t respectively note that i t l t d t the main work of this section is to show for the edge ideal i of an odd cycle that i t l t which is the content of the theorem lemma let r k xr g be a graph on xr i i g and l t is as defined above then i t l t b i j proof suppose m i t write m in optimal form as m xrar j ei j we know that given an arbitrary minimal vertex cover v and edge ei j xi x j dividing m it must be true that xi v or x j v or both thus wv m b m further since m i t we know b m t and deg m which means that m l t lemma let r k xr g be a graph on xr i i g and l t be as defined above then for all m i t if m has no ancillaries or a single ancillary of degree then m l t proof if there are no ancillaries in m then deg m m thus m can not be in l t which also means that it is not in l t as none of the divisors of m are in l t for a similar reason furthermore we reach the same conclusion if there is only one ancillary in m and it has an exponent of as deg m m and since m and are both odd m for the remainder of this section let r k g be an odd cycle of size with the vertices v g i be the edge ideal of g and v v g be a minimal vertex cover of we make the following definition which describes the sum of the exponents of a given 
monomial relative to a set of vertices theorem given i and l t as defined above i t l t proof by lemma we know that i t l t so we must only show the reverse containment let m i t which implies that b m t then we will show that m l t lemma allows us to consider only cases where m either has multiple ancillaries or has a single ancillary of at least degree a b given an arbitrary monomial m i t let m be an optimal factorization of m where x is an ancillary and our goal is to show that there exists some vertex cover with a weight equal to b m and as b m t m can not be in l t since l t is the generating set of l t this will be sufficient to claim that m l t because neither m nor any of its divisors whose vertex weights can only be less than that of m will be in the generating set we will construct a minimal vertex cover s of g out of a sequence of subsets sr of v where each sq is a cover for the induced subgraph hq of g on o n vhq a aj a for the sake of simplicity let xi i and x j be a pair of consecutive ancillaries or let xi i a aj a aj and x j in the wraparound case or let xi i and x j in the case of mike janssen thomas kamp and jason vander woude a b b b a j a single ancillary with degree greater than in addition let mq xi i ei i ei e x j note that by lemma mq is in optimal form we will show for each subgraph hq there exists some set of vertices sq vhq that covers hq such that w sq mq b mq case suppose that vhq has an odd number of elements consider s q xi xi x j we claim that w sq mq b mq this can be shown as follows b b a j a b b mq xi i ei i ei e e x j a xiai xi xi bi xi xi x x b x x j b x j j b b b a ai b i b i b b b xi xi x x j j xi intuitively this is because we are selecting alternating vertices to be in sq which would guarantee that no edge of mq contributes to the weight twice because edges can only a connect sequentially indexed vertices also there are no ancillaries in mq other than xi i aj and x j which would increase the weight if they are included by definition we know that the weight of a monomial with respect to a set of variables will be equal to the sum of the powers of those variables in the given monomial in this case w s q m q bi bi bi bi b j b j bh b m q case suppose now that vhq has an even number of elements note that it must contain more vertices than simply xi and x j because that would imply that there are no vertices between xi and x j and that the two ancillaries are adjacent and could thus be expressed as ei which would contradict the statement that m is expressed in optimal form j from lemma we know that for some h satisfying h the edge product ei does not appear in the current optimal form of mq that is bi consider sq xi xi xi xi xi x we claim that w sq mq b mq we see comparing powers of edge ideals a b b b b a j mq xi i ei i ei e e x j a xiai xi xi bi xi xi x x b x x j b x j j b b b a ai b i b i b b b xi xi x x j j xi then w sq mq bi bi bi bi bi bi bi bi bi bi b b bi bh bi b mq b m q b m q intuitively this is because of the same reasons that were given when vhq had an odd number of elements since alternating vertices are again chosen to be in sq with the exception of xi and xi however because the edge product ei does not appear in mq we are not including any redundant powers in our weight which means that w sq mq b mq hence it does not matter whether vhq has an odd or even number of vertices because w sq mq b mq regardless now since each s covers its respective set of vertices the union of all of these disjoint s q subcovers s sq is a vertex cover of in 
addition as each sq is completely disjoint from any other subgraph s cover w s m w sq mq b mq as each b mq was the number of edges that existed in that induced subgraph representation and no two subgraphs contained any of the same edges b mq b m the total number of edges in an optimal factorization of that is we have constructed a vertex cover s such that w s m b m thus m l t and therefore i t l t corollary given i and d t as above i t i t d t proof theorem states that i t l t as we also know that i t l t d t we can simply substitute l t with i t thus i t i t d t now that we have proved that i t i t d t we will use this result to carry out various computations related to the interplay between ordinary and symbolic powers we close this section with a brief remark on the proof of theorem specifically it relies on the fact that g is a cycle but not that g is an odd cycle however we focus on the odd cycle case as when g is an even cycle it is bipartite and showed in that case that i t i t for all t mike janssen thomas kamp and jason vander woude a pplications to i deal c ontainment q uestions given the edge ideal i of an odd cycle corollary describes a structural relationship between i t and i t given any t in this section we will exploit this relationship to establish the conjecture of we then will compute the resurgence of i and explore the symbolic defect of various powers of i given i i r k recall the definitions of l t and d t which generate ideals l t i t and d t respectively l t x a deg x a and for all minimal vertex covers v wv x a t and d t x a deg x a and for all minimal vertex covers v wv x a t we will begin by examining d t lemma for a given monomial x a if there exists some i such that ai then x a d t proof recall that d t is the ideal generated by d t although each graph can have many different minimal vertex covers there is a certain type of vertex cover that is guaranteed to exist for any odd cycle this type of cover includes any two adjacent vertices and alternating vertices thereafter without loss of generality consider x a and suppose two such minimal vertex covers that include are and in order for x a to be in d t it must be true that wv x a this means that t and adding the inequalities yields as it follows that ai which contradicts the requirement that deg x a hence any monomial x a with at least one exponent equal to can not be an element of d t or by extension d t lemma for a given monomial x a in d t if deg x a k then x a is divisible by k proof let x a be an element of d t such that deg x a k and suppose that x a is not divisible by k this means that there exists an such that moreover since x a d t we must have a j for all j if is odd consider minimal vertex covers and if is even use and in order for x a to be in d t it must be true that wv j x a t for j when is odd this means that t and comparing powers of edge ideals t and similarly is even combining these we k a see as contradiction the following corollary partially answers conjecture in the affirmative note that this is a restatement of theorem corollary let g be an odd cycle of size and i be its edge ideal then i t i t for t proof suppose that t n and recall that i t i t d t and any element of the generating set d t of d t must have degree less than however since there are variables at least two of them would need to have an exponent of in order to be an element of d t but from lemma we know that none of the variables in a monomial in d t can have an exponent of therefore there are no monomials that satisfy all of 
the conditions for being in d t which means that it is empty and thus i t i t when t a recent paper of galetto geramita shin and van tuyl introduced the notion of symbolic defect denoted sdefect i t to measure the difference between the symbolic power i t and ordinary power i t it is the number of minimal generators of i t i t as an corollary thus implies that sdefect i t for all t satisfying t corollary let g be an odd cycle of size and i be its edge ideal then sdefect i n in particular i i proof if we let t n proposition states that i i d n again recall that d n is the ideal generated by d n x a deg x a n and for all minimal vertex covers v wv x a n from this we know that the degree of any monomial in d n must be strictly less than and from lemma we also know that all variables have an exponent of at least as there are variables we can see that if any of the variables has an exponent of at least the total degree of the monomial becomes at least which is not valid thus every monomial that is not is not in d n it is straightforward to check that d n and therefore that d n thus i i recall that if i r k xn is a homogenous ideal the minimal degree of i denoted i is the least degree of a nonzero polynomial in i in particular if i is an edge ideal i and i r for any r in general if i t i s we may conclude that i t i s but the converse need not hold when i i however it does as the next lemma demonstrates lemma let i be the edge ideal of an odd cycle then i t i s if and only if i t i s mike janssen thomas kamp and jason vander woude proof the forward direction is clear for the converse suppose that i t i s from our definition of symbolic powers we know i t m for all minimal vertex covers v wv m t as i t i t we note that i t i t i s thus if m i t wv m t s and deg m i t i s and we observe i t m deg m and for all minimal vertex covers v wv m s l s is which completes the proof despite providing a condition which guarantees containments of the form i t i s lemma does not actually compute i t which is more delicate than computing i s we next adapt lemma and the linear programming approach of to compute it in order to do so we make the following definition definition fix a list of minimal vertex covers vr for such that we define the minimal vertex cover matrix a ai j to be the matrix of s and s defined by if x j vi ai j if x j vi remark note the minimum cardinality for a minimal vertex cover of is n in fact there are minimal vertex covers of size n as we have seen there do exist minimal vertex covers of size greater than n these covers will be accounted for in rows and higher of the minimal vertex cover matrix a we first seek a lower bound of i t using linear programming let t s n d where d consider the following linear program where a is the minimal vertex cover matrix t b and c t minimize b t y subject to ay c and y we refer to as the alpha program and observe that if is the value which realizes we have i t b t consider the following partition of a let be the submatrix of a consisting of the first rows and thus corresponding to the minimal vertex covers which contain exactly n vertices and b the matrix consisting of the remaining rows of a we thus create the following of comparing powers of edge ideals minimize b t y subject to y c and y lemma the value of is t proof we claim that t t t a column vector is a feasible solution to indeed is a column vector whose entries are all t s n d satisfying the constraint of the lp in this case t b t n to show that this is the value of we make use of the fundamental 
theorem of linear programming by showing the existence of an which produces the same value for the dual linear program maximize ct x subject to t x b and x specifically let x as the rows of t again have exactly n s we see t b is satisfied and it t is straightforward to check that ct b t lemma the value of is bounded below by t proof observe that is obtained from by possibly introducing additional con t straints thus the value of is at least the value of which is proposition let t s n d where d then i t proof by lemma we see that i t is bounded below by the value of t s n d d d i t s as n it s enough to find an element of degree s in i t we claim that m is such an element note that any minimal vertex cover v and hence minimal prime of i will contain one of and and at least n if it contains both and or n if it contains only one of and other vertices in the former case wv m s d s n s n t and so m i t in the latter case wv m s d sn s n d t and again we see m i t mike janssen thomas kamp and jason vander woude thus i t is an integer satisfying s i t s k j d t whence i s n s s s k j t d s n d recall that given a nontrivial homogeneous ideal i k n the gence of i introduced in and denoted i is the number i sup i m i r theorem if g is an odd cycle of size and i its edge ideal then i o n proof let t i m i r and suppose that i m i r so that mr in order for i m to not be a subset of i r it must be true that i m i r by lemma since we m know i r and i m nm by it follows that n and m m m that nm n thus n and we conclude that r our next goal is to prove that is the smallest upper bound of t and we will do mk this by finding a sequence ak r t with lim ak k we first make the following claim claim if t then m r proof of claim by lemma because t it follows that i m i r and it is then enough to show that i i r to conclude that m r by proposition we have i m m m m i m n i r i n recall that for any odd cycle of size i i so let n and then recursively define ak mr k where mk and rk k by the claim above we have ak mk note that this definition of the sequence n k ak is equivalent to the explicit formula ak moreover lim ak which finally implies that i in a new measure of the failure of i m to contain i m was introduced this measure is known as the symbolic defect and for a given m is the number m of minimal generators m such that i m i m m recall that corollaries comparing powers of edge ideals and imply for i i that sdefect i t if t n if t n next we explore additional terms in the symbolic defect sequence our general approach is to rely on the decomposition described in corollary in the parlance of our work the symbolic defect is the size of a minimal generating set for the ideal d t observe that in general this is not the same as computing the cardinality of the set d t as there may be monomials in d t which are divisible by other monomials in the set thus our goal is to determine the cardinality of the subset d t of d t which forms a minimal generating set of d t theorem let i i then for t satisfying n t we have sdefect i t t n proof as stated above we wish to count the number of minimal generators in d t recall that i t by definition as everything in d t has degree less than we see that d t consists only of monomials of degree the collection of all distinct monomials of degree is itself linearly independent and thus d t is a minimal generating set for d t d t d t consider an arbitrary m d t and note that deg m t since the edge monomials ei xi xi where again have degree we see that m is divisible by the product of 
at most t edge monomials further as i t t i by proposition lemma gives that m i and thus m is divisible by at least t edge monomials thus m must be divisible by exactly t edge monomials and an optimal factorization of m is b m where bi t that is m has a single ancillary with exponent by lemma m must be divisible by write in optimal form as if is odd and as if is even observe that in either case is the product of a single variable and n edge monomials thus the monomial p is the product of exactly t n edge monomials we have thus factored any m d t as m p where p is the product of exactly t n edge monomials observe that if where is the product of exactly t n edge monomials deg and if v is any minimal vertex cover of wv wv wv n t n t where wv follows from the fact that i by definition thus d t mike janssen thomas kamp and jason vander woude therefore to count the monomials in d t it suffices to count all monomials p that are products of t n edge monomials we can visualize this problem by counting the number of ways to place these t n edges around the cycle assuming that we can place multiple edges between the same two vertices to that end let be the number of pairs vertices between which we place at least one edge then there are n ways to place the t n edges first we choose from among the choices for pairs vertices between which to place n the edges and then we choose from the sdefect i t ways to arrange the edges thus t n in particular sdefect i n the computation of sdefect i t becomes much more complicated as t a n additional containment question our proof that i t i t d t does not hold for any graph other than a cycle as it relies on the fact that each path between ancillaries is disjoint from every other path this is not true for general this leads naturally to the following question question let g be a graph on the vertices v xd containing an odd cycle suppose i i g is the edge ideal of g in r k xd and let l t and d t retain their usual definitions with respect to then i t i t d t for all t the following example answers question in the negative example consider the graph g defined by v g and e g where we write the edges as products of vertices and let m observe that m i but as every minimal vertex cover v of g contains three of we have wv m thus i l however we observe in the following two theorems that i t l t for certain classes of graphs one case in which question holds is the case in which g is an odd cycle with one additional vertex connected to exactly one vertex of the cycle see figure for an example of such a graph constructed from theorem let g be a graph consisting of vertices and edges such that of them form a cycle and the remaining edge connects the remaining vertex to any existing vertex of the cycle further let i be the edge ideal of g and let l t and d t retain their usual definitions with respect to then i t i t d t comparing powers of edge ideals f igure an additional vertex and edge appended to proof without loss of generality consider the cycle formed by with being the newly added edge recall that ei xi xi when i and a b let m be a monomial expressed in optimal form m and recall that b m bi as with the cycle if i t l t it will follow that i t i t d t by lemma we know that i t l t so we must only show the reverse containment let m i t which implies that b m t then we will show that m l t lemma allows us to consider only cases where m either has multiple ancillaries or has a single ancillary of at least degree note that deg m else m l t by definition we will construct a 
minimal vertex cover v of g such that wv m b m first assume that is the only ancillary of m and observe that we b a may write m as m it can not be true that bi for all i because it would then be possible to divide m by some monomial p which must be in optimal form by lemma however in this case p contradicting that p was in optimal form thus at least one is then construct v as follows if let v then wv m bi i bi b m if for some j let v then wv m j bi bi b m now suppose that all ancillaries of m are in the cycle the ancillaries come from the set by adapting the argument from theorem we may assume that there is either one ancillary with exponent at least or that there are multiple ancillaries use the construction in the proof of theorem to decompose the subgraph of g as a b a b hr define the proof of theorem provides minimal subcovers sr such that s sq and w s bi mike janssen thomas kamp and jason vander woude if s then s covers g and w s m w s bi b m in this case we may let v on the other hand if s let v s then wv m w s m bi b m next assume that the ancillaries of m are and at least one x j in the cycle where j if j we may write contradicting the assumption that m is in optimal form use the construction of theorem to decompose the cycle into subgraphs hr and note that is a vertex in hr observe that since is not ancillary hi for any i let the vertices of hi be represented by where are ancillaries and we wrap around with representing for all i r the proof of theorem gives a construction of a minimal vertex subcover si with the required properties now construct a subgraph of g as follows v v hr and e e hr decompose h as two induced subgraphs a and b of g on the vertices and we observe that we may now use the construction in the proof of theorem to build minimal covers of a and b containing and not whose a b a b b b r union gives a cover sr of given mr e note that w mr b mr then the union v si has the required property that w v m b m in all cases m l t and by extension m l t we also verify that the answer to question is positive when g is a complete graph thus additional study is needed to identify the precise property for which question has an affirmative answer theorem let r k xn and let kn denote the complete graph on xn further let i i kn and l t and d t maintain their definitions as above then i t i t d t proof let ei j denote the edge between xi and x j such that i j we will show that i t l t by lemma we must only show the reverse containment let m i t which implies that b m t and recall that lemma allows us to consider only cases where m either has multiple ancillaries or has a single ancillary of at least degree let b b n n be a monomial in optimal form then m has at most ancillary en n m xnan a aj because if xi i and x j were both ancilaries then m could be expressed in a more optimal form as b i j a j b b b n n a ei j en n xnan m xi i x j for some s because there is guaranteed to be an edge between xi and x j as kn is complete thus m has exactly ancillary and it must have a degree of at least without loss of generality let be the ancillary of kn note that bi j if i j if this was not the case m could be expressed in a more optimal form as b b b i m xnan b jj b ei i j j b n n comparing powers of edge ideals let v xn observe that v covers kn and b b b b n b m b b b b n b n b n b n b n b n n j n j n j i j n j i j b m thus m b m t so m l t and by the same argument no divisor of m is in l t which means m l t therefore i t l t because i t l t d t by corollary this leads to the desired result 
that I^(t) = I^t + D(t).
References
Bocci, Cooper, and Harbourne, Containment results for ideals of various configurations of points in P^N, J. Pure Appl. Algebra.
Bocci and Harbourne, Comparing powers and symbolic powers of ideals, J. Algebraic Geom.
Cristiano Bocci, Susan Cooper, Elena Guardo, Brian Harbourne, Mike Janssen, Uwe Nagel, Alexandra Seceleanu, Adam Van Tuyl, and Thanh Vu, The Waldschmidt constant for squarefree monomial ideals, Journal of Algebraic Combinatorics.
Susan Cooper, Robert Embree, Huy Tài Hà, and Andrew Hoefel, Symbolic powers of monomial ideals, Proceedings of the Edinburgh Mathematical Society.
Dao, De Stefani, Grifo, Huneke, and Núñez-Betancourt, Symbolic powers of ideals, arXiv, August.
Denkert and Janssen, Containment problem for points on a reducible conic in P^2, Journal of Algebra.
Marcin Dumnicki, Tomasz Szemberg, and Halszka Tutaj-Gasińska, Counterexamples to the I^(3) ⊆ I^2 containment, Journal of Algebra.
Ein, Lazarsfeld, and Smith, Uniform bounds and symbolic powers on smooth varieties, Invent. Math.
Galetto, Geramita, Shin, and A. Van Tuyl, The symbolic defect of an ideal, arXiv, October.
Le Tuan Hoa and Tran Nam Trung, Regularity of symbolic powers of two-dimensional monomial ideals, J. Commut. Algebra.
Hochster and Huneke, Comparison of symbolic and ordinary powers of ideals, Invent. Math.
Simis, Vasconcelos, and Villarreal, On the ideal theory of graphs, Journal of Algebra.
Adam Van Tuyl, A beginner's guide to edge and cover ideals, Springer, Berlin Heidelberg.
Rafael Villarreal, Cohen–Macaulay graphs, Manuscripta Mathematica, Dec.
Scarlet Worthen Ellis and Lesley Wilson, Symbolic powers of edge ideals, Undergraduate Math Journal.
Mathematics & Statistics Department, Dordt College, Sioux Center, IA, USA.
Mathematics & Statistics Department, Dordt College, Sioux Center, IA, USA. Email address: thmskmp
Mathematics & Statistics Department, Dordt College, Sioux Center, IA, USA. Email address: jsnvndrw
0
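The row above, which carries label 0, characterizes when symbolic and ordinary powers of the edge ideal of an odd cycle coincide and measures the gap through the ideal D(t). The brute-force sketch below is an editorial illustration rather than code from that paper: the helper names are hypothetical, symbolic-power membership is tested via the standard fact that the symbolic power of a squarefree monomial ideal is the intersection of the corresponding powers of its minimal vertex cover primes, and ordinary-power membership is tested by searching for t edge monomials whose product divides the given monomial. On the 5-cycle it exhibits the smallest gap: the squarefree monomial x0*x1*x2*x3*x4 passes the cover-weight test for t = 3 but is not divisible by any product of three edges.

```python
from itertools import combinations, combinations_with_replacement

n = 5  # vertices of the odd cycle C_5, labelled 0..4
edges = [(i, (i + 1) % n) for i in range(n)]

def is_vertex_cover(subset):
    return all(u in subset or v in subset for u, v in edges)

# All vertex covers, then keep only the minimal ones.
covers = [set(c) for r in range(n + 1) for c in combinations(range(n), r)
          if is_vertex_cover(c)]
minimal_covers = [c for c in covers if not any(d < c for d in covers)]

def in_symbolic_power(exponents, t):
    # x^a lies in I^(t) iff its weight on every minimal vertex cover is >= t
    # (the symbolic power of a squarefree monomial ideal is the intersection
    # of the powers of its minimal primes, one per minimal vertex cover).
    return all(sum(exponents[i] for i in cover) >= t for cover in minimal_covers)

def in_ordinary_power(exponents, t):
    # x^a lies in I^t iff some multiset of t edge monomials divides it.
    for choice in combinations_with_replacement(edges, t):
        need = [0] * n
        for u, v in choice:
            need[u] += 1
            need[v] += 1
        if all(need[i] <= exponents[i] for i in range(n)):
            return True
    return False

m = [1, 1, 1, 1, 1]  # the squarefree monomial x0*x1*x2*x3*x4
print(in_symbolic_power(m, 2), in_ordinary_power(m, 2))  # True True
print(in_symbolic_power(m, 3), in_ordinary_power(m, 3))  # True False
```

Consistent with the corollaries quoted in the text, the two tests agree at t = 2 but separate at t = 3, where the product of all the variables is the extra symbolic generator.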
mar on the noether number of cziszter institute of mathematics hungarian academy of sciences u budapest hungary abstract a group of order pn p prime has an indecomposable polynomial invariant of degree at least if and only if the group has a cyclic subgroup of index at most p or it is isomorphic to the elementary abelian group of order or the heisenberg group of order keywords polynomial invariants degree bounds sequences introduction let g be a finite group and v a over a field f of characteristic not dividing the group order the g v is the maximal degree in a minimal generating set of the ring of polynomial invariants f v g it is known that g v see even more it was observed that g supv g v where v runs over all over the base field f is typically much less than for an algebraically closed base field of characteristic zero it was proved in that g holds only if g is cyclic then it turned out that g for any group g see and moreover g holds if and only if g has a cyclic subgroup of index at most two with the exception of four particular groups of small order see theorem recently some asymptotic extensions of this result were given in our goal in the present article is to establish the following strengthening of this kind of results for the class of theorem if g is a finite for a prime p and the characteristic of the base field f is zero or greater than p then the inequality g p partially supported by the national research development and innovation office nkfih grants erc hu and holds if and only if g has a cyclic subgroup of index at most p or g is the elementary abelian group or the heisenberg group of order the proof of theorem will be reduced to the study of a single critical case the heisenberg group hp which is the extraspecial group of order and exponent p for an odd prime we prove about this the following result theorem for any prime p and base field f of characteristic or greater than p we have hp the paper is organised as follows section contains some technical results on sequences over abelian groups that will be needed later in section we reduce the proof of theorem to that of theorem then in section we explain the main invariant theoretic idea behind the proof of theorem which is also applicable in a more general setting the proof itself of theorem will then be carried out in full detail in section finally section completes our argument by showing that for the case p we have in any characteristic some preliminaries on sequences we follow here in our notations and terminology the usage fixed in let a be an abelian group noted additively by a sequence s over a subset a we mean a multiset of elements of they form a free commutative monoid with respect to concatenation denoted by s t and unit element the empty sequence this has to be distinguished from the zero element of a the sequence a a a obtained by the repetition of an element a a is denoted by a k this has to be distinguished from the product ka a the multiplicity of an element a a in a sequence s is denoted by va s we also write a s to indicate that va s we say that t is a subsequence of s and write t s if there is a sequence r such that s t in this case we also write s t the length of a sequence denoted by can be expressed as va s whereas the sum of a sequence s an is s an a and by convention we set we say that s is a sequence if s the relevance of sequences for our topic is due to the fact that for an abelian group a the noether number a coincides with the davenport constant d a which is defined as the maximal length of a 
sequence over a not containing any proper subsequence see chapter its value for is given by the following formula theorem d c cpnr r x pni a variant of this notion is the kth davenport constant dk a defined for any k as the maximal length of a sequence s that can not be factored as the concatenation s of sequences si over a its numerical value is much less known for some recent results see we shall only need the fact that according to theorem dk cp cp kp p the following consequence of the definition of dk a will also be used lemma lemma any sequence s over an abelian group a of length at least dk a factors as s sk r with some sequences si we define for any sequence s over a the set of all partial sums of s as s t t s if s then s is called free the next result could also be deduced from the theorem see corollary but we provide here an elementary proof for the reader s convenience lemma let p be a prime then for any sequence s over cp we have s min p proof we use induction on the length of for the claim is trivial otherwise consider a sequence s a where the claim holds for we have s s a s where s s s then either s a s or else a s and a s s that is when s is a subgroup of cp containing a but since cp has only two subgroups and by assumption s a this means that s cp lemma theorem a sequence s over cp p prime of length p is free if and only if s a for some a cp lemma proposition let p be a prime and s be a sequence over cp cp of length then s has a subsequence x s of length p or we close this section with a technical result its motivation and relevance will become apparent through its application in the proof of proposition for any function defined on a and any sequence s over a we will write s for the sequence obtained from s by applying lemma let s be a sequence over cp of length if we have s p then s r where each si is a sequence and proof let denote the maximal integer such that s r for some sequences si then each si is irreducible hence p and r is free hence p assuming that we get s s p p p s whence s p follows in contradiction with our assumption proposition let a cp cp for some prime p and a cp the projection onto the first component if s is a sequence over a with and s p then for any given subsequence t s of length p there is a a factorisation s r where each si is a sequence over a while t s and cp proof let s s t be the maximal subsequence such that s then by assumption as p so there is a subsequence x s of length p or by lemma we have two cases i if then x for some sequences such that p and p as d cp cp by ii if p then we can take x then we have t so again by lemma we find a sequence s t of length p as above in both cases t s and by construction consequently cp cp hence by lemma we have a factorisation s r with sequences si for each i finally in both cases we had p and hence p by lemma reduction of theorem to theorem our main tool here will be the kth noether number g v which is defined for any k as the greatest integer d such that some invariant of degree d exists which is not contained in the ideal of f v g generated by the products of at least k invariants of positive degree this notion was introduced in section with the goal of estimating the ordinary noether number from information on its composition factors this was made possible by lemma according to which for any normal subgroup n g we have g v n v as observed in chapter if a is an abelian group then a coincides with dk a so that we can use in the applications of proof of theorem assuming theorem the if part follows from 
proposition which states that c g for any subgroup c so if c is cyclic of index at most p then g c g c moreover by and by proposition below p the only if part for p follows from theorem so for the rest we may assume that p let g be a group of order pn for which holds if g is then it has a normal subgroup n cp cp by lemma we claim that must be cyclic for otherwise by applying lemma to the factor group we find a subgroup k such that n k g and cp cp but then we get using and that k cp cp cp p p p as g k by lemma we get a contradiction with now let g g be such that gn generates then g p has order p or in the first case hgi has index p in g and we are done in the other case hgi hence g n if g acts trivially on n then g contains a subgroup h cp cp cp for which we have h by hence g h as p a contradiction this shows that g must act on cp cp it is well known that aut cp cp gl p has order p so its sylow must have order p and it is isomorphic to cp therefore g p must act trivially on n so if n then g p and the subgroup hn g pi is isomorphic to cp cp cp but this was excluded before the only case which remains open is that n and g cp cp cp where the factor group cp acts on cp cp this is the heisenberg group denoted by hp by theorem we have hp for all p under our assumption on the characteristic of the base field so among the heisenberg groups the inequality can only hold for remark the precise value of the noether number is already known for all the which satisfy according to theorem as the theorem states equality holds in for and for the rest the groups of order pn which have a cyclic subgroup of index p were classified by burnside see theorem as follows i if g is abelian then either g is cyclic with g pn or g in which case it has g p by ii if g is and p then g is isomorphic to the modular group mpn we have mpn by remark iii if g is and p then g is the dihedral group or the group or the generalised quaternion group we have and by theorem altogether these results imply that for any g we have g p p and this inequality is sharp only for the case p remark the notion of the davenport constant d g originally defined only for abelian groups as in section was extended to any finite group g in for the conjectural connection between the noether number and this generalisation of the davenport constant see section and invariant theoretic lemmas let us fix here some notations related to invariant rings for any vector space v over a field f we denote its coordinate ring by f v we say that a group g has a left action on v or that v is a if a group homomorphism g gl v is given and we abbreviate g v by writing g v for any g g and v v by setting f g v f g v for any f f v we obtain a right action of g on f v the ring of polynomial invariants is defined as f v g f f v f g f for all g g if the ring f v n is already known for some normal subgroup n g then f v g as a vector space is spanned by its elements of the form m where m runs over the set of all monomials and f v n f v g is the f v g epimorphism defined as x g m m see chapter when n is trivial this definition amounts to g b hom g the reynolds operator given any character g the set f v g f f v f g g f constitutes the f v g of of weight if the restriction of to n is trivial when then these can be obtained by the projection map f v n f v g defined with the analogous formula x g ug u f v and f v g are graded rings f v d denotes for any d the vector g space of degree d homogeneous polynomials and f v g d f v v d the l g g g set f v g f v d is a maximal ideal in f v while 
f v f v the ideal of f v generated by all polynomials of positive degree is the so called this ideal will be our main object of interest since as observed in section the graded factor ring f v f v g f v is finite dimensional and its top degree denoted by b g v yields an upper bound on the noether number by an easy argument using the reynolds operator g v b g v it is well known that g v is unchanged when we extend the base field so we will assume throughout this paper that f is algebraically closed lemma let g be a finite group with a normal subgroup n such that is abelian let w be a over f and assume that k g then f w n f w f w for any k d proof f w n regarded as a has the direct sum decomposition l g here we used both our assumptions on this means f w p that any element u f w n u can be written as a sum u n now for any k and uk f w we have k k y y x x ui ui uk the term uk belongs to the ideal f w g f w whenever the se quence over contains a subsequence but this holds for every term on the right of as k d lemma if in lemma the factor group cp is cyclic of prime order then for any g and any elements f w n we have the relation g f w f w proof observe that in with k p the weight sequence bp is free if and only if and is over c by lemma as a result we get x f w g f w bp replacing here and with and respectively and observing that g by definition we have u g u for any u f w n we infer that must belong to the same residue class modulo the ideal f w g f w to which does belong this proves our claim the heisenberg group hp the heisenberg group hp ha bi can be defined by the presentation ap bp cp a b c a c b c where a b denotes the commutator ab the subgroups a ha ci and b hb ci are normal and isomorphic to cp cp the the center and the derived subgroup of hp all coincide with hci so that hp is extraspecial in particular hp is also isomorphic to cp cp taking into account only the subgroup structure of hp the best upper bound that we can give about its noether number by means of and is the following hp cp cp cp p our goal in this section will be to enhance this estimate by analysing more closely the invariant rings of hp let f be an algebraically closed field with char f p so that there is a primitive root of unity f that will be regarded as fixed throughout this paper the irreducible hp over f are then of two types i composing any group homomorphism hom cp cp with the canonic surjection hp hp cp cp yields irreducible representations of hp ii for each primitive root of unity i f where i p h take the induced representation inda p hvi where hvi is a left such that v and i in the basis v this representation is then given in terms of matrices in the following form with ip the p p identity matrix a b c i ip i each is irreducible by mackey s criterion see and for i i it is easily seen from the matrix corresponding to c that and are as adding the squares of the dimensions of the above irreducible hp we get p so that no other irreducible hp exist as a result an arbitrary hp w over f has the canonic direct sum decomposition w u where u consists only of irreducible representations of hp with hci in their kernel while each vi is an isotypic hp consisting of the direct sum of ni isomorphic copies of the irreducible representation vi i i z ni times next we recall how does the action of g on w extend to the coordinate ring f w when speaking of a coordinate ring f f xi we always tacitly assume that the variables xi k form a dual basis of the basis used at by our convention from section hp acts from the right on the 
variables xg v x g v for all g hp so we can rewrite as xbi k xi mod p xai k ik xi k xci k i xi k here by some abuse of notation we identified the integers k occurring as indexes with the modulo p residue classes they represent this shows that the action of the subgroup a on a variable xi k is completely determined by the modulo p residue classes of the exponents ik and i of in we will call xi k ik i the weight of the variable xi k we shall also refer to the projections xi k ik and xi k i with this notation it is immediate from that for any n z and x xi k n xb x n x and n xb x where the subtraction and multiplication with n is understood in this implies the observation which will be used frequently later on that for any variable x with x and any arbitrarily given w there is always an element g hbi such that xg our discussion also shows that for a variable y f w we have y if and only if y f u and otherwise the value y i determines the isotypic hp vi such that y f vi any monomial u f w is an too hence we can associate a weight u j i to it so that ua j u and uc i u obviously then uv u v for any monomials u if u yn for some variables yi f w with repetitions allowed then we can form the sequence u yn over a which will be called the weight sequence of u obviously u u yn with the notations of section observe that a monomial u is if and only if u that is if u is a sequence over a finally we set u yn and u yn definition we call two monomialsqu v f w homologous denoted by qd d gn u v if deg u deg v d and u yn while v yn for some variables yn f w with repetitions allowed and group elements gn hbi observe that a monomial v obtained from a monomial u by repeated applications of will be homologous to it in the above sense proposition let p if u f w is a monomial with deg u u p and v u is a monomial such that deg v p and v then for any homologous monomial v v there is a homologous monomial u such that v and u f w g f w proof we use induction on the degree d deg v deg v if d then v v so we are done by taking u suppose now that the claim holds for some d p it suffices to prove that for any given divisor xv u where x is a variable deg v d xv and for any v v and g hbi a monomial u exists such that xg v and u f w g f w by the inductive hypothesis we already have a monomial u such that v and u f w g f w as u and x divides there is a t hbi such that xt divides by applying proposition to the weight sequences s t v we obtain a factorisation such that ui f w a for all i p up f w v divides u and we have two cases i if xt or similarly if xt then take up g g we have x v u and u u u while u f w f w by lemma ii otherwise xt uk for some k by our assumption on there is a divisor w with w xt as xt x there is an h hbi for which w h xt then for r we have u and f w g f w by lemma take the factorisation h t h h t where w x uk and ui for the rest by construction f w a for all i p v divides and xt so this factorisation of falls under case i and we are done we need some further notations the decomposition induces an isomorphism f w f u f f which in turn yields for any monomial m f w a factorisation m such that f u and mi f vi for all i then for each i the decomposition gives the n i j identifications f vi f f xi k k p j ni j where we set xi k xi k the variable xi k introduced at is placed in the jth tensor factor so for any monomial mi f vi we n j have a factorisation mi mi mi i where each monomial mi depends j only on the set of variables xi k k p observe finally that two monomials u v f are homologous u v if and only j j if deg 
ui deg vi for all i p and j ni we shall also need the polarisation operators defined for any polynomial f f w by the formula t i f x t s xi k k f s s where k denotes partial derivation with respect to the variable xi k all polarisation operations t i are degree preserving deg f deg f g and f f g therefore by the leibniz rule g f w g f w f w f w g f w g f w f w f w and proposition let p and assume that char f is or greater than if a monomial m f w has deg m then m f w g f w proof consider the factorisation m derived from as described above observe that for the weight sequence s m we have hci m deg so if deg p then m f w f w by lemma and we are done as d d cp cp by hci hence f w f w g f w by lemma it remains that deg then we must have deg mi p for some i say i as otherwise deg m deg p p would n follow take the factorisation corresponding to the direct j decomposition we proceed by induction on m deg j assume first that m this means that deg p for some j say j now let v be an arbitrary divisor of with degree deg v p q and let v xg for some variable x f then v is by construction moreover by we have v x and v x p x and consequently v is now as v v we can find by proposition a monomial m such that g m f w g f w and v m but then m f w f w and we are done for this case now let m as deg p we can take a divisor v such that v v i v j for some indices i j where we have deg v i m i j and deg v j then the monomial v m is homologous with this v and consequently by proposition a monomial f w exists such that m f w g f w and v m our claim will now follow by g proving that m f w f w i j to this end observe that for the monomial we have m hence by the induction hypothesis f w g f w i j already holds moreover m m by construction hence m f w g f w by and we are finished because by our assumption on f we are allowed to divide by m p char f proof of theorem from proposition we see that f w as a module over f w g is generated by elements of degree at most equivalently for the top degree in the factor ring f w f w g f w we have the estimate b g w whence by we conclude that g w the case proposition consider v for a primitive third root of unity f as given by then v proof let f v f x y z with the variables conforming our conventions f v is spanned by the elements m m mb mb where m is any monomial an easy argument shows that xyz y z are the only irreducible monomials then by enumerating all ainvariant monomials of degree at most we see that they have degree or so that for d we have f v h d rd where r f xyz x x y now if we assume that v then f v r follows observe however that all the generators of r are symmetric polynomials so that r f v on the other hand y f v is not a symmetric polynomial whence y this is a contradiction which proves that v the upper bound on will be obtained by an argument very similar to propositions and but since there are many different details too we preferred to give a treatment of this case here proposition if char f then proof suppose that w holds for a w then there is a monomial m f w a with deg m such that m f w g f w as otherwise for any d the space f w g spanned by the elements m d g would be contained in f w let s m identify hci with and let di vi s for i recall that we have the factorisation m corresponding to the direct decomposition w u so that deg mi di for i we may assume by symmetry that a we claim that s is a sequence over and this is only possible if mod so let for some integer k denoting by s the maximum number of sequences into which s can be factored we have s k as otherwise 
by lemma applied hci with n hci we get m f w f w g f w since and d by on the other hand subtracting from this inequality the previous one yields k whence the claim b for any w with deg w there is a factorisation m with ui f w a such that w and y for some variable y as deg there is a factorisation r with a f w a setting rw enforces f w here deg a g d a as otherwise f w a and m f w f w f w by lemma a contradiction therefore we can not have for then by we have deg deg so that and contradicting the assumption that is a sequence over a as a result there is a variable y not dividing whence the claim for any divisor v with deg v and any monomial v v there is a monomial f w such that v and m f w g f w let v xw and v xg w where deg w w w and g hbi by induction on deg v assume that we already have a monomial m such t that m f w g f w and x w m for some t hbi according to there are factorisations with ui f w a such that w and y for some variable y f we have two cases i if we can take y xt in one these factorisations then for m we have f w g f w by lemma so we are done as v m ii otherwise t t necessarily x w and y x still however there is an h hbi such that g y h xt hence for m we have m f w f w by lemma and we obtain a factorisation falling under case i h t by setting xt h y so we are done again now we proceed as in the proof of proposition for sake of nthe simplicity from now on we rename our variables so that f f s t f xi yi zi i moreover we abbreviate t as i if we have deg for some i then we can apply with g v xi yi zi f w g concluding that m f w f w a contradiction i otherwise if deg for some i then still there is a j i such j that deg after an application of we may assume that m is divisible by xj but then m i for the monomial mxi which falls under case hence m f w g f w by a contradiction finally if deg deg then after an application of we may assume that now consider the relation after multiplying with we get on the left hand side f w g f w by as all the three monomials occurring here fall under case and on the right hand g side f w g f w whence m f w f w follows this contradiction completes our proof now comparing proposition and immediately gives corollary if char f then remark it would be interesting to know if theorem also extends to the whole case for any field f whose characteristic does not divide just as it is the case for p by the above result acknowledgements the author is grateful to domokos for many valuable comments on the manuscript of this paper he also thanks the anonymous referee for many suggestions to improve the presentation of this material references berkovich groups of prime power order volume i of de gruyter expositions in mathematics de gruyter berlin new york cziszter domokos groups with large noether bound ann de l institut fourier pp cziszter domokos the noether number for the groups with a cyclic subgroup of index two journal of algebra pp cziszter domokos on the generalised davenport constant and the noether number central european journal of mathematics pp cziszter domokos and geroldinger the interplay of invariant theory with multiplicative ideal theory and with arithmetic combinatorics in scott chapman fontana geroldinger olberding eds multiplicative ideal theory and factorization theory pp cziszter domokos and the noether numbers and the davenport constants of the groups of order less than domokos noether s bound for polynomial invariants of finite groups arch math basel no pp fleischmann the noether bound in invariant theory of finite groups ad fogarty on 
Noether's bound for polynomial invariants of a finite group, Electron. Res. Announc. Amer. Math. Soc. Freeze and Schmid, Remarks on a generalization of the Davenport constant, Discrete Math., December. Geroldinger and Grynkiewicz, The large Davenport constant I: Groups with a cyclic, index 2 subgroup, J. Pure Appl. Algebra. Geroldinger and Halter-Koch, Non-unique factorizations: Algebraic, combinatorial and analytic theory, Monographs and Textbooks in Pure and Applied Mathematics, Chapman & Hall/CRC. Grynkiewicz, The large Davenport constant II: General upper bounds, J. Pure Appl. Algebra. and Pyber, Finite groups with large Noether number are almost cyclic. Neusel and Smith, Invariant theory of finite groups, Mathematical Surveys and Monographs, Providence: American Mathematical Society. Noether, Der Endlichkeitssatz der Invarianten endlicher Gruppen, Math. Ann. Schmid, Finite groups and invariant theory, in Malliavin (ed.), Topics in Invariant Theory, Lecture Notes in Mathematics, Springer. Serre, Représentations linéaires des groupes finis, Hermann, Paris. Sezer, Sharpening the generalized Noether bound in the invariant theory of finite groups, J. Algebra.
4
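The row above, which carries label 4, repeatedly uses the Davenport constant D(A) of a finite abelian group, that is, the maximal length of a minimal zero-sum sequence, or equivalently one more than the maximal length of a zero-sum-free sequence, together with the value D(C_p ⊕ C_p) = 2p - 1. The sketch below is a sanity check of my own (the function name davenport_constant and its search strategy are not taken from the paper): it brute-forces D for tiny groups by a depth-first search over multisets of nonzero elements, tracking the set of attainable nonempty partial sums and pruning as soon as 0 becomes reachable.

```python
from itertools import product

def davenport_constant(moduli):
    # Davenport constant of Z_{m1} x ... x Z_{mk}: one more than the maximal
    # length of a zero-sum-free sequence (no nonempty subsequence sums to 0).
    zero = tuple(0 for _ in moduli)
    elements = list(product(*(range(m) for m in moduli)))
    nonzero = [g for g in elements if g != zero]

    def add(a, b):
        return tuple((x + y) % m for x, y, m in zip(a, b, moduli))

    best = 0

    def extend(start, sums, length):
        nonlocal best
        best = max(best, length)
        # Sequences are multisets, so only extend with elements of index >= start.
        for i in range(start, len(nonzero)):
            a = nonzero[i]
            new_sums = {a} | sums | {add(s, a) for s in sums}
            if zero not in new_sums:  # still zero-sum free, keep extending
                extend(i, new_sums, length + 1)

    extend(0, set(), 0)
    return best + 1

# Olson's formula for p-groups: D = 1 + sum of (p^{n_i} - 1) over the factors.
print(davenport_constant([3, 3]))  # expected 5 = 2*3 - 1
print(davenport_constant([5]))     # expected 5 (for a cyclic group C_n, D = n)
```

For C_3 ⊕ C_3 the search returns 5 = 2*3 - 1 and for the cyclic group C_5 it returns 5, in line with the formula for p-groups that the text quotes; the multiset restriction (non-decreasing index) keeps the search small enough to run instantly at this size.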
secret sharing and shared information nov johannes rauh abstract secret sharing is a cryptographic discipline in which the goal is to distribute information about a secret over a set of participants in such a way that only specific authorized combinations of participants together can reconstruct the secret thus secret sharing schemes are systems of variables in which it is very clearly specified which subsets have information about the secret as such they provide perfect model systems for information decompositions however following this intuition too far leads to an information decomposition with negative partial information terms which are difficult to interpret one possible explanation is that the partial information lattice proposed by williams and beer is incomplete and has to be extended to incorporate terms corresponding to higher order redundancy these results put bounds on information decompositions that follow the partial information framework and they hint at where the partial information lattice needs to be improved introduction williams and beer have proposed a general framework to decompose the multivariate mutual information i s xn between a target random variable s and predictor random variables xn into different terms called partial information terms according to different ways in which combinations of the variables xn provide unique shared or synergistic information about williams and beer argue that such a decomposition can be based on a measure of shared information the underlying idea is that any information can be classified according to who knows but is this true a situation where the question who knows what is easy to answer very precisely is secret sharing a part of cryptography in which the goal is to distribute information the secret over a set of participants such that the secret can only be reconstructed if certain authorized combinations of the participants join their information see beimel for a survey the set of authorized combinations is called the access structure formally the secret is modelled as a random variable s and a secret sharing scheme assigns a random variable xi to each participant i in such a way that if ik is an authorized set of participants then s is a function of xik that is h xik and conversely if ik is not authorized then h xik it is assumed that the participants know the scheme and so any authorized combination of participants can reconstruct the secret if they join their information a secret sharing scheme is perfect if sets of participants know nothing about the secret h xik h s thus in a perfect secret sharing scheme it is very clearly specified who knows in this sense perfect secret sharing schemes provide model systems for which it should be easy to write down an information decomposition one connection between secret sharing and information decompositions is that the set of access structures of secret sharing schemes with n participants is in mathematics subject classification key words and phrases information decomposition partial information lattice shared information secret sharing johannes rauh correspondence with the partial information terms of williams and beer this correspondence makes it possible to give another interpretation to all partial information terms namely the partial information term is a measure of how similar a given system of random variables is to a secret sharing scheme with a given access structure this correspondence also allows to introduce the secret sharing property that makes precise the above 
intuition an information decomposition satisfies this property if and only if any perfect secret sharing scheme has just a single partial information term which corresponds to its access structure lemma states the secret sharing property is implied by the williams and beer axioms which shows that the secret sharing property plays well together with the ideas of williams and beer proposition shows that in an information decomposition that satisfies a natural generalization of this property it is possible to prescribe arbitrary nonnegative values to all partial information terms these results suggest that perfect secret sharing schemes fit well together with the ideas of williams and beer however following this intuition too far leads to inconsistencies as theorem shows extending the secret sharing property to pairs of perfect secret sharing schemes leads to negative partial information terms while other authors have started to build an intuition for negative partial terms and argue that they may be unavoidable in information decompositions the concluding section collects arguments against such claims and proposes as another possible solutions that the williams and beer framework is incomplete and is missing nodes that represent higher order redundancy cryptography where the goal is not only to transport information as in coding theory but also to keep it concealed from unauthorized parties has initiated many interesting developments in information theory for example by introducing new information measures and older ones see for example maurer and wolf csiszar and narayan this manuscript focuses on another contribution of cryptography probabilistic systems with distribution of information the remainder of this article is organized as follows section summarizes definitions and results about secret sharing schemes section introduces different secret sharing properties that fix the values that a measure of shared information assigns to perfect secret sharing schemes and combinations thereof the main result of section is that the pairwise secret sharing property leads to negative partial information terms section discusses the implications of this incompatibility result perfect secret sharing schemes we consider n participants among whom we want to distribute information about a secret in such a way that we can control which subsets of participants together can decrypt the secret definition an access structure a is a family of subsets of n closed to taking supersets elements of a are called authorized sets a secret sharing scheme with access structure a is a family of random variables s xn such that h xa s h xa whenever a a here xa xi for all subsets a n a secret sharing scheme is perfect if h xa s h xa h s whenever a a secret sharing and shared information the condition for perfection is equivalent to h h s see beimel for a survey on secret sharing theorem for any access structure a and any h there exists a perfect secret sharing scheme with access structure a for which the entropy of the secret s equals h s proof perfect secret sharing schemes for arbitrary access structures were first constructed by ito et al in this construction the entropy of the secret equals bit combining n copies of such a secret sharing scheme gives a secret sharing scheme with a secret of n bit as explained in beimel claim the distribution of the secret may be perturbed arbitrarily as long as the support of the distribution remains the same in this way it is possible to prescribe the entropy of the secret in a perfect 
secret sharing scheme example let s be independent uniform binary random variables and let a s b s c s where denotes addition modulo or the xor operation then s a b c is a perfect secret sharing scheme with access structure a b a c b c a b c it may be of little surprise that integer addition modulo k is an important building block in many secret sharing schemes while existence of perfect secret sharing schemes is solved there remains the problem of finding efficient secret sharing schemes in the sense that the variables xn should be as small as possible in the sense of a small entropy given a fixed entropy of the secret for instance in example h xi s for all i see beimel for a survey since an access structure a is closed to taking supersets it is uniquely determined by its elements a a a if b a and b a then b for instance in example the first three elements belong to a the set a has the property that no element of a is a subset of another element of a such a collection of sets is called an antichain conversely any such antichain equals the set of elements of a unique access structure the antichains have a natural lattice structure which was used by williams and beer to order the different values of shared information and organize them into what they call the partial information lattice the same lattice also has a description in terms of secret sharing definition let ak and bl be antichains then ak bl for any bi there exists aj with aj bi the partial information lattice for the case n is depicted in figure lemma let a be an access structure on n and let bl be an antichain then bl are all authorized for a if and only if a bl proof the statement directly follows from the definitions johannes rauh information decompositions of secret sharing schemes williams and beer proposed to decompose the total mutual information i s xn between a target random variable s and predictor random variables xn according to different ways in which combinations of the variables xn provide unique shared or synergistic information about one of their main ideas is to base such a decomposition on a single measure of shared information which is a function i s yk that takes as arguments a list of random variables of which the first s takes a special role to arrive at a decomposition of i s xn the variables yk are taken to be combinations xa xi of xn corresponding to subsets a of n for simplicity s xak is denoted by s ak for all ak n williams and beer proposed a list of axioms that such a measure should satisfy it follows from these axioms that it suffices to consider the function s ak in the case that ak is an antichain moreover s is a monotone function on the partial information lattice definition thus it is natural to write each value s ak on the lattice as a sum of local terms corresponding to the antichains that lie below ak in the lattice x s ak s bl bl ak the terms are called partial information terms this representation always exists and the partial information terms are uniquely defined using a inversion however it is not guaranteed that is always nonnegative if is nonnegative then is called locally positive williams and beer also defined a function denoted by imin that satisfies their axioms and that is locally positive while the framework is intriguing and has attracted a lot of further research as this special issue illustrates the function imin has been critiziced as not measuring the right thing the difficulty of finding a reasonable measure of shared information that is locally positive bertschinger et 
rauh et has led some to argue that maybe local positivity is not a necessary requirement for an information decomposition this issue is discussed further in section the goal of this section is to present additional natural properties for a measure of shared information that relate secret sharing with the intuition behind information decompositions in a perfect secret sharing scheme any combination of participants knows either nothing or everything about this motivates the following definition definition a measure of shared information has the secret sharing property if and only if for any access structure a and any perfect secret sharing scheme xn s with access structure a the following holds h s if ak are all authorized s ak otherwise for any ak xn lemma the secret sharing property is implied by the williams and beer axioms proof the williams and beer axioms imply that s ak i s ai whenever ai is not authorized on the other hand when ak are all authorized then the monotonicity axiom implies s ak s ak s s s h s secret sharing and shared information perfect secret sharing schemes lead to information decompositions with a single nonzero partial information term lemma if has the secret sharing property and if xn s is a perfect secret sharing scheme with access structure a then h s if a ak s ak otherwise for any ak xn proof suppose that a and let s ak be the right hand side of we need to show that since the inversion is unique it suffices to show that where x s ak s bl bl ak by lemma s ak h s if ak are all authorized otherwise for any ak xn from which the claim follows what happens when we have several secret sharing schemes involving the same participants in order to have a clear intuition assume that the secret sharing schemes satisfy the following definition definition let al be access structures on n a combination of perfect secret sharing schemes with access structures al consists of random variables sl xn such that si xn is a perfect secret sharing scheme with access structure ai for i l and such that h si sl xa h si if a ai this definition ensures that the secrets are independent in the sense that knowing some of the secrets provides no information about the other secrets formally one can see that the secrets are probabilistically independent as follows for any a ai for example a h si sl h si sl xa h si in definition if two access structures ai aj are identical then we can replace si and sj by a single random variable si sj and obtain a smaller combination of perfect secret sharing schemes in a combination of perfect secret sharing schemes it is very clear who knows what namely a group of participants knows all secrets for which it is authorized while it knows nothing about the remaining secrets this motivates the following definition definition a measure of shared information has the combined secret sharing property if and only if for any combination of perfect secret sharing schemes with access structures al sl ak h si ak ai the entropy of those secrets for which ak are all authorized has the pairwise secret sharing property if and only if the same holds true in the special case l johannes rauh the combined secret sharing property implies the pairwise secret sharing property the pairwise secret sharing property does not follow from the williams and beer axioms for example imin satisfies the williams and beer axioms but not the pairwise secret sharing property as will become apparent in theorem so one can ask whether the pairwise and combined secret sharing properties are compatible 
with the williams and beer axioms this question is difficult to answer since currently there are only two proposed measures of shared information that satisfy the williams and beer axioms namely imin and the minimum of mutual informations barrett immi s ak min i s ai k both measures do not satisfy the pairwise secret sharing property while there has been no further proposal for a function that satisfies the williams and beer axioms for arbitrarily many arguments several measures have been proposed for the bivariate case k notably ired of harder et al and f of bertschinger et al the appendix shows that si f at least satisfies the si combined secret sharing property as far as combinations of l perfect secret sharing schemes lead to information decompositions with at most l nonzero partial information terms lemma assume that has the combined secret sharing property if sl xn is a combination of perfect secret sharing schemes with pairwise different access structures al then si if ai ak sl ak for some i l otherwise for any ak xn the proof is similar to the proof of lemma and omitted the combined secret sharing property implies that any combination of nonnegative values can be prescribed as partial information values proposition suppose that a nonnegative number ha is given for any antichain a for any measure of shared information that satisfies the combined secret sharing property there exist random variables s xn such that the corresponding partial measure satisfies s ak ak for all antichains a ak proof by theorem for each antichain a there exists a perfect secret sharing scheme sa a xn a with h sa ha combine independent copies of these perfect secret sharing schemes and let s sa a a a xn xn a a where a runs over all antichains then s xn is an independent combination of perfect secret sharing schemes and the statement follows from lemma unfortunately not every random variable s can be decomposed in such a way as a combination of secret sharing schemes however proposition suggests that given a measure of shared information that satisfies the combined secret sharing property s a can informally be interpreted as a measure that quantifies how much xn s looks like a perfect secret sharing scheme with access structure lemma suppose that is a measure of shared information that satisfies the pairwise secret sharing property if and are independent then secret sharing and shared information in the language of ince the lemma says that the pairwise secret sharing property implies the independent identity property is pair of perfect secret sharing proof let then schemes with access structures and the statement follows from definition since is not authorized for and is not authorized for incompatibility with local positivity unfortunately although the combined secret sharing property very much fits the intuition behind the axioms of williams and beer it is incompatible with a nonnegative decomposition according to the partial information lattice theorem let be a measure of shared information that satisfies the williamsbeer axioms and has the pairwise secret sharing property then is not nonnegative proof the xor example which was already used by bertschinger et al and rauh et al to prove incompatibility results for properties of information decompositions can also be used here let be independent binary uniform random variables let and let s observe that the situation is symmetric in in particular are also independent and the following values of can be computed from the assumptions s s s bit since is a 
function of and by the monotonicity axiom s by lemma by monotonicity s moreover s bit since bit is the total entropy in the system but then s s s s s bit bit bit where denotes values of that vanish thus is not nonnegative note that the random variables s from the proof of theorem form three perfect secret sharing schemes that do not satisfy the definition of a combination of perfect secret sharing schemes the three secrets are not independent but they are independent and so lemma does not apply remark the xor example from the proof of theorem which was already used by bertschinger et al and rauh et al was criticized by chicharro and panzeri on the grounds that it involves random variables that stand in a deterministic functional relation in the sense that chicharro and panzeri argue that in such a case it is not appropriate to use the full partial information lattice instead the functional relationship should be used to eliminate or identify nodes from the lattice thus while the monotonicity axiom of williams and beer implies s s and so is not part of the partial information lattice the same axiom also implies that s s in the xor example and so should similarly be excluded from the lattice when analyzing this particular example but note that the first argument johannes rauh figure the partial information lattice for n each node is indexed by an antichain the values in bit of the shared information in the xor example from the proof of theorem according to the pairwise secret sharing property are given after the colon is a formal argument that is valid for all joint distributions of s while the second argument takes into account the particular underlying distribution it is easy to work around this objection the deterministic relationship disappears when an arbitrarily small stochastic noise is added to the joint distribution to be precise let be independent binary random variables and let be binary with if p otherwise for for the example from the proof is recovered assuming that the partial information terms depend continuously on this joint distribution the partial information term s will still be negative for small thus assuming continuity the conclusion of theorem still holds true when the information decomposition according to the full partial information lattice is only considered for random variables that do not satisfy any functional deterministic constraint remark analyzing the proof of theorem one sees that the independent identity axiom lemma is the main ingredient to arrive at the contradiction the same property also arises in the other uses of the xor example bertschinger et rauh et discussion perfect secret sharing schemes correspond to systems of random variables in which it is very clearly specified who knows in such a system it is easy to assign intuitive values to the shared information nodes in the partial information lattice and one may conjecture that the intuition behind this assignment is the same intuition that underlies the williams and beer axioms which define the partial information lattice moreover following the same intuition independent combinations of perfect secret sharing schemes can be used as a tool to construct systems of random variables with prescribable nonnegative values of partial information secret sharing and shared information unfortunately this extension to independent combinations of perfect secret sharing schemes is not without problems by theorem it leads to decompositions with negative partial information terms but what does it mean that the 
examples derived from the same intuition as the williams and beer axioms contradict the same axioms in this way is this an indication that the whole idea of information decomposition does not work and that the question posed in the first paragraph of the introduction can not be answered affirmatively there are several ways out of this dilemma the first solution is to assign different values to combinations of perfect secret sharing schemes this solution will not be pursued further in this text as it would change the interpretation of the information decomposition as measuring who knows the second solution is to accept negative partial values in the information decomposition it has been argued that negative values of information can be given an intuitive interpretation in terms of confusing or misleading information for also called local information quantities such as the mutual information i s x log p s this interpretation goes back to the early days of information theory fano sometimes this phenomenon is called misinformation ince wibral et however in the usual language misinformation refers to false or incorrect information especially when it is intended to trick someone macmillan publishers limited retrieved on which is not the effect that is modelled here thus the word misinformation should be avoided in order not to mislead the reader into the wrong intuition while negative information quantities are the situation is more problematic for average quantities when an agent receives sideinformation in the form of the value x of a relevant random variable x she changes her strategy while the prior strategy should be based on the prior distribution p s the new strategy should be based on the posterior p x clearly in a probabilistic setting any change of strategy can lead to a better or worse result in a single instance on average though never hurts and it is never advantageous on average to ignore which is why the mutual information is never negative similarly it is natural to expect of other information quantities it is difficult to imagine how correct or an aspect thereof can be misleading on average the situation is different for incorrect information where the interpretation of a negative value is much easier more conceptually i would suspect that an averaged information quantity that may change its sign actually conflates different aspects of just as the interaction information or conflates synergy and redundancy williams and beer in any case allowing negative partial values alters the interpretation of an information decomposition to a point where it is questionable whether the word decomposition is still appropriate when decomposing an object into parts the parts should in some reasonable way be for example in a fourier decomposition of a function the fourier components are never larger than the function in the sense of the and the sum of the squared of the fourier coefficients equals the squared of the original function as another example given a positive amount of money and two investment options it may indeed be possible to invest a negative share of the total amount into one of the two options in order to increase the funds that can be invested in the second option can argue whether the same should be true for quantities recently ince suggested to also write the mutual information as a difference of quantities johannes rauh however such short selling is regulated in many countries with much stronger rules than ordinary trading i do not claim that an information decomposition with 
negative partial information terms can not possibly make sense however it has to be made clear precisely how to interpret negative terms and it is important to distinguish between correct information that leads to a suboptimal decision due to unlikely events happening bad luck and incorrect information that leads to decisions being based on the wrong posterior probabilities as opposed to the correct conditional probabilities a third solution is to change the underlying lattice structure of the decomposition a first step in this direction was done by chicharro and panzeri who propose to decompose mutual information according to subsets of the partial information lattice however it is also conceivable that the lattice has to be enlarged williams and beer derived the partial information lattice from their axioms together with the assumption that everything can be expressed in terms of shared information that is according to who knows what shared information is sometimes equivalently called redundant information but it may be necessary to distinguish the two information that is shared by several random variables is information that is accessible to each single random variable but redundancy can also arise at higher orders an example is the infamous xor example from the proof of theorem in this example each pair xi xj is independent and contains of two bits but the total system has only two bits therefore there is one bit of redundancy however this redundancy bit is not located anywhere specifically it is not contained in either of and thus it is not shared information since the redundant bit is not part of it is not shared by in this sense this phenomenon corresponds to the fact that random variables can be pairwise independent without being independent this kind of higher order redundancy does not have a place in the partial information lattice so it may be that nodes corresponding to higher order redundancy have to be added when the lattice is enlarged in this way the structure of the inversion is changed and it is possible that the resulting lattice leads to nonnegative partial information terms without changing those cumulative information values that are already present in the original lattice if this approach succeeds the answer to the question from the introduction will be negative simply classifying information according to who knows what shared information does not work since it does not capture higher order redundancy the analysis of extensions of the partial information lattice is scope for future work acknowledgments i thank fero for teaching me about secret sharing schemes i am grateful to guido and pradeep kr banerjee for their remarks about the manuscript and to nils bertschinger jost and eckehard olbrich for many inspiring discussions on the topic i thank the reviewers for many comments in particular concerning the discussion i thank the organizers and participants of the pid workshop in december in frankfurt where the material was first presented secret sharing and shared information appendix a combined secret sharing properties for small k this section discusses the defining equation of the combined secret sharing property for k and k the case k is incorporated in the definition of a combination of perfect secret sharing schemes the following lemma implies that any measure of shared information that satisfies satisfies for k recall that williams and beer s axiom implies that s xa i s xa lemma let sl xn be a combination of perfect secret sharing schemes with access structures 
al then i sl xa h si a ai proof suppose that the secret for which a is authorized are sm then h sl h sm h sl sm xa h sl sm xa h sl l x h si on the other hand h sl sm xa l x h si xa l x h si sl xa l x h si pl by independence remark after definition h si h sl pm and h si h sm thus i sl xa h sl h sl h sm f the next result shows that the bivariate measure of shared information si s x y proposed by bertschinger et al satisfies eq for k the reader is f referred to loc cit for definitions and elementary properties of si proposition let sl xn be a combination of perfect secret sharing schemes with access structures al then f sl xa xa h si a si proof for given suppose that sm are the secrets for which at least one of or is authorized and that sl are the secrets for which neither nor is authorized alone let p be the joint distribution of sl let be the set of alternative joint distributions for sl that have the same marginal distributions as p on the subsets sl and sl according f we need to compare p with the elements of and find the to the definition of si maximum of hq sl over q where the subscript to h indicates with respect to which of these joint distributions the conditional entropy is evaluated define a distribution for sl by sl p sl p sl p sl then under p the secrets sl are independent of marginally and independent of and so sl are independent of johannes rauh the pair under on the other hand sm are a function of either or under p and so sm is a function of under thus sl sl hp sl on the other hand under any joint distribution q the secrets sm are functions of whence hq sl hq sl hp sl f it follows that solves the optimization problem in the definition of si suppose that the secrets for which is authorized are sr and that the secrets for which is authorized are ss sm with r s m one computes x sl h h si and r x sl h sr h si whence f sl xa xa si sl sl r x h si h ss sr references williams beer nonnegative decomposition of multivariate information beimel schemes a survey proceedings of the third international conference on coding and cryptology berlin heidelberg pp maurer wolf the intrinsic conditional mutual information and perfect secrecy proc ieee isit csiszar narayan secrecy capacities for multiple terminals ieee transactions on information theory ito saito nishizeki secret sharing scheme realizing general access structure proceedings of the ieee global telecommunication pp bertschinger rauh olbrich jost j shared information new insights and problems in decomposing information in complex systems in proc eccs springer pp rauh bertschinger olbrich jost reconsidering unique information towards a multivariate information decomposition proc ieee isit pp barrett an exploration of synergistic and redundant information sharing in static and dynamical gaussian systems corr harder salge polani a bivariate measure of redundant information phys rev e bertschinger rauh olbrich jost ay quantifying unique information entropy secret sharing and shared information ince measuring multivariate redundant information with pointwise common change in surprisal entropy chicharro panzeri synergy and redundancy in dual decompositions of mutual information gain and information loss entropy fano transmission of information mit press cambridge ma wibral lizier priesemann bits from brains for biologically inspired computing frontiers in robotics and ai macmillan publishers limited macmillan dictionary available at http retrieved on ince the partial entropy decomposition decomposing multivariate entropy and mutual 
information via pointwise common surprisal. E-mail address: jrauh. Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany.
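To make the secret sharing property concrete, here is a minimal, self-contained sketch (not taken from the paper; the scheme and all variable names are illustrative). It builds the classical 2-out-of-2 one-time-pad scheme, in which the secret S and the share X1 are independent uniform bits and X2 = S XOR X1, and verifies numerically that each unauthorized set carries zero information about S while the single authorized set {X1, X2} determines S completely, which is exactly the pattern the definition of the secret sharing property asks a measure of shared information to reproduce.

```python
from itertools import product
from math import log2

def marginal(joint, idx):
    """Marginal distribution over the variables at positions idx."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def mutual_information(joint, a, b):
    """I(A; B) for groups of variable positions a and b."""
    return (entropy(marginal(joint, a)) + entropy(marginal(joint, b))
            - entropy(marginal(joint, a + b)))

# 2-out-of-2 scheme: the secret S and the share X1 are independent uniform bits,
# and the second share is X2 = S XOR X1 (a one-time pad).
joint = {}
for x1, s in product((0, 1), repeat=2):
    joint[(x1, s ^ x1, s)] = 0.25

X1, X2, S = (0,), (1,), (2,)
print("H(S)       =", entropy(marginal(joint, S)))            # 1.0 bit
print("I(S; X1)   =", mutual_information(joint, S, X1))       # 0.0: {X1} is unauthorized
print("I(S; X2)   =", mutual_information(joint, S, X2))       # 0.0: {X2} is unauthorized
print("I(S; X1X2) =", mutual_information(joint, S, X1 + X2))  # 1.0: {X1, X2} is authorized
```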
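The incompatibility proof above revolves around the XOR example. The short sketch below is again illustrative rather than taken from the paper: the secret S is taken here to be the full triple (X1, X2, X3), which is the reading assumed for this illustration. It recomputes the quantities the proof and the later discussion rely on: each single variable carries 1 bit about S, each pair already carries the full 2 bits, and the three variables are pairwise independent even though their joint entropy is only 2 bits, which is the one bit of higher-order redundancy that is not located in any single variable.

```python
from itertools import product
from math import log2

def marginal(joint, idx):
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# X1, X2 independent uniform bits, X3 = X1 XOR X2.
joint = {}
for x1, x2 in product((0, 1), repeat=2):
    joint[(x1, x2, x1 ^ x2)] = 0.25

def I(a, b):
    """Mutual information between the variable groups at positions a and b."""
    return (entropy(marginal(joint, a)) + entropy(marginal(joint, b))
            - entropy(marginal(joint, a + b)))

X1, X2, X3 = (0,), (1,), (2,)
ALL = X1 + X2 + X3

# With S = (X1, X2, X3), I(S; A) = H(A) because S determines A.
print("I(S; Xi)   =", [entropy(marginal(joint, A)) for A in (X1, X2, X3)])   # 1 bit each
print("I(S; XiXj) =", [entropy(marginal(joint, A + B))
                       for A, B in ((X1, X2), (X1, X3), (X2, X3))])           # 2 bits each
print("H(S)       =", entropy(marginal(joint, ALL)))                          # 2 bits in total
print("I(Xi; Xj)  =", [I(A, B) for A, B in ((X1, X2), (X1, X3), (X2, X3))])   # pairwise independent
```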
a new class of permutation trinomials constructed from niho exponents oct tao bai and yongbo abstract permutation polynomials over finite fields are an interesting subject due to their important applications in the areas of mathematics and engineering in this paper we investigate the trinomial f x x xpq over the finite field where p is an odd prime and q pk with k being a positive integer it is shown that when p or f x is a permutation trinomial of if and only if k is even this property is also true for more general class of polynomials g x x x x where l is a nonnegative integer and gcd p q moreover we also show that for p the permutation trinomials f x proposed here are new in the sense that they are not multiplicative equivalent to previously known ones of similar form index terms finite fields permutation polynomials trinomials niho exponents multiplicative inequivalent ams introduction let fq denote the finite field with q elements and fq where q is a prime power a polynomial f x fq x is called a permutation polynomial of fq if the associated mapping f c f c permutes fq permutation polynomials were firstly studied by hermite for the finite prime fields and by dickson for arbitrary finite fields they have wide applications in coding theory cryptography and combinatorial designs for a finite field fq there are in total q permutation polynomials of fq and all of them can be obtained from the lagrange interpolation permutations with a few terms are of particular interest because of the simple algebraic expressions especially permutation binomials and trinomials attracted particular attention recent achievements on the study of permutation polynomials were surveyed in let p be a prime and k a positive integer a niho exponent over the finite field is a positive integer d satisfying d pj mod pk for some nonnegative integer j in the case of j it is called a normalized niho exponent researches in the past decades demonstrate that niho exponents corresponding author bai and xia are with the department of mathematics and statistics university for nationalities wuhan china xia is also with the hubei key laboratory of intelligent wireless communications university for nationalities wuhan china xia are good resources that lead to desirable objects in sequence design coding theory and cryptography recently a lot of permutation trinomials of the form k f x x xs p k xt p have been proposed where s and t are two integers and the coefficients and are restricted to for p li and helleseth gave a rather detailed list of known pairs s t and some new pairs such that f x is a permutation polynomial of in some permutation trinomials of of similar form were also presented for p li et al in investigated several permutation trinomials of of the form and proposed three conjectures which were later confirmed in and very recently for p wu and li in derived a series of sufficient conditions on s t and for f x to permute there are also some permutation polynomials constructed from niho exponents over with q being a power of an arbitrary prime hou in characterized the necessary and sufficient conditions on the coefficients for the polynomial ax bxq x to be a permutation of in for q mod the necessary and sufficient conditions for to be a permutation polynomial of were determined where t is a positive integer let t denote the trace function from to fq some permutation trinomials of of the form x xd were obtained in where and d is a niho exponent over in this paper we investigate the permutation property of the following 
trinomial f x x xpq where p is an odd prime and q pk for a positive integer it is easily verified that p q pq and q p are niho exponents over we show that for p or f x in is a permutation polynomial of if and only if k is even however for the case p such a result may not hold furthermore we prove that the above property is also true for more general polynomials g x x x x where l is a nonnegative integer and gcd p q in addition when p the permutation polynomials f x presented in are shown to be new in the sense that they are not multiplicative equivalent to the permutation polynomials of the form in the remainder of this paper is organized as follows section gives some preliminaries and notation including some useful lemmas in section we give the proofs of our main results section is devoted to demonstrating that the permutation trinomials f x given in are new when p section concludes the study preliminaries let p be a prime k a positive integer and q pk the trace function and the norm function from to fq will be denoted by t r x and n x respectively namely t r x x xq and n x x xq x the unit circle u of is defined by u x x in in order to prove the permutation property of the trinomials constructed from niho exponents the authors mainly used the following lemma which was proved by park and lee in and reproved by zieve in lemma let p be a prime and n a positive integer assume that d is a positive integer such that d pn h x fpn x and r is a integer then xr h x pn d is a permutation of fpn if and only if n i gcd r p ii xr h x d pn d and permutes where is the set of root of unity in the polynomials constructed from niho exponents over can always be rewritten as the form xr h x d with d q to determine the permutation property of the polynomials xr h x constructed from niho exponents by lemma the main task is to decide if xr h x unit circle u of however sometimes the corresponding polynomial xr h x d d d permutes the leads to fractional polynomial with high degree it is still a difficult problem in general to verify that xr h x d permutes u another general approach to investigating the permutation property of the polynomials constructed from niho exponents over is to concentrate on the subset fq more specifically assume that g x is a polynomial constructed from niho exponents over with coefficients in fq if we can show that g x is a permutation of fq and a permutation of fq respectively then g x is a permutation polynomial of to this end it is usually required that g x has the property g fq fq and the key step in the proof is to prove that g x is a permutation of fq this idea originated from and later was used in in this paper we will use this idea to prove our main result the following lemma is needed in the sequel its proof is trivial and is omitted here lemma let q be a prime power denote by t r x and n x the trace function and the norm function from to fq respectively then for any c c cq is uniquely determined by the pair t r c n c the following lemma is obtained by direct computations lemma let q be a prime power and t r x and n x be the trace function and the norm function from to fq respectively i if q then for any x t r t x n x and t r t x n x t x n x ii if q then for any x t r t x x t r t x x t r x t r t x n x t x x t r t x n x t x n x t x x and t r t x x t x n x t x x proof we only give the proof of ii in lemma of the expressions for t r t r t r and t r were given while that for t r was not now we compute t r to illustrate how to obtain the above results note that t x x xq x xq x xq x xq 
t r t r n x t r which implies t r t x n x t r substituting t r into we get the desired result lemma let u be defined as we have the following results i if q then y has no root in u if k is even and has two roots in u otherwise ii if q then y y has no root in u if k is even and has two roots in u otherwise proof i let be a primitive root of with q then the roots of y are which belong to u if and only if q is divisible by since q is divisible by if and only if k is odd it follows the desired result ii note that y y is an irreducible polynomial over because it has no solution in since the degree of y y is it follows that its two roots belong to we can rewrite y y as y then the two roots of y y in are where denote the two roots of in when k is even since and when k is odd thus when k is even we have which is not equal to when k is odd we have therefore when k is odd u from the above computations it follows the desired result the following lemma is a special case of exercise in for the reader s convenience we include a proof here lemma let fq be a finite field of characteristic then xp fq x is a permutation polynomial of fq if and only if u is not a p th power of an element of proof note that xp ux is a over fq and it is a permutation polynomial of fq if and only if it only has the root in fq thus xp ux is a permutation polynomial of fq if and only if u has no root in fq the latter exactly means that u is not a p th power in fq a new class of permutation trinomials from niho exponents in this section we present our main results about the permutation property of f x defined in and that of g x defined in the first main result is given in the following theorem theorem let q pk and f x be the trinomial defined in then for p or f x is a permutation polynomial of if and only if k is even before we prove this theorem in detail we mention some characterizations of f x and g x the three exponents appearing in f x are niho exponents since p q p q p pq p q p and q p q however the exponents in g x may not be niho exponents as we will see in the sequel the permutation property of f x and that of g x depend on a same condition after utilizing lemma if we obtain the condition for f x to permute then it is also true for g x in addition a permutation polynomial f x of the form is closely related to some permutation polynomial of the form in the next section we will study the relationship between f x in theorem and the previously known permutation trinomials of the form by comparison we will show that the permutation polynomial f x proposed here is new when p in order to prove theorem the following preparatory lemma is needed lemma let q pk and f x be defined in when p or for any x fq we have f x fq if and only if k is even proof assume that x fq note that f x fq if and only if x xpq xp x which is equivalent to xp xpq dividing the above equation by xp we get xp setting z can be rewritten as z p which equals z z z z z if p and if p note that if x fq then z u where u is defined as for p from we can conclude that for any x fq we have f x fq if and only if z satisfies z by lemma if k is even then z has no root in u thus in this case for any x fq we have f x fq f x fq if k is odd then z has two roots in u which shows that there are q elements x fq such that f x fq similarly for p we can obtain the same conclusion by and lemma from the above discussions it follows the desired result k proof of theorem note that in this proof q p and p is restricted to to prove that f x is a permutation of it suffices to show that for 
any c f x c has exactly one root in fq note that when x fq then f x xp which is a permutation of fq since gcd p q on the other hand if k is odd from the proof of lemma there are q elements x fq such that f x fq therefore when k is odd for some c fq there must exist at least two distinct elements such that f f thus f x is not a permutation polynomial of when k is odd next we prove that when k is even then f x is a permutation polynomial of f x c has exactly one root in we consider the following two cases case c fq then by lemma we can derive from f x c that x must belong to fq thus in this case f x c is equivalent to that xp obviously the latter has only one root in case c fq then by lemma the roots of f x c belong to fq next we will show that f x c has exactly one root in fq under the given conditions let t r x and n x be defined as we compute t r f x and n f x as follows t r f x x xpq x xpq xpq xp t rp x and n f x x xpq xp x x x x x p x n x t r n x t r n x t r when p or by lemma n f x in can be expressed in terms of t r x and n x as follows x t x n x t x n x for p n f x n x x t x n x t x x t x n x t x for p for any c fq from f x c we have t r f x t r c and n f x n c in what follows we will prove that t r x and n x are uniquely determined by c under the aforementioned conditions we only give the proof of this conclusion in the case p and for p it can be proved in the same way thus in the sequel we always assume that q by and we have t x t r c n x x t x n x t x x t x n x t x n c note that gcd q thus from t x t r c one knows that t r x is uniquely determined by therefore it suffices to show that n x is also uniquely determined by we consider the following two subcases subcase t r c then it follows that t r x from the second equation in we have n x n c and thus n c is also uniquely determined by subcase t r c then t r x for convenience put n x n c and s t x t c then divided by t x the second equation in can be rewritten as r s or equivalently r r s we claim that s is not equal to otherwise we have n c t c which implies cq c leading to c fq a contradiction to the assumption c fq let t r s then becomes note that t is not equal to since s is not equal to therefore can be further transformed into t t if we can show that t is not a fourth power in fq then by and lemma we can conclude that is uniquely determined by then from and it follows that n x is uniquely determined by thus it suffices to show that is not a fourth power in fq to this end we express in terms of c as follows n c c since c fq and t r c it follows that u let u suppose on the contrary that is a fourth power in fq then by we have u for some fq note that thus we can rewrite as which implies where denote the two root of in note that when k is even the two roots of which are belong to fq from it follows that u fq which contradicts to u u since fq u therefore is not a fourth power in fq and the desired result follows the discussions in subcases and show that for p when k is even and c fq we can derive from f x c that t r x and n x are uniquely determined by for p this conclusion can be similarly proved furthermore by lemma when p for any c fq one can conclude from f x c that the set x xq is uniquely determined by note that if one of x xq satisfies f x c the other q does not for instance if f x c then f xq f x cq c since c fq thus for p or when k is even for any c fq f x c has only one root in fq from cases and it follows the desired result corollary let u be defined as with q pk and p then the following fractional polynomial x xp x 
permutes u if and only if k is even proof note that f x in can be written as xp x xp then by lemma and theorem it follows that xp xp x permutes u if and only if k is even where p note that if xp xp x permutes u it is not equal to zero when x u thus for each p xp xp x can be written as xp xp x xp x which is exactly since xq when x u based on lemma and corollary we obtain our second main result which gives the permutation property of g x defined in theorem let q pk with p and l a nonnegative integer satisfying gcd p q then g x x x x is a permutation polynomial of if and only if k is even proof we can rewrite g x as x x xp note that gcd q l p q gcd p q then by lemma g x is a permutation polynomial of if and only if x x xp x permutes u the latter is equivalent to that xp x xp x permutes u since x l when x u the desired result follows now from corollary remark as we have shown in the proofs of corollary and theorem f x or g x is a permutation polynomial of if and only if xp x xp x permutes u this shows that the permutation property of f x and that of g x depend on a same condition however it seems difficult to verify directly whether this condition holds thus in this paper we use a different approach to investigate the permutation property of f x and then obtain the permutation property of g x remark let f x and g x be defined in and respectively the polynomial g x contains f x as a special case and by taking l g x is transformed into exactly f x note that when p f x xpq g x x and they are always permutation polynomials of for any positive integer for p theorems and may not hold by magma we have obtained some numerical results in table table is f x or g x a permutation over p k permutation over no yes no no no no no yes no no no yes no a comparison with known related permutation trinomials in this section we will compare the permutation polynomials f x proposed in theorem with previously known ones of the form it is straightforward that the composition of two permutation polynomials of the same finite field is also a permutation polynomial we recall the definition of multiplicative equivalence from definition let q be a prime power and h x and h x be two permutation polynomials of fq h x and h x are called multiplicative equivalent if there exists an integer d q such that gcd d q and h x ah xd for some a next we will determine whether or not the permutation trinomials f x given in theorem are multiplicative equivalent to the previously known ones of the form in we make some preparations as follows proposition let f x be defined in and q pk with k being even and p we have the following results a if p then f x is multiplicative equivalent to the following permutation trinomials of i x x x ii x x x q q x iii x x x b if p then f x is multiplicative equivalent to the following permutation trinomials of i x x x ii x x x iii x x xq x x xq proof we only prove the case p for the case p the result can be proved in the same way when p note that x f q and thus f x is multiplicative equivalent to x let q when k is even gcd q gcd q then by extended euclidean algorithm we have q q and where denotes is the inverse of di modulo q i note that i x and x therefore f x is multiplicative equivalent to x and x it is easily seen that x x and x are pairwise multiplicative equivalent to each other the following claims are needed claim recall that the inverse of a normalized niho exponent d over if it exists is again a normalized niho exponent and the product of two normalized niho exponents is also a normalized niho 
exponents let f x be a permutation polynomial of of the form s pk and t pk if one of and is invertible say then f is also a permutation polynomial of the form which is multiplicative equivalent to f x this analysis together with claim gives the following claim claim let f x be a permutation polynomial of of the form then all the permutation trinomials of the form that are multiplicative equivalent to f x are given by f xdi provided gcd di i claims and together with proposition give the following claim claim let f x be a permutation polynomial in theorem if a permutation trinomial of the form is multiplicative equivalent to f x then it must be one of x x and x in proposition in the sequel a permutation polynomial f x of of the form will be denoted by the tuple s t according to lemma f x is a permutation polynomial of if and only if the associated polynomial x xs xt permutes the unit circle u when f x is a permutation polynomial of can be further written table known permutation trinomials of of the form q s t fractional polynomial k ref k mod theorem odd k theorem all k conjecture k mod conjecture even k theorem d even k theorem e even k theorem f even k theorem a iv q q q q x x x x x x x x q x x q x x as a fractional polynomial x xs xt since x u which is called the fractional polynomial of f x for comparison purposes we collect all known permutation trinomials of and of the form we list them in tables and respectively to the best of our knowledge tables and contain such permutation trinomials completely note that the last one in table is exactly x in proposition a iii in hou determined all permutation trinomials of of the form ax bxq x when p according to theorem a iv of xq is a permutation polynomial of if and only if is a square of which is equivalent to that k is even the permutation polynomial x xq equals xq and thus its permutation property can be derived from theorem a iv of according to tables proposition and claim we conclude the following result proposition let f x be a permutation polynomial proposed in theorem when p f x is multiplicative equivalent to the permutation polynomial x xq or xq which is contained in theorem a iv of when p f x is not multiplicative equivalent to any permutation trinomial listed in table the above proposition shows that f x proposed in theorem is indeed new when p when p the permutation polynomial f x proposed in theorem is multiplicative equivalent to a known one contained in nevertheless the method for proving the permutation property in this paper are different from that in table known permutation trinomials of of the form q s t x x x x x x x x x x x x x x x x x x x x x fractional polynomial s x s xs x k ref all k theorem odd k theorem odd k theorem odd k theorem even k theorem even k theorem even k theorem i even k theorem ii even k theorem iii odd k proposition theorem even k proposition theorem q q q q t x x x q x x q x t x x even k theorem c even k theorem e even k theorem f even k theorem i odd k and t q even k theorem iii theorem a ii i denotes the exponent of in the canonical factorization of i conclusion in this paper we construct a class of permutation trinomials of with q and precisely for each p we prove that f x x xpq is a permutation trinomial of if and only if k is even this conclusion is also true for more general polynomials g x x x with l being a nonnegative integer satisfying gcd moreover when p we prove that f x presented here is not multiplicative equivalent to any known permutation trinomial of the form numerical experiments show 
that theorems and may not hold when p it would be nice if our construction can be generalized to arbitrary finite field namely the readers are invited to determine the permutation polynomials of of the form x xpq where p is a prime q pk for some positive integer k and fq acknowledgment bai and xia were supported in part by national natural science foundation of china under grant and grant and in part by natural science foundation of hubei province under grant bai was also supported by graduate innovation fund of university for nationalities references bartoli and giulietti permutation polynomials fractional polynomials and algebraic curves available online http chen li and zeng a class of binary cyclic codes with generalized niho exponents finite fields vol pp ding and yuan a family of skew hadamard difference sets comb theory ser a vol pp ding qu wang yuan and yuan permutation trinomials over finite fields with even characteristic siam dis vol no pp dobbertin almost perfect nonlinear power functions on gf the welch case ieee trans inf theory vol no pp apr dobbertin felke helleseth and rosendahl niho type functions via dickson polynomials and kloosterman sums ieee trans inf theory vol no pp dobbertin leander canteaut carlet felke and gaborit construction of bent functions via niho power functions comb theory ser a vol no pp gupta and sharma some new classes of permutation trinomials over finite fields with even characteristic finite fields vol pp hou determination of a type permutation trinomials over finite fields acta vol pp hou determination of a type permutation trinomials over finite fields ii finite fields vol pp hou permutation polynomials over finite survey of recent advances finite fields vol pp hou permutation polynomials of the form xr a nonexistence result available online http kyureghyan and zieve permutation polynomials of the form x r x k available online https permutation polynomials and applications to coding theory finite fields vol pp li helleseth and tang further results on a class of permutation polynomials over finite fields finite fields vol pp li and helleseth several classes of permutation trinomials from niho exponent cryptogr vol no pp li and helleseth new permutation trinomials from niho exponents over finite fields with even characteristic available online http li on two conjectures about permutation trinomials over finite fields vol pp lidl and niederreiter finite fields in encyclopedia of mathematics and its applications vol amsterdam the netherlands li qu and chen new classes of permutation binomials and permutation trinomials over finite fields finite fields vol pp li qu li and fu new permutation trinomials constructed from fractional polynomials available online https ma and ge a note on permutation polynomials over finite fields available online https mullen and panario handbook of finite fields boca raton taylor francis park and j lee permutation polynomials and group permutation polynomials bull austral math vol pp qu tan tan and li constructing differentially permutations over via the switching method ieee trans inf theory vol no pp july tu zeng hu and li a class of binomial permutation polynomials available online http tu zeng and hu several classes of complete permutation polynomials finite fields and vol pp wu yuan ding and ma permutation trinomials over finite fields vol pp wu and li several classes of permutation trinomials over from niho expeonents available online http xia li zeng and helleseth an open problem on the distribution of a niho type 
function, IEEE Trans. Inf. Theory, vol., no., pp. Zeng, Tian, and Tu, Permutation polynomials from trace functions over finite fields, Finite Fields Appl., vol., pp. Zeng, Hu, Jiang, Yue, and Cao, The weight distribution of a class of cyclic codes, Finite Fields Appl., vol., pp. Zieve, On some permutation polynomials over Fq of the form x^r h(x^((q-1)/d)), Proc. Amer. Math. Soc., vol., no., pp. Zha, Hu, and Fan, Further results on permutation trinomials over finite fields with even characteristic, Finite Fields Appl., vol., pp.
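For small parameters, permutation statements of this kind can be checked by exhaustive enumeration. The sketch below is not from the paper: it builds GF(p^2) as F_p[w]/(w^2 - n) for a quadratic non-residue n, so it only covers the k = 1 case (larger even k would require a degree-2k extension of F_p), and the list of (coefficient, exponent) pairs passed to the test is a placeholder; substitute the exponents of f(x) or g(x) to reproduce the corresponding checks.

```python
def non_residue(p):
    """Smallest quadratic non-residue modulo an odd prime p."""
    residues = {pow(x, 2, p) for x in range(1, p)}
    return next(n for n in range(2, p) if n not in residues)

def make_field(p):
    """GF(p^2) as pairs (a, b) meaning a + b*w with w^2 = n, n a non-residue mod p."""
    n = non_residue(p)
    def mul(u, v):
        (a, b), (c, d) = u, v
        return ((a * c + b * d * n) % p, (a * d + b * c) % p)
    def power(u, e):
        result, base = (1, 0), u
        while e:
            if e & 1:
                result = mul(result, base)
            base = mul(base, base)
            e >>= 1
        return result
    elements = [(a, b) for a in range(p) for b in range(p)]
    return elements, power

def is_permutation(p, terms):
    """Test whether x -> sum_i c_i * x^{e_i} permutes GF(p^2), terms = [(c_i, e_i), ...]."""
    elements, power = make_field(p)
    images = set()
    for x in elements:
        a, b = 0, 0
        for c, e in terms:
            pa, pb = power(x, e)
            a, b = (a + c * pa) % p, (b + c * pb) % p
        images.add((a, b))
    return len(images) == len(elements)

# Example call with placeholder exponents and coefficients (p - 1 plays the role of -1).
p = 3
q = p   # k = 1, so the field above is GF(q^2) with q = p
print(is_permutation(p, [(1, 1), (1, q), (p - 1, 2 * q - 1)]))
```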
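The multiplicative-equivalence bookkeeping in the comparison section reduces to arithmetic modulo q^2 - 1. The sketch below uses illustrative values of p, k, and of the exponents (they are not the ones studied in the paper) to verify the two facts invoked there: a normalized Niho exponent is one congruent to 1 modulo q - 1, the product of two such exponents reduced modulo q^2 - 1 is again of this form, and so is the modular inverse whenever it exists.

```python
from math import gcd

# Niho-exponent bookkeeping; q = p^k and exponents live modulo q^2 - 1.
p, k = 3, 2          # illustrative parameters
q = p ** k
N = q * q - 1

def is_normalized_niho(d):
    return d % (q - 1) == 1

def inverse_exponent(d):
    """d^{-1} mod q^2 - 1 if gcd(d, q^2 - 1) = 1, else None (needs Python 3.8+)."""
    return pow(d, -1, N) if gcd(d, N) == 1 else None

d1 = 2 * (q - 1) + 1   # two normalized Niho exponents (illustrative choices)
d2 = 5 * (q - 1) + 1
prod = (d1 * d2) % N
inv = inverse_exponent(d1)

print(is_normalized_niho(d1), is_normalized_niho(d2))     # True True
print(is_normalized_niho(prod))                            # the product stays normalized Niho
print(inv, inv is not None and is_normalized_niho(inv))    # so does the inverse, when it exists
```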
dynamic i ntegration of background k nowledge in n eural nlu s ystems dirk weissenborn german research center for ai chris dyer deepmind tkocisky cdyer oct a bstract or background knowledge is required to understand natural language but in most neural natural language understanding nlu systems the requisite background knowledge is indirectly acquired from static corpora we develop a new reading architecture for the dynamic integration of explicit background knowledge in nlu models a new reading module provides refined word representations to a nlu architecture by processing background knowledge in the form of statements together with the taskspecific inputs strong performance on the tasks of document question answering dqa and recognizing textual entailment rte demonstrate the effectiveness and flexibility of our approach analysis shows that our models learn to exploit knowledge selectively and in a semantically appropriate way i ntroduction understanding natural language depends crucially on or background knowledge for example knowledge about what concepts are expressed by the words being read lexical knowledge and what relations hold between these concepts relational knowledge as a simple illustration if an agent needs to understand that the statement king farouk signed his abdication is entailed by king farouk was exiled to france in after signing his resignation it must know among other things that abdication means resignation of a king in most neural natural language understanding nlu systems the requisite background knowledge is implicitly encoded in the models parameters that is what background knowledge is present has been learned from task supervision and also by word embeddings where distributional information reliably reflects certain kinds of useful background knowledge such as semantic relatedness however acquisition of background knowledge from static training corpora is limiting for two reasons first we can not expect that all background knowledge that could be important for solving an nlu task can be extracted from a limited amount of training data second as the world changes the facts that may influence how a text is understood will likewise change in short building suitably large corpora to capture all relevant information and keeping the corpus and derived models up to date with changes to the world would be impractical in this paper we develop a new architecture for dynamically incorporating external background knowledge in nlu models rather than relying only on static knowledge implicitly present in the training data supplementary knowledge is retrieved from a knowledge base to assist with understanding text inputs since nlu systems must necessarily read and understand text inputs our approach incorporates background knowledge by repurposing this reading is we read the text being understood together with supplementary natural language statements that assert facts assertions which are relevant to understanding the content our nlu systems operate in a series of phases first given the text input that the system must understand which we call the context a set of relevant supporting assertions is retrieved while learning to retrieve relevant information for solving nlu tasks is an important question nogueira cho narasimhan et inter alia in this work we focus on learning how to incorporate retrieved information and use simple heuristic retrieval methods to identify plausibly relevant background from an external knowledge base once the supplementary texts have been retrieved 
we use a word embedding refinement strategy that incrementally reads the context and retrieved assertions starting with word embeddings and building successively refined embeddings of the words that ultimately reflect both the relevant supporting assertions and the input context these contextually refined word embeddings which serve as dynamic memory to store newly incorporated knowledge are used in any reading architecture the overall architecture is illustrated in figure although we are incorporating a new kind of information into the nlu pipeline a strength of this approach is that the architecture of the reading module is independent of the final nlu only requirement is that the final architecture use word embeddings we carry out experiments on several different datasets on the tasks of document question answering dqa and recognizing textual entailment rte evaluating the impact of our proposed solution with both basic task architectures and a sophisticated task architecture for rte we find that our embedding refinement strategy is quite effective on four standard benchmarks we show that refinement refining the embeddings just using the context and no additional background information can improve performance significantly and adding background knowledge helps further our results are very competitive setting a new on the recent triviaqa benchmarks which is remarkable considering the simplicity of the chosen architecture finally we provide a detailed analysis of how knowledge is being used by an rte system including experiments showing that our system is capable of making appropriate counterfactual inferences when provided with false knowledge e xternal k nowledge as s upplementary t ext i nputs knowledge resources make information that could potentially be useful for improving nlu available in a variety different formats such as subject predicate object relational databases and other structured formats rather than tailoring our solution to a particular structured representation we assume that all supplementary information either already exists in natural language statements or can easily be recoded as natural language in contrast to mapping from unstructured to structured representations the inverse problem is not terribly difficult for example given a triple monkey isa animal we can construct the assertion a monkey is an animal using a few simple rules finally the format means that knowledge that exists only in unstructured text form is usable by our system a major question that remains to be answered is given some text that is to be understood what supplementary knowledge should be incorporated the retrieval of contextually relevant information from knowledge sources is a complex research topic by itself and it is likewise crucially dependent on the format of the underlying knowledge base there are several statistical manning et and more recently neural approaches mitra craswell and approaches based on reinforcement learning nogueira cho in this work we make use of a simple heuristic from which we almost exhaustively retrieve all potentially relevant assertions see and rely on our reading architecture to learn to extract only relevant information in the next section we turn to the question of how to leverage the retrieved supplementary knowledge encoded as text in a nlu system r efining w ord e mbeddings by r eading in order to incorporate information from retrieved input texts we propose to compute contextually refined word representations prior to processing the nlu task at hand and 
pass them to the task in the form of word embeddings word embeddings thus serve as a form of memory that not only contains knowledge as in typical neural nlu systems but also contextual information including retrieved background knowledge our incremental refinement process encodes input texts followed by updates on the word embedding matrix using the encoded input in multiple reading steps words are first represented standard word type embeddings which can be conceived of as the columns in an embedding matrix at each progressive reading step a new embedding matrix e is constructed by refining the embeddings from the previous step e using contextual information x for reading step which is a set of natural language sequences texts an illustration of our incremental refinement strategy can be found in figure group onlookers glance watch people premise a group of onlookers glance at a person doing a strange trick on her head group onlookers glance watch people hypothesis people watch another person do a trick group onlookers glance watch people assertions onlooker related to watches people is a group group onlookers glance watch people task system rte p a group of onlookers glance at a person doing a strange trick on her head q people watch another person do a trick entailment no update weighted update figure illustration of our refinement strategy for word representations on an example from the snli dataset comprising the premise hypothesis and additional external information in form of assertions the reading architecture constructs refinements of word representations incrementally conceptually represented as columns in a series of embedding matrices e are incrementally refined by reading the input text and textual renderings of relevant background knowledge before computing the representations used by the task model in this figure rte in the following we define this procedure formally we denote the hidden dimensionality of our model by n and a layer by fc z wz b w b rn u rm u nrefined w ord e mbeddings the first representation level consists of word representations that is word representations that do not depend on any input these can be conceived of as an embedding matrix whose columns are indexed by words in the word representation for a single word w is computed by using a gated combination of fixed word vectors epw rn with n learned embeddings echar w r the formal definition of this combination is given in eq epw relu fc epw epw gw fc char ew gw epw gw echar w we compute echar using a convolutional neural network using n convolutional filters w of width followed by a operation over time combining with character based word embeddings in such a way is common practice our approach follows seo et weissenborn et c ontextually r efined w ord r epresentations e in order to compute contextually refined word embeddings e given prior representations e we assume a given set of texts x x x that are to be read at refinement iteration each text x i is a sequence of word tokens we embed all tokens of every x i using the embedding matrix from the previous layer e to each word we concatenate a vector of length l with position set to indicating which layer is currently being stacking the vectors into a matrix we obtain a x i this matrix is processed by a bidirectional recurrent neural network a bilstm hochreiter schmidhuber in this work the resulting output is further projected to i by a layer followed by a relu eq i relu fc bilstm x i of word w we initially maxpool all representations to finally update 
the previous embedding ew of occurrences matching the lemma of w in every x x resulting in w eq finally we combine the representation ew with to form a representation e w via a gated addition which lets the model determine how much to revise the embedding with the newly read information eq w max k x x lemma x k lemma w ew uw fc w e w u w e u w w w note that we soften the matching condition for w using lemmatization lemma w during the pooling operation of eq because contextual information about certain words is usually independent of the current word form w they appear in as a consequence this minor linguistic step allows for additional interaction between tokens of the same lemma the important difference between our contextual refinement step and conventional rnn architectures is the pooling operation that is performed over occurrences of tokens that share the same lemma this effectively connects different positions within and between different texts with each other thereby mitigating the problems arising from dependencies more importantly however it allows models to make use of additional input such as relevant background knowledge e xperimental s etup we run experiments on four benchmarks for two popular tasks namely recognizing textual entailment rte and document question answering dqa in the following we describe different aspects of our experimental setup in more detail models our primary interest is to explore the value of our refinement strategy with relatively generic task architectures therefore we chose basic bidirectional lstms bilstms as encoders with a neural network on top for the final prediction such models are common baselines for nlu tasks and can be considered general reading architectures as opposed to the more highly tuned nlu systems that are necessary adding this feature lets the refinement model to learn update word embeddings differently in different levels to achieve art results however since such models frequently underperform more customized architectures we also add our refinement module to a reimplementation of a architecture for rte called esim chen et all models are trained jointly with the refinement module for the dqa baseline system we add a simple feature liq as suggested in weissenborn et al when encoding the context to compare against competitive baseline results we provide the exact model implementations for our bilstm baselines and general training details in appendix a question answering we apply our dqa models on recent dqa benchmark datasets squad rajpurkar et and triviaqa joshi et the task is to predict an answer span within a provided document p given a question q both datasets are containing on the order of examples because triviaqa is collected via distant supervision the test set is divided into a large but noisy distant supervision part and a much smaller on the order of hundreds human verified part we report results on both see appendix for implementation details recognizing textual entailment we test on the frequently used snli dataset bowman et a collection of sentence pairs and the more recent multinli dataset sentence pairs williams et given two sentences a premise p and a hypothesis q the task is to determine whether p either entails contradicts or is neutral to q see appendix for implementation details knowledge source we make use of speer havasi a semantic network that originated from the open mind common sense project and incorporates selected knowledge from various other knowledge sources such as open multilingual opencyc and it 
presents information in the form of relational assertion retrieval we would like to obtain information about the relations of words and phrases between q and p from conceptnet in order to strengthen the connection between the two sequences because assertions a in conceptnet come in form of subject predicate object s r o we retrieve all assertions for which s appears in q and o appears in p or vice versa because still too many such assertions might be retrieved for an instance we rank all retrievals based on their respective subject and object to this end we compute a ranking score which is the inverse product of appearances of the subject and the object in the kb that is score a p p i sa i oa where i denotes the indicator function this is very related to the popular idf score inverted document frequency from information retrieval which ranks terms higher that appear less frequently across different documents during training and evaluation we only retain the assertions which we specify for the individual experiments separately note that although very rarely it might happen that no assertions are retrieved at all refinement order when employing our strategy we first read the document p followed by the question q in case of dqa and the premise p followed by the hypothesis q for rte that is x p and x q additional knowledge in the form of a set of assertions a is integrated after reading the input for both dqa and rte that is x a in preliminary experiments we found that the final performance is not significantly sensitive to the order of presentation so we decided to fix our order as defined above model squad dev exact triviaqa wiki test exact triviaqa web test exact bilstm liq reading reading knowledge reading knowledge sota results table results on the squad development set as well as and test sets for n models triviaqa results are further divided by distant supervision results left and human verified results right the model using external knowledge is trained with the retrieved conceptnet assertions the is only used for the baseline hu et al wang et al pan et al r esults q uestion a nswering sq uad and t riviaqa table presents our results on two question answering benchmarks we report results on the squad development and the two more challenging triviaqa test sets which demonstrate that the introduction of our reading architecture helps consistently with additional gains from using background knowledge our systems do even outperform current models on triviaqa which is surprising given the simplicity of our architecture and the complexity of the others for instance the system of hu et al uses a complex attention mechanism to achieve their results even our baseline bilstm liq system reaches very competitive results on triviaqa which is in line with findings of weissenborn et al to verify that it is not the additional computation that gives the performance boosts when using only our reading architecture without knowledge we also ran experiments with bilstms for our baselines which exhibit similar computational complexity to bilstm reading we found that the second layer even hurts performance this demonstrates that pooling over occurrences in a given context between layers which constitutes the main difference to conventional stacked rnns is a powerful yet simple technique in any case the most important finding of these experiments is that knowledge actually helps considerably with up to improvements on the measures r ecognizing t extual e ntailment snli and m ulti nli table shows the results of 
our rte experiments in general the introduction of our refinement strategy almost always helps both with and without external knowledge when providing additional background knowledge from conceptnet our bilstm based models improve substantially while the models improve only on the more difficult multinli dataset compared to previously published systems our models acquit themselves very well on the multinli benchmark and competitively on the snli benchmark in parallel to this work gong et al developed a novel architecture for rte that achieves slightly better performance on multinli than our based it is worth observing that with our embedding architecture our generic task model outperforms esim on multinli which is architecturally much more complex and designed specifically for the rte task finally we remark that despite careful tuning our of esim fails to http http http http we exclude conceptnet assertions created by only one contributor and from verbosity to reduce noise due to restrictions on code sharing we are not able to use the public evaluation server to obtain test set scores for squad however for the remaining tasks we report both development accuracy and test set performance our refinement architecture can be used of course with this new model model snli dev test mnli matched dev test mnli mismatched dev test bilstm reading reading knowledge esim reading reading knowledge sota results table results on the snli as well as and for n models the model using external knowledge is trained with the retrieved conceptnet assertions chen et al gong et al chen et al snli esim reading reading knowledge esim reading reading knowledge multinli matched mismatched table development set results when reducing training data and embedding dimsensionality with pca in parenthesis we report the relative differences to the respective result directly above match the reported in chen et al however with multinli we find that our implementation of esim performs considerably better by approximately the instability of the results suggests as well as the failure of a custom to consistently perform well suggests that current sota rte models may be overfit to the snli dataset r educing t raining data d imensionality of p re trained w ord e mbeddings we find that there is only little impact when using external knowledge on the rte task when using a more sophisticated task model such as esim we hypothesize that the attention mechanisms within esim jointly with powerful word representations allow for the recovery of some important lexical relations when trained on a large dataset it follows that by reducing the number of training data and impoverishing word representations the impact of using external knowledge should become larger to test this hypothesis we gradually impoverish word embeddings by reducing their dimensionality with pca while reducing the number of training instances at the same our joint data and dimensionality reduction results are presented in table they show that there is indeed a slightly larger benefit when employing background knowledge in the more impoverished settings with largest improvements over using only the novel reading architecture when using around examples and reduced dimensionality to however we observe that the biggest overall impact over the baseline esim model stems from our contextual refinement strategy reading which is especially pronounced for the and experiments this highlights once more the usefulness of our refinement strategy even without the use of additional 
knowledge although reducing either embedding dimensionality or data individually exhibit similar but less pronounced results we only report the joint reduction results here p h his style seems amateurish he didn t look like an amateur the net cost of operations the gross cost but uh these guys file their uh their final exams these men filed their midterm exams a look like synonym seem contradiction gross antonym net contradiction midterm antonym final contradiction look like antonym seem entailment gross synonym net entailment midterm synonym final entailment table three examples for the antonym synonym swapping experiment on multinli assertion a bilstm on multinli b esim on multinli c bilstm on squad figure performance differences when ignoring certain types of knowledge relation predicates during evaluation normalized performance differences are measured on the subset of examples for which an assertion of the respective relation predicate occurs a nalysis of k nowledge u tilization is additional knowledge used to verify whether and how our models make use of additional knowledge we conducted several experiments first we evaluated models trained with knowledge on our tasks while not providing any knowledge at test time this ablation drops performance by accuracy on multinli and by on squad this indicates the model is refining the representations using the provided assertions in a useful way are models sensitive to semantics of the provided knowledge the previous result does not show that the models utilize the provided assertions in any consistent way it may just reflect a mismatch of training and testing conditions therefore to test our models sensitivity towards the semantics of the assertions we run an experiment in which we swap the synonym with the antonym predicate in the provided assertions during test time because of our heuristic retrieval mechanism not all such counterfactuals will affect the truth of the inference but we still expect to see a more significant impact the performance drop on multinli examples for which either a synonym or an is retrieved is about for both the bilstm and the esim model this very large drop clearly shows that our models are sensitive to the semantics of the provided knowledge examples of prediction changes are presented in table they demonstrate that the system has learned to trust the presented assertions to the point that it will make appropriate counterfactual is the change in knowledge has caused the change in prediction what knowledge is used after establishing that our models are somehow sensitive to semantics we wanted to find out which type of knowledge is important for which task for this analysis we exclude assertions including the most prominent predicates in our knowledge base individually when evaluating our models the results are presented in figure they demonstrate that the biggest performance drop in total blue bars stems from related to assertions this very prominent predicate appears much more frequently than other assertions and helps connecting related parts of the input sequences with each other we believe that related to assertions offer benefits mainly from a modeling perspective by strongly connecting the input sequences with each other and thus bridging dependencies similar to attention looking at the relative drops obtained by normalizing the performance differences on the actually affected examples green we find that our models depend highly on the presence of antonym and synonym assertions for all tasks as well as 
partially on is a and derived from assertions this is an interesting finding which shows that the sensitivity of our models is selective wrt the type of knowledge and task the fact that the largest relative impact stems from antonyms is very interesting because it is known that such information is hard to capture with distributional semantics contained in word embeddings r elated w ork the role of background knowledge in natural language understanding has long been remarked on especially in the context of classical models of ai schank abelson minsky however it has only recently begun to play a role in neural network models of nlu ahn et xu et long et dhingra et however previous efforts have focused on specific tasks or certain kinds of knowledge whereas we take a step towards a more solution for the integration of heterogeneous knowledge for nlu systems by providing a simple reading architecture that can read background knowledge encoded in simple natural language statements abdication is a type of resignation bahdanau et al use textual word definitions as a source of information about the embeddings of oov words in the area of visual question answering wu et al utilize external knowledge in form of dbpedia comments short to improve the answering ability of a model marino et al explicitly incorporate knowledge graphs into an image classification model xu et al created a recall mechanism into a standard lstm cell that retrieves pieces of external knowledge encoded by a single representation for a conversation model concurrently dhingra et al exploit linguistic knowledge using an adapation of grus to handle graphs however external knowledge has to be present in form of triples the main difference to our approach is that we incorporate external knowledge in free text form on the word level prior to processing the task at hand which constitutes a more flexible setup ahn et al exploit knowledge base facts about mentioned entities for neural language models bahdanau et al and long et al create word embeddings by reading word definitions prior to processing the task at hand pilehvar et al seamlessly incorporate information about word senses into their representations before solving the downstream nlu task which is similar we go one step further by seamlessly integrating all kinds of assertions about concepts that might be relevant for the task at hand another important aspect of our approach is the notion of dynamically updating wordrepresentations tracking and updating concepts entities or sentences with dynamic memories is a very active research direction kumar et henaff et ji et kobayashi et however those works typically focus on particular tasks whereas our approach is taskagnostic and most importantly allows for the integration of external background knowledge other related work includes storing temporary information in weight matrices instead of explicit neural activations such as word representations as a biologically more plausible alternative c onclusion we have presented a novel reading architecture that allows for the dynamic integration of background knowledge into neural nlu models our solution which is based on the incremental refinement of word representations by reading supplementary inputs is flexible and be used with virtually any existing nlu architecture that rely on word embeddings as input our results show that embedding refinement using both the system s text inputs as well as supplementary texts encoding background knowledge can yield large improvements in particular we 
have shown that relatively simple task architectures based on simple bilstm readers can become competitive with architectures when augmented with our reading architecture acknowledgments this research was conducted during an internship of the first author at deepmind it was further partially supported by the german federal ministry of education and research bmbf through the projects all sides bbdc and software campus genie r eferences sungjin ahn heeyoul choi tanel and yoshua bengio a neural knowledge language model arxiv dzmitry bahdanau tom bosc jastrzebski edward grefenstette pascal vincent and yoshua bengio learning to compute word embeddings on the fly arxiv samuel r bowman gabor angeli potts christopher and christopher d manning a large annotated corpus for learning natural language inference in emnlp association for computational linguistics qian chen xiaodan zhu ling si wei hui jiang and diana inkpen recurrent neural sentence encoder with gated attention for natural language inference arxiv qian chen xiaodan zhu zhenhua ling si wei and hui jiang enhancing and combining sequential and tree lstm for natural language inference acl bhuwan dhingra zhilin yang william w cohen and ruslan salakhutdinov linguistic knowledge as memory for recurrent neural networks arxiv yichen gong heng luo and jian zhang natural language inference over interaction space arxiv mikael henaff jason weston arthur szlam antoine bordes and yann lecun tracking the world state with recurrent entity networks in iclr sepp hochreiter and schmidhuber long memory neural computation minghao hu yuxing peng and xipeng qiu reinforced mnemonic reader for machine comprehension arxiv yangfeng ji chenhao tan sebastian martschat yejin choi and noah a smith dynamic entity representations in neural language models in emnlp mandar joshi eunsol choi daniel weld and luke zettlemoyer triviaqa a large scale distantly supervised challenge dataset for reading comprehension in acl july diederik kingma and jimmy ba adam a method for stochastic optimization iclr sosuke kobayashi naoaki okazaki and kentaro inui a neural language model for dynamically representing the meanings of unknown words and entities in a discourse arxiv preprint ankit kumar ozan irsoy peter ondruska mohit iyyer james bradbury ishaan gulrajani victor zhong romain paulus and richard socher ask me anything dynamic memory networks for natural language processing in icml pp teng long emmanuel bengio ryan lowe jackie chi kit cheung and doina precup world knowledge for reading comprehension rare entity prediction with hierarchical lstms using external descriptions in emnlp christopher d manning hinrich et al foundations of statistical natural language processing volume mit press kenneth marino ruslan salakhutdinov and abhinav gupta the more you know using knowledge graphs for image classification cvpr marvin minsky interfaces communications of the acm bhaskar mitra and nick craswell neural models for information retrieval arxiv preprint karthik narasimhan adam yala and regina barzilay improving information extraction by acquiring external evidence with reinforcement learning in emnlp rodrigo nogueira and kyunghyun cho query reformulation with reinforcement learning in emnlp boyuan pan hao li zhou zhao bin cao deng cai and xiaofei he memen embedding with memory networks for machine comprehension arxiv jeffrey pennington richard socher and christopher manning glove global vectors for word representation in emnlp mohammad taher pilehvar jose roberto navigli and nigel collier 
towards a seamless integration of word senses into downstream nlp applications in acl pranav rajpurkar jian zhang konstantin lopyrev and percy liang in squad questions for machine comprehension of text tim edward grefenstette karl moritz hermann and phil blunsom reasoning about entailment with neural attention iclr roger schank and robert abelson scripts plans goals and understanding psychology press minjoon seo aniruddha kembhavi ali farhadi and hananneh hajishirzi attention flow for machine comprehension in iclr robert speer and catherine havasi representing general relational knowledge in conceptnet in lrec wenhui wang nan yang furu wei baobao chang and ming zhou gated networks for reading comprehension and question answering in acl dirk weissenborn georg wiese and laura seiffe making neural qa as simple as possible but not simpler in conll adina williams nikita nangia and samuel r bowman a challenge corpus for sentence understanding through inference arxiv qi wu peng wang chunhua shen anton van den hengel and anthony dick ask me anything visual question answering based on knowledge from external sources cvpr zhen xu bingquan liu baoxun wang chengjie sun and xiaolong wang incorporating loosestructured knowledge into lstm with recall gate for conversation modeling arxiv a i mplementation d etails in the following we explain the detailed implementation of our two baseline models we assume to have computed the contextually refined word representations depending on the setup and embedded our input sequences q qlq and p plp to q and p respectively the word representation update gate in eq is initialized with a bias of to refine representations only slightly in the beginning of training in the following as before we denote the hidden dimensionality of our model by n and a layer by fc z wz b w b rn u rm q uestion a nswering encoding in the dqa task q refers to the question and p to the supporting text at first we process both sequences by identical bilstms in parallel eq followed by separate linear projections eq bilstm q bilstm p uq r up r uq up are initialized by i i where i is the identity matrix prediction our or answer layer is the same as in weissenborn et al we first compute a weighted representation of the processed question eq softmax vq vq rn x i the probability distribution ps for the location of the answer is computed by a mlp with a relu activated hidden layer sj as follows sj relu fcs ps j exp vs sj ej relu fce vs r n pe j exp ve sj v e rn the model is trained to minimize the loss of the predicted start and end positions respectively during evaluation we extract the span i k with the best ps i pe k of maximum token length k i r ecognizing t extual e ntailment encoding analogous to dqa we encode our input sequences by bilstms however for rte we use conditional encoding et instead therefore we initially process the embedded hypothesis q by a bilstm and use the respective end states of the forward and backward lstm as initial states for the forward and backward lstm that processes the embedded premise prediction we hconcatenate i the outputs of the forward and backward lstms processing the fw bw premise p and run each of the resulting lp outputs through a fullyconnected layer with relu activation ht followed by a operation over time resulting in a hidden state h rn finally h is used to predict the rte label as follows f w ht relu fc t h maxpool ht t p c exp vc h vc rn the probability of choosing category c entailment contradiction neutral is defined in eq finally the model is 
trained to minimize the loss of the predicted category probability distribution t raining as steps we lowercase all inputs and tokenize it additionally we make use of lemmatization as described in which is necessary for matching as word representations we use from glove pennington et we employed adam kingma ba for optimization with an initial of which was halved whenever the measure dqa or the accuracy rte dropped on the development set between minibatches for dqa and rte respectively we used of size for dqa and for rte additionally for regularization we make use of dropout with a rate of on the computed word representations ew defined in with the same dropout mask for all words in a batch all our models were trained with different random seeds and the top performance is reported
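
the appendix above gives the question-answering prediction layer only in prose and partially garbled equations. the following pytorch sketch is our own illustration, not the authors' released code: a learned attention vector summarises the processed question, and start/end distributions over passage positions are produced by small relu feed-forward layers. all names, dimensions and the exact conditioning of the end prediction are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' code) of a span-prediction answer layer
# of the kind described in the appendix: attention-weighted question summary,
# then start/end distributions over passage positions from small ReLU MLPs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanAnswerLayer(nn.Module):
    def __init__(self, hidden: int):
        super().__init__()
        self.v_q = nn.Parameter(torch.randn(hidden))  # question attention vector
        self.start_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 1))
        self.end_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))

    def forward(self, q_states, p_states):
        # q_states: (Lq, hidden), p_states: (Lp, hidden), single example
        alpha = F.softmax(q_states @ self.v_q, dim=0)      # attention over question tokens
        q_summary = alpha @ q_states                       # (hidden,)
        q_tiled = q_summary.expand_as(p_states)            # broadcast to every passage token
        feats = torch.cat([p_states, q_tiled], dim=-1)     # (Lp, 2*hidden)
        p_start = F.log_softmax(self.start_mlp(feats).squeeze(-1), dim=0)
        p_end = F.log_softmax(self.end_mlp(feats).squeeze(-1), dim=0)
        # train with NLL of gold start/end; at test time pick the best span (i, k)
        # with i <= k up to a maximum token length, as described above
        return p_start, p_end

layer = SpanAnswerLayer(hidden=128)
p_s, p_e = layer(torch.randn(12, 128), torch.randn(300, 128))
```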
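
the assertion-retrieval heuristic described earlier, keep conceptnet triples whose subject occurs in one input sequence and whose object occurs in the other, ranked by the inverse product of how often subject and object appear in the knowledge base, can be made concrete with a short sketch. the toy knowledge base, the set-based matching and the top-k cut-off below are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of the IDF-like assertion ranking described above (an
# illustration under stated assumptions, not the authors' retrieval code).
from collections import Counter

def build_counts(kb):
    """kb: iterable of (subject, relation, object) string triples."""
    counts = Counter()
    for s, _, o in kb:
        counts[s] += 1
        counts[o] += 1
    return counts

def retrieve_and_rank(question_tokens, passage_tokens, kb, counts, top_k=50):
    q, p = set(question_tokens), set(passage_tokens)
    hits = []
    for s, r, o in kb:
        # subject in one sequence and object in the other, in either direction
        if (s in q and o in p) or (s in p and o in q):
            score = 1.0 / (counts[s] * counts[o])  # rarer terms rank higher
            hits.append((score, (s, r, o)))
    hits.sort(key=lambda x: x[0], reverse=True)
    return [a for _, a in hits[:top_k]]

kb = [("gross", "Antonym", "net"), ("dog", "IsA", "animal"),
      ("net", "RelatedTo", "profit")]
counts = build_counts(kb)
print(retrieve_and_rank(["the", "gross", "cost"], ["the", "net", "cost"], kb, counts))
```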
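
the joint reduction experiment reported earlier impoverishes the pre-trained word embeddings with pca while subsampling the training set. a minimal sketch of that setup, assuming scikit-learn and an arbitrary target dimension and subsampling fraction:

```python
# Rough sketch of the embedding-impoverishment setup (our illustration, not the
# authors' scripts): reduce pre-trained vectors with PCA and subsample data.
import numpy as np
from sklearn.decomposition import PCA

def reduce_embeddings(embedding_matrix: np.ndarray, target_dim: int) -> np.ndarray:
    """embedding_matrix: (vocab_size, original_dim) array of pre-trained vectors."""
    return PCA(n_components=target_dim).fit_transform(embedding_matrix)

def subsample_training_data(examples, fraction: float, seed: int = 0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(examples))[: int(fraction * len(examples))]
    return [examples[i] for i in idx]

vectors = np.random.randn(5000, 300)            # stand-in for GloVe vectors
small_vectors = reduce_embeddings(vectors, 10)  # e.g. 300 -> 10 dimensions
subset = subsample_training_data(list(range(100000)), fraction=0.05)
```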
2
polychronous interpretation of synoptic a domain specific modeling language for embedded besnard gautier ouy talpin bodeveix cortier pantel strecker garcia rugina buisson dagnat besnard gautier ouy talpin inria rennes bretagne atlantique irisa campus de beaulieu rennes cedex france bodeveix cortier pantel strecker paul sabatier route de narbonne toulouse cedex france bodeveix cortier pantel strecker garcia thales alenia space boulevard midi cannes france rugina eads astrium rue des cosmonautes du palays toulouse cedex france buisson dagnat institut bretagne brest iroise brest cedex france the spacify project which aims at bringing advances in mde to the satellite flight software industry advocates a approach built on a modeling language named synoptic in line with previous approaches to modeling such as statecharts and simulink synoptic features hierarchical decomposition of application and control modules in synchronous block diagrams and state machines its semantics is described in the polychronous model of computation which is that of the synchronous language s ignal introduction in collaboration with major european manufacturers the spacify project aims at bringing advances in mde to the satellite flight software industry it focuses on software development and maintenance phases of satellite lifecycle the project advocates a approach built on a modeling language dsml named synoptic the aim of synoptic is to support all aspects of embedded flightsoftware design as such synoptic consists of heterogeneous modeling and programming principles defined in collaboration with the industrial partners and end users of the spacify project used as the central modeling language of the spacify model driven engineering process synoptic allows to describe different layers of abstraction at the highest level the software architecture models the bujorianu and fisher eds workshop on formal methods for aerospace fma eptcs pp c spacify project this work is licensed under the creative commons attribution license spacify project functional decomposition of the flight software this is mapped to a dynamic architecture which defines the thread structure of the software it consists of a set of threads where each thread is characterized by properties such as its frequency its priority and its activation pattern periodic sporadic a mapping establishes a correspondence between the software and the dynamic architecture by specifying which blocks are executed by which threads at the lowest level the hardware architecture permits to define devices processors sensors actuators busses and their properties finally mappings describe the correspondence between the dynamic and hardware architecture on the one hand by specifying which threads are executed by which processor and describe a correspondence between the software and hardware architecture on the other hand by specifying which data is carried by which bus for instance figure depicts these layers and mappings figure global view layers and architecture mappings the aim is to synthesize as much of this mapping as possible for example by appealing to internal or external schedulers however to allow for human intervention it is possible to give a mapping thus overriding or bypassing schedules anyway consistency of the resulting dynamic architecture is verified by the spacify tool suite based on the properties of the software and dynamic model at each step of the development process it is also useful to model different abstraction levels of the system under design inside 
a same layer functional dynamic or hardware architecture synoptic offers this capability by providing an incremental design framework and refinement features to summarize synoptic deals with diagrams mode automata blocks components dynamic and hardware architecture mapping and timing the functional part of the synoptic language allows to model software architecture the corresponding is well adapted to model synchronous islands and to specify interaction points between these islands and the middleware platform using the concept of external variables synchronous islands and middleware form a globally asynchronous and locally synchronous gals system software architecture the development of the synoptic software architecture language has been tightly coordinated with the definition of the geneauto language synoptic uses essentially two types of modules called blocks in synoptic which can be mutually nested diagrams and polychronous interpretation of synoptic mode automata nesting favors a hierarchical design and enables viewing the description at different levels of detail by embedding blocks in the states of state machines one can elegantly model operational modes each state represents a mode and transitions correspond to mode changes in each mode the system may be composed of other or have different connection patterns among components apart from structural and behavioral aspects the synoptic software architecture language allows to define temporal properties of blocks for instance a block can be parameterized with a frequency and a worst case execution time which are taken into account in the mapping onto the dynamic architecture synoptic is equipped with an assertion language that allows to state desired properties of the model under development we are mainly interested in properties that permit to express for example coherence of the modes if component x is in mode then component y is in mode or can eventually move into mode specific transformations extract these properties and pass them to the verification tools the main purpose of this paper is to describe a formal semantics of synoptic expressed in terms of the synchronous language s ignal s ignal is based on synchronized flows with synchronization a process is a set of equations on elementary flows describing both data and control the s ignal formal model provides the capability to describe systems with several clocks polychronous systems as relational specifications a brief overview of the abstract syntax of synoptic is provided in section then section describes the interpretation of each one of these constructions in the model of the s ignal language an overview of synoptic blocks are the main structuring elements of synoptic a block block x a defines a functional unit of compilation and of execution that can be called from many contexts and with different modes in the system under design a block x encapsulates a functionality a that may consist of automata and a block x is implicity associated with two signals and the signal starts the execution of a the specification a may then operate at its own pace until the next is signaled the signal is delivered to x at some and forces a to reset its state and variables to initial values blocks a b block x a dataflow x a automaton x a a b blocks with data and events trigger and reset signals a flow can simpliy define a connection from an event x to an event y written event x y combine data y and z by a simple operation f to form the flow x written data y f z x or feed a signal y back to x 
written data y init v x in a feedback loop the signal x is initially defined by then at each occurrence n of the signal y it takes its previous value xn the execution of a is controlled by its parent clock a simultaneously executes each connection it is composed of every time it is triggered by its parent block data f low a b data y init v x data y f z x event x y a b actions are sequences of operations on variables that are performed during the execution of automata an assignment x y f z defines the new value of the variable x from the current values of y and z by the function f the skip stores the new values of variables that have been defined before it so that they spacify project become current past it the conditional if x then a else b executes a if the current value of x is true and executes b otherwise a sequence a b executes a and then b action a b skip x y f z if x then a else b a b automata schedule the execution of operations and blocks by performing timely guarded transitions an automaton receives control from its trigger and reset signals and as specified by its parent block when an automaton is first triggered or when it is reset its starts execution from its initial state specified as initial state on any state s do a it performs the action a from this state it may perform an immediate transition to new state t written s on x t if the value of the current variable x is true it may also perform a delayed transition to t written s on x t that waits the next trigger before to resume execution in state t if no transition condition applies it then waits the next trigger and resumes execution in state states and transitions are composed as a b the timed execution of an automaton combines the behavior of an action or a the execution of a delayed transition or of a stutter is controlled by an occurrence of the parent trigger signal as for a the execution of an immediate transition is performed without waiting for a trigger or a reset as for an action automaton a b state s do a s on x t s on x t a b polychronous interpretation of synoptic the model of computation on which synoptic relies is that of the polychronous language s ignal this section describes how synoptic programs are interpreted into this core language a brief introduction to s ignal in s ignal a process p consists of the composition of simultaneous equations x f y z over signals x y z a delay equation x y init v defines x every time y is present initially x is defined by the value v and then it is defined by the previous value of y a sampling equation x y when z defines x by y when z is true finally a merge equation x y default z defines x by y when y is present and by z otherwise an equation x y f z can use a boolean or arithmetic operator f to define all of the nth values of the signal x by the result of the application of f to the nth values of the signals y and z the synchronous composition of processes p q consists of the simultaneous solution of the equations in p and in q it is commutative and associative the process restricts the signal x to the lexical scope of p q x y f z p q process in s ignal the presence of a value along a signal x is an expression noted it is true when x is present otherwise it is absent specific processes and operators are defined in s ignal to manipulate clocks explicitly we only use the simplest one y that synchronizes all occurrences of the signals x and y interpretation of blocks the execution of a block is driven by the trigger t of its parent block the block resynchronizes with that 
trigger every time itself or one of its makes an explicit reference to time a skip for an action or a delayed transition s t for an automaton otherwise the elapse of time is sensed from outside the block whose operations on ci are perceived as belonging to the same period as within polychronous interpretation of synoptic ti the interpretation implements this feature by encoding actions and automata using static single assignment as a result and from within a block every sequence of actions a b or transitions a b defines the value of all its variables once and defines intermediate ones in the flow of its execution interpretation of are structurally similar to s ignal programs and equally combined using synchronous composition the interpretation a rt hhpii of a fig is parameterized by the reset and trigger signals of the parent block and returns a process p the input term a and the output term p are marked by a and hhpii for convenience a delayed flow data y init v x initially defines x by the value it is reset to that value every time the reset signal r occurs otherwise it takes the previous value of y in time dataflow f a rt hh a rt a ii data y init v x rt hhx v when r default y init v y ii data y f z x rt hhx y f zii event y x rt hhx when yii a b rt hh a rt b rt ii figure interpretation of connections in fig we write pi for a finite product of processes pn similarly ei is a finite merge default en a functional flow data y f z x defines x by the product of y z by f an event flow event y x connects y to define x particular cases are the operator y to convert an event y to a boolean data and the operator y to convert the boolean data y to an event we write in a and out a for the input and output signals of a a by default the convention of synoptic is to synchronize the input signals of a to the parent trigger it is however possible to define alternative policies one is to the input signals at the pace of the trigger another is to adapt or resample them at that trigger w interpretation of actions the execution of an action a starts at an occurrence of its parent trigger and shall end before the next occurrence of that event during the execution of an action one may also wait and synchronize with this event by issuing a skip a skip has no behavior but to signal the end of an instant all the newly computed values of signals are flushed in memory and execution is resumed upon the next parent trigger action x sends the signal x to its environment execution may continue within the same symbolic instant unless a second emission is performed one shall issue a skip before that an operation x y f z takes the current value of y and z to define the new value of x by the product with f a conditional if x then a else b executes a or b depending on the current value of x as a result only one new value of a variable x should at most be defined within an instant delimited by a start and an end or a skip therefore the interpretation of an action consists of its decomposition in static single assignment form to this end we use an environment e to associate each variable with its definition an expression and a guard that locates it in time spacify project an action holds an internal state s that stores an integer n denoting the current portion of the actions that is being executed state represents the start of the program and each n labels a skip that materializes a synchronized sequence of actions the interpretation a s m g e hhpiin h f of an action a fig takes as parameters the state variable s the state m of 
the current section the guard g that leads to it and the environment it returns a process p the state n and guard h of its continuation and an updated environment we write usege x for the expression that returns the definition of the variable x at the guard g and defge x for storing the final values of all variables x defined in e x v e at the guard usege x i f x v e then hhe x i i else hh x init when gii defg e e x usege x execution is started with s upon receipt of a trigger it is also resumed from a skip at s n with a trigger hence the signal t is synchronized to the state s of the action the signal r is used to inform the parent block an automaton that the execution of the action has finished it is back to its initial state an end resets s to stores all variables x defined in e with an equation x usege x and finally stops its returned guard is a skip advances s to the next label n when it receives control upon the guard e and flushes the variables defined so far it returns a new guard s init n to resume the actions past it an action x emits x when its guard e is true a sequence a b evaluates a to the process p and passes its state na guard ga environment ea to b it returns p q with the state guard and environment of b similarly a conditional evaluates a with the guard g when x to p and b with g when not x to q it returns p q but with the guard ga default gb all variables x x defined in both ea and eb are merged in the environment do a rt hh p r s where hhpiin h f a end s pre end s n g e hhs when g defg e skip s n g e hhs n when g defg e s pre x s n g e hhx when giin g e x y f z s n g e hhx eiin g ex where e hh f usege y usege z when gii a b s n g e hhp qiinb gb eb where hhpiina ga ea a s n g e and hhqiinb gb eb b s na ga ea if x then a else b s n g e hhp qiinb ga default gb ea eb g g where hhpiina ga ea a s n g when usee x e and hhqiinb gb eb b s na g when not usee x e figure interpretation of timed sequential actions in fig we write e f to merge the definitions in the environments e and for all variables x v e v f in the domains of e and f x v e v f e x f x x v f v e e f x e x default f x x v e v f note that an action can not be reset from the parent clock because it is not synchronized to it a sequence of emissions x x yields only one event along the signal x because they occur at the same logical time as opposed to x skip x which sends the second one during the next trigger polychronous interpretation of synoptic interpretation of automata an automaton describes a hierarchic structure consisting of actions that are executed upon entry in a state by immediate and delayed transitions an immediate transition occurs during the period of time allocated to a trigger hence it does not synchronize to it conversely a delayed transition occurs upon synchronization with the next occurrence of the parent trigger event as a result an automaton is partitioned in regions each region corresponds to the amount of calculation that can be performed within the period of a trigger starting from a given initial state notations we write and for the immediate and delayed transition relations of an automaton a we write s t t x s r and s t s x t r resp s and s for the predecessor and successor states of the immediate resp delayed transitions resp from a state s in an automaton we write for the region of a state it is defined by an equivalence relation t s a s x t for any state s of a written s s a it is required that the restriction of to the region is acyclic notice that still a delayed transition may take 
place between two states of the same region interpretation an automaton a is interpreted by a process automaton x a rt parameterized by its parent trigger and reset signals the interpretation of a defines a local state it is synchronized to the parent trigger it is set to the initial state upon receipt of a reset signal r and otherwise takes the previous value of that denotes the next state the interpretation of all states is performed concurrently we give all states si of an automaton a a unique integer label i dsi e and designate with dae its number of states is the initial state and for each state of index i we call ai its action i and xi j the guard of an immediate or delayed transition from si to s j automaton x a rt hh t s s when r default init a si s ii the interpretation si s of all states i dae of an automaton fig is implemented by a series of mutually recursive equations that define the meaning of each state si depending on the result obtained for its predecessors s j in the same region since a region is by definition acyclic this system of equations has therefore a unique solution the interpretation of state si starts with that of its actions ai an action ai defines a local state si synchronized to the parent state s i of the automaton the automaton stutters with s if the evaluation of the action is not finished it is in a local state si interpreting the actions ai requires the definition of a guard gi and of an environment ei the guard gi defines when ai starts it requires the local state to be or the state si to receive control from a predecessor s j in the same region with the guard x ji the environment ei is constructed by merging these fj returned by its immediate predecessors s j once these parameters are defined the interpretation of ai returns a process pi together with an exit guard hi and an environment fi holding the value of all variables it defines upon evaluation of ai delayed transition from si are checked this is done by the definition of a process qi which first checks if the guard xi j of a delayed transition from si evaluates to true with fi if so variables defined in fi are stored with defhi fi spacify project all delayed transitions from si to s j are guarded by hi one must have finished evaluating i before moving to j and a condition gi j defined by the value of the guard xi j the default condition is to stay in the current state s while si until mode i is terminated hence the next state from i is defined by the equation the next state equation of each state w is composed with the other to form the product dae that is merged as i dae dae si s pi qi si when s i where i ei hhpi iin hi fi ai si g qi si xi j s j defhi when usefi xi j fi ei u s j si fj gi when si init default use x e ji s j x ji si gi j hi when usefi xi j si xi j s j s when si default si xi j s j j when gi j figure recursive interpretation of a mode automaton conclusion synoptic has a formal semantics defined in terms of the synchronous language s ignal on the one hand this allows for neat integration of verification environments for ascertaining properties of the system under development on the other hand a formal semantics makes it possible to encode the metamodel in a proof assistant in this sense synoptic will profit from the formal correctness proof and subsequent certification of a code generator that is under way in the geneauto project moreover the formal model of s ignal is the basis for the polychronous modeling environment sme sme is used to transform synoptic diagrams and generate 
executable c code references toom naks pantel gandriau and wati geneauto an automatic code generator for a safe subset of european congress on embedded real time software erts des de l automobile le guernic talpin and le lann polychrony for system design journal for circuits systems and computers special issue on application specific hardware design world scientific polychrony and sme available at http brunette talpin and gautier a metamodel for the design of polychronous systems the journal of logic and algebraic programming elsevier
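
for readers who prefer code to grammars, the block, dataflow, action and automaton syntax recalled earlier in this paper can be rendered as plain data structures. the sketch below is our own illustration and is not part of the synoptic or spacify tooling; it fixes only the shape of terms, while the semantics remains the signal interpretation given above.

```python
# Illustrative rendering of the Synoptic abstract syntax as Python dataclasses
# (an assumption-laden sketch for exposition, not the project's metamodel).
from dataclasses import dataclass
from typing import List, Union

@dataclass
class DataConn:       # data y -f-> x, or delayed feedback data y $init v -> x
    src: str
    dst: str
    op: str = "id"
    init: object = None

@dataclass
class EventConn:      # event x -> y
    src: str
    dst: str

@dataclass
class Assign:         # x := f(y, z)
    target: str
    op: str
    args: List[str]

@dataclass
class Skip:           # end of instant: flush newly computed values
    pass

@dataclass
class Transition:     # immediate or delayed transition s --x--> t
    src: str
    guard: str
    dst: str
    delayed: bool

@dataclass
class Block:          # unit of execution driven by trigger and reset signals
    name: str
    dataflow: List[Union[DataConn, EventConn]]
    actions: List[Union[Assign, Skip]]
    transitions: List[Transition]

alarm_mode = Block(
    name="alarm",
    dataflow=[DataConn("sensor", "level", op="scale")],
    actions=[Assign("count", "+", ["count", "one"]), Skip()],
    transitions=[Transition("idle", "level_high", "alert", delayed=True)],
)
```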
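
the signal interpretation used above rests on three core equations: the delay y$ init v, the sampling y when z and the merge y default z. the following toy model, which is not the signal toolchain, evaluates them over finite, index-aligned traces in which None marks the absence of a signal at an instant; real signal programs are polychronous, so this per-index alignment is a deliberate simplification for illustration only.

```python
# Executable toy model of the three core SIGNAL equations over finite traces
# (an illustrative simplification, not the SIGNAL compiler).

def delay(y, init):
    """x := y$ init v: where y is present, x carries y's previous present value."""
    out, prev = [], init
    for v in y:
        if v is None:
            out.append(None)
        else:
            out.append(prev)
            prev = v
    return out

def when(y, z):
    """x := y when z: x carries y at instants where z is present and true."""
    return [v if (v is not None and c is True) else None for v, c in zip(y, z)]

def default(y, z):
    """x := y default z: x is y when y is present, otherwise z."""
    return [v if v is not None else w for v, w in zip(y, z)]

y = [1, 2, None, 3]
z = [True, False, True, True]
print(delay(y, 0))    # [0, 1, None, 2]
print(when(y, z))     # [1, None, None, 3]
print(default(y, z))  # [1, 2, True, 3]
```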
6
distortion for abelian subgroups of out fn jan derrick wigglesworth abstract we prove that abelian subgroups of the outer automorphism group of a free group are quasiisometrically embedded our proof uses recent developments in the theory of train track maps by feighnhandel as an application we prove the rank conjecture for out fn introduction given a finitely generated group g a finitely generated subgroup h is undistorted if the inclusion h g is a embedding with respect to the word metrics on g and h for some any finite generating sets a standard technique for showing that a subgroup is undistorted involves finding a space on which g acts nicely and constructing a height function on this space satisfying certain properties elements which are large in the word metric on h should change the height function by a lot elements of a fixed generating set for g should change the function by a uniformly bounded amount in this paper we use a couple of variations of this method let rn be the wedge of n circles and let fn be its fundamental group the free group of rank n the outer automorphism group of the free group out fn is defined as the quotient of aut fn by the inner automorphisms those which arise from conjugation by a fixed element much of the study of out fn draws parallels with the study of mapping class groups furthermore many theorems concerning out fn and their proofs are inspired by analogous theorems and proofs in the context of mapping class groups both groups satisfy the tits alternative both have finite virtual cohomological dimension and both have serre s property fa to name a few importantly this approach to the study of out fn has yielded a classification of its elements in analogy with the classification of elements of the mapping class group along with constructive ways for finding good representatives of these elements in the authors proved that infinite cyclic subgroups of the mapping class group are undistorted their proof also implies that higher rank abelian subgroups are undistorted in proved that infinite cyclic subgroups of out fn are undistorted in contrast with the mapping class group setting her proof does not directly apply to higher rank subgroups the question of whether all abelian subgroups of out fn are undistorted has been left open in this paper we answer this in the affirmative theorem abelian subgroups of out fn are undistorted this theorem has implications for various open problems in the study of out fn in behrstock and minsky prove that the geometric rank of the mapping class group is equal to the maximal rank of an abelian subgroup of the mapping class group as an application of theorem we prove the analogous result in the out fn setting corollary the geometric rank of out fn is which is the maximal rank of an abelian subgroup of out fn we remark that in principle this could have been done earlier by using the techniques in to show that a specific maximal rank abelian subgroup is undistorted mathematics subject classification primary january the author is partially supported by the nsf grant of mladen bestvina and also acknowledges support from national science foundation grants and derrick wigglesworth in the course of proving theorem we show that up to finite index only finitely many marked graphs are needed to get good representatives of every element of an abelian subgroup of out fn in the setting of mapping class groups the analogous statement is that for a surface s and an abelian subgroup h of mcg s there is a thurston decomposition of s into 
disjoint subsurfaces which is respected by every element of this can also be viewed as a version of the kolchin theorem of for abelian subgroups we prove proposition for any abelian subgroup h of out fn there exists a finite index subgroup h such that every h can be realized as a ct on one of finitely many marked graphs the paper is outlined as follows in section we prove that the translation distance of an arbitrary element of out fn acting on outer space is the maximum of the logarithm of the expansion factors associated to the exponentially growing strata in a relative train track map for this result was obtained previously and independently by richard wade in his thesis this is the analog for out fn of bers result that the translation distance of a mapping class f acting on space endowed with the metric is the maximum of the logarithms of the dilatation constants for the components in the thurston decomposition of f in section we then use our result on translation distance to prove the main theorem in the special case where the abelian subgroup h has enough exponential data more precisely we will prove the result under the assumption that the collection of expansion factor homomorphisms determines an injective map h zn in section we prove proposition and then use this in section to prove the main result in the case that h has enough polynomial data this is the most technical part of the paper because we need to obtain significantly more control over the types of that can occur in nice circuits in a marked graph than was previously available the bulk of the work goes towards proving proposition this result provides a connection between the comparison homomorphisms introduced in which are only defined on subgroups of out fn and s twisting function we then use this connection to complete the proof of our main result in the polynomial case finally in section we consolidate results from previous sections to prove theorem the methods used in sections and can be carried out with minimal modification in the general setting i would like to thank my advisor mladen bestvina for many hours of his time and for his patience i would also like to thank mark feighn for his encouragement and support finally i would also like to express my gratitude to radhika gupta for patiently listening to me go on about completely split paths for weeks on end and to msri for its hospitality and partial support preliminaries identify fn with rn once and for all a marked graph g is a finite graph of rank n with no valence one vertices equipped with a homotopy equivalence rn g called a marking the marking identifies fn with g as such a homotopy equivalence f g g determines an outer automorphism of fn we say that f g g represents all homotopy equivalences will be assumed to map vertices to vertices and the restriction to any edge will be assumed to be an immersion let be the universal cover of the marked graph a path in g resp is either an isometric immersion of a possibly infinite closed interval i g resp or a constant map i g resp if is a constant map the path will be called trivial if i is finite then any map i g resp is homotopic rel endpoints to a unique path we say that is obtained by tightening if f g g is a homotopy equivalence and is a path in g we define f as f if is a lift of f we define similarly if the domain of is finite then the image has a natural decomposition into edges ek called the edge path associated to a circuit is an immersion s for any path or circuit let be with its orientation reversed a 
decomposition of a path or circuit into subpaths is a splitting for f g g and is denoted k k k if f f f for all k let g be a graph an unordered pair of oriented edges is a turn if and have the same initial endpoint as with paths we denote by e the edge e with the opposite orientation if is a path distortion for abelian subgroups of out fn which contains e or e in its edge path then we say takes the turn a train track structure on g is an equivalence relation on the set of edges of g such that implies and have the same initial vertex a turn is legal with respect to a train track structure if a path is legal if every turn crossed by the associated edge path is legal the equivalence classes of this relation are called gates a homotopy equivalence f g g induces a train track structure on g as follows f determines a map df on oriented edges in g by definining df e to be the first edge in the edge path f e we then declare if d f k d f k for some k a filtration for a representative f g g of an outer automorphism is an increasing sequence of f subgraphs gm we let hi gi and call hi the stratum a turn with one edge in hi and the other in is called mixed while a turn with both edges in hi is called a turn in hi if gi does not contain any illegal turns in hi then we say is we denote by mi the submatrix of the transition matrix for f obtained by deleting all rows and columns except those labeled by edges in hi for the representatives that will be of interest to us the transition matrices mi will come in three flavors mi may be a zero matrix it may be the identity matrix or it may be an irreducible matrix with eigenvalue we will call hi a zero z growing neg or exponentially growing eg stratum according to these possibilities any stratum which is not a zero stratum is called an irreducible stratum definition we say that f g g is a relative train track map representing out fn if for every exponentially growing stratum hr the following hold df maps the set of oriented edges in hr to itself in particular all mixed turns are legal if is a nontrivial path with endpoints in hr then so is f if gr is then f is suppose that u r that hu is irreducible hr is eg and each component of gr is and that for each u i r hi is a zero stratum which is a component of and eachsvertex of hi has r valence at least two in gr then we say that hi is enveloped by hr and we define hrz hk k a path or circuit in a representative f g g is called a periodic nielsen path if f for some k if k then is a nielsen path a nielsen path is indivisible if it can not be written as a concatenation of nielsen paths if w is a closed nielsen path and ei is an edge such that f ei ei wdi then we say e is a linear edge and we call w the axis of if ei ej are distinct linear edges with the same axis such that di dj and di dj then we call a path of the form ei e j an exceptional path in the same scenario if di and dj have different signs we call such a path a path we say that x and y are nielsen equivalent if there is a nielsen path in g whose endpoints are x and y we say that a periodic point x g is principal if neither of the following conditions hold x is not an endpoint of a periodic nielsen path and there are exactly two periodic directions at x both of which are contained in the same eg stratum x is contained in a component c of periodic points that is topologically a circle and each point in c has exactly two periodic directions a relative train track map f is called rotationless if each principal periodic vertex is fixed and if each periodic 
direction based at a principal vertex is fixed we remark that there is a closely related notion of an outer automorphism being rotationless we will not need this definition but will need the following relevant facts from theorem corollary there exists k depending only on n so that is rotationless for every out fn theorem corollary for each abelian subgroup a of out fn the set of rotationless elements in a is a subgroup of finite index in a for an eg stratum hr we call a path with endpoints in hr a connecting k path for hr let e be an edge in an irreducible stratum hr and let be a maximal subpath of f e in a zero stratum for some k then we say that is taken a path or circuit is called derrick wigglesworth completely split if it has a splitting where each of the s is a single edge in an irreducible stratum an indivisible nielsen path an exceptional path or a connecting path in a zero stratum which is both maximal and taken we say that a relative train track map is completely split if f e is completely split for every edge e in an irreducible stratum and if for every taken connecting path in a zero stratum f is completely split definition a relative train track map f g g and filtration f given by gm g is said to be a ct if it satisfies the following properties rotationless f g g is rotationless completely split f g g is completely split filtration f is reduced the core of each filtration element is a filtration element vertices the endpoints of all indivisible periodic necessarily fixed nielsen paths are necessarily principal vertices the terminal endpoint of each neg edge is principal and hence fixed periodic edges each periodic edge is fixed and each endpoint of a fixed edge is principal if the unique edge er in a fixed stratum hr is not a loop then is a core graph and both ends of er are contained in zero strata if hi is a zero stratum then hi is enveloped by an eg stratum hr each edge in hi is and each vertex in hi is contained in hr and has link contained in hi hr linear edges for each linear ei there is a closed nielsen path wi such that f ei ei widi for some di if ei and ej are distinct linear edges with the same axes then wi wj and di dj neg nielsen paths if the highest edges in an indivisible nielsen path belong to an neg stratum then there is a linear edge ei with wi as in linear edges and there exists k such that ei wik eg nielsen paths if hr is eg and is an indivisible nielsen path of height r then f fr where fr gr is a composition of proper extended folds defined by iteratively folding is a composition of folds involving edges in gr is a homeomorphism we remark that several of the properties in definition use terms that have not been defined we will not use these properties in the sequel the main result for cts is the following existence theorem theorem theorem every rotationless out fn is represented by a ct f g for completely split paths and circuits all cancellation under iteration of f is confined to the individual terms of the splitting moreover f has a complete splitting which refines that of finally just as with improved relative train track maps introduced in every circuit or path with endpoints at vertices eventually is completely split culler and vogtmann s outer space cvn is defined as the space of homothety classes of free minimal actions of fn on simplicial metric trees outer space has a metric defined in analogy with the metric on space the distance from t to t is defined as the logarithm of the infimal lipschitz constant among all fn maps f t t let be the universal 
cover of the marked graph each c fn acts by a covering translation tc which is a hyperbolic isometry and therefore has an axis which we denote by ac the projection of ac to g is the circuit corresponding to the conjugacy class if e is a linear edge in a ct so that f e ewd as in linear edges then we say w is the axis of the space of lines in is denoted and is the set where denotes the diagonal and acts by interchanging the factors equipped with the topology the space of abstract lines is denoted by fn and defined by the action of fn on resp induces an action on fn resp the marking of g defines an fn homeomorphism between fn and the quotient of by the fn action is the space of lines in g and is denoted b g the space of abstract lines in rn is denoted by distortion for abelian subgroups of out fn a lamination is a closed set of lines in g or equivalently a closed fn subset of the elements of a lamination are its leaves associated to each out fn is a finite set of attracting laminations denoted by l in the coordinates given by a relative train track map f g g representing the attracting laminations for are in bijection with the eg strata of for each attracting lamination l there is an associated expansion factor homomorphism p stabout fn z which has been studied in we briefly describe the essential features of p here but we direct the reader to for more details on lines laminations and expansion factor homomorphisms for each stab at most one of l and l can contain if neither l nor l contains then p let f g g be a relative train track map representing if l and hr is the eg stratum of g associated to with corresponding pf eigenvalue then p log conversely if l then p log where is the pf eigenvalue for the eg stratum of a rtt representative of which is associated to the image of p is a discrete subset of r which we will frequently identify with z for out fn each element l has a paired lamination in l which is denoted by the paired lamination is characterized by the fact that it has the same free factor support as that is the minimal free factor carrying is the same as that which carries we denote the pair by translation lengths in cvn in this section we will compute the translation distance for an arbitrary element of out fn acting on outer space as is standard for out fn we define the translation distance of on outer space n it is straightforward to check that this is independent of x cvn for the as d x n remainder of this section out fn will be fixed and f g g will be a relative train track map representing with filtration gm lemma if hr is an exponentially growing stratum of g then there exists a metric on g such that f e e for every edge e hr where is the eigenvalue associated to hr proof let mr be the transition matrix for the exponentially growing stratum p hr and let v be a left eigenvector for the pf eigenvalue with components v i normalize v so that v i for ei hr define ei v i if e hr define e we now check the condition on the growth of edges in the eg stratum hr if e is an edge in hr implies that f e is now write f e f e as an edge path f e ej and we have f e f e j x ei j x ei hr e completing the proof of the lemma we define the r of a path or circuit in g by ignoring the edges in other strata explicitly r hr where hr is considered as a disjoint union of of note that the definition of and the proof of the previous lemma show that r f ei ei lemma if is an reduced edge path in g and is the metric defined in lemma then r f r proof we write bj as a decomposition into maximal subpaths where 
aj hr and bj as in lemma of applying the lemma we conclude that f f f f f bj thus x x x x r f r f ai r f bi r f ai r ai r i i i i theorem let out fn with f g g a rtt representative for each eg stratum hr of f let be the associated pf eigenvalue then max log hr is an eg stratum derrick wigglesworth proof we first show that log for every eg stratum hr let x g id where is the length function provided by lemma recall that the logarithm of the factor by which a candidate loop is stretched gives a lower bound on the distance between two points in cvn let be an circuit n contained in gr of height r and let c r implies that f is for all n so repeatedly applying lemma we have n n n f r f r f r f r f r c r r f r f rearranging the inequality taking logarithms and using the result of yields log c log c d x x log n n n taking the limit as n we have a lower bound on the translation distance of for the reverse inequality fix we must find a point in outer space which is moved by no more than max log the idea is to choose a point in the simplex of cvn corresponding to a relative train track map for in which each stratum is much larger than the previous one this way the metric will see the growth in every eg stratum let f g g be a relative train track map as before but assume that each neg stratum consists of a single edge this is justified for example by choosing f to be a ct let k be the maximum edge length of the image of any edge of define a length function on g as follows r if e is the unique edge in the neg stratum hr e r if e is an edge in the zero stratum hr r vi if ei hr and hr is an eg stratum with as above the logarithm of the maximum amount that any edge is stretched in a difference of markings map gives an upper bound on the lipschitz distance between any two points so we just check the factor by which every edge is stretched clearly the stretch factor for edges in fixed strata is if e is the single edge in an neg stratum hi then f e e k max e e i k e e i similarly if e is an edge in the zero stratum hi then k f e e i we will use the notation to denote the length of the intersection of with so for any path contained in gr we have r now if ei is an edge in the eg stratum hr with normalized pf eigenvector v then f ei r f ei f ei f ei k r ei ei ei r v i v i e since the vector v is determined by f after replacing we have that f e max for every edge of thus the distance g is moved by is less than max log and the proof is complete now that we have computed the translation distance of an arbitrary acting on outer space we ll use this result to establish our main result in a special case the exponential case in this section we ll analyze the case that the abelian subgroup h i has enough exponential data so that the entire group is seen by the so called lambda map more precisely given an attracting lamination for an outer automorphism let p stab z be the expansion factor homomorphism defined by corollary of in corollary the authors prove that every abelian subgroup of out fn has a finite index subgroup which is rotationless meaning that every element of the distortion for abelian subgroups of out fn subgroup is rotationless distortion is unaffectedsby passing to a finite index subgroup so there is no loss in assuming that h is rotationless now let l h l be the set of attracting laminations for elements of by lemma l h is a finite set of laminations define p fh h z l h by taking the collection of expansion factor homomorphisms for attracting laminations of the subgroup in what follows we will need to 
interchange p for p and for that we will need the following lemma pf lemma if l and l are paired laminations then p is a constant map that is p and p differ by a multiplicative constant and so determine the same homomorphism proof first corollary of gives that stab stab which we will henceforth refer to as stab so the ratio in the statement is always well defined now p and p each determine a homomorphism from stab to r and it suffices to show that these homomorphisms have the same kernel suppose ker p so that by corollary either l or l after replacing by if necessary we may assume l now has a paired lamination l which a priori could be different from but corollary of says that in fact and therefore that l a final application of corollary gives that ker p this concludes the proof theorem if p fh is injective then h is undistorted in out fn proof let k be the rank of h and start by choosing laminations l h so the restriction of the function p fh to the coordinates determined by is still injective first note that can not contain an lamination pair by lemma next pass to a finite index subgroup of h and choose generators so that after reordering the s if necessary each generator satisfies p fh p let cvn be arbitrary and let we complete the proof one orthant at a time by replacing some of the s by their inverses so that all the pi s are next after replacing some of the s by their paired laminations again using lemma we may assume that p fh has all coordinates nonnegative by theorem the translation distance of is the maximum of the eigenvalues associated to the eg strata of a relative train track representative f of some but not necessarily all of are attracting laminations for those s which are in l are associated to eg strata of f for such a stratum the logarithm of the pf eigenvalue is p and the fact that p is a homomorphism implies p p p pk p pi p thus the translation distance of acting on outer space is max log is pf eigenvalue associated to an eg stratum of max p is in l and i k max pi p i k in the last equality the maximum is taken over a larger set but the only values added to the set were let s be a symmetric s s generating set for out fn and let d s if we write in terms of the generators sl then d d sl d sl sl d sl sl d sl d d fn let min p j i j k rearranging this and combining these inequalities we have i fn d max pi p i k max pi we have thus proved that the image of h under the injective homomorphism p fh is undistorted in zk to conclude the proof recall that any injective homomorphism between abelian groups is a embedding derrick wigglesworth now that we have established our result in the exponential setting we move on to the polynomial case first we prove a general result about cts representing elements of abelian subgroups abelian subgroups are virtually finitely filtered in this section we prove an analog of theorem for abelian subgroups in that paper the authors prove that any unipotent subgroup of out fn is contained in the subgroup q of homotopy equivalences respecting a fixed filtration on a fixed graph they call such a subgroup filtered while generic abelian subgroups of out fn are not unipotent we prove that they are virtually filtered namely that such a subgroup is virtually contained in the union of finitely many q s first we review the comparison homomorphisms introduced in comparison homomorphisms feighn and handel defined certain homomorphisms to z which measure the growth of linear edges and families in a ct representative though they can be given a canonical 
description in terms of principal lifts we will only need their properties in coordinates given by a ct presently we will define these homomorphisms and recall some basic facts about them complete details on comparison homomorphisms can be found in comparison homomorphisms are defined in terms of principal sets for the subgroup the exact definition of a principal set is not important for us we only need to know that a principal set x for an abelian subgroup h is a subset of which defines a lift s h aut fn of h to the automorphism group let and be two principal sets for h that define distinct lifts and to aut fn suppose further that contains the endpoints of an axis ac since h is abelian h aut fn defined by is a homomorphism it follows from lemma that for any h ikc for some k where ic aut fn aut fn denotes conjugation by therefore defines homomorphism into hic i which we call the comparison homomorphism determined by and generally we will use the letter for comparison homomorphisms for a rotationless abelian subgroup h there are only finitely many comparison homomorphisms lemma let k be the number of distinct comparison homomorphisms and as before let n be the number of attracting laminations for the map h zn defined as the product of the comparison homomorphisms and expansion factor homomorphisms is injective lemma an element h is called generic if every coordinate of zn is if is generic and f g g is a ct representing then there is a correspondence between the comparison homomorphisms for h and the linear edges and families in g described in the introduction to of which we briefly describe now there is a comparison homomorphism for each linear edge ei in if f ei ei udi then di there is also a comparison homomorphism for each family ei e j which is denoted by e j if ei is as before and f ej ej udj then e j and di dj we illustrate this correspondence with an example example let g be the rose with three petals labeled a b and for i j z define gi j g g as follows a a gi j b bai c caj each gi j determines an outer automorphism of which we denote by j the automorphisms j all lie in the rank two abelian subgroup h i the subgroup h has three comparison homomorphisms which are easily understood in the coordinates of a ct for a generic element of the element is generic in h and is a ct representing it two of the comparison homomorphisms manifest as and where j i and j j the third homomorphism is denoted by c and it measures how a path of the form c changes when gi j is applied since gi j c c we have c j i j in the sequel we will rely heavily on this correspondence between the comparison homomorphisms of h and the linear edges and families in a ct for a generic element of we now prove the main result of this section distortion for abelian subgroups of out fn proposition for any abelian subgroup h of out fn there exists a finite index subgroup h such that every h can be realized as a ct on one of finitely many marked graphs most of the proof consists of restating and combining results of feighn and handel from we refer the reader to of their paper for the relevant notation and most of the relevant results proof first replace h by a finite index rotationless subgroup corollary the proof is by induction on the rank of the base case follows directly from lemma let h and let f be ct s for and which are both generic in the definitions then guarantee that i i i i for i is both generic and admissible lemma then says that is a ct representing i so we are done assume now that the claim holds for all abelian 
subgroups of rank less than k and let h i the set of generic elements of h is the complement of a finite lemma collection of hyperplanes every element lies in a rank k abelian subgroup of h the kernel of the corresponding comparison homomorphism by induction and the fact that there are only finitely many hyperplanes every element has a ct representative on one of finitely many marked graphs we now add a single marked graph for each sector defined by the complement of the hyperplanes let be generic and let f g g be a ct representative let d be the disintegration of as defined in and recall that d h is finite index in h theorem let be the semigroup of generic elements of d h that lie in the same sector of h as for every and every coordinate of the signs of and agree the claim is that every element of can be realized as a ct on the marked graph g and we will show this by explicitly reconstructing the generic tuple a such that fa fix and let be a generating set for h with ai generic corollary write as a word in the generators and define a jk ak since the admissibility condition is a set of homogeneous linear equations which must be preserved under taking linear combinations as long as every coordinate of a is a must be admissible to see that every coordinate of a is in fact positive let be a coordinate of using the fact that is a homomorphism to z and repeatedly applying lemma to the s we have jk s s jk ak s jk ak s a s where a s denotes the coordinate of the vector a since and were assumed to be generic and to lie in the same sector we conclude that every coordinate of a is positive the injectivity lemma then implies that fa that a is in fact generic follows from the fact which is directly implied by the definitions that if a is a generic tuple then is a generic element of finally we apply lemma to conclude that fa g g is a ct thus every element of has a ct representative on the marked filtered graph repeating this argument in each of the finitely many sectors and passing to the intersection of all the finite index subgroups obtained this way yields a finite index subgroup h and finitely many marked graphs so that every generic element of h can be realized as a ct on one of the marked graphs the elements were already dealt with using the inductive hypothesis so the proof is complete the polynomial case in the author introduced a function that measures the twisting of conjugacy classes about an axis in fn and used this function to prove that cyclic subgroups of upg are undistorted in order to use the comparison homomorphisms in conjunction with this twisting function we need to establish a result about the possible terms occuring in completely split circuits after establishing this connection we use it to prove theorem the main result under the assumption that h has enough polynomial data derrick wigglesworth in the last section we saw the correspondence between comparison homomorphisms and certain types of paths in a ct in order to use the twisting function from our goal is to find circuits in g with single linear edges or families as subpaths and moreover to do so in such a way that we can control cancellation at the ends of these subpaths under iteration of f this is the most technical section of the paper and the one that most heavily relies on the use of cts the main result is proposition completely split circuits one of the main features of train track maps is that they allow one to k understand how cancellation occurs when tightening f k to f in previous incarnations of train track maps 
this cancellation was understood inductively based on the height of the path one of the main advantages of completely split train track maps is that the way cancellation can occur is now understood directly rather than inductively given a ct f g g representing the set of allowed terms in completely split paths would be finite were it not for the following two situations a linear edge e eu gives rise to an infinite family of inps of the form e and two linear edges with the same axis with and having the same sign give rise to an infinite family of exceptional paths of the form e to see that these are the only two subtleties one only needs to know that there is at most one inp of height r for each eg stratum hr this is precisely corollary to connect s comparison homomorphisms to s twisting function we would like to show that every linear edge and exceptional family occurs as a term in the complete splitting of some completely split circuit we will in fact show something stronger proposition there is a completely split circuit containing every allowable term in its complete splitting that is the complete splitting of contains at least one instance of every edge in an irreducible stratum fixed neg or eg maximal taken connecting subpath in a zero stratum infinite family of inps e infinite family of exceptional paths e the proof of this proposition will require a careful study of completely split paths with that aim we define a directed graph that encodes the complete splittings of such paths given a ct f g g representing define a csp f or just csp when f is clear whose vertices are oriented allowed terms in completely split paths more precisely there are two vertices for each edge in an irreducible stratum one labeled by e and one labeled by e which we will refer to at and there are two vertices for each maximal taken connecting path in a zero stratum one for and one for which will be referred to as and similarly there are two vertices for each family of exceptional paths two vertices for each inp of eg height and one vertex for each infinite family of neg nielsen paths there is only one vertex for each family of indivisible nielsen path whose height is neg because and determine the same initial direction there is an edge connecting two vertices and in csp f if the path is completely split with splitting given by this is equivalent to the turn being legal by the uniqueness of complete splittings lemma any completely split path resp circuit with endpoints at vertices in g defines a directed edge path resp directed loop in csp f given by reading off the terms in the complete splitting of conversely a directed path or loop in csp f yields a not quite well defined path or circuit in g which is necessarily completely split the only ambiguity lies in how to define when the path in csp f passes through a vertex labeled by a nielsen path of neg height or a family example consider the rose consisting of two edges a and b with the identity marking let f be defined by a ab b bab this is a ct representing a fully irreducible outer automorphism there is one indivisible nielsen path abab the graph csp f is shown in figure the blue edges represent the fact that each of the paths b b b a b and b a is completely split remark a basic observation about the graph csp is that every vertex has at least one incoming and at least one outgoing edge while this is really just a consequence of the fact that every vertex in a ct has at least two gates a bit of care is needed to justify this formally indeed let v be the 
initial endpoint distortion for abelian subgroups of out fn figure the graph of csp f for example of if there is some legal turn e at v where e is an edge in an irreducible stratum then e is completely split so there is an edge in csp from to the other possibility is that the only legal turns at v consist of an edge in a zero stratum hi in this case zero strata guarantees that v is contained in the eg stratum hr which envelops hi and that the link of v is contained in hi hr in particular there are a limited number of possibilities for may be a taken connecting subpath in hi an edge in hr or an k eg inp of height in the first two cases is a term in the complete splitting of f e for some edge by increasing k if necessary we can guarantee that is not the first or last term in this splitting therefore there is a directed edge in csp with terminal endpoint in the case that is an inp has a first edge which is necessarily of eg height we have already established that there is a directed edge in csp pointed to so we just observe that any vertex in csp with a directed edge ending at will also have a directed edge terminating at the same argument shows that there is an edge in csp emanating from the statement of proposition can now be rephrased as a statement about the graph csp namely that there is a directed loop in csp which passes through every vertex we will need some basic terminology from the study of directed graphs we say a is strongly connected if every vertex can be connected to every other vertex in by a directed edge path in any we may define an equivalence relation on the vertices by declaring v w if there is a directed edge path from v to w and vice versa we are required to allow the trivial edge path so that v v of the equivalence classes of this relation partition the vertices of into strongly connected components we will prove that csp f is connected and has one strongly connected component from this proposition follows directly the proof proceeds by induction on the core filtration of g which is the filtration obtained from the given one by considering only the filtration elements which are their own cores because the base case is in fact more difficult than the inductive step we state it as a lemma lemma if f g g is a ct representing a fully irreducible automorphism then csp f is connected and strongly connected proof under these assumptions there are two types of vertices in csp f those labeled by edges and those labeled by inps we denote by csp e the subgraph consisting of only the vertices which are labeled by edges recall that denotes the vertex in csp corresponding to the edge if the leaves of the derrick wigglesworth attracting lamination are then we can produce a path in csp e starting at then passing through every other vertex in csp e and finally returning to by looking at a long segment of a leaf of the attracting lamination more precisely completely split says that f k e is a completely split path for all k and the fact that f is a train track map says that this complete splitting contains no inps moreover irreducibility of the transition matrix and of the lamination implies that for sufficiently large k this path not only contains every edge in g with both orientations but contains the edge e followed by every other edge in g with both of its orientations and then the edge e again such a path in g exactly shows that csp e is connected and strongly connected we isolate the following remark for future reference remark if there is an indivisible nielsen path in g write 
its edge path ek recall that all inps in a ct have endpoints at vertices if is any vertex in csp with a directed edge pointing to then is completely split since the turn must be legal hence there is also a directed edge in csp from to the same argument shows that there is an edge in csp from to some vertex since csp e is strongly connected and the remark implies that each vertex for an inp in g has directed edges coming from and going back into csp e we conclude that csp is strongly connected in the case that leaves of the attracting lamination are now choose an orientation on the attracting lamination if we imagine an ant following the path in g determined by a leaf of then at each vertex v we see the ant arrive along certain edges and leave along others let e be an edge with initial vertex v so that e determines a gate e at we say that e is a departure gate at v if e occurs in some any oriented leaf similarly we say the gate e is an arrival gate at v if the edge e occurs in some gates may be both arrival and departure gates suppose now that there is some vertex v in g that has at least two arrival gates and some vertex w that has at least two departure gates as before we will produce a path in csp e that shows this subgraph has one strongly connected component start at any edge in g and follow a leaf of the lamination until you have crossed every edge with its forward orientation continue following the leaf until you arrive at v say through the gate e since v has two arrival gates there is some edge e which occurs in with the given orientation and whose terminal vertex is v e is a second arrival gate now turn onto e since e and e are distinct gates this turn is legal follow going backwards until you have crossed every edge of g now in the opposite direction finally continue following until you arrive at w where there are now two arrival gates because you are going backwards use the second arrival gate to turn around a second time and follow now in the forwards direction again until you cross the edge you started with by construction this path in g is completely split and every term in its complete splitting is a single edge the associated path in csp e passes through every vertex and then returns to the starting vertex so csp e is strongly connected in the presence of an inp remark completes the proof of the lemma under the current assumptions we have now reduced to the case where the lamination is orientable and either every vertex has only one departure gate or every vertex has only one arrival gate the critical case is the latter of the two and we would like to conclude in this situation that there is an inp example illustrates this scenario some edges are colored red to illustrate the fact that in order to turn around and get from the vertices labeled by a and b to those labeled by a and b one must use an inp the existence of an inp in this situation is provided by the following lemma lemma assume f g g is a ct representing a fully irreducible rotationless automorphism suppose that the attracting lamination is orientable and that every vertex has exactly one arrival gate then g has an inp and the initial edges of and are oriented consistently with the orientation of the lamination we postpone the proof of this lemma and explain how to conclude our argument if every vertex has one arrival gate then we apply the lemma to conclude that there must be an inp since inps have exactly one illegal turn using the previous argument we can turn around once now if we are again in a situation 
where there is only one arrival gate then we can apply the lemma a second time this time with the orientation of reversed to obtain the existence of a second inp allowing us to turn around a second time distortion for abelian subgroups of out fn we remark that since there is at most one inp in each eg stratum of a ct lemma implies that if the lamination is orientable then some vertex of g must have at least gates proof of lemma there is a vertex of g that is fixed by f since lemma guarantees that every eg stratum contains at least one principal vertex and principal vertices are fixed by rotationless choose such a vertex v and let be a lift of v to the universal cover of let g be the unique arrival gate at lift f to a map fixing let t be the infinite subtree of consisting of all embedded rays starting at and leaving every vertex through its unique arrival gate that is and whenever t is a vertex t should be the unique arrival gate at t refer to figure for the tree t for example a a a b b a a b a a a b b b a b a b b a a a b b a a b figure the tree t for example the red path connects two vertices of the same height first we claim that t t to see this notice that since f is a topological representative it suffices to show that p t for every vertex p of t notice that vertices p of t are characterized by two things first p is legal and second for every edge e in the edge path of p the gate e is the unique arrival gate at the initial endpoint of now p p is legal because f is a train track map moreover every edge e in the edge path of p occurs with orientation in a leaf of the lamination since takes leaves to leaves preserving orientation the same is true for p the gate determined by every edge in the edge path of is the unique arrival gate at that vertex thus for every edge e in the edge path of p e is the unique arrival gate at that vertex which means that p t endow g with a metric using the left pf eigenvector of the transition matrix so that for every edge of g we have f e e where is the pf eigenvalue of the transition matrix lift the metric on g to a metric on and define a height function on the tree t by measuring the distance to h p d p since legal paths are stretched by exactly we have that for any p t h p p now let w and be two distinct lifts of v with the same height h w h to see that this is possible just take and to be two distinct circuits in g based at v which are obtained by following a leaf of the lamination the initial vertices of the lifts of and which end at are distinct lifts of v which are contained in t and have the same height k let be the unique embedded segment connecting w to in t by lemma is k completely split for all sufficiently large moreover the endpoints of f are distinct since the restriction of to the lifts of v is injective this is simply because represents an automorphism of k fn and lifts of v correspond to elements of fn now observe that the endpoints have the same height and for any pair of distinct vertices with the same height the unique embedded segment connecting them must contain an illegal turn this follows from the definition of t and the assumption that every vertex has k a unique arrival gate therefore the completely split path contains an illegal turn in particular it derrick wigglesworth must have an inp in its complete splitting that the initial edges of and are oriented consistently with the orientation on is evident from the construction the key to the inductive step is provided by the moving up through the filtration lemma from which explicitly 
describes how the graph g can change when moving from one element of the core filtration to the next recall the core filtration of g is the filtration glk gm g obtained by restricting to those filtration elements which are their own cores for each gli the stratum of the core sli hj finally we let filtration is defined to be hlci i gli denote the negative of the change in euler characteristic lemma lemma if hlci does not contain any eg strata then one of the following holds a li and the unique edge in hlci is a fixed loop that is disjoint from b li and both endpoints of the unique edge in hlci are contained in c li and the two edges in hlci are nonfixed and have a common initial endpoint that is not in and terminal endpoints in in case in cases and if hlci contains an eg stratum then hli is the unique eg stratum in hlci and there exists ui li such that both of the following hold a for j ui hj is a single nonfixed edge ej whose terminal vertex is in and whose initial vertex has valence one in gui in particular gui deformation retracts to and gui b for ui j li hj is a zero stratum in other words the closure of gli gui is the extended eg stratum hlzi if some component of hlci is disjoint from gui then hlci hli is a component of gli and otherwise as we move up through the core filtration we imagine adding new vertices to csp and adding new edges connecting these vertices to each other and to the vertices already present thus we define csp li to be the subgraph of csp consisting of vertices labeled by allowable terms in gli here we use the fact that the restriction of f to each connected component of an element of the core filtration is a ct the problem with proving that csp is strongly connected by induction on the core filtration is that csp li may have multiple connected components this only happens however if gli has more than one connected component in which case csp li will have multiple connected components if any component of gli is a topological circle necessarily consisting of a single fixed edge e then csp li will have two connected components for this circle lemma for every i k the number of strongly connected components of csp li f is equal to connected components of gli that are circles connected components of gli that are not circles the following proof is in no way difficult it only requires a careful analysis of the many possible cases the only case where there is any real work is in case of lemma proof lemma establishes the base case when is exponentially growing if is a circle then csp has exactly two vertices each with a self loop so the lemma clearly holds we now proceed to the inductive step which is analysis based on lemma we set some notation to be used throughout e will be an edge with initial vertex v and terminal vertex w it s possible that v w we denote by gvli the component of gli containing v and similarly for let csp vli be the component s of csp li containing paths which pass through in the case that gvli is a topological circle there will be two such components in case of lemma csp li is obtained from csp by adding two new vertices and each new vertex has a self loop and no other new edges are added so the number of connected components of csp increases by two each component is strongly connected in case there are several subcases according to the various possibilities for the edge e and the topological types of and gw first suppose that e is a fixed edge then csp li is obtained from distortion for abelian subgroups of out fn csp by adding two new vertices there 
are no new inps since the restriction of f to each component of gli is a ct and any inp is of the form provided by neg nielsen paths or eg nielsen paths as in remark the vertex has an incoming edge with initial endpoint and an outgoing edge with terminal w endpoint moreover csp and csp w we then have a directed edge from csp to and a directed edge from to csp hence there are directed paths in csp li connecting the two strongly connected subgraphs csp and csp w to each other and passing through all new vertices therefore there is one strongly connected component of csp li corresponding to the component of gli containing v and w this component can not be a circle since it contains at least two edges in the case that resp gw is a topological circle we remark that there are incoming resp outgoing edges w v in csp vli resp csp w li to from each of the components of csp resp csp see figure e gw csp f csp w f figure a possibility for gli and the graph csp li when hlci is a single neg edge suppose now that e is a neg edge there are two new vertices in csp li labeled and the argument given in the previous paragraph goes through once we notice that if v w then gw can not be a circle since this would imply that w is not a principal vertex in gli see first bullet point in the definition contradicting the fact that f is a ct vertices is not satisfied if e is a edge then we are done if e is linear then there will be other new vertices in csp li there will be a new vertex for the family of neg nielsen paths the fact that we have concluded the inductive step for the vertex along with remark shows that this new vertex is in the same strongly connected component as there will also be two vertices for each family of exceptional paths e for the exact same reasons these vertices are also in this strongly connected component this concludes the proof in case of lemma the arguments given thus far apply directly to case of lemma we remark that in this case neither of the components of containing the terminal endpoints of the new edges can be circles for the same reason as before the most complicated way that g and hence csp can change is when hlci contains an eg stratum in case of lemma if some component of hlci is disjoint from gui then hlci is a component of gli and the restriction of f to this component is a fully irreducible in particular csp li has one more strongly connected component than csp by lemma though case of lemma describes gli as being built from in three stages from bottom to top somehow it is easier to prove csp li has the correct number of connected components by going from top to bottom by looking at a long segment of a leaf of the attracting lamination for hli we can see as in lemma that the vertices in csp li labeled by edges in the eg stratum hli are in at most two different strongly connected components in fact we can show that these vertices are all in the same strongly connected component since we are working under the assumption that no component of hlci is disjoint from gui we can use one of the components of gui to turn around on a leaf of the lamination indeed choose some component of gui which intersects hli let e be an eg edge in hli with terminal vertex w note that if deformation derrick wigglesworth retracts onto a circle with vertex v then some eg edge in hli must be incident to v since otherwise f would not be a ct thus by replacing e if necessary we may assume in this situation that w is on the circle using the inductive hypothesis and the fact that mixed turns are legal we can 
connect the vertex to the vertex in csp li then we can follow a leaf of the lamination going backwards until we return to w say along e if e e then the leaves of the lamination were in the first place and all the vertices labeled by edges in hli are in the same strongly connected component of csp li otherwise apply the inductive hypothesis again and use the fact that mixed turns are legal to get a path from to this shows all vertices labeled by edges in hli are in the same strongly connected component of csp li we will henceforth denote the strongly connected component of csp li which contains all these vertices by csp eg li if there is an inp of height hli its first and last edges are necessarily in hli remark then implies that and are in csp eg li recall that the only allowable terms in complete splittings which intersect zero strata are connecting paths which are both maximal and taken in particular each vertex in csp li corresponding to such a connecting path is in the aforementioned strongly connected component csp eg li now let e be an neg edge in hlci with terminal vertex there is necessarily an outgoing edge from eg w into csp w and an incoming edge to from csp li if the graph is not a topological circle w then the corresponding component csp is already strongly connected and there is a directed edge from this graph back to and from there back into csp eg li thus this subgraph is contained in the strongly eg connected component csp li on the other hand if gw is a topological circle then there is a directed edge from back into csp eg because mixed turns are legal and as before some edge in hli must be li incident to thus all the vertices in csp li labeled by neg edges are in the strongly connected component w csp eg li as are all vertices in csp for w as above the same argument and the inductive hypothesis shows that for any component of which intersects hli the corresponding strongly connected component s of csp are also in csp eg li the only thing remaining is to deal with neg nielsen paths and families of exceptional paths both of these are handled contains all vertices of the form by remark and the fact that we have already established that csp eg li or for neg edges in hlci we have shown that every vertex of a strongly connected component of csp coming from a component of which intersects hlci is in the strongly connected component csp eg li in particular there is only one strongly connected component of csp li for the component of gli which contains edges in hlci this completes the proof of the proposition in the proof of theorem we will need to consider a weakening of the complete splitting of paths and circuits the splitting of a completely split path or circuit is the coarsening of the complete splitting obtained by considering each subpath to be a single element given a ct f g g we define the graph csp qe f by adding two vertices to csp f for each one for ei e j and one for ej e i for every vertex with a directed edge terminating at add an edge from to e j and similarly for every edge emanating from j add an edge to the same vertex beginning at e j do the same for the vertex e i as before every completely split path gives rise to a directed edge path in csp qe corresponding to its it follows immediately from the definition and proposition that corollary there is a completely split circuit containing every allowable term in its we are now ready to prove our main result in the polynomial case polynomial subgroups are undistorted in this subsection we will complete the proof 
of our main result in the polynomial case we first recall the height function defined by in given two conjugacy classes u w of elements of fn define the twisting of w about u as twu w max k w auk b where u w are a cyclically reduced conjugates of u w distortion for abelian subgroups of out fn then define the twisting of w by tw w max twu w u fn proved the following lemma using bounded cancellation which we restate for convenience a critical point is that is independent of lemma lemma there is a constant such that tw s w tw w for all conjugacy classes w and all s s our symmetric finite generating set of out fn since we typically work with train tracks we have a similar notion of twisting adapted to that setting let be a path or circuit in a graph g and let be a circuit in define the twisting of about as max k k where the path k is immersed then define tw max is a circuit the bounded cancellation lemma of directly implies lemma if rn g and w is a conjugacy class in fn rn then tw w tw w we are now ready to prove for polynomial abelian subgroups recall the map h zn was defined by taking the product of comparison and expansion factor homomorphisms in the following theorem we will denote the restriction of this map to the last k coordinates those corresponding to comparison homomorphisms by theorem let h be a rotationless abelian subgroup of out fn and assume that the map from h into the collection of comparison factor homomorphisms h zk is injective then h is undistorted proof the first step is to note that it suffices to prove the generic elements of h are uniformly undistorted this is just because the set of elements of h is a finite collection of hyperplanes so there is a uniform bound on the distance from a point in one of these hyperplanes to a generic point we set up some constants now for later use this is just to emphasize that they depend only on the subgroup we are given and the data we have been handed thus far let g be the finite set of marked graphs provided by proposition and define as the maximum of bcc and bcc g as g varies over the finitely many marked graphs in lemma then implies that tw w tw w for any conjugacy class w and any of the finitely many marked graphs in let be the constant from lemma fix a minimal generating set for h and let be generic in let f g g be a ct representing with g chosen from g and let be the comparison homomorphism for which is the largest the key point is that given corollary will provide a split circuit for which the twisting will grow by under application of the map f indeed let be the circuit provided by corollary as we discussed in section there is a correspondence between the comparison homomorphisms for h and the set of linear edges and families in assume first that corresponds to the linear edge e with axis u so that by definition f e e since the splitting of f refines that of and e is a term in the complete splitting of f not only contains the path e but in fact splits at the ends of this subpath under iteration t t we see that f contains the path e and therefore tw f this isn t quite good enough for our purposes so we will argue further to conclude that for some tw f tw f t suppose for a contradiction that no such t exists then for every t we have tw f tw f t using a telescoping sum and repeatedly applying this assumption we obtain tw f combining and rearranging inequalities this implies t tw tw f t t t for all t a contradiction this establishes the existence of satisfying equation the above argument works without modification in the case 
that corresponds to a family of quasiexceptional paths we now address the minor adjustment needed in the case that corresponds to a family of exceptional paths ei e j let f ei ei udi and f ej ej udj since contains both ei e j and ej e i in its complete splitting we may assume without loss that di dj the only problem is that the derrick wigglesworth t exponent of u in the term ei e j occuring in the complete splitting of may be negative so that tw f may be less than in this case just replace by a sufficiently high iterate so that the exponent is positive now write in terms of the generators sp so that for any conjugacy class w by repeatedly applying lemma we obtain tw sp w tw sp w tw sp w tw w so that fn tw w tw w applying this inequality to the circuit f just constructed t and letting w be the conjugacy class f we have i h fn tw w tw w tw f tw f the second inequality is justified by lemma and the third uses the property of established in above since was chosen to be largest coordinate of and is injective the proof is complete the mixed case there are no additional difficulties with the mixed case since both the distance function on cvn and s twisting function are well suited for dealing with outer automorphisms whose growth is neither purely exponential nor purely polynomial consequently for an element of an abelian subgroup h if the image of is large under p fh then we can use cvn to show that fn is large and if the image is large under then we can use the methods from to show fn is large the injectivity of lemma exactly says that if is large then at least one of the aforementioned quantities must be large as well theorem abelian subgroups of out fn are undistorted proof assume by passing to a finite index subgroup that h is rotationless by lemma the map h zn is injective choose a minimal generating set for h and write h i the restriction of to the first n coordinates is precisely the map p fh from section choose k coordinates of so that the restriction to those coordinates is injective let p p be the subset of the chosen coordinates corresponding to expansion factor homomorphisms pass to a finite index subgroup of h and choose generators so that p for i now we proceed as in the proofs of theorems and fix a basepoint cvn and let in we may assume without loss that is generic in h again it suffices to prove that generic elements are uniformly undistorted replace the s by their inverses if necessary to ensure that all pi s are then for each of the first l coordinates of replace by its paired lamination if necessary lemma to ensure that p look at the coordinates of and pick out the one with the largest absolute value we first consider the case where the largest coordinate corresponds to an expansion factor homomorphism p we have already arranged that p by theorem the translation distance of is the maximum of the eigenvalues associated to the eg strata of a relative train track representative f of since is generic and the first l coordinates of are l each is associated to an eg stratum of f for such a stratum the logarithm of the pf eigenvalue is p just as in the proof of theorem for each i l we have that p pi p so the translation distance of acting on outer space is max pi p i l the inequality is because there may be other laminations in l just as in theorem we have d fn where d s let min p j i l j k then we have i fn d max pi p i k max pi distortion for abelian subgroups of out fn we now handle the case where the largest coordinate of corresponds to a comparison homomorphism let g be the 
finite set of marked graphs provided by proposition and let f g g be a ct for where g define exactly as in the proof of theorem so that tw w tw w for all conjugacy classes w and any marking or inverse marking of the finitely many marked graphs in the construction of the completely split circuit satisfying equation given in the polynomial case works without modification in our current setting where the comparison homomorphism in equation is the coordinate of which is largest in absolute value using this circuit and defining w f the inequalities and their justifications in the proof of theorem now apply verbatim to the present setting to conclude fn max we have thus shown that the image of h under undistorted since is injective it is a embedding of h into zk so the theorem is proved we conclude by proving the rank conjecture for out fn the maximal rank of an abelian subgroup of out fn is so theorem gives a lower bound for the geometric rank of out fn rank out fn the other inequality follows directly from the following result whose proof we sketch below theorem if g has virtual cohomological dimension k then rank g the virtual cohomological dimension of out fn is thus for n we have corollary the geometric rank of out fn is which is the maximal rank of an abelian subgroup of out fn proof of let g be a finite index subgroup whose cohomological dimension is since g is to its finite index subgroups we have rank rank g a well known theorem of provides the existence of a cw complex x which is a k by it suffices to show that there can be no embedding of into the universal cover suppose for a contradiction that f is such a map the first step is to replace f by a continuous f which is a bounded distance from f this is done using the argument whose proof is sketched in the key point is that is uniformly contractible that is for every r there is an s s r such that any continuous map of a finite simplicial complex into x whose image is contained in an is contractible in an s r it is a standard fact theorem that x may be replaced with a simplicial complex of the same dimension so that may be assumed to be simplicial we now construct a cover u of the simplicial complex whose nerve is equal to the barycentric subdivision of the cover u has one element for each cell of for each vertex v the set uv u is s a small neighborhood of for each define by taking a sufficiently small neighborhood of to ensure that the key property of u is that all k intersections are necessarily empty because the dimension of the barycentric subdivision of is equal to dim since we have arranged f to be continuous we can pull back the cover just constructed to obtain a cover v f u u of since the elements of u are bounded and f is a embedding the elements of v are bounded as well the intersection pattern of the elements of v is exactly the same as the intersection pattern of elements of u but the cover u was constructed so that any intersection of k elements is necessarily empty thus we have constructed a cover of by bounded sets with no k intersections we will contradict the fact that the lebesgue covering dimension of any compact subset of is k let k be compact in and let v be an arbitrary cover of let be the constant provided by the lebesgue covering lemma applied to v since the elements of v are uniformly bounded we can scale them by a single constant to obtain a cover of k whose sets have diameter such a cover is necessarily a refinement of v but has multiplicity k this contradicts the fact that k has covering dimension k so the 
theorem is proved.

References

Emina Alibegović. Translation lengths in Out(F_n). Geom. Dedicata (dedicated to John Stallings on the occasion of his birthday).
Lipman Bers. An extremal problem for quasiconformal mappings and a theorem by Thurston. Acta Math.
Mladen Bestvina, Mark Feighn, and Michael Handel. The Tits alternative for Out(F_n) I: Dynamics of exponentially-growing automorphisms. Ann. of Math.
Mladen Bestvina, Mark Feighn, and Michael Handel. The Tits alternative for Out(F_n) II: A Kolchin type theorem. Ann. of Math.
Mladen Bestvina and Michael Handel. Train tracks and automorphisms of free groups. Ann. of Math.
Jason Behrstock and Yair Minsky. Dimension and rank for mapping class groups. Ann. of Math.
Daryl Cooper. Automorphisms of free groups have finitely generated fixed point sets. J. Algebra.
Marc Culler and Karen Vogtmann. Moduli of graphs and automorphisms of free groups. Invent. Math.
Samuel Eilenberg and Tudor Ganea. On the Lusternik–Schnirelmann category of abstract groups. Ann. of Math.
Mark Feighn and Michael Handel. Abelian subgroups of Out(F_n). Geom. Topol.
Mark Feighn and Michael Handel. The recognition theorem for Out(F_n). Groups Geom. Dyn.
Mark Feighn and Michael Handel. Algorithmic constructions of relative train track maps and CTs. arXiv, November.
Benson Farb, Alexander Lubotzky, and Yair Minsky. Rank-1 phenomena for mapping class groups. Duke Math. J.
Stefano Francaviglia and Armando Martino. Metric properties of outer space. Publ. Mat.
John Harer. The virtual cohomological dimension of the mapping class group of an orientable surface. Invent. Math.
Allen Hatcher. Algebraic Topology. Cambridge University Press, Cambridge.
Michael Handel and Lee Mosher. The free splitting complex of a free group II: Loxodromic outer automorphisms. arXiv, February.
John McCarthy. A "Tits-alternative" for subgroups of surface mapping class groups. Trans. Amer. Math. Soc.
Estelle Souche and Bert Wiest. An elementary approach to quasi-isometries of Tree x R^n. In Proceedings of the Conference on Geometric and Combinatorial Group Theory, Part II (Haifa).
Richard D. Wade. Symmetries of free and right-angled Artin groups. PhD thesis, University of Oxford.

Department of Mathematics, University of Utah, Salt Lake City, UT
E-mail address: dwiggles
Catroid: A Mobile Visual Programming System for Children

Wolfgang Slany
Institute for Software Technology, Graz University of Technology, Inffeldgasse, Graz, Austria

Abstract. Catroid is a free and open source visual programming language, programming environment, image manipulation program, and website. Catroid allows casual and first-time users starting from age eight to develop their own animations and games solely using their Android phones or tablets. Catroid also allows to wirelessly control external hardware such as Lego Mindstorms robots via Bluetooth, Bluetooth Arduino boards, as well as Parrot's popular and inexpensive quadcopters via WiFi.

Categories and Subject Descriptors: Programming Languages — Miscellaneous.
General Terms: Design, Human Factors, Languages.
Keywords: visual programming language, mobile, smart phone, tablet, programming, animations, educational games, music, kids, children, teenagers, pedagogical.

Introduction

Why programming for children?
There is a worldwide shortage of qualified software developers. This is due to rapidly increasing demand together with a stagnating or even declining number of computer science students. This decline has been even more pronounced for females over the last years, and it seems that even though younger girls can be interested in programming to the same degree as boys of their age, girls consistently seem to lose interest in their late teens. At the same time, our society increasingly relies on software, which is thus less and less understood by the general population. Moreover, software development skills are not only of interest for obvious professional reasons but also for philosophical ones: developing software is a skill that helps understanding the fundamental mechanisms and limitations underlying rational thinking.

What is visual programming and why do we use it?
Visual programming predominantly consists in moving graphical elements instead of typing text. We use visual programming because, based on informal experiences, it seems aesthetically to be more attractive to kids than simple text, and the success of MIT's Scratch programming environment undeniably has proven in practice, more than two million times, that it is very appealing to children. Note that visual programming is not easy, but that if children are motivated, they are ready to spend the necessary time. Visual programming is not about dumbing down programming, but instead about motivation, by avoiding frustration due to spurious syntactic mumbo jumbo, unnecessarily complicated work flows, or hard-to-spot syntax errors as frequently encountered in mainstream programming languages.

A drawback of visual programming
Visual programming has been criticized to not scale well to larger and more complex programs. However, practical evidence from visual programming environments shows that large and complex programs, such as a chess engine with an AI-based machine opponent, multi-level jump-and-run games, complex physics simulations, Sudoku solvers, and much more, are possible with a hierarchical organization of program elements.

Why mobile devices?
Worldwide there are ten times more mobile phones than PCs, and this ratio is even much more pronounced for children (think China and developing countries). Moreover, one's smartphone nowadays is always in one's pocket and can easily be used everywhere without preparation, when commuting to one's school using public transportation or at the backseat of the family car. Being able to program mobile devices also has become an important job qualification. Cheap smartphones from China are increasingly becoming available on a worldwide scale. Sony Ericsson's Xperia Play Android smartphone, a PlayStation-based portable game console, is particularly attractive to kids.
[Figure: Catroid's hello world program.]
[Figure: Lego Mindstorm robot and user interface, visually programmed with Catroid as illustrated in the following figure.]
[Figure: Catroid program for Lego Mindstorm robots.]

Salient features of Catroid

Catroid runs on smartphones and tablets, is intended for the use by children, and has been strongly inspired by the already mentioned Scratch programming language, environment, and thriving online community, which were developed by the Lifelong Kindergarten group at the MIT Media Lab. As known from Scratch or App Inventor, Catroid programs are written in a visual style where individual commands are stuck together by arranging them visually with one's fingers. One figure shows Catroid's hello world program: on the left, the bricks sticking together; on the right, the result on the screen. The speak brick at the bottom left of the figure additionally pronounces the phrase via Android's text-to-speech engine in the default language of one's Android device.

The possibilities for creative applications are infinite, especially when attaching the phone to the Lego robot and using the many sensors built into the phone, such as acceleration or gyro sensors, or GPS for location-based programs. Voice synthesis, voice recognition, as well as image recognition all can equally easily be used to build autonomous intelligent robots. Using the similarly controlled Arduino hardware, arbitrary external devices can be controlled using Catroid.

Another figure shows, on the left, the main screen of Catroid; on the right, a part of the list of bricks that appears when one adds a brick to a script of an object is shown. The top three bricks are used for broadcasting and receiving messages, and the lower ones are used for loops and sprite movements on the screen.

[Figure: Main screen of Catroid and typical command bricks.]

New command bricks selected by the user from the list of bricks, partly shown in the figure on the right, can be visually dragged and dropped using one's fingers, or deleted by dragging them with one's fingers to a waste basket.

Catroid also differs in important aspects from Scratch and App Inventor. Compared to both, with Catroid there is no need for a PC: the apps can be written by solely using smartphone or tablet devices. Scratch is intended for PC use with a keyboard, mouse, and comparatively large screen size, whereas Catroid focuses on small devices with touch-sensitive screens and thereby very different user interaction and usability challenges. As pictures often say more than a thousand words, the figures show Catroid in action, thereby illustrating some of the features mentioned so far even better; and as a session with an interactive system often says more than a thousand pictures, I cordially invite the reader to try out the latest version of Catroid.

One figure shows parts of a Catroid program that allows controlling a Lego Mindstorm robot: on the left, a list of sprite objects is shown, each possessing its own scripts and images; on the right, scripts are shown that are associated with the object "turn left", which is the one at the bottom on the screenshot on the left. Another figure shows the resulting user interface and the robot. The necessary Bluetooth connection handshake between the robot and the Android device occurs when first executing the program.
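The project organization just described — a list of sprite objects, each possessing its own images and scripts, where a script is an ordered stack of bricks executed from top to bottom — can be pictured with a small, purely illustrative sketch. The Python data model below is an assumption-laden toy: the class and brick names are invented for the sketch and are not Catroid's actual ones.

```python
# Purely illustrative data model for a Catroid-like project: a project is a
# list of sprite objects, each owning images and scripts, and a script is an
# ordered list of bricks executed from top to bottom.
# All class and brick names are hypothetical, not Catroid's actual ones.
from dataclasses import dataclass, field

@dataclass
class Brick:
    name: str                      # e.g. "set background to", "speak"
    argument: str = ""             # e.g. the phrase to speak

@dataclass
class Script:
    trigger: str                   # e.g. "when program starts"
    bricks: list = field(default_factory=list)

    def run(self):
        for brick in self.bricks:  # bricks execute strictly in order
            print(f"{brick.name} {brick.argument}".strip())

@dataclass
class SpriteObject:
    name: str
    images: list = field(default_factory=list)
    scripts: list = field(default_factory=list)

# A "hello world"-style example: one object, one start script, two bricks.
background = SpriteObject(
    name="Background",
    images=["blue_sky.png"],
    scripts=[Script("when program starts",
                    [Brick("set background to", "blue_sky.png"),
                     Brick("speak", "Hello world!")])],
)

for script in background.scripts:
    script.run()
```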
via wifi the quadcopter has two video cameras that transmit their data to catroid for image processing catroid uses intel s opencv computer vision open source library running as a service on the android device to follow simple patterns such as the helipad in the photo on the left side of figure a video showing how it follows the moving helipad is available at http being able to quickly use such powerful but very simple to use features is a tremendous motivator to acquire the necessary programming skills for users of all age figure shows catroid running on sony ericsson s xperia play android smartphone a playstation based portable game console we plan to support the gamepad keys of such phones in the near future parents will most likely be much more willing to buy such gaming smartphones for their kids when they know that their children will not only be able to play games but moreover also be empowered to creatively build their own games animations simulations or other programs figure shows the screen one sees when executing a hannah montana interactive music video animation programmed with catroid which was created by children it is a remix from an original scratch project that can be found at http creating interactive music video animations is tremendously motivating both for boys but equally for girls even though a lot of programming is required and kids can spend days on their creations being able to upload such animations as videos to youtube is an additional strong motivator as kids in general love to show off their creations to their friends regardless of what mobile phone or pc their friends are using a youtube recorder for catroid programs is currently being developed and kids will soon be able to upload their videos to youtube in high quality in order to allow the recording also on android devices and to decrease the amount of data that needs to be uploaded from one s android device only the play data is transmitted from the phone to our server it then will be interpreted on our server in exactly the same way with the same user interaction and additional input such as random seeds our server records it in high quality and optionally uploads it directly to youtube similar to scratch catroid is an interpreted programming language with a procedural control flow objects communicate via simple broadcast messages see figure on the right and figure sony ericsson s xperia play android smartphone a playstation based portable game console have a set of scripts that are all excecuted concurrently with each script running in its own thread thus allowing real parallelism to take advantage of the multiple cores of recent smartphones because children easily can think in terms of objects actors and messages this style of and process synchronization feels totally natural to them the main design objective of the language is to make it as simple to understand and use as possible the current version of catroid as of march is not yet a full programming language as no variables and formulas are supported at this time though we are working hard to extend it in that direction the project started in april with a small team our aim was to quickly produce a working partial solution with the most important features implemented first and contuinue from there on we started to implement some minimal functionality that is sufficient to emulate the highly popular creativity tool that is preinstalled on many nintendo dsi game consoles only a background image that can change according to a prespecified timeline 
while an audio file is playing we then went on to implement the bricks most used in scratch programs around the figure interactive music video animation http related work there is a plethora of research papers about visual programming the acm digital library lists more than papers about the topic and google scholar reports more than documents related to visual programming i limit myself here to previous work regarding visual programming languages intended for the use by children and in this set of languages to those featuring community sites supporting and encouraging the sharing of interactive animations games created by kids beside scratch and catroid other visual programming systems with varying expressive power of the language include those associated with nintendo s wario ware microsoft s flipnote and game youtube can also be seen as a platform to share user contributed multimedia content though it is not primarily oriented towards children and the contributed content can not be made interactive acknowledgments my thanks to the team members and supporters of figure catroid s online community website world based on statistics published on the scratch all of those are now implemented and a lot more are currently being implemented to eventually make catroid a general programming language catroid s community website the catroid system includes a community website allowing children to upload and share their projects with others it is an important and integral part of our catroid system all projects uploaded to the community website are open source and published under a free software license everyone can download and edit every project from the website add new functionality or change the current behavior of the project and upload the new version again this is called remixing and was a core idea behind the scratch online community see figure for some images of the community website on a smartphone on the left a list of projects on catroid s community website is shown on the right the details page of a project is shown from there the project can be downloaded directly into catroid or reported as inappropriate as not all inappropriate content can automatically be detected regarding the latter names of projects descriptions comments and user names are compared to an extensive multilingual set of cuss words as well as their creative spelling variations and when recognized automatically rejected in order to serve the needs of children on a worldwide scale both the smartphone parts as well as the website of catroid are available in many languages a crowd sourcing localization internationalization support site based on pootle allows adding further languages we currently support several languages with speakers of english mandarin cantonese hindi arabian german turkish french japanese urdu russian rumanian and malaysian in the team http references craig and horton designs for girls a program and its evaluation in proceedings of the acm technical symposium on computer science education sigcse chattanooga tennessee march acm press new york ny kelleher motivating programming using storytelling to make computer programming attractive to middle school girls thesis carnegie mellon university school of computer science technical report maclaurin b the design of kodu a tiny visual programming language for children on the xbox in proceedings of the annual acm symposium on principles of programming language popl acm press new york ny maloney resnick rusk silverman and eastmond the scratch programming language 
and environment. ACM Trans. Comput. Educ.
Designing a website for creative learning. In Proceedings of the Web Science Conference (Society On-Line), Athens, Greece, March.
Overmars. Teaching computer science through game design. IEEE Computer.
6
scheduling distributed clusters of parallel machines and approximation algorithms full version riley samir and megan oct department of industrial engineering operations research university of california berkeley berkeley ca usa rjmurray department of computer science university of maryland college park college park md usa samir department of electrical engineering computer science massachusetts institute of technology vassar st cambridge ma usa megchao abstract the computing framework rose to prominence with datasets of such size that dozens of machines on a single cluster were needed for individual jobs as datasets approach the exabyte scale a single job may need distributed processing not only on multiple machines but on multiple clusters we consider a scheduling problem to minimize weighted average completion time of n jobs on m distributed clusters of parallel machines in keeping with the scale of the problems motivating this work we assume that each job is divided into m subjobs and distinct subjobs of a given job may be processed concurrently when each cluster is a single machine this is the concurrent open shop problem a clear limitation of such a model is that a serial processing assumption sidesteps the issue of how different tasks of a given subjob might be processed in parallel our algorithms explicitly model clusters as pools of resources and effectively overcome this issue under a variety of parameter settings we develop two constant factor approximation algorithms for this problem the first algorithm uses an lp relaxation tailored to this problem from prior work this algorithm provides strong performance guarantees our second algorithm exploits a surprisingly simple mapping to the special case of one machine per cluster this algorithm is combinatorial and extremely fast these are the first constant factor approximations for this problem remark a shorter version of this paper one that omitted several proofs appeared in the proceedings of the european symposium on algorithms acm subject classification nonnumerical algorithms and problems keywords and phrases approximation algorithms distributed computing machine scheduling lp relaxations algorithms digital object identifier all authors conducted this work at the university of maryland college park this work was made possible by the national science foundation reu grant ccf and the winkler foundation this work was also partially supported by nsf grant ccf riley murray samir khuller megan chao licensed under creative commons license annual european symposium on algorithms esa editors piotr sankowski and christos zaroliagis article no pp leibniz international proceedings in informatics schloss dagstuhl informatik dagstuhl publishing germany scheduling distributed clusters of parallel machines full version introduction it is becoming increasingly impractical to store full copies of large datasets on more than one data center as a result the data for a single job may be located not on multiple machines but on multiple clusters of machines to maintain fast and avoid excessive network traffic it is advantageous to perform computation for such jobs in a completely distributed fashion in addition commercial platforms such as aws lambda and microsoft s azure service fabric are demonstrating a trend of centralized cloud computing frameworks in which the user manages neither data flow nor server allocation in view of these converging issues the following scheduling problem arises if computation is done locally to avoid excessive network 
traffic how can individual clusters on the broader grid coordinate schedules for maximum throughput this was precisely the motivation for hung golubchik and yu in their acm symposium on cloud computing paper hung et al modeled each cluster as having an arbitrary number of identical parallel machines and choose an objective of average job completion time as such a problem generalizes the concurrent open shop problem they proposed a heuristic approach their heuristic called swag runs in o m time and performed well on a variety of data sets unfortunately swag offers poor performance as we show in section our contributions to this problem are to extend the model considered by hung et al and to introduce the first approximation algorithms for this general problem our extensions of hung et s model are to allow different machines within the same cluster to operate at different speeds to incorporate release times times before which a subjob can not be processed and to support weighted average job completion time we present two algorithms for the resulting problem our combinatorial algorithm exploits a surprisingly simple mapping to the special case of one machine per cluster where the problem can be approximated in o nm time we also present an approach with strong performance guarantees a when machines are of unit speed and subjobs are divided into equally sized but not necessary unit tasks formal problem statement i definition concurrent cluster scheduling there is a set m of m clusters and a set n of n jobs for each job j n there is a set of m subjobs one for each cluster cluster i m has mi parallel machines and machine in cluster i has speed v i without loss of generality assume v i is decreasing in the ith subjob for job j is specified by a set of tasks to be performed by machines in cluster i denote this set of tasks tji for each task t tji we have an associated processing time pjit again assume pjit is decreasing in t we will frequently refer to the subjob of job j at cluster i as subjob j i different subjobs of the same job may be processed concurrently on different clusters different tasks of the same subjob may be processed concurrently on different machines within the same cluster where we write decreasing we mean where we write increasing we mean murray khuller and chao a subjob is complete when all of its tasks are complete and a job is complete when all of its subjobs are complete we denote a job s completion time by cj the objective is to minimize weighted average job completion time job j has weight wj p for the purposes of computing approximation ratios it is equivalent to minimize wj cj we work with this equivalent objective throughout this paper a machine is said to operate at unit speed it if can complete a task with processing requirement p in p units of time more generally a machine with speed v v processes the same task in units of time machines are said to be identical if they are all of unit speed and uniform if they differ only in speed in accordance with graham et s taxonomy for scheduling problems we take cc to refer to the concurrent cluster environment and denote our problem by p wj cj optionally we may associate a release time rji to every subjob if any p subjobs are released after time zero we write wj cj example problem instances we now illustrate our model with several examples see figures and the tables at left have rows labeled to identify jobs and columns labeled to identify clusters each entry in these tables specifies the processing requirements for the 
corresponding subjob the diagrams to the right of these tables show how the given jobs might be scheduled on clusters with the indicated number of machines figure two examples of our scheduling model left our baseline example there are jobs and clusters cluster has identical machines and cluster has identical machines note that job has no subjob for cluster this is permitted within our framework in this case every subjob has at most one task right our baseline example with a more general subjob framework subjob and subjob both have two tasks the tasks shown are unit length but our framework does not require that subjobs be divided into equally sized tasks related work concurrent cluster scheduling subsumes many fundamental machine scheduling problems for example if we restrict ourselves to a single cluster m we can schedule a set of jobs on a bank of identical parallel machines to minimize makespan cmax or total p weighted completion time wj cj with a more clever reduction we can even minimize p total weighted lateness wj lj on a bank of identical parallel machines see section alternatively with m but m mi our problem reduces to the concurrent open shop problem a problem implies a particular environment objective function and optional constraints esa scheduling distributed clusters of parallel machines full version figure two additional examples of our model left our baseline example with variable machine speeds note that the benefit of high machine speeds is only realized for tasks assigned to those machines in the final schedule right a problem with the peculiar structure that all clusters but one have a single machine and most clusters have processing requirements for only a single job we will use such a device for the total weighted lateness reduction in section using graham et s taxonomy the concurrent open shop problem is written as p p wj cj three groups independently discovered an p for p wj cj using the work of queyranne the linear program in question has an exponential number of constraints but can still be solved in polynomial time with a variant of the ellipsoid method our strong algorithm for concurrent cluster scheduling refines the techniques contained therein as well as those of schulz see section p mastrolilli et al developed a algorithm for p wj cj that does not use lp solvers mussq is significant for both its speed and the strength of its performance guarantee it achieves an approximation ratio of in only o nm time although mussq does not require an lp solver its proof of correctness is based on the fact that it finds a feasible solution to the dual a particular linear program our fast algorithm for concurrent cluster scheduling uses mussq as a subroutine see section hung golubchik and yu presented a framework designed to improve scheduling across geographically distributed data centers the scheduling framework had a centralized scheduler which determined a job ordering and local dispatchers which carried out a schedule consistent with the controllers job ordering hung et al proposed a particular algorithm for the controller called swag performed well in a wide variety of simulations where each data center was assumed to have the same number of identical parallel machines we adopt a similar framework to hung et but we show in section that swag has no performance guarantee paper outline algorithmic results although only one of our algorithms requires solving a linear program both algorithms use the same linear program in their proofs of correctness we introduce this 
linear program in section before discussing either algorithm section establishes how an ordering of jobs can be processed to completely specify a schedule this is important because the complex work in both of our algorithms is to generate an ordering of jobs for each cluster section introduces our strong algorithm can be applied to any instance of concurrent cluster scheduling including those with release times rji a key in s strong performance guarantees lay in the fact that it allows different permutations of subjobs for different clusters by providing additional structure to the problem but while maintaining a generalization of concurrent open shop becomes a a permutation of the author s names mastrolilli queyranne schulz svensson and uhan murray khuller and chao this is significant because it is to approximate concurrent open shop and by extension our problem with ratio for any our combinatorial algorithm is presented in section the algorithm is fast provably accurate and has the interesting property that it can schedule all clusters using the same permutation of after considering in the general case we show how approximation ratios can be obtained in the fully parallelizable setting of zhang et al we conclude with an extension of that maintains performance guarantees while offering improved empirical performance the following table summarizes our results for approximation ratios for compactness condition id refers to identical machines v i constant over condition a refers to rji and condition b refers to pjit constant over t tji id a b id b id a id a the term r is the maximum over i of ri where ri is the ratio of fastest machine to average machine speed at cluster i the most surprising of all of these results is that our scheduling algorithms are remarkably simple the first algorithm solves an lp and then the scheduling can be done easily on each cluster the second algorithm is again a rather surprising simple reduction to the case of one machine per cluster the well understood concurrent open shop problem and yields a simple combinatorial algorithm the proof of the approximation guarantee is somewhat involved however in addition to algorithmic results we demonstrate how our problem subsumes that of minimizing total weighted lateness on a bank of identical parallel machines see section section provides additional discussion and highlights our more novel technical contributions the core linear program our linear program has an unusual form rather than introduce it immediately we conduct a brief review of prior work on similar lp s all the lp s we discuss in this paper have p wj cj where cj is a decision variable corresponding to the completion objective function time of job j and wj is a weight associated with job j for the following discussion only we adopt the notation in which job j has processing time pj in addition if multiple machine problems are discussed we will say that there are m such machines possibly with speeds si i m the earliest appearance of a similar linear program comes from queyranne in his paper queyranne presents an lp relaxation for sequencing n jobs on a single machine p p p where all constraints are of the form pj cj pj where s is pj an arbitrary subset of jobs once a set of optimal cj is found the jobs are scheduled in increasing order of cj these results were primarily theoretical as it was known at his p time of writing that sequencing n jobs on a single machine to minimize wj cj can be done optimally in o n log n time we call such schedules as we will see 
later on as a constructive proof of existence of schedules for all instances of wj cj including those instances for which schedules are strictly this is addressed in section esa scheduling distributed clusters of parallel machines full version queyranne s constraint set became particularly useful for problems with coupling across distinct machines as occurs in concurrent open shop four separate groups saw this and used the following lp in a for concurrent open shop scheduling x p p min wj cj p c p p ji j ji ji in view of its tremendous popularity we sometimes refer to the linear program above as the canonical relaxation for concurrent open shop andreas schulz s thesis developed queyranne s constraint set in greater depth as part of his thesis schulz considered scheduling n jobs on m identical parallel machines with p p constraints of the form pj cj in addition schulz showed p j p pm p that the constraints pj cj si pj are satisfied by pj any schedule of n jobs on m uniform machines in schulz refined the analysis for several of these problems for constructing a schedule from the optimal cj schulz considered scheduling jobs by increasing order of cj cj pj and cj pj statement of the model we consider allows for more control of the job structure than is indicated by the lp relaxations above inevitably this comes at some expense of simplicity in lp formulations in an effort to simplify notation we define the following constants and give verbal interpretations for each pmi v i qji min mi pqji v i p pji pjit from these definitions is the processing power of cluster i for subjob j i qji is the maximum number of machines that could process the subjob and is the maximum processing power than can be brought to bear on the same lastly pji is the total processing requirement of subjob j i in these terms the core linear program is as follows min p p wj cj pji cj p pji p pji cj pjit rji m j n t tji cj pji rji n i m n i m constraints are more carefully formulated versions of the polyhedral constraints introduced by queyranne and developed by schulz the use of term is new and allows us to provide stronger performance guarantees for our framework where subjobs are composed of sets of tasks as we will see this term is one of the primary factors that allows us to parametrize results under varying machine speeds in terms of maximum to average machine speed rather than maximum to minimum machine speed constraints and are simple lower bounds on job completion time the majority of this section is dedicated to proving that is a valid relaxation of p wj cj once this is established we prove the that can be solved in polynomial time by providing a separation oracle with use in the ellipsoid method both of these proofs use techniques established in schulz s thesis murray khuller and chao proof of s validity the lemmas below establish the basis for both of our algorithms lemma generalizes an inequality used by schulz lemma relies on lemma and cites an inequality mentioned in the preceding section and proven by queyranne i lemma let az be a set of real numbers we assume that k z of them are positive let bi be a set of decreasing positive real numbers then pz pz k a a b i i i i proof we only show the case where k z define a b bk ak k and as the vector of k ones now set u b and w b and pk note that ha hu wi in these terms it is clear that ai hu given this one need only cite namely hu hu ui hw wi and plug in the definitions of u and w to see the desired result j p i lemma validity lemma every feasible schedule for an 
instance i of wj cj has completion times that define a feasible solution to i proof as constraints and are clear lower bounds on job completion time it suffices to show the validity of constraint thus let s be a subset of n and fix an arbitrary but feasible schedule f for f define cji as the completion time of subjob j i under schedule f similarly define f cji as the first time at which tasks of subjob j i scheduled on machine of cluster i are finished lastly define p ji as the total processing requirement of job j scheduled on f f machine of cluster i note that by construction we have cji max mi cji and p m i f cjf cji since pji p ji we can rather innocuously write f p p f pji cji pji cji f f but using cji cji we can f pji cji namely f pmi p p p pmi f f pji cji pji cji v i pji i cji f p the next inequality uses a bound on pji i cji proven by queyranne for any subset s of n jobs with processing times p ji i to be scheduled on a single f pji i cji p p p p pji i pji i combining inequalities and we have the following p p p pmi f v i pji i pji i pji cji p pmi pmi i pji i pji the proceedings version of this paper stated that the proof cites the inequality and proceeds by induction from z k we have opted here to demonstrate a different simpler proof that we discovered only after the proceedings version was finalized here our machine is machine on cluster esa scheduling distributed clusters of parallel machines full version next we apply lemma to the right hand side of inequality a total of times p pmi mi mi p v p p p i ji i ji ji p p pmi qji mi v i j s i pji pji f we arrive at the desired result citing cjf cji p p p f p c p p i ji ji j ji ji constraint j theoretical complexity of as the first of our two algorithms requires solving directly we need to address the fact that has m n constraints luckily it is still possible to such solve linear programs in polynomial time with the ellipsoid method we introduce the following separation oracle for this purpose i definition oracle define the violation p p v s i p p i ji ji ji pji cj let cj rn be a potentially feasible solution to let denote the ordering when jobs are sorted in increasing order of cj pji find the most violated constraint in for i m by searching over v si i for si of the form j j j n if any of maximal v i then return i as a violated constraint for otherwise check the remaining n constraints and directly in linear time for fixed i finds the subset of jobs that maximizes violation for cluster i that is finds such that v i v s i we prove the correctness of by establishing a necessary and sufficient condition for a job j to be in p i lemma for pi a pji we have x cx pxi pi proof for given s not necessarily equal to it is useful to express v s i in terms of v s x i or v s x i depending on whether x s or x n s without loss of generality we restrict our search to s x s px i suppose x by writing pi s pi s x pi x and similarly decomposing the p sum one can show the following s pxi pxi v s i s x i pxi cx now suppose x n in the same strategy as above this time writing pi s pi s x pi x one can show that s pxi pxi v s i s x i pxi cx note that equations and hold for all s including s turning our attention to we see that x implies that the second term in equation is cx pxi pxi pi murray khuller and chao similarly x n implies the second term in equation is cx pxi pxi pi it follows that x iff cx pxi pi j given lemma it is easy to verify that sorting jobs in increasing order of cx pxi to define a permutation guarantees that is of the form j j for some j n 
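To make the prefix-scan structure of the separation oracle concrete, the sketch below follows the procedure just described: for each cluster, sort the jobs, scan the prefix sets of that order while maintaining running sums, and return the most violated prefix constraint. The violation formula used here is the canonical Queyranne-style form (the paper's actual constraint also carries the beta terms), and the sort key, function names, and variable names (most_violated_prefix, separation_oracle, alpha, p_i, C) are illustrative assumptions rather than the authors' exact formulation.

```python
# Minimal sketch of the prefix-scan separation oracle, under assumed forms:
#   v(S, i) = (1/(2*alpha_i)) * ((sum_{j in S} p_ji)**2 + sum_{j in S} p_ji**2)
#             - sum_{j in S} p_ji * C_j
# The paper's constraint set additionally involves the beta_ji terms, which
# are omitted in this illustration.

def most_violated_prefix(C, p_i, alpha_i):
    """C[j]: LP completion time of job j; p_i[j]: total processing of subjob
    (j, i); alpha_i: processing power (sum of speeds) of cluster i.
    Returns (violation, prefix_set) for the most violated prefix constraint."""
    order = sorted(range(len(C)), key=lambda j: C[j])   # assumed sort key
    sum_p = sum_p_sq = sum_pC = 0.0
    best_v, best_prefix, prefix = 0.0, [], []
    for j in order:                                     # scan prefixes of the sorted order
        prefix.append(j)
        sum_p += p_i[j]
        sum_p_sq += p_i[j] ** 2
        sum_pC += p_i[j] * C[j]
        v = (sum_p ** 2 + sum_p_sq) / (2.0 * alpha_i) - sum_pC
        if v > best_v:
            best_v, best_prefix = v, list(prefix)
    return best_v, best_prefix

def separation_oracle(C, p, alpha):
    """p[i][j]: total processing of subjob (j, i); alpha[i]: power of cluster i.
    Returns the most violated prefix constraint over all clusters, if any."""
    best = (0.0, None, None)
    for i in range(len(alpha)):
        v, S = most_violated_prefix(C, p[i], alpha[i])
        if v > best[0]:
            best = (v, i, S)
    return best  # (violation, cluster index, job set)
```

In an ellipsoid-based solve, this routine would be called on each candidate point C; the remaining simple lower-bound constraints are then checked directly in linear time, as noted in the surrounding text.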
this implies that for fixed i finds in o n log n time this procedure is executed once for each cluster leaving the remaining n constraints in and to be verified in linear time thus runs in o mn log n time by the equivalence of separation and optimization we have proven the following theorem i theorem i is a valid relaxation of i and is solvable in polynomial time as was explained in the beginning of this section linear programs such as those in are processed with an appropriate sorting of the optimal decision variables cj it is important then to have bounds on job completion times for a particular ordering of jobs we address this next in section and reserve our first algorithm for section list scheduling from permutations the complex work in both of our proposed algorithms is to generate a permutation of jobs the procedure below takes such a permutation and uses it to determine start times end times and machine assignments for every task of every subjob given a single cluster with mi machines and a permutation of jobs introduce list a i as an ordered set of tasks belonging to subjob a i ordered by longest processing time first now define list list i list i list n i where is the concatenation operator place the tasks of list in from the largest task of subjob i to the smallest task of subjob n i when placing a particular task assign it whichever machine and start time results in the task being completed as early as possible without moving any tasks which have already been placed insert idle time on all mi machines as necessary if this procedure would otherwise start a job before its release time the following lemma is essential to bound the completion time of a set of jobs processed by the proof is adapted from gonzalez et al i lemma suppose n jobs are scheduled on cluster i according to then for the completion time of subjob j i denoted j i satisfies j i max k i j j k i j proof for now assume all jobs are released at time zero let the task of subjob j i to finish last be denoted if is not the task in j i with least processing time then construct a new set j i t j j it j i because the tasks of subjob j i were scheduled by the sets of potential start times and machines for task and hence the set of potential completion times for esa scheduling distributed clusters of parallel machines full version task are the same regardless of whether subjob j i consisted of tasks j i or the subset j i accordingly reassign j i j i without loss of generality let d j denote the total demand for machine on cluster i once all tasks of subjobs i through j i and all tasks in the set j i are scheduled using the fact that j i v i d j j mi sum the left and right and sides over pmi j pmi d dividing by the sum of machine this implies j i v i mi j speeds and using the definition of yields pmi j j j i mi j d j p p j k i where we estimated j upward by j inequality completes our proof in the case when rji now suppose that some rji we take our policy to the extreme and suppose that all machines are left idle until every one of jobs through j are released note that this occurs precisely at time k i it is clear that beyond this point in time we are effectively in the case where all jobs are released at time zero hence we can bound the remaining time to completion by the right hand side of inequality as inequality simply adds these two terms the result follows j lemma is cited directly in the proof of theorem and lemma lemma is used implicitly in the proofs of theorems and an algorithm in this section we show how can be used 
to construct near optimal schedules for concurrent cluster scheduling both when rji and when some rji although solving is somewhat involved the algorithm itself is quite simple p algorithm let i t r w v denote an instance of wj cj use the optimal solution cj of i to define m permutations i m which sort jobs in increasing order of cj pji for each cluster i execute each theorem in this section can be characterized by how various assumptions help us cancel an additive in an upper bound for the completion time of an arbitrary subjob x i theorem is the most general while theorem is perhaps the most surprising for uniform machines i theorem let be the completion time of job j using algorithm and let r be p p as in section if rji then wj r op t otherwise wj r op t proof for y r define y max y now let x n be arbitrary and let i m be such that pxi but otherwise arbitrary define as the last task of job x to complete on cluster i and let ji be such that ji lastly denote the optimal lp solution cj because cj is a feasible solution to constraint implies the following set see associated proofs we omit the customary to avoid clutter in notation murray khuller and chao si ji x ji k i ji x ji k i pxi x k i k cx k i k i pji k i pxi which in turn implies if all subjobs are released at time zero then we can combine this with lemma and p the fact that pxi pxit to see the following the transition from the first inequality the second inequality uses cx and ri pxi cx ri when one or more subjobs are released after time zero lemma implies that it is sufficient to bound max k i by some constant multiple of cx since is defined by increasing lji cj pji a i b i implies a i b i a i b i a i a b a b a i b i a i b i and so k i pxi cx as before combine this with lemma and p the fact that pxi pxit to yield the following inequalities cx ri j complete our proof for identical machines i theorem if machines are of unit speed then yields an objective that is subjobs subjobs rji op t op t some rji op t op t proof define x cx i and as in theorem when rji one need only give a more careful treatment of the first inequality in using qji i pxi cx similarly when some rji the first inequality in implies the following i cx j the key in the refined analysis of theorem lay in how is used to annihilate while qxi subjobs is sufficient to accomplish this it is not strictly necessary the theorem below shows that we can annihilate the term whenever all tasks of a given subjob are of the same length note that the tasks need not be unit as the lengths of tasks across different subjobs can differ esa scheduling distributed clusters of parallel machines full version i theorem suppose v i if pjit is constant over t tji for all j n and i m then algorithm is a when rji and a otherwise p proof the definition of pxi gives pxi pxit using the assumption that pjit is constant over t tji we see that pxi qxi qxi where qxi apply this to inequality from the proof of theorem some algebra yields qxi the case with some rji uses the same identity for pxi j p sachdeva and saket showed that it is to approximate wj cj with a constant factor less than theorem is significant because it shows that can attain the same guarantee for arbitrary mi provided v i and pjit is constant over combinatorial algorithms in this section we introduce an extremely fast combinatorial algorithm with performance guarantees similar to for unstructured inputs those for which some v i or some tji have pjit over t we call this algorithm uses the mussq algorithm for concurrent open shop from as 
a subroutine as swag from motivated development of we first address swag s performance a degenerate case for swag as a prerequisite for addressing performance of an existing algorithm we procedure swag n m pji provide psuedocode and an accompanying j verbal description for swag qi m swag computes queue positions for while do every subjob of every job supposing that qi mkspn max j m i each job was scheduled next a job s n j tial makespan mkspn is the largest of nextjob mkspnj the potential finish times of all of its subjobs nextjob considering current queue lengths qi and q i qi pji each subjob s processing time pji once end while potential makespans have been determined return j the job with smallest potential makespan is selected for scheduling at this point all end procedure queues are updated because queues are updated potential makespans will need to be at the next iteration iterations continue until the very last job is scheduled note that swag runs in o m time p i theorem for an instance i of p cj let sw ag i denote the objective function value of swag applied to i and let op t i denote the objective function value of an optimal solution to i then for all l there exists an i p cj such that sw ag i t i proof let l be a fixed but arbitrary constant construct a problem instance ilm as follows murray khuller and chao n where is a set of m jobs and is a set of l jobs job j has processing time p on cluster j and zero all other clusters job j has processing time p on all m clusters is chosen so that see figure figure at left an input for swag example with m and l at right swag s resulting schedule and an alternative schedule it is easy to verify that swag will generate a schedule where all jobs in precede all jobs in due to the savings of for jobs in we propose an alternative solution in which all jobs in preceed all jobs in denote the objective value for this alternative solution alt ilm noting alt ilm op t ilm by symmetry and the fact that all clusters have a single machine we can see that sw ag ilm and alt ilm are given by the following sw ag ilm p l l p lm pm alt ilm p l l pl pm since l is fixed we can take the limit with respect to p lm pm sw ag ilm lim l l m alt i pm l lim the above implies the existence of a sufficiently large number of clusters m such that m m implies sw ag ilm t ilm this completes our proof j theorem demonstrates that that although swag performed well in simulations it may not be reliable the rest of this section introduces an algorithm not only with superior runtime to swag generating a permutation of jobs in o nm time rather than o m time but also a performance guarantee a fast r approximation our combinatorial algorithm for concurrent cluster scheduling exploits an elegant transformation to concurrent open shop once we consider this simpler problem it can be handled with mussq and our contributions are twofold we prove that this intuitive technique yields an approximation algorithm for a decidedly more general problem and we show that a modification can be made that maintains theoretical bounds while improving empirical performance we begin by defining our transformation esa scheduling distributed clusters of parallel machines full version i definition the total scaled processing time tspt transformation let be the p p set of all instances of wj cj and let d be the set of all instances of p wj cj note that d then the total scaled processing time transformation is a mapping p t sp t d with t v w x w xji pjit xji is the total processing time required by subjob j i 
scaled by the sum of machine speeds at cluster i throughout this section we will use i t v w to denote an arbitrary p instance of wj cj and i x w as the image of i under tspt figure shows the result of tspt applied to our baseline example figure an instance i of wj cj and its image i t sp t i the schedules were constructed with using the same permutation for i and i p we take the time to emphasize the simplicity of our reduction indeed the tspt transformation is perhaps the first thing one would think of given knowledge of the concurrent open shop problem what is surprising is how one can attain performance guarantees even after such a simple transformation algorithm execute mussq on i t sp t i to generate a permutation of jobs list schedule instance i by on each cluster according to towards proving the approximation ratio for we will establish a critical inequality in lemma the intuition behind lemma requires thinking of every job j in i as having a corresponding representation in j in i job j in i will be scheduled in the cc environment while job j in i will be scheduled in the p d environment we consider what results when the same permutation is used for scheduling in both environments cc now the definitions for the lemma let j be the completion time of job j resulting cc from on an arbitrary permutation define j as the completion time of job p d i j in the cc environment in the optimal solution lastly define j as the completion time of job j in i when scheduling by in the p d environment i lemma for i t sp t i let j be the job in i corresponding to job j in i for an p d i cc cc arbitrary permutation of jobs we have j j r j proof after list scheduling has been carried out in the cc environment we may determine cc cc j i the completion time of subjob j i we can bound j i using lemma which implies and the nature of the p d environment which implies pj cc j i j i pj p d i i j if we relax the bound given in inequality and combine it with inequality we see p d i cc that j i j j the last step is to replace the final term with something murray khuller and chao cc more meaningful using j r j which is immediate from the definition of r the desired result follows j while lemma is true for arbitrary now we consider m u ssq x w the proof of mussq s correctness established the first inequality in the chain of inequalities below the second inequality can be seen by substituting pji for xji in i this shows that the constraints in i are weaker than those in i the third inequality follows from the validity lemma p p p i i p d i wj cj wj cj t i j j combining inequality with lemma allows us to bound the objective in a way that does not make reference to i h i p p p d i cc cc w c w c r c j j j j op t i r op t i j inequality completes our proof of the following theorem i theorem algorithm is a r approximation for p wj cj with unit tasks and identical machines consider concurrent cluster scheduling with v i pjit all processing times are unit although the size of the collections tji are unrestricted in keeping with the work of zhang wu and li who studied this problem in the case we call instances with these parameters fully parallelizable and write f ps for graham s taxonomy zhang et al showed that scheduling jobs greedily by largest ratio first decreasing wj results in a where is a tight bound this comes as something p of a surprise since the largest ratio first policy is optimal for wj cj which their p problem very closely resembles we now formalize the extent to which p wj cj p p resembles cj define the 
time resolution of an instance i of wj cj as pji indeed one can show that as the time resolution increases the p p performance guarantee for lrf on p wj cj approaches that of lrf on wj cj we prove the analogous result for our problem p i theorem for wj cj is a proof applying techniques from the proof of lemma under the hypothesis of this theorem p d i cc op t cc we have j i j next use the fact that for all j n j by the p d i cc definition of these facts together imply j i j c cc op t thus cc wj j p h i p d i cc op t w c c op t op t j i j p j augmenting the lp relaxation cc the proof of theorem appeals to a trivial lower bound on j namely j cc r j we attain performance guarantees in spite of this but it is natural to wonder how the need for such a bound might come with empirical weaknesses indeed tspt can make subjobs consisting of many small tasks look the same as subjobs consisting of a single very long task additionally a cluster hosting a subjob with a single esa scheduling distributed clusters of parallel machines full version extremely long task might be identified as a bottleneck by mussq even if that cluster has more machines than it does tasks to process we would like to mitigate these issues by introducing the simple lower bounds on cj as seen in constraints and this is complicated by the fact that mussq s proof of correctness only allows constraints of the form in for i d this is without loss of generality since in implies cj pji but since we apply to i t sp t i cj xji is equivalent to cj pji a much weaker bound than we desire nevertheless we can bypass this issue by introducing additional clusters and appropriately defined subjobs we formalize this with the augmented total scaled processing time atspt transformation conceptually atspt creates n imaginary clusters where each imaginary cluster has nonzero processing time for exactly one job i definition the augmented tspt transformation let and d be as in the definition for tspt then the augmented tspt transformation is likewise a mapping at sp t d with t v w x w x xt sp t i d where d is a diagonal matrix with djj as any valid lower bound on the completion time of job j such as the right hand sides of constraints and of given that djj is a valid lower bound on the completion time of job j it is easy to verify that for i at sp t i i is a valid relaxation of i because mussq returns a permutation of jobs for use in list scheduling by these imaginary clusters needn t be accounted for beyond the computations in mussq a reduction for minimizing total weighted lateness on identical parallel machines the problem of minimizing total weighted lateness on a bank of identical parallel machines p is typically denoted p wj lj where the lateness of a job with deadline dj is lj p max cj dj the reduction we offer below shows that p wj lj can be stated in p p terms of wj cj at optimality thus while a approximation to wj cj does p not imply a approximation to p wj lj the reduction below nevertheless provides new p insights on the structure of p wj lj i definition total weighted lateness reduction let i p d w m denote an instance p of p wj lj p is the set of processing times d is the set of deadlines w is the set of weights and m is the number of identical parallel machines given these inputs we transform i p wj lj to i in the following way create a total of n clusters cluster has m machines job j has processing time pj on this cluster and clusters through n each consist of a single machine job j has processing time dj on cluster j and zero on all 
clusters other than cluster and cluster denote this problem i we refer the reader to figure for an example output of this reduction p p i theorem let i be an instance of p wj lj let i be an instance of wj cj resulting from the transformation described above any list schedule that is optimal for i is also optimal for i proof if we restrict the solution space of i to single permutations which we may do without loss of generality then any schedule for i or i produces the same value of murray khuller and chao wj cj dj for i and i the additional clusters we added for i ensure that cj dj p given this the objective for i can be written as wj dj wj cj dj because wj dj is p a constant any permutation to solve i optimally also solves wj cj dj optimally p p since wj cj dj wj lj we have the desired result j p closing remarks we now take a moment to address a subtle issue in the concurrent cluster problem what price do we pay for using the same permutation on all clusters schedules for concurrent open shop it has been shown that schedules may be assumed without loss of optimality as is shown in figure this does not hold for concurrent cluster scheduling in the general case in fact that is precisely why the strong performance guarantees for algorithm rely on clusters having possibly unique permutations p figure an instance of cj wj for which there does not exist a schedule which attains the optimal objective value in the case one of the jobs necessarily becomes delayed by one time unit compared to the case as a result we see a optimality gap even when v i our more novel contributions came in our analysis for and first we could not rely on the processing time of the last task for a job to be bounded above by the job s completion time variable cj in i and so we appealed to a lower bound on cj that was not stated in the lp itself the need to incorporate this second bound is critical in realizing the strength of algorithm and uncommon in lp rounding schemes second is novel in that it introduces constraints that would be redundant for i when i d but become relevant when viewing lp i as a relaxation for i this approach has potential for more broad applications since it represented effective use of a limited constraint set supported by a known algorithm we now take a moment to state some open problems in this area one topic of ongoing research is developing a factor purely combinatorial algorithm for the special case of concurrent cluster scheduling considered in theorem in addition it would be of broad interest to determine the loss to optimality incurred by assuming p schedules for wj cj the simple example above shows that an optimal schedule can have objective times the globally optimal objective meanwhile theorem shows that there always exists a schedule with objective no more than times the globally optimal objective thus we know that the performance ratio is in the interval but we do not know its precise value as a matter outside of scheduling theory it would be valuable to survey algorithms with roots in lp relaxations to determine which have constraint sets that are amenable to implicit modification as in the fashion of esa scheduling distributed clusters of parallel machines full version acknowledgments special thanks to andreas schulz for sharing some of his recent work p with us his thorough analysis of a linear program for p wj cj drives the results in this paper thanks also to hung and leana golubchik for sharing while it was under review and to ioana bercea and manish purohit for their insights 
on SWAG's performance. Lastly, our sincere thanks to William Gasarch for organizing the REU which led to this work, and to the cohort for making the experience an unforgettable one. In the words of Rick Sanchez: wubalubadubdub.
References
Amazon Web Services, Inc. AWS Lambda: serverless compute. Accessed April.
Chen and Nicholas Hall. Supply chain scheduling: assembly systems. Working paper.
Naveen Garg, Amit Kumar, and Vinayaka Pandit. Order scheduling models: hardness and algorithms. In FSTTCS: Foundations of Software Technology and Theoretical Computer Science.
Teofilo Gonzalez, Oscar Ibarra, and Sartaj Sahni. Bounds for LPT schedules on uniform processors. SIAM Journal on Computing.
Ronald L. Graham, Eugene L. Lawler, Jan Karel Lenstra, and A. H. G. Rinnooy Kan. Optimization and approximation in deterministic sequencing and scheduling: a survey. Annals of Discrete Mathematics.
Mohammad Hajjat, Shankaranarayanan P. N., David Maltz, Sanjay Rao, and Kunwadee Sripanidkulchai. Dealer: request splitting for interactive cloud applications. In CoNEXT.
Hung, Leana Golubchik, and Minlan Yu. Scheduling jobs across geo-distributed datacenters. In Proceedings of the Sixth ACM Symposium on Cloud Computing. ACM.
Y. T. Leung, Haibing Li, and Michael Pinedo. Scheduling orders for multiple product types to minimize total weighted completion time. Discrete Applied Mathematics.
Monaldo Mastrolilli, Maurice Queyranne, Andreas Schulz, Ola Svensson, and Nelson Uhan. Minimizing the sum of weighted completion times in a concurrent open shop. Operations Research Letters.
Microsoft. Azure Service Fabric. Accessed April.
Maurice Queyranne. Structure of a simple scheduling polyhedron. Mathematical Programming.
Sushant Sachdeva and Rishi Saket. Optimal inapproximability for scheduling problems via structural hardness for hypergraph vertex cover. In IEEE Conference on Computational Complexity. IEEE.
Andreas Schulz. Polytopes and scheduling. PhD thesis.
Andreas S. Schulz. From linear programming relaxations to approximation algorithms for scheduling problems: a tour d'horizon. Working paper, available upon request.
Sriskandarajah and Wagneur. Openshops with jobs overlap. European Journal of Operational Research.
Qiang Zhang, Weiwei Wu, and Minming Li. Resource scheduling with supply constraint and linear cost. In COCOA.
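To close out the scheduling discussion above, the following Python sketch illustrates the combinatorial route: the TSPT mapping from the definition given earlier, followed by per-cluster list scheduling from a single job permutation. The concurrent open shop subroutine (MUSSQ) is treated as a black box whose output permutation is simply passed in, and the function and variable names (tspt, list_schedule_cluster, tasks, speeds) are hypothetical; the task-placement step is a simplified frontier-based version of the placement rule described in the list-scheduling section, not the authors' implementation.

```python
def tspt(tasks, speeds):
    """TSPT mapping: x[j][i] = (total processing of subjob (j, i)) divided by
    the sum of machine speeds at cluster i. tasks[j][i] is a list of task
    lengths; speeds[i] is a list of machine speeds at cluster i."""
    return [[sum(tasks[j][i]) / sum(speeds[i]) for i in range(len(speeds))]
            for j in range(len(tasks))]

def list_schedule_cluster(perm, tasks_i, speeds_i, release_i=None):
    """List-schedule one cluster from a job permutation: subjobs in permutation
    order, tasks of a subjob longest-first, each task placed on the machine on
    which it would finish earliest (tracking only each machine's frontier time).
    Returns per-job subjob completion times on this cluster."""
    free = [0.0] * len(speeds_i)                     # next free time of each machine
    completion = {}
    for j in perm:
        r = release_i[j] if release_i else 0.0       # subjob cannot start before release
        finish_j = r
        for p in sorted(tasks_i[j], reverse=True):   # LPT order within the subjob
            m = min(range(len(free)),
                    key=lambda k: max(free[k], r) + p / speeds_i[k])
            start = max(free[m], r)
            free[m] = start + p / speeds_i[m]
            finish_j = max(finish_j, free[m])
        completion[j] = finish_j
    return completion
```

A run of the combinatorial algorithm would then compute x = tspt(tasks, speeds), obtain one permutation from the concurrent open shop routine applied to (x, w), call list_schedule_cluster once per cluster with that permutation, and report each job's completion time as the maximum of its subjob completion times.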
8
oct sound source localization in a multipath environment using convolutional neural networks eric stefan williams craig jin australian centre for field robotics the university of sydney australia computing and audio research laboratory the university of sydney australia abstract the propagation of sound in a shallow water environment is characterized by boundary reflections from the sea surface and sea floor these reflections result in multiple indirect sound propagation paths which can degrade the performance of passive sound source localization methods this paper proposes the use of convolutional neural networks cnns for the localization of sources of broadband acoustic radiated noise such as motor vessels in shallow water multipath environments it is shown that cnns operating on cepstrogram and generalized inputs are able to more reliably estimate the instantaneous range and bearing of transiting motor vessels when the source localization performance of conventional passive ranging methods is degraded the ensuing improvement in source localization performance is demonstrated using real data collected during an experiment index source localization doa estimation convolutional neural networks passive sonar reverberation introduction sound source localization plays an important role in array signal processing with wide applications in communication sonar and robotics systems it is a focal topic in the scientific literature on acoustic array signal processing with a continuing challenge being acoustic source localization in the presence of interfering multipath arrivals in practice conventional passive narrowband sonar array methods involve beamforming of the outputs of hydrophone elements in a receiving array to detect weak signals resolve sources and estimate the direction of a sound source typically sensors form a linear array with a uniform interelement spacing of half a wavelength at the array s design frequency however this narrowband approach has application over a limited band of frequencies the upper limit is set by the design frequency above which grating lobes form due to spatial aliasing leading to ambiguous source directions the lower limit is set one octave below the design frequency because at lower frequencies the directivity of the array is much reduced as the beamwidths broaden an alternative approach to sound source localization is to measure the time difference of arrival tdoa of the signal at an array of spatially distributed receivers allowing the instantaneous position of the source to be estimated the accuracy of the source position estimates is found to be sensitive to any uncertainty in the sensor positions furthermore reverberation has an adverse effect on time delay estimation which negatively impacts work supported by defence science and technology group australia sound source localization in a approach to broadband source localization in reverberant environments a model of the early reflections multipaths is used to subtract the reverberation component from the signals this decreases the bias in the source localization estimates the approach adopted here uses a minimum number of sensors no more than three to localize the source not only in bearing but also in range using a single sensor the instantaneous range of a broadband signal source is estimated using the cepstrum method this method exploits the interaction of the direct path and multipath arrivals which is observed in the spectrogram of the sensor output as a lloyds mirror interference pattern 
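Since the two input feature maps named above are the cepstrogram and the generalized cross-correlation, a minimal NumPy sketch of both frame-level computations is given here for orientation. The PHAT weighting, FFT length, and stabilizing epsilon are illustrative assumptions rather than the authors' stated processing parameters.

```python
import numpy as np

def power_cepstrum(x, n_fft=4096):
    """Power cepstrum of one frame: inverse FFT of the log power spectrum.
    A multipath echo delayed by tau relative to the direct path appears as a
    peak near quefrency tau."""
    spectrum = np.fft.rfft(x, n=n_fft)
    log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)   # epsilon avoids log(0)
    return np.fft.irfft(log_power, n=n_fft)

def gcc_phat(x1, x2, n_fft=4096):
    """Generalized cross-correlation of two sensor frames, here with an assumed
    PHAT weighting; the peak location gives the TDOA in samples."""
    X1 = np.fft.rfft(x1, n=n_fft)
    X2 = np.fft.rfft(x2, n=n_fft)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12                       # phase-transform normalization
    cc = np.fft.irfft(cross, n=n_fft)
    return np.fft.fftshift(cc)                           # zero delay at the centre

# A cepstrogram or GCC feature map for the network would stack these frame-wise
# vectors over time, keeping only the physically realizable quefrency and delay
# windows discussed later in the text.
```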
generalized gcc is used to measure the tdoa of a broadband signal at a pair of sensors which enables estimations of the source bearing furthermore adding another sensor so that all three sensor positions are collinear enables the source range to be estimated using the two tdoa measurements from the two adjacent sensor pairs the range estimate corresponds to the radius of curvature of the spherical wavefront as it traverses the receiver array this latter method is commonly referred to as passive ranging by wavefront curvature however its source localization performance can become problematic in multipath environments when there is a large number of extraneous peaks in the gcc function attributed to the presence of multipaths and when the direct path and multipath arrivals are unresolvable resulting in tdoa estimation bias also its performance degrades as the signal source direction moves away from the array s broadside direction and completely fails at endfire note that this is not the case with the cepstrum method with its omnidirectional ranging performance being independent of source direction recently deep neural networks dnn based on supervised learning methods have been applied to acoustic tasks such as speech recognition terrain classification and source localization tasks a challenge for supervised learning methods for source localization is their ability to adapt to acoustic conditions that are different from the training conditions the acoustic characteristics of a shallow water environment are with high levels of clutter background noise and multiple propagation paths making it a difficult environment for dnn methods a cnn is proposed that uses generalized gcc and cepstral feature maps as inputs to estimate both the range and bearing of an acoustic source passively in a shallow water environment the cnn method has an inherent advantage since it considers all gcc and cepstral values that are physically significant when estimating the source position other approaches involving time delay estimation typically consider only a single value a peak in the gcc or cepstogram the cnns are trained using real acoustic recordings of a surface vessel underway in a quefrency ms time delay ms cepstrogram range output bearing output dense time seconds combined cnn fig a cepstrogram for a surface vessel as it transits over a single recording hydrophone located m above the sea floor and b the corresponding for a pair of hydrophones shallow water environment cnns operating on cepstrum or gcc feature map inputs only are also considered and their performances compared the proposed model is shown to localize sources with greater performance than a conventional passive sonar localization method which uses tdoa measurements generalization performance of the networks is tested by ranging another vessel with different radiated noise characteristics the original contributions of this work are development of a cnn for the passive localization of acoustic broadband noise sources in a shallow water environment where the range and bearing of the source are estimated jointly range and bearing estimates are continuous allowing for improved resolution in position estimates when compared to other passive localization networks which use a discretized classification approach a novel loss function based on localization performance where bearing estimates are constrained for additional network regularization when training and a unified network for passive localization in reverberate environments with improved 
performance over traditional methods acoustic localization cnn a neural network is a machine learning technique that maps the input data to a label or continuous value through a nonlinear architecture and has been successfully applied to applications such as image and object classification hyperspectral pixelwise classification and terrain classification using acoustic sensors cnns learn and apply sets of filters that span small regions of the input data enabling them to learn local correlations architecture since the presence of a broadband acoustic source is readily observed in a and cepstrogram fig it is possible to create a unified network for estimating the position of a vessel relative to a receiving hydrophone array the network is divided into sections fig the gcc cnn and cepstral cnn operate in parallel and serve as feature extraction networks for the gcc and cepstral feature map inputs respectively next the outputs of the gcc cnn gcc cnn dense cepstral cnn dense dense dense dense gcc input cepstral input multichannel acoustic recording fig network architecture for the acoustic localization cnn and cepstral cnn are concatenated and used as inputs for the dense layers which outputs a range and bearing estimate for both the gcc cnn and cepstral cnn the first convolutional layer filters the input feature maps with kernels the second convolutional layer takes the output of the first convolutional layer as input and filters it with kernels the third layer also uses kernels and is followed by two fullyconnected layers the combined cnn further contains two fullyconnected layers that take the concatenated output vectors from both of the gcc and cepstral cnns as input all the layers have neurons each a single neuron is used for regression output for the range and bearing outputs respectively all layers use rectified linear units as activation functions since resolution is important for the accurate ranging of an acoustic source max pooling is not used in the network s architecture input in order to localize a source using a hydrophone array information about the time delay between signal propagation paths is required although such information is contained in the raw signals it is beneficial to represent it in a way that can be readily learned by the network a cepstrum can be derived from various spectra such as the complex or differential spectrum for the current approach the power cepstrum is used and is derived from the power spectrum of a recorded signal it is closely related to the cepstrum used frequently in automatic speech recognition tasks but has linearly spaced frequency bands rather than bands approximating the human auditory system s response the cepstral representation of the signal is neither in the time nor frequency domain but rather it is in the quefrency domain cepstral analysis is based on the principle that the logarithm of the power spectrum for a signal containing echoes has an additive periodic component due to the echoes from reflections where the original time waveform contained an echo the cepstrum will contain a peak and thus the tdoa between propagation paths of an acoustic signal can be measured by examining peaks in the cepstrum it is useful in the presence of strong multipath reflections found in shallow water environments where time delay estimation methods such as gcc suffer from degraded performance the cepstrum n is obtained by the inverse fourier transform of the logarithm of the power spectrum n f f where s f is the fourier transform of a discrete time 
signal x n for a given geometry there is a bounded range of quefrencies useful in source localization as the separation distance decreases the tdoa values position of peaks in the cepstrum will tend to a maximum value which occurs when the source is at the closest point of approach to the sensor tdoa values greater than this maximum are not physically realizable and are excluded cepstral values near zero are dominated by source dependent quefrencies and are also excluded gcc is used to measure the tdoa of a signal at a pair of hydrophones and is useful in situations of spatially uncorrelated noise for a given array geometry there is a bounded range on useful gcc information for a pair of recording sensors a zero relative time delay corresponds to a broadside source whilst a maximum relative time delay corresponds to an endfire source tdoa values greater than the maximum bound are not useful to the passive localization problem and are excluded the windowing of cnn inputs has the added benefit of reducing the number of parameters in the network a cepstrogram and an ensemble of cepstrum and gcc respectively as they vary in time is shown in fig output for each example the network predicts the range and bearing of the acoustic source as a continuous value each with a single neuron regression output this differs from other recent passive localization networks which use a classification based approach such that range and bearing predictions are discretized putting a hard limit on the resolution of estimations that the networks are able to provide joint training the objective of the network is to predict the range and bearing of an acoustic source relative to a receiving array from reverberant and noisy input signals since the localization of an acoustic source involves both a range and bearing estimate the euclidean distance between the network prediction and ground truth is minimized when training both the range and bearing output loss components are jointly minimized using a loss function based on localization performance this additional regularization is expected to improve localization performance when compared to minimizing range loss and bearing loss separately the total objective function e minimized during network training is given by the weighted sum of the loss ep and the bearing loss eb such that e eb where ep is the norm of the polar distance given by ep y cos and eb is the norm of the bearing loss only given by eb with the predicted range and bearing output denoted as t and respectively and the true range and bearing denoted as y and respectively the inclusion of the eb term encourages bearing predictions to be constrained to the first turn providing additional regularization and reducing parameter weight magnitudes the two terms are weighted by so each loss term has roughly equal weight training uses batch normalization and is stopped when the validation error does not decrease appreciably per epoch in order to further prevent regularization through a dropout rate of is used in all fully connected layers when training experimental results passive localization on a transiting vessel was conducted using a algorithmic method described in and cnns with cepstral gcc inputs their performances were then compared the generalization ability of the networks to other broadband sources is also demonstrated by localizing an additional vessel with a different radiated noise spectrum and source level dataset acoustic data of a motor boat transiting in a shallow water environment over a hydrophone 
array were recorded at a sampling rate of khz the uniform linear array ula consists of three recording hydrophones with an interelement spacing of recording commenced when the vessel was inbound m from the sensor array the vessel then transited over the array and recording was terminated when the vessel was m outbound the boat was equipped with a dgps tracker which logged its position relative to the receiving hydrophone array at s intervals bearing labels were wrapped between and radians consistent with bearing estimates available from ulas which suffer from bearing ambiguity transits were recorded over a two day period one hundred thousand training examples were randomly chosen each with a range and bearing label such that examples uniformly distributed in range only a further labeled examples were reserved for cnn training validation the recordings were preprocessed as outlined in section the networks were implemented in tensorflow and were trained with a momentum optimizer using a nvidia geforce gtx gpu the gradient descent was calculated for batches of training examples the networks were trained with a learning rate of weight decay of and momentum of additional recordings of the vessel were used to measure the performance of the methods these recordings are referred to as the test dataset and contain labeled examples additional acoustic data were recorded on a different day using a different boat with different radiated noise characteristics acoustic recordings for each transit started when the inbound vessel was m from the array continued during its transit over the array and ended when the outbound vessel was m away this dataset is referred to as the generalization set and contains labeled examples average bearing error deg combined cnn algorithmic method dgps fig estimates of the range and bearing of a transiting vessel the true position of the vessel is shown relative to the recording array measured by the dgps combined cnn cepstral cnn gcc cnn algorithmic method average range error m average bearing error deg combined cnn cepstral cnn gcc cnn algorithmic method bearing deg bearing deg combined cnn cepstral cnn gcc cnn algorithmic method fig comparison of bearing estimation performance as a function of the vessels true bearing for the a test dataset and b generalization dataset average range error m range m combined cnn cepstral cnn gcc cnn algorithmic method range m fig comparison of range estimation performance as a function of the vessels true range for the a test dataset and b generalization dataset input of network cepstral and gcc feature maps were used as inputs to the cnn and they were computed as follows for any input example only a select range of cepstral and gcc values contain relevant tdoa information and are retained see section cepstral values more than ms are discarded because they represent the maximum multipath delay and occur when the source is directly over a sensor cepstral values less than are discarded since they are highly source dependent thus each cepstrogram input is liftered and samples through are used as input to the network only a cepstral feature vector is calculated for each recording channel resulting in a x cepstal feature map due to array geometry the maximum time delay between pairs of sensors is a gcc feature vector is calculated for two pairs of sensors resulting in a x gcc feature map the gcc map is further to size x which reduces the number of network parameters comparison of localization methods algorithmic passive localization was 
conducted using the methods outlined in the tdoa values required for algorithmic localization were taken from the largest peaks in the gcc nonsensical results at ranges greater than m are discarded other cnn chitectures are also compared the gcc cnn uses the gcc cnn section of the combined cnn only and the cepstral cnn uses the cepstral cnn section of the combined cnn only both with similar range and bearing outputs fig fig shows localization results for a vessel during one complete transit fig and fig show the performance of localization methods as a function of the true range and bearing of the vessel for the test dataset and the generalization set respectively the cnns are able to localize a different vessel in the generalization set with some impact to performance the performance of the algorithmic method is degraded in the shallow water environment since there are a large number of extraneous peaks in the gcc attributed to the presence of multipaths and when the direct path and multipath arrivals become unresolvable resulting in tdoa estimation bias bearing estimation performance is improved in networks using gcc features showing that time delay information between pairs of spatially distributed sensors is beneficial the networks show improved robustness to interfering multipaths range estimation performance is improved in networks using cepstral features showing that multipath information can be useful in determining the sources range the combined cnn is shown to provide superior performance for range and bearing estimation conclusions in this paper we introduce the use of a cnn for the localization of surface vessels in a shallow water environment we show that the cnn is able to jointly estimate the range and bearing of an acoustic broadband source in the presence of interfering multipaths several cnn architectures are compared and evaluated the networks are trained and tested using cepstral and gcc feature maps as input derived from real acoustic recordings networks are trained using a novel loss function based on localization performance with additional constraining of bearing estimates the inclusion of both cepstral and gcc inputs facilitates robust passive acoustic localization in reverberant environments where other methods can suffer from degraded performance references benesty chen and huang microphone array signal processing vol springer science business media chakrabarty and habets broadband doa estimation using convolutional neural networks trained with noise signals arxiv preprint viberg ottersten and kailath detection and estimation in sensor arrays using weighted subspace fitting ieee trans signal vol no pp takeda and komatani unsupervised adaptation of deep neural networks for sound source localization using entropy minimization in proc ieee int conf speech signal process ieee pp zeng yang chen and jin low angle direction of arrival estimation by time reversal in proc ieee int conf speech signal process ieee pp krizhevsky sutskever and hinton imagenet classification with deep convolutional neural networks in adv in neural information process systems pp capon spectrum analysis proc ieee vol no pp girshick donahue darrell and malik rich feature hierarchies for accurate object detection and semantic segmentation in proc ieee conf computer vision and pattern pp carter time delay estimation for passive sonar signal processing ieee trans speech signal vol pp carter coherence and time delay estimation ieee press new york chan and ho a simple and efficient estimator for 
hyperbolic location ieee trans on signal vol pp windrim ramakrishnan melkumyan and murphy hyperspectral cnn classification with limited training samples in british machine vision bogert the quefrency alanysis of time series for echoes cepstrum and saphe cracking time series analysis pp benesty chen and huang estimation via linear interpolation and cross correlation ieee trans speech and audio vol no pp lo ferguson gao and maguer aircraft flight parameter estimation using acoustic multipath delays ieee trans on aerospace and electronic systems vol no pp ferguson application of passive ranging by wavefront curvature methods to the localization of biosonar click signals emitted by dolphins in proc of international conf on underwater acoust measurements oppenheim and schafer from frequency to quefrency a history of the cepstrum ieee signal process magazine vol no pp chen benesty and huang performance of gccand estimation in practical reverberant environments eurasip on adv in signal vol no pp jensen nielsen heusdens and christensen doa estimation of audio sources in reverberant environments in proc ieee int conf speech signal process ieee pp ferguson ramakrishnan williams and jin convolutional neural networks for passive monitoring of a shallow water environment using a single sensor in proc ieee int conf speech signal process ieee pp gao clark and cooper time delay estimate using cepstrum analysis in a shallow littoral environment conf undersea defence technology vol pp knapp and carter the generalized correlation method for estimation of time delay ieee trans speech and signal vol no pp ferguson ramakrishnan williams and jin deep learning approach to passive monitoring of the underwater acoustic environment acoust soc vol no pp ioffe and szegedy batch normalization accelerating deep network training by reducing internal covariate shift in international conf on machine learning pp ferguson a modified wavefront curvature method for the passive ranging of echolocating dolphins in the wild acoust soc vol no pp srivastava hinton krizhevsky sutskever and salakhutdinov dropout a simple way to prevent neural networks from j machine learning research vol no pp xiao watanabe erdogan lu hershey seltzer chen zhang mandel and yu deep beamforming networks for speech recognition in proc ieee int conf speech signal process ieee pp schau and robinson passive source localization employing intersecting spherical surfaces from differences ieee trans on speech signal vol no pp heymann drude christoph boeddeker patrick hanebrink and beamnet training of a asr system in proc ieee int conf speech signal process ieee pp valada spinello and burgard deep feature learning for terrain classification in robotics research pp springer
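As a companion to the cepstral sketch earlier, the GCC feature maps used as the second network input can be sketched as follows. This is a minimal sketch, not the authors' code: the PHAT weighting, the 1500 m/s sound speed and the exact windowing are assumptions; the text above only specifies that GCC vectors for the two adjacent sensor pairs are restricted to physically realizable TDOAs and stacked into a feature map.

```python
import numpy as np

def gcc_phat(x1, x2, fs, max_tdoa, n_fft=4096):
    """Generalized cross correlation (PHAT weighting) between two channels for one
    analysis frame, restricted to |tau| <= max_tdoa, the physically realizable
    delays for the sensor pair; larger lags are discarded as in the input windowing."""
    X1 = np.fft.rfft(x1, n=n_fft)
    X2 = np.fft.rfft(x2, n=n_fft)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12           # PHAT: keep phase, discard magnitude
    cc = np.fft.irfft(cross, n=n_fft)
    cc = np.concatenate((cc[-(n_fft // 2):], cc[:n_fft // 2]))   # centre zero lag
    lags = np.arange(-n_fft // 2, n_fft // 2) / fs
    keep = np.abs(lags) <= max_tdoa
    return lags[keep], cc[keep]

def gcc_feature_map(channels, fs, spacing, c=1500.0):
    """Stack GCC vectors for adjacent sensor pairs of a uniform linear array into a
    2-D feature map (one row per pair), the kind of input fed to the GCC CNN."""
    max_tdoa = spacing / c                   # an endfire source gives the largest delay
    rows = [gcc_phat(channels[i], channels[i + 1], fs, max_tdoa)[1]
            for i in range(len(channels) - 1)]
    return np.stack(rows)
```

For the three-hydrophone array described above, this yields two rows (one per adjacent pair), consistent with the stated GCC feature map shape.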
7
ransac algorithms for subspace recovery and subspace clustering nov ery jue wang university of california san diego abstract we consider the ransac algorithm in the context of subspace recovery and subspace clustering we derive some theory and perform some numerical experiments we also draw some correspondences with the methods of hardt and moitra and chen and lerman introduction the random sample consensus with acronym ransac algorithm of fischler and bolles and its many variants and adaptations are in computer vision for their robustness in the presence of gross errors outliers in this paper we focus on the closely related problems of subspace recovery and subspace clustering in the presence of outliers where methods are believed to be optimal yet too costly in terms of computations when the fraction of inliers is small although this is a limitation of the ransac we nevertheless establish this rigorously in the present context in particular we derive the performance and computational complexity of ransac for these two problems and perform some numerical experiments corroborating our theory and comparing the ransac with other methods proposed in the literature the problem of subspace recovery consider a setting where the data consist of n points in dimension p denoted xn rp it is assumed that m of these points lie on a linear subspace l and that the points are otherwise in general position which means that the following assumption is in place assumption a of data points with q p is linearly independent unless it includes at least d points from we say that points are linearly dependent if they are so when seen as vectors the points on l are called inliers and all the other points are called outliers this is the setting of subspace recovery without noise when there is noise the points are not exactly on the underlying subspace but rather in its vicinity in any case the goal is to recover l or said differently distinguish the inliers from the outliers see figure for an illustration in a setting where the subspace is of dimension d in ambient dimension p the goal is to recover the subspace l identify the inlier points the present project was initiated in the context of an independent study for undergraduates math we acknowledge support from the us national science foundation dms this problem is intimately related to the problem of robust covariance estimation which dates back decades huber and ronchetti maronna tyler but has attracted some recent attention we refer the reader to the introduction of zhang and lerman for a comprehensive review of the literature old and new subspace recovery in the presence of outliers as we consider the problem here is sometimes referred to a robust principal components analysis although there are other meanings in the literature more closely related to matrix factorization with a component et wright et a subspace recovery problem b subspace clustering problem figure an illustration of the two settings considered in the paper the problem of subspace clustering consider a setting where the data consist of n points in dimension p denoted xn rp it is assumed that mk of these points lie on a dk linear subspace lk where k k so that there are k subspaces in total the remaining points are in general position assumption a of data points is linearly independent unless it includes at least dk points from lk for some k k in this setting all the points on one of the subspaces are inliers and all the other points are outliers this is the setting of subspace clustering 
without noise when there is noise the inliers are not exactly on the subspaces but in their vicinity see figure for an illustration in a setting where there is one subspace of dimension and two subspaces of dimension in ambient dimension p the goal is to cluster mk points to their corresponding lk for all k k the problem of subspace clustering has applications in computer vision in particular movement segmentation vidal vidal et contents in section we consider the problem of subspace recovery in section we consider the problem of subspace clustering in both cases we study a canonical ransac algorithm deriving some theory and comparing it with other methods in numerical experiments we briefly discuss our results in section remark linear vs affine throughout we consider the case where the subspaces are linear although some applications may call for affine subspaces this is for convenience because of this we are able to identify a point x rp with the corresponding vector sometimes written x subspace recovery we consider the setting of section and use the notation defined there in particular we work under assumption we consider the noiseless setting for simplicity ransac for subspace recovery we propose a simple ransac algorithm for robust subspace recovery in the present setting in particular under assumption the underlying linear subspace l which we assumed is of dimension d is determined by any d that comes from that subspace the algorithm starts by randomly selecting a d and checking if this tuple forms a linear subspace of dimension if so the subspace is recovered and the algorithm stops otherwise the algorithm continues repeatedly sampling a d at random until the subspace is discovered optionally the algorithm can be made to stop when a maximum number of tuples has been sampled in this formulation detailed in algorithm d is known input data points xn rp dimension d output a linear subspace of dimension d containing at least d points repeat randomly select a d of data points until the tuple is linearly dependent return the subspace spanned by the tuple algorithm ransac subspace recovery by design the procedure is exact again we are in the noiseless setting in a noisy setting the method can be shown to be essentially optimal however researchers have shied away from a ransac approach because of its time complexity we formalize what is in the folklore in the following proposition algorithm is exact and the number of iterations has the geometric distribum n thus the expected number of iterations is with success probability n m which is of order o when d is held fixed note that each iteration requires on the order of o operations as it requires computing the rank of a d matrix proof the algorithm sample a independently and uniformly at random until the tuple is linearly dependent because of assumption a d is linearly dependent if and only if n m d in total only fit the all the points in the tuple are from while there are m n because the draws are bill so that the probability of drawing a suitable tuple is independent the total number of draws until the algorithm stops has the geometric distribution with success probability n m and when d is assumed fixed we know that the mean of this distribution is while n and m are large we have d d here we consider the variant of the geometric distribution that is supported on the positive integers the reader is invited to verify that this still holds true as long as d o in applications where the number of outliers is a fraction of the sample meaning that 
is not close to the ransac s number of iterations depends exponentially on the dimension of subspace this confirms the folklore at least in such a setting remark for simplicity we analyzed the variant of the algorithm where the tuples are drawn with replacement so that the number of iterations is infinite however in practice one should draw the tuples without replacement which is equally easy to do in the present setting as recommended in schattschneider and green for this variant the time comn n moreover proposition still applies if understood as an upper plexity is the number of iterations has a negative hypergeometric distribution in this case remark if the dimension d is unknown a possible strategy is to start with d run the algorithm for a maximum number of iterations and if no pair of points is found to be aligned with the origin move to d and continue in that fashion increasing the dimension if no satisfactory tuple is found the algorithm would start again at d the algorithm will succeed eventually the algorithm of hardt and moitra for subspace recovery as we said above researchers have avoid ransac procedures because of the running time which as we saw can be prohibitive recently however hardt and moitra have proposed a ransactype algorithm that strikes an interesting compromise between running time and precision their algorithm is designed for the case where the sample size is larger than the ambient dimension namely n it can be described as follows it repeatedly draws a at random until the tuple is found to be linearly dependent when such a tuple is found the algorithm returns a set of linearly dependent points in the tuple see the description in algorithm a virtue of this procedure is that it does not require knowledge of the dimension d of the underlying subspace input data points xn rp output a linear subspace repeat randomly select a of data points until the tuple is linearly dependent return the subspace spanned by any subset of linearly dependent points in the tuple algorithm subspace recovery proposition when n p algorithm is exact and its number of iterations has the geomet np thus the expected number of ric distribution with success probability m k iterations is note that each iteration requires on the order of o operations as it requires computing the rank of a matrix proof with assumption in place a is linearly dependent if and only if it contains at least d points from the subspace thus the repeat statement above stops exactly when it found a that contains at least d points moreover also because of assumption the points within that tuple that are linear dependent must belong to therefore the algorithm returns l and is therefore exact we now turn to the number of iterations the number of iterations is obviously geometric and the success probability is the probability that a drawn uniformly at random contains at least d points from is that probability indeed it is the probability that when drawing p balls without replacement from an urn with m red balls out of n total the sample contains at least d red balls in the present context the balls are of course the points and the red balls are the points on the linear subspace hardt and moitra analyze their algorithm in a slightly different setting and with the goal of finding the maximum fraction of outliers that can be tolerated before the algorithm breaks down in the sense that it does not run in polynomial time in particular they show that if then their algorithm has a number of iterations with the geometric distribution 
with success probability at least n so that the expected number of iterations is bounded by n which is obviously polynomial in p n in fact it can be better than that the following is a consequence of proposition corollary if in addition to n p it holds that with and for some fixed then is bounded from below by a positive quantity that depends only on consequently algorithm has expected number of iterations of order o proof let u denote a random variable with the hypergeometric distribution with parameters p m n described above then p u d and it depends on d p m n we show that is bounded from below irrespective of these parameters as long as the conditions are met noting that is increasing in d and n and decreasing in p and m it suffices to consider how varies along a sequence where n and d p m all varying with n in such a way that and as this makes the expected number of iterations largest define e u p var u p n p n the condition implies that d and along the sequence of parameters under consideration moreover along such a sequence z u is standard normal in the limit so that p u d p z d p z p n using slutsky s theorem in the last line numerical experiments we performed some numerical experiments comparing ransac in the form of algorithm the hm procedure algorithm and the geometric median subspace gms of zhang and lerman which appears to be one of the best methods on the market we used the code available on teng zhang s website each inlier is uniformly distributed on the intersection of the unit sphere with the underlying subspace each outlier is simply uniformly distributed on the unit sphere the result of each algorithm is averaged over repeats performance is measured by the first principal angle between the returned subspace and the true subspace this is to be fair to gms as the other two algorithms are exact the results are reported in table parameters average system time d p m difference in angle ransac hm gms ransac hm gms table numerical experiments comparing ransac hm and gms for the problem of subspace recovery as in the text d is the dimension of the subspace p is the ambient dimension m is the number of inliers is the number of outliers so that n m is the sample size we performed another set of experiments to corroborate the theory established in proposition for the complexity of ransac the results are shown in figure where each setting has been repeated times as expected as the dimensionality of the problem increases ransac s complexity becomes quickly impractical simulated simulated simulated theory theory theory number of iterations intrinsic dimension figure average number of iterations for ransac in the form of algorithm as a function of the subspace dimension d and the ratio of sample size n to number of inliers the dashed lines are the averages from our simulation while the lines are derived from theory proposition subspace clustering we consider the setting of section and use the notation defined there in particular we work under assumption we consider the noiseless setting for simplicity we also assume that all subspaces are of same dimension denoted d so that dk d for all k ransac for subspace clustering we propose a simple ransac algorithm for subspace clustering as before any of the linear subspaces is determined by any d that comes from that subspace the algorithm starts by randomly selecting a and checking if this tuple forms a linear subspace of dimension if so one of the subspaces is recovered and all the points on the subspace are extracted from the data otherwise 
the algorithm continues repeatedly sampling a d at random until that condition is met the algorithm continues in this fashion until all the k subspaces have been recovered in this formulation detailed in algorithm both d and k are assumed known input data points xn rp dimension d number of subspace k output k linear subspaces of dimension d each containing at least d points for k k do repeat randomly select a d of data points until the tuple is linearly dependent return the subspace spanned by the tuple remove the points on that subspace from the data end algorithm ransac subspace clustering again the procedure is exact by design since we are in the noiseless setting here too researchers have not embraced ransac approaches because of their running time we confirm this folklore in the following where we assume for simplicity that all subspaces have the same number of points m so that mk m for all k proposition algorithm is exact and the number of iterations is has the distribution of ik where the i s are independent and ij has the geometric distribution with success probability m m this is stochastically bounded by the negative binomial with parameters k j k thus the expected number of iterations is bounded by which is of order o when d and k are held fixed the proof is very similar to that of proposition and is omitted remark when the dimensions of the subspaces are unknown a strategy analogous to that described in remark is of course possible when the number of subspaces is unknown a stopping rule can help decide whether there remains a subspace to be discovered details are omitted as such an approach although natural could prove complicated adapting the algorithm of hardt and moitra for subspace clustering algorithm consists in applying algorithm until a subspace is recovered removing the points on that subspace and then continuing until all k subspaces are recovered an algorithm for subspace clustering can be based on the algorithm of hardt and moitra algorithm instead the resulting algorithm is suited for the case where n mk based on the fact that algorithm has expected number of iterations bounded by n the resulting algorithm for subspace clustering has expected number of iterations bounded by see algorithm where we assume that the number of subspaces is known but do not assume that the dimensions of the subspaces are known and they do not need to be the same input data points xn rp number of subspace k output k linear subspaces each with a number of points exceeding its dimension for k k do repeat randomly select a of data points until the tuple is linearly dependent repeat find the smallest number of linearly dependent points in the tuple return the subspace spanned by these points remove the points on that subspace from the data until there are no more linearly dependent points in the tuple end algorithm subspace clustering based on the algorithm the reason why we extract the smallest number of linearly dependent points at each step is to avoid a situation where a contains dj points from lj and dk points from lk with j k in which case assuming dj dk p these points are linearly dependent but do not span one of the subspaces this particular step is however computationally challenging as it amounts to finding the sparsest solution to a linear system a problem known to be challenging tropp and wright eq one possibility is to replace this will finding the solution with minimum norm tropp and wright eq the use of the constraint is central to the method proposed by elhamifar and vidal the 
algorithm of chen and lerman the spectral curvature clustering scc algorithm of chen and lerman is in fact of ransac type the method was designed for the noisy setting and is therefore more it is based on a function a rp that quantifies how close a is from spanning a subspace of dimension d or less it is equal to when this is the case and is strictly less than when this is not the case the algorithm draws a number c of at random where the tuple is denoted s xd s and computes the matrix w wij where c wij a xi s xd s a xj s xd s it then applies a form of spectral graph partitioning algorithm to w closely related to method of ng et al the method assumes all subspaces are of same dimension d and both d and k are assumed known in the noiseless setting one could take a to return if the tuple is linearly dependent and otherwise in that case wij is simply the number of among the c that were drawn with whom both xi and xj are linearly dependent chen and lerman analyzes their method in a setting that reduces to this situation and show that the method is exact in this case chen and lerman consider the case where the subspaces are affine but we adapt their method to the case where they area linear parameters average system time rand index d p k m ransac ssc scc tsc ransac ssc scc tsc table numerical experiments comparing ransac ssc scc and tsc for the problem of subspace clustering as in the text d is the dimension of the subspaces assumed to be the same p is the ambient dimension k is the number of subspaces m is the number of inliers per subspace assumed to be the same is the number of outliers so that n km is the sample size numerical experiments we performed some numerical experiments to compare various methods for subspace clustering specifically ransac sparse subspace clustering ssc elhamifar and vidal spectral curvature clustering scc chen and lerman and subspace clustering tsc reinhard heckel each inlier is uniformly distributed on the intersection of the unit sphere with its corresponding subspace each outlier is simply uniformly distributed on the unit sphere the result of each algorithm is averaged over repeats performance is measured by the rand index the results are reported in table discussion and conclusion in our small scale experiments ransac is seen to be competitive with other methods at least when the intrinsic dimensionality is not too large and when there are not too many outliers or too many underlying subspaces present in the data this was observed both in the context of subspace recovery and in the context of subspace clustering references li ma and wright robust principal component analysis journal of the acm jacm chen and lerman foundations of a spectral clustering framework for hybrid linear modeling foundations of computational mathematics chen and lerman spectral curvature clustering scc international journal of computer vision elhamifar and vidal sparse subspace clustering in computer vision and pattern recognition cvpr ieee conference on pp ieee fischler and bolles random sample consensus a paradigm for model fitting with applications to image analysis and automated cartography communications of the acm hardt and moitra algorithms and hardness for robust subspace recovery in conference on learning theory colt volume pp huber and ronchetti robust statistics edition wiley maronna a robust of multivariate location and scatter the annals of statistics ng jordan and weiss on spectral clustering analysis and an algorithm in advances in neural information processing systems pp 
reinhard heckel helmut bolcskei robust subspace clustering via thresholding ieee transactions on information theory schattschneider and green enhanced ransac sampling based on combinations in proceedings of the conference on image and vision computing new zealand pp acm tropp j and wright computational methods for sparse solution of linear inverse problems proceedings of the ieee tyler a of multivariate scatter the annals of statistics vidal subspace clustering ieee signal processing magazine vidal ma and sastry generalized principal component analysis gpca ieee transactions on pattern analysis and machine intelligence wright ganesh rao peng and ma robust principal component analysis exact recovery of corrupted matrices via convex optimization in advances in neural information processing systems pp zhang and lerman a novel for robust pca the journal of machine learning research
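The RANSAC procedures for subspace recovery and subspace clustering analysed above admit a short NumPy sketch. This is an illustrative reconstruction for the noiseless, general-position setting, not the authors' code; the rank tolerance, the membership threshold and the iteration cap are implementation assumptions.

```python
import numpy as np

def ransac_subspace(X, d, max_iters=100000, tol=1e-10, rng=None):
    """RANSAC subspace recovery: repeatedly draw a (d+1)-tuple of the rows of X and
    stop when it is linearly dependent; under the general-position assumption such a
    tuple lies on, and spans, the d-dimensional inlier subspace."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    for _ in range(max_iters):
        idx = rng.choice(n, size=d + 1, replace=False)
        T = X[idx]
        if np.linalg.matrix_rank(T, tol=tol) <= d:   # linearly dependent tuple found
            basis = np.linalg.svd(T, full_matrices=False)[2][:d]   # orthonormal span
            return basis
    raise RuntimeError("no linearly dependent (d+1)-tuple found")

def ransac_subspace_clustering(X, d, k, rng=None):
    """Recover k subspaces of dimension d one at a time, removing the points of each
    recovered subspace before searching for the next, as in the clustering variant."""
    remaining = X.copy()
    bases = []
    for _ in range(k):
        basis = ransac_subspace(remaining, d, rng=rng)
        proj = remaining @ basis.T @ basis            # projection onto the subspace
        on_subspace = np.linalg.norm(remaining - proj, axis=1) < 1e-8
        bases.append(basis)
        remaining = remaining[~on_subspace]
    return bases
```

Consistent with the expected-iteration analysis above, the number of draws grows roughly like (n/m) raised to the subspace dimension, so the sketch is only practical when the intrinsic dimension is small and inliers are not too rare.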
10
n s f p idle programs idle sensors p p p parallel composition sequential composition n termination composition install v method invocation module update p m p r b p m p r b s sensor broadcast sensor sense in p if v then p else p field sensing conditional execution s arxiv dec network sensors and field off m li pi t net this modules v method collection x values variable targets m p field measure position broadcast local b m battery capacity module figure the syntax of csn e n p m p r b e e this section addresses the syntax and the semantics of the calculus for sensor networks the syntax of the calculus is given by the grammar in figure the calculus encompasses a structure networks and programs networks n are flat unstructured collections of sensors and values a sensor p m p r represents an abstraction of a physical sensing device b located at position p and running program p module m is the collection of methods that the sensor makes available for internal and for external usage typically this collection of methods may be interpreted as the library of functions of the tiny operating system installed in the sensor sensors may only broadcast values to its neighborhood sensors radius rt defines the transmitting power of a sensor and specifies the border of communication a circle centered at position p the position of the sensor with radius rt likewise radius rs defines the sensing capability of the sensor meaning that a sensor may only read values inside the circle centered at position p with radius rs values ip define the field of measures that may be sensed a value consists of a tuple denoting the strength of the measure at a given position p of the plane values are managed by the environment in csn there are no primitives for manipulating values besides reading sensing values we assume that the environment inserts these values in the network and update its contents networks are combined using the parallel composition operator processes are built from the inactive process idle and from idle denotes a terminated thread and sensing values from the environment sensed th programs p and q may be combined in sequence p q or in parallel p q the sequential composition p q designates a program that first executes p and then proceeds with the execution of q in contrast p q represents the simultaneous execution of p and q however we consider that sensors support only a very limited form of parallelism p and q do not interact during their execution mutually recursive method definitions makes possible to represent infinite behaviours values are the data exchanged between sensors and are basic values b method labels l positions p and modules m notice that the calculus in not in the sense that communication of modules as an example consider a programming examples in this section we present some examples programmed in csn of typical operations performed on networks of sensors our goal is to show the expressiveness of the csn calculus just presented and also to identify some other aspects of these networks that may be interesting to model in the following examples we denote as msensor and msink the modules installed in any of the anonymous sensors in the network and the modules installed in the sink respectively note also that all sensors are assumed to have a builtin method deploy that is responsible for installing new modules the intuition is that this method is part of the tiny operating system that allows sensors to react when first placed in the field finally we assume in these small examples that the 
network layer supports scoped flooding we shall see in the next section that this can be supported via software with the inclusion of state in sensors ping we start with a very simple ping program each sensor has a ping method that when invoked calls a method forward in the network with its position and battery charge as arguments when the method forward is invoked by a sensor in the network it just triggers another call to forward in the network the sink has a distinct implementation of this method any incomming invocation logs the position and battery values given as arguments so the overall result of the call in the sink is that all reachable sensors in the network will in principle receive this call and will flood the network with their positions and battery charge values these values eventually reach the sink and get logged msensor p b ping net f o r w a r d p b net p i n g forward x y net f o r w a r d x y m li pi b cin p r this m b pi m p r li dom m this m p r v m p r b this b d p r net b cout p r m p r b s p m net m p r v m r b s p this p r net m p r b s m s install b cin p r m p r b m m b cin sense in p m p r x m p r b p f p s s s s s f s f figure reduction semantics for processes and msink p b forward x y log position and power x y net p i n g msink p b p r b i d l e msensor i d l e msensor pn bn pn rn bn querying this example shows how we can program a network with a sink that periodically queries the network for the readings of the sensors each sensor has a sample method that samples the field using the sense construct and calls the method forward in the neighbourhood with its position and the value sampled as arguments the call then queries the neighbourhood recursively with a replica of the original call the original call is of course made from the sink which has a method start sample that calls the method sample in the network within a cycle note that if the sink had a method named sample instead of start sample it might get a call to sample from elsewhere in the network that could interfere with the sampling control cycle msensor p sample s e n s e x i n net f o r w a r d p x net s a m p l e forward x y net f o r w a r d x y msink p start sample net s a m p l e t h i s s t a r t s a m p l e forward x y log position and value x y t h i s s t a r t s a m p l e msink p p r b i d l e msensor i d l e msensor pn pbnn rn polling in this example the cycle of the sampling is done in each sensor instead of in the sink as in the previous example the sink just invokes the method start sample once this method propagates the call through the network and invokes sample for each sensor this method samples the field within a cycle and forwards the result to the network this implementation requires less broadcasts than the previous one as the sink only has to call start sample on the network once on the other hand it increases the amount of processing per sensor msensor p start sample net s t a r t s a m p l e t h i s s a m p l e sample s e n s e x i n net f o r w a r d p x t h i s s a m p l e forward x y net f o r w a r d x y msink p forward x y log position and value x y net s t a r t e x a m p l e msink p p r b i d l e msensor i d l e msensor pn pbnn rn code deployment the above examples assume we have some means of deploying the code to the sensors in this example we address this problem and show how it can be programmed in csn the code we wish to deploy and execute is the same as the one in the previous example to achieve this goal the sink first calls the deploy method on the network 
to install the new module with the methods start sample sample and forward as above this call recursively deploys the code to the sensors in the network the sink then calls start sample to start the sampling again as above and waits for the forwarded results on the method forward msensor p deploy x i n s t a l l x net d e p l o y x msink p forward x y log position and value x y net d e p l o y start sample net s t a r t s a m p l e t h i s s a m p l e sample s e n s e x i n net f o r w a r d p x t h i s s a m p l e forward x y net f o r w a r d x y net s t a r t s a m p l e msink p p r b i d l e msensor i d l e msensor pn pbnn rn a refined version of this code one that avoids the start sample method completely can be programmed here we deploy the code for all sensors by sending methods sample and forward to all the sensors in the network by invoking deploy once deployed the code is activated with a call to sample in the sink instead of using the start sample method as above msensor p deploy x i n s t a l l x net d e p l o y x msink p forward x y log position and value x y net d e p l o y sample net s a m p l e i n s t a l l s a m p l e s e n s e x i n net f o r w a r d p x t h i s sample t h i s sample forward x y net f o r w a r d x y net s a m p l e msink p p r b i d l e msensor i d l e msensor pn pbnn rn notice that the implementation of the method sample has changed here when the method is executed for the first time at each sensor it starts by propagating the call to its neighborhood and then it changes itself through an install call the newly installed code of sample is the same as the one in the first implementation of the example the method then continues to execute and calls the new version of sample which starts sampling the field and forwarding values sealing sensors this example shows how we can install a sensor network with a module that contains a method seal that prevents any further dynamic of the sensors preventing anyone from tampering with the installed code the module also contains a method unseal that restores the original deploy method thus allowing dynamic again the sink just installs the module containning these methods in the network by broadcasting a method call to deploy each sensor that receives the call installs the module and floods the neighborhood with a replica of the call another message by the sink then replaces the deploy method itself and it to idle this prevents any further instalation of software in the sensors and thus effectively seals the network from external interaction other than the one allowed by the remainder of the methods in the modules of the sensors msensor deploy x msink net d e p l o y seal unseal net s e a l msink i d l e msensor i n s t a l l x net d e p l o y x i n s t a l l deploy idle i n s t a l l d e p l o y x i n s t a l l x net d e p l o y x p r b i d l e msensor pbnn rn september a calculus for sensor networks miguel francisco and departamento de de computadores liacc faculdade de da universidade do porto portugal arxiv dec departamento de faculdade de da universidade de lisboa portugal abstract we consider the problem of providing a rigorous model for programming wireless sensor networks assuming that collisions packet losses and errors are dealt with at the lower layers of the protocol stack we propose a calculus for sensor networks csn that captures the main abstractions for programming applications for this class of devices besides providing the syntax and semantics for the calculus we show its expressiveness by providing 
implementations for several examples of typical operations on sensor networks also included is a detailed discussion of possible extensions to csn that enable the modeling of other important features of these networks such as sensor state sampling strategies and network security keywords sensor networks networks ubiquitous computing programming languages i ntroduction a the sensor network challenge sensor networks made of tiny devices capable of sensing the physical world and communicating over radio links are significantly different from other wireless networks a the design of a sensor network is strongly driven by its particular application b sensor nodes are highly constrained in terms of power consumption and computational resources cpu memory and c sensor applications require and distributed software updates without human intervention previous work on fundamental aspects of wireless sensor networks has mostly focused on models in which the sensor nodes are assumed to store and process the data coordinate their transmissions organize the routing of messages within the network and relay the data to a remote receiver see draft fig a wireless sensor network is a collection of small devices that once deployed on a target area organize themselves in an network collect measurements of a physical process and transmit the data over the wireless medium to a data fusion center for further processing and references therein although some of these models provide useful insights into the connectivity characteristics or the overall power efficiency of sensor networks there is a strong need for formal methods that capture the inherent processing and memory constraints and illuminate the massively parallel nature of the sensor nodes processing if well adapted to the specific characteristics of sensor networks a formalism of this kind specifically a process calculus is likely to have a strong impact on the design of operating systems communication protocols and programming languages for this class of distributed systems in terms of hardware development the is well represented by a class of sensor nodes called which were originally developed at uc berkeley and are being deployed and tested by several research groups and companies in most of the currently available trademark of crossbow technology mentations the sensor nodes are controlled by operating systems such as tinyos and programming languages like nesc or in our view the programming models underlying most of these tools have one or more of the following drawbacks they do not provide a rigorous model or a calculus of the sensor network at the programming level which would allow for a formal verification of the correctness of programs among other useful analysis they do not provide a global vision of a sensor network application as a specific distributed application making it less intuitive and error prone for programmers they require the programs to be installed on each sensor individually something unrealistic for large sensor networks they do not allow for dynamic of the network recent middleware developments such as deluge and agilla address a few of these drawbacks by providing higher level programming abstractions on top of tinyos including massive code deployment nevertheless we are still far from a comprehensive programming solution with strong formal support and analytical capabilities the previous observation motivates us to design a sensor network programming model from scratch beyond meeting the challenges of programming and code 
deployment the model should be capable of producing quantitative information on the amount of resources required by sensor network programs and protocols and also of providing the necessary tools to prove their correctness b related work given the distributed and concurrent nature of sensor network operations we build our sensor network calculus on thirty years of experience gathered by concurrency theorists and programming language designers in pursuit of an adequate formalism and theory for concurrent systems the first steps towards this goal were given by milner with the development of ccs calculus of communicating systems ccs describes computations in which concurrent processes may interact through simple synchronization without otherwise exchanging information allowing processes to exchange resources links memory references sockets code besides synchronizing considerably increases the expressive power of the formal systems such systems known as are able to model the mobility patterns of the resources and thus constitute valuable tools to reason about concurrent distributed systems the first such system built on milner s work was the later developments of this initial proposal allowed for further simplification an provided an asynchronous form of the calculus since then several calculi have been proposed to model concurrent distributed systems and for many there are prototype implementations of programming languages and systems join tyco and nomadic pict previous work by prasad established the first process calculus approach to modeling broadcast based systems later work by prasad and taha established the basis for a calculus for broadcasting systems the focus of this line of work lies in the protocol layer of the networks trying to establish an operational semantics and associated theory that allows assertions to be made about the networks more recently mezzetti and sangiorgi discuss the use of process calculi to model wireless systems again focusing on the details of the lower layers of the protocol stack collision avoidance and establishing an operational semantics for the networks our contributions our main contribution is a sensor network programming model based on a process calculus which we name calculus of sensor networks csn our calculus offers the following features that are specifically tailored for sensor networks approach csn focuses on programming and managing sensor networks and so it assumes that collisions losses and errors have been dealt with at the lower layers of the protocol stack and system architecture this distinguishes csn from the generic wireless network calculus presented in scalability csn offers the means to provide the sensor nodes with and abilities thus meeting the challenges of programming and managing a sensor network broadcast communication instead of the unicast communication of typical process calculi csn captures the properties of broadcast communication as favored by sensor networks with strong impact on their energy consumption topology network topology is not required to be programmed in the processes which would be unrealistic in the case of sensor networks communication constraints due to the power limitations of their wireless interface the sensor nodes can only communicate with their direct neighbors in the network and thus the notion of neighborhood of a sensor node the set of sensor nodes within its communication range is introduced directly in the calculus memory and processing constraints the typical limitations of sensor networks in 
terms of memory and processing capabilities are captured by explicitly modeling the internal processing or the intelligence of individual sensors local sensing naturally the sensors are only able to pick up local measurements of their environment and thus have geographically limited sensitivity to provide these features we devise csn as a calculus offering abstractions for data acquisition communication and processing the top layer is formed by a network of sensor nodes immersed in a scalar or vector field representing the physical process captured by the sensor nodes the sensor nodes are assumed to be running in parallel each sensor node is composed of a collection of labeled methods which we call a module and that represents the code that can be executed in the device a process is executed in the sensor node as a result of a remote procedure call on a module by some other sensor or seen from the point of view of the callee as a result of the reception of a message sensor nodes are multithreaded and may share state for example in a finally by adding the notions of position and range we are able to capture the nature of broadcast communication and the geographical limits of the sensor network applications the remainder of this paper is structured as follows the next section describes the syntax and semantics of the csn calculus section iii presents several examples of functionalities that can be implemented using csn and that are commonly required in sensor networks in section iv we discuss some design options we made and how we can extend csn to model other aspects of sensor networks finally section v presents some conclusions and directions for future work ii t he c alculus this section addresses the syntax and the semantics of the calculus for sensor networks for simplicity in the remainder of the paper we will refer to a sensor node or a sensor device in a network as a sensor the syntax is provided by the grammar in figure and the operational semantics is given by the reduction relation depicted in figures and n s f p sensors and field idle programs idle p parallel composition sensors p p sequential composition off termination method invocation n composition install v module update p m p r b sensor sense in p field sensing p m p r b s broadcast sensor if v then p else p conditional execution s m li pi t net this fig network modules v values x method collection variable m field measure targets p position broadcast b battery capacity local m module the syntax of csn syntax let denote a possible empty sequence of elements of the syntactic category assume a countable set of labels ranged over by letter l used to name methods within modules and a countable set of variables disjoint from the set of labels and ranged over by letter variables stand for communicated values battery capacity position field measures modules in a given program context the syntax for cns is found in figure we explain the syntactic constructs along with their informal intuitive semantics refer to the next section for a precise semantics of the calculus networks n denote the composition of sensor networks s with a scalar or vector field f a field is a set of pairs position measure describing the distribution of some physical quantity temperature pressure humidity in space the position is given in some coordinate system sensors can measure the intensity of the field in their respective positions sensor networks s are flat unstructured collections of sensors combined using the parallel composition operator a sensor p m 
p r b represents an abstraction of a physical sensing device and is parametric in its position p describing the location of the sensor in some coordinate system its transmission range specified by the radius r of a circle centered at position p and its battery capacity b the position of the sensors may vary with time if the sensor is mobile in some way the transmission range on the other hand usually remains constant over time a sensor with the battery exhausted is designated by off inside a sensor there exists a running program p and a module m a module is a collection of methods defined as l p that the sensor makes available for internal and for external usage a method is identified by label l and defined by an abstraction p a program p with parameters method names are pairwise distinct within a module mutually recursive method definitions make it possible to represent infinite behavior intuitively the collection of methods of a sensor may be interpreted as the function calls of some tiny operating system installed in the sensor communication in the sensor network only happens via broadcasting values from one sensor to its neighborhood the sensors inside a circle centered at position p the position of the sensor with radius r a broadcast sensor p m p r b s stands for a sensor during the broadcast phase having already communicated with sensors s while broadcasting it is fundamental to keep track of the sensors engaged in communication so far thus preventing the delivery of the same message to the same sensor during one broadcasting operation target sensors are collected in the bag of the sensor emitting the message upon finishing the broadcast the bag is emptied out and the target sensors are released into the network this construct is a construct and is available to the programmer programs are ranged over by p the idle program denotes a terminated thread method invocation selects a method v with arguments either in the local module or broadcasts the request to the neighborhood sensors depending whether t is the keyword this or the keyword net respectively program sense in p reads a measure from the surrounding field and binds it to within p installing or replacing methods in the sensor s module is performed using the construct install v the calculus also offers a standard form of branching through the if v then p else p construct programs p and q may be combined in sequence p q or in parallel p q the sequential composition p q designates a program that first executes p and then proceeds with the execution of q in contrast p q represents the simultaneous execution of p and q values are the data exchanged between sensors and comprise field measures m positions p battery capacities b and modules m notice that this is not a calculus communicating a module means the ability to transfer its code to to retransmit it from or to install it in a remote sensor b examples our first example illustrates a network of sensors that sample the field and broadcast the measured values to a special node known as the sink the sink node may be no different from the other sensors in the network except that it usually possesses a distinct software module that allows it to collect and process the values broadcasted in the network the behavior we want to program is the following the sink issues a request to the network to sample the field upon reception of the request each sensor samples the field at its position and broadcasts the measured value back to the sink the sink receives and processes the values an 
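As a side note, one way to make the syntactic categories above concrete is to view them as plain data. The following Python sketch is only illustrative and is not part of the calculus; all class and field names are our own assumptions. It records a field as a map from positions to measures, a module as a map from labels to method bodies, and a sensor as a running program paired with a module, a position, a transmission radius, and a battery capacity.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Position = Tuple[float, float]
Field = Dict[Position, float]             # position -> measured value (the physical process F)
Module = Dict[str, Callable[..., None]]   # label -> method body (the abstraction behind l(x...) = P)

@dataclass
class Sensor:
    program: List[object]   # pending instructions, i.e. the running program P
    module: Module          # the methods the sensor exposes, i.e. M
    position: Position      # p
    radius: float           # transmission range r
    battery: float          # remaining capacity b

@dataclass
class Network:
    sensors: List[Sensor]   # flat, unstructured parallel composition S
    field: Field            # the field the sensors are immersed in

Representing the network as a flat list of sensors mirrors the unstructured parallel composition of S described above.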
extended version of this example may be found in section the code for the modules of the sensors msensor p r and for the sink msink p r is given below both modules are parametric in the position and in the broadcasting range of each sensor as for the module equipping the sensors it has a method sample that when invoked propagates the call to its neighborhood samples the field sense x in and forwards the value to the network p x notice that each sensor propagates the original request from the sink this is required since in general most of the sensors in the network will be out of broadcasting range from the sink therefore each sensor echos the request hopefully covering all the network message forwarding will be a recurrent pattern found in our examples another method of the sensors module is forward that simply forwards the values from other sensors through the network the module for the sink contains a different implementation of the forward method since the sink will gather the values sent by the sensors and will log them here we leave unspecified the processing done by the log position and value program the network with all sensors idle except for the sink that requests a sampling msensor p r sample net sample sense x i n net f o r w a r d p x f o r w a r d x y net f o r w a r d x y msink p r forward x y l o g p o s i t i o n a n d v a l u e x y net sample msink p r i d l e msensor p r b i d l e msensor pn rn pn rn bn the next example illustrates the broadcast the deployment and the installation of code the example runs as follows the sink node deploys some module in the network m and then seals the sensors henceforth preventing any dynamic of the network an extended version of the current example may be found in section the code for the modules of the sensors and of the sink is given below the module m is the one we wish to deploy to the network it carries the method seal that forwards the call to the network and installs a new version of deploy that does nothing when executed msensor p r deploy x net deploy x i n s t a l l x msink p r m seal net deploy m net s e a l msink p r i d l e msensor i n s t a l l deploy i d l e net s e a l p r b i d l e msensor pn rn pn rn bn semantics the calculus has two name bindings field sensing and method definitions the displayed occurrence of name xi is a binding with scope p both in sense xi xn in p and in l xi xn p an occurrence of a name is free if it is not in the scope of a binding otherwise the occurrence of the name is bound the set of free names of a sensor s is referred as fn s following milner we present the reduction relation with the help of a structural congruence relation the structural congruence relation depicted in figure allows for the manipulation of term structure adjusting to reduce the relation is defined as the smallest congruence relation on sensors and programs closed under the rules given in figure the parallel composition operators for programs and for sensors are taken to be commutative and associative with idle and off as their neutral elements respectively vide rules monoid rogram and monoid ensor rule idle seq asserts that idle is also neutral with respect to sequential composition of programs rule program stru incorporates structural congruence for programs into sensors when a sensor is broadcasting a message it uses a bag to collect the sensors as they become p idle p monoid rogram s off s monoid ensor idle p p p r p m p r b p m b off fig p r m p r b m b b max cin cout p m p r b off idle seq program stru broadcast 
exhausted structural congruence for processes and sensors engaged in communication rule broadcast allows for a sensor to start the broadcasting operation a terminated sensor is a sensor with insufficient battery capacity for performing an internal or an external reduction step vide rule exhausted the reduction relation on networks notation s f s f describes how sensors s can evolve reduce to sensors s sensing the field f the reduction is defined on top of a reduction relation for sensors notation s s inductively defined by the rules in figure the reduction for sensors is parametric on field f and on two constants cin and cout that represent the amount of energy consumed when performing internal computation steps cin and when broadcasting messages cout computation inside sensors proceeds by invoking a method either method and rno method broadcast and release by sensing values rule sense and by updating the method collection of the sensor rule install the invocation of a local method li with arguments evolves differently depending on whether or not the definition for li is part of the method collection of the sensor rule method describes the invocation of a method from module m defined as m li pi the result is the program pi where the values are bound to the variables in when the definition for li is not present in m we have decided to actively wait for the definition see rule no method usually invoking an undefined method causes a program to get stuck typed programming languages use a type system to ensure that there are no invocations to undefined methods ruling out all other programs at compile time at runtime another possible choice would be to simply discard invocations to undefined methods our choice provides more resilient applications when coupled with the procedure for deploying code in a sensor m li pi b cin this m p r p v xi m p r f i b method li dom m this m p r v m p r b this b d p r no method b cout broadcast p r net m p r f b s p m net m p r v m r b s p this p r net m p r b s m s release b cin p r m p r b m m install install sense in p s s s s s s b cin p f p m p r s s s s s s s f s f fig sense m p r b parallel structural network reduction semantics for processes and networks network we envision that if we invoke a method in the network after some code has been deployed see example there may be some sensors where the method invocation arrives before the deployed code with the semantics we propose the call actively waits for the code to be installed sensors communicate with the network by broadcasting messages a message consists of a remote method invocation on unspecified sensors in the neighborhood of the emitting sensor in other words the messages are not targeted to a particular sensor there is no communication the neighborhood of a sensor is defined by its communication radius but there is no guarantee that a message broadcasted by a given sensor arrives at all surrounding sensors there might be for instance landscape obstacles that prevent two sensors otherwise within range from communicating with each other also during a broadcast operation the message must only reach each neighborhood sensor once notice that we are not saying that the same message can not reach the same sensor multiple times in fact it might but as the result of the echoing of the message in subsequent broadcast operations we model the broadcasting of messages in two stages rule broadcast invokes method li in the remote sensor provided that the distance between the emitting and the receiving sensors is 
less that the transmission radius d p r the sensor receiving the message is put in the bag of the emitting sensor thus preventing multiple deliveries of the same message while broadcasting observe that the rule does not enforce the interaction with all sensors in the neighborhood rule release finishes the broadcast by consuming the operation net and by emptying out the contents of the emitting sensor s bag a broadcast operation starts with the application of rule broadcast proceeds with multiple eventually none applications of rule broadcast one for each target sensor and terminates with the application of rule release installing module m in a sensor with a module m rule install amounts to add to m the methods in m absent in m and to replace in m the methods common to both m and m rigorously the operation of installing module m on top of m denoted m m may be defined as m m m m m the operator is reminiscent of abadi and cardelli s operator for updating methods in their imperative object calculus a sensor senses the field in which it is immersed rule sense by sampling the value of the field f in its position p and continues the computation replacing this value for the bound variables in program p rule parallel allows reduction to happen in networks of sensors and rule structural brings structural congruence into the reduction relation the operational semantics illustrated to illustrate the operational semantics of cns we present the reduction steps for the examples discussed at the end of section during reduction we suppress the side annotations when writing the sensors due to space constraints we consider a rather simple network with just the sink and another sensor net sample msink p r i d l e msensor we assume that the sensor is within range from the sink and this network may reduce as follows msink p r idle msensor broadcast msink p r off idle msensor d p r broadcast monoid ensor msink p r idle msensor release monoid rogram idle msink p r msensor method idle msink p r sense x in x msensor broadcast idle msink p r sense x in x msensor off release idle msink p r sense x in x msensor idle msink p r f msensor sense monoid ensor f msensor idle msink p r broadcast f msensor off idle msink p r d p broadcast monoid ensor f msensor f idle msink p r release monoid rogram idle msensor f msink p r idle msensor log position and value f msink p r method monoid ensor log position and value f idle msensor msink p r so after these reduction steps the sink gets the field values from the sensor at position and logs them the sensor at is idle waiting for further interaction following we present the reduction step for our second and last example of section where we illustrate the broadcast the deployment and the installation of code again due to space restrictions we use a very simple network with just the sink and another sensor both within reach of each other net deploy m net s e a l msink p r i d l e msensor this network may reduce as follows m msink p r idle msensor broadcast m msink p r off idle msensor d p r broadcast monoid ensor m msink p r m idle msensor release monoid rogram msink p r m msensor msink p r m install m msensor method broadcast msink p r m install m msensor off release monoid ensor msink p r install m msensor install msink p r idle msensor broadcast msink p r off idle msensor d p r broadcast monoid ensor msink p r idle msensor release monoid rogram idle msink p r msensor method idle msink p r install deploy idle msensor broadcast idle msink p r install deploy idle msensor off release 
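Two side conditions do most of the work in the traces above: the broadcast rule's distance check and the install rule's module update. The sketch below is a rough Python illustration only, under our own naming assumptions, and is not the calculus itself: install behaves like a map merge in which the deployed module wins on common labels, and a message reaches a sensor only if it lies within the emitter's radius.

import math
from typing import Callable, Dict, Tuple

Module = Dict[str, Callable[..., None]]
Position = Tuple[float, float]

def install(current: Module, deployed: Module) -> Module:
    # Sketch of the module update: methods of the existing module that are not
    # redefined survive; every method of the deployed module is added, and
    # common labels are overwritten by the new code.
    merged = dict(current)
    merged.update(deployed)
    return merged

def in_range(emitter: Position, radius: float, receiver: Position) -> bool:
    # Sketch of the broadcast side condition d(p, p') <= r: only sensors inside
    # the circle of radius r centered at the emitter may receive the message.
    return math.dist(emitter, receiver) <= radius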
monoid ensor idle msink p r install deploy idle msensor install idle msink p r idle msensor deploy idle after these reductions the sink is idle after deploying the code to the sensor at the sensor at p is also idle waiting for interaction but with the code for the module m installed and with the deploy method disabled iii p rogramming e xamples in this section we present some examples programmed in csn of typical operations performed on networks of sensors our goal is to show the expressiveness of the csn calculus just presented and also to identify some other aspects of these networks that may be interesting to model in the following examples we denote as msensor and msink the modules installed in any of the anonymous sensors in the network and the modules installed in the sink respectively note also that all sensors are assumed to have a builtin method deploy that is responsible for installing new modules the intuition is that this method is part of the tiny operating system that allows sensors to react when first placed in the field finally we assume in these small examples that the network layer supports scoped flooding we shall see in the next section that this can be supported via software with the inclusion of state in sensors ping we start with a very simple ping program each sensor has a ping method that when invoked calls a method forward in the network with its position and battery charge as arguments when the method forward is invoked by a sensor in the network it just triggers another call to forward in the network the sink has a distinct implementation of this method any incomming invocation logs the position and battery values given as arguments so the overall result of the call in the sink is that all reachable sensors in the network will in principle receive this call and will flood the network with their positions and battery charge values these values eventually reach the sink and get logged msensor p b ping net f o r w a r d p b net ping forward x y net f o r w a r d x y msink p b forward x y log position and power x y net ping msink p b i d l e msensor p r b i d l e msensor pn bn pn rn bn querying this example shows how we can program a network with a sink that periodically queries the network for the readings of the sensors each sensor has a sample method that samples the field using the sense construct and calls the method forward in the neighbourhood with its position and the value sampled as arguments the call then queries the neighbourhood recursively with a replica of the original call the original call is of course made from the sink which has a method start sample that calls the method sample in the network within a cycle note that if the sink had a method named sample instead of start sample it might get a call to sample from elsewhere in the network that could interfere with the sampling control cycle msensor p sample sense x i n net f o r w a r d p x net sample forward x y net f o r w a r d x y msink p start sample net sample t h i s s t a r t s a m p l e forward x y log position and value x y t h i s s t a r t s a m p l e msink p i d l e msensor p r b i d l e msensor pn pn rn bn polling in this example the cycle of the sampling is done in each sensor instead of in the sink as in the previous example the sink just invokes the method start sample once this method propagates the call through the network and invokes sample for each sensor this method samples the field within a cycle and forwards the result to the network this implementation requires less 
broadcasts than the previous one as the sink only has to call start sample on the network once on the other hand it increases the amount of processing per sensor msensor p start sample net s t a r t s a m p l e t h i s sample sample sense x i n net f o r w a r d p x t h i s sample forward x y net f o r w a r d x y msink p x y log position and value x y forward net s t a r t e x a m p l e msink p i d l e msensor p r b i d l e msensor pn pn rn bn code deployment the above examples assume we have some means of deploying the code to the sensors in this example we address this problem and show how it can be programmed in csn the code we wish to deploy and execute is the same as the one in the previous example to achieve this goal the sink first calls the deploy method on the network to install the new module with the methods start sample sample and forward as above this call recursively deploys the code to the sensors in the network the sink then calls start sample to start the sampling again as above and waits for the forwarded results on the method forward msensor p deploy x i n s t a l l x net deploy x x y log position and value x y start sample net s t a r t s a m p l e t h i s sample sample sense x i n net f o r w a r d p x t h i s sample forward x y net f o r w a r d x y msink p forward net deploy net s t a r t s a m p l e msink p i d l e msensor p r b i d l e msensor pn pn rn bn a refined version of this code one that avoids the start sample method completely can be programmed here we deploy the code for all sensors by sending methods sample and forward to all the sensors in the network by invoking deploy once deployed the code is activated with a call to sample in the sink instead of using the start sample method as above msensor p deploy x i n s t a l l x net deploy x msink p forward x y log position and value x y net deploy sample net sample i n s t a l l sample sense x i n net f o r w a r d p x t h i s sample t h i s sample forward x y net f o r w a r d x y net sample msink p i d l e msensor p r b i d l e msensor pn pn rn bn notice that the implementation of the method sample has changed here when the method is executed for the first time at each sensor it starts by propagating the call to its neighborhood and then it changes itself through an install call the newly installed code of sample is the same as the one in the first implementation of the example the method then continues to execute and calls the new version of sample which starts sampling the field and forwarding values sealing sensors this example shows how we can install a sensor network with a module that contains a method seal that prevents any further dynamic of the sensors preventing anyone from tampering with the installed code the module also contains a method unseal that restores the original deploy method thus allowing dynamic again the sink just installs the module containning these methods in the network by broadcasting a method call to deploy each sensor that receives the call installs the module and floods the neighborhood with a replica of the call another message by the sink then replaces the deploy method itself and it to idle this prevents any further instalation of software in the sensors and thus effectively seals the network from external interaction other than the one allowed by the remainder of the methods in the modules of the sensors msensor deploy x i n s t a l l x net deploy x msink net deploy seal i n s t a l l deploy idle uns eal i n s t a l l deploy x i n s t a l l x net deploy x net s e a l 
, msink [p, r, b] | idle , msensor(p1, r1) [p1, r1, b1] | ... | idle , msensor(pn, rn) [pn, rn, bn]

IV. Discussion

In the previous sections we focused our attention on the programming issues of a sensor network and presented a core calculus that is expressive enough to model fundamental operations such as local broadcast of messages, local sensing of the environment, and software module updates. CSN allows the global modeling of sensor networks, in the sense that it lets us design and implement sensor network applications as distributed applications, rather than giving the programmer a node-by-node view of the programming task. It also provides the tools to manage running sensor networks, namely through the software deployment capabilities. There are other important features of sensor networks that we consciously left out of CSN. In the sequel we discuss some of these features and sketch some ideas on how we would include support for them.

A. State

From a programming point of view, adding state to sensors is essential. Sensors have some limited computational capabilities and may perform some data processing before sending it to the sink; this processing assumes that the sensor is capable of buffering data and thus of maintaining some state. In a way, CSN sensors already have state: the attributes p, b and r may be viewed as sensor state. Since these are characteristic of each sensor and are usually controlled at the hardware level, we chose to represent this state as parameters of the sensors. The programmer may read these values at any time through built-in method calls, but any change to this data is performed transparently for the programmer by the hardware or the operating system. As we mentioned before, it is clear that the value of b changes with time. The position p may also change with time, if we envision our sensors endowed with some form of mobility (sensors dropped in the atmosphere or flowing in the ocean).

To allow for a more systematic extension of our sensors with state variables, we can assume that each sensor has a heap h where the values of these variables are stored, writing the sensor as (h ; P , M) [p, r, b]. The model chosen for this heap is orthogonal to our sensor calculus; for this discussion we assume that we enrich the values v of the language with a set of keys and define the heap as a map h from keys into values. Intuitively, we can think of it as an associative memory with the usual operations put, get, lookup, and hash. Programs running in the sensors may share state by exchanging keys. We also assume that these operations are atomic, so that no race conditions can arise.

With this basic model for a heap we can rewrite the ping example from Section III with scoped flooding, thus eliminating echoes by software. We do this by associating a unique key to each remote procedure call broadcast to the network; this key is created through the hash function, which takes as arguments the position p and the battery b of the sensor. Each sensor, after receiving a call to ping, propagates the call to its neighborhood and generates a new key to send, together with its position and battery charge, in a forward call; it then stores the key in its heap to avoid forwarding its own forward call. On the other hand, each time a sensor receives a call to forward, it checks whether it has the key associated with the call in its heap: if so, it does nothing; if not, it forwards the call and stores the key in the heap to avoid future echoes.

  msensor(p, b) =
    ping() = net ping() | let k = hash(p, b) in (net forward(p, b, k) | put(k))
    forward(x, y, k) = if lookup(k) then idle else (net forward(x, y, k) | put(k))

  net ping() , msink(p, b) [p, r, b] | idle , msensor(p1, b1) [p1, r1, b1] | ... | idle , msensor(pn, bn) [pn, rn, bn]
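The de-duplication scheme just described can also be written imperatively. The sketch below is only an illustration under our own assumptions (the key format, the use of a Python set as the heap, and all names are ours, not part of CSN): a forward request carries a key derived from the originating sensor's position and battery, and a sensor re-broadcasts a request only the first time it sees that key.

import hashlib
from typing import Callable, Set, Tuple

Position = Tuple[float, float]

def make_key(position: Position, battery: float) -> str:
    # Sketch of the hash operation: derive a practically unique key for one
    # broadcast from the originating sensor's position and battery level.
    return hashlib.sha1(repr((position, battery)).encode()).hexdigest()

class SensorState:
    # Per-sensor heap, reduced to the single set of keys needed for scoped flooding.
    def __init__(self) -> None:
        self.seen: Set[str] = set()   # keys already handled (put/lookup)

    def on_forward(self, key: str, rebroadcast: Callable[[str], None]) -> None:
        if key in self.seen:          # lookup(k): already forwarded, drop the echo
            return
        self.seen.add(key)            # put(k): remember the key
        rebroadcast(key)              # propagate the call to the neighborhood once

Only the sensor that answers the original ping creates a key; every other sensor merely checks and records it, which is what bounds the flooding.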
B. Events

Another characteristic of sensors is their modus operandi. Some sensors sample the field as a result of instructions implemented in the software that controls them; such is the case with CSN sensors, where the programmer is responsible for controlling the sensing activity of the sensor network. It is of course possible for sensor nodes to be activated in different ways. For example, some may have their sensing routines implemented at hardware or operating-system level, and thus not directly controllable by the programmer. Such classes of sensor nodes typically sample the field periodically and are activated when a given condition arises: a temperature above or below a given threshold, the detection of … above a given threshold, the detection of a strong source of infrared light. The way in which certain environmental conditions or events can activate the sensor is by triggering the execution of a handler procedure that processes the event.

Support for this kind of sensors in CSN could be achieved by assuming that each sensor has a built-in handler procedure, say handle, for such events. The handler procedure, when activated, receives the value of the field that triggered the event. Note that, from the point of view of the sensor, the occurrence of such an event is equivalent to the deployment of a method invocation handle(v) in its processing core, where v is the field value associated with the event. The sensor has no control over this deployment, but may be programmed to react in different ways to these calls by providing adequate implementations of the handle routine. The events could be included in the semantics given in Section II with a rule of the following form:

  P , M [p, r, b]  ->  this handle(f(p)) | P , M [p, r, b]   (event)

As in the case of the built-in method for code deployment, the handler could be programmed to change the behavior of the network in the presence of events. One could envision the default handler as handle(x) = idle, which ignores all events. We could then change this default behavior so that an event triggers an alarm that gets sent to the sink. A possible implementation of such a dynamic reconfiguration of the network's default handlers can be seen in the code below,

  msensor(p) =  handle(x) = idle
  msink(p)   =  handle(x) = idle
                alarm(x, y) = ring_bell(x, y)

  net deploy( handle(x) = net alarm(p, x) , alarm(x, y) = net alarm(x, y) ) , msink(p) [p, r, b]
    | idle , msensor [p1, r1, b1] | ... | idle , msensor(pn) [pn, rn, bn]

where the default implementation of the handle procedure is superseded by one that eventually triggers an alarm in the sink. More complex behavior could be modeled for sensors that take multiple readings, with a handler associated with each event.
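To make the event mechanism concrete, the following Python sketch (illustrative only; the threshold, the handler table, and all names are assumptions rather than part of CSN) shows a runtime that samples the field on its own and, when a reading crosses a threshold, injects a call to the built-in handle method, much as the event rule above deploys this handle(f(p)) into the running program.

from typing import Callable, Dict

class EventDrivenSensor:
    # Sketch of a hardware-triggered sensor: the runtime, not the programmer,
    # decides when handle runs.
    def __init__(self, threshold: float) -> None:
        self.threshold = threshold
        # The default handler ignores all events, like handle(x) = idle.
        self.module: Dict[str, Callable[[float], None]] = {"handle": lambda v: None}

    def install(self, methods: Dict[str, Callable[[float], None]]) -> None:
        # Deploying a new module can replace the default handler.
        self.module.update(methods)

    def on_sample(self, value: float) -> None:
        # The event rule: a reading above the threshold injects handle(v)
        # into the sensor's processing core.
        if value > self.threshold:
            self.module["handle"](value)

# Example: redefining the handler so an event raises an alarm toward the sink.
sensor = EventDrivenSensor(threshold=40.0)
sensor.install({"handle": lambda v: print("alarm", v)})
sensor.on_sample(42.5)   # prints: alarm 42.5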
C. Security

Finally, another issue that is of utmost importance in the management of sensor networks is security. It is important to note that many potential applications of sensor networks are in high-risk situations; examples may be the monitoring of ecological disaster areas, volcanic or seismic activity, and radiation levels in contaminated areas. Secure access to data is fundamental to establish its credibility and to correctly assess risks in the management of such episodes. In CSN we have not taken security issues into consideration; this was not our goal at this time. However, one feature of the calculus may provide interesting solutions for the future. In fact, in CSN all computation within a sensor results from an invocation of methods in the modules of a sensor, either originating in the network or from within the sensor. In a sense, the modules M of the sensor work as a firewall that can be used to control incoming messages and implement security protocols: all remote method invocations and software updates might first be validated locally with methods of the sensor's modules, and only then would the actions be performed. The idea of equipping sensors, or in general domains, with some kind of membrane that filters all the interactions with the surrounding network has been explored, for instance, in the Kell calculus, in the brane calculi, in MiKO, and in other recent proposals. One possible development is to incorporate some features of the membrane model into CSN. The current formulation of the calculus also assumes that all methods in the module M of a sensor P , M [p, r, b] are visible from the network. It is possible to implement an access policy to methods in such a way that some methods are private to the sensor, that is, they can only be invoked from within the sensor. This allows, for example, the complete encapsulation of the state of the sensor.

V. Conclusions and Future Work

Aiming at providing sensor networks with a rigorous and adequate programming model, upon which operating systems and programming languages can be built, we presented CSN, a calculus for sensor networks developed specifically for this class of distributed systems. After identifying the necessary sensing, processing, and wireless broadcasting features of the calculus, we opted to base our work on an abstraction of physical- and link-layer communication issues, in contrast with previous work on wireless network calculi, thus focusing on the system requirements for programming applications. This approach resulted in the CSN syntax and semantics, whose expressiveness we illustrated through a series of implementations of typical operations in sensor networks. Also included was a detailed discussion of possible extensions to CSN to account for other important properties of sensors, such as state, sampling strategies, and security.

As part of our ongoing efforts, we are currently using CSN to establish a mathematical framework for reasoning about sensor networks. One major objective of this work consists in providing formal proofs of correctness for data-gathering protocols that are commonly used in current sensor networks and whose performance and reliability have so far only been evaluated through computer simulations and experiments. From a more practical point of view, the focus will be set on the development of a prototype implementation of CSN. This prototype will be used to emulate the behavior of sensor networks by software and, ultimately, to port the programming model to a natural development architecture for sensor network applications.

Acknowledgements

The authors gratefully acknowledge insightful discussions with Gerhard Maierbacher, Departamento de Ciência de Computadores, Faculdade de Ciências, Universidade do Porto.
References

The TinyOS documentation project. Available at http.
Abadi and Cardelli. An imperative object calculus. In TAPSOFT: Theory and Practice of Software Development, LNCS.
Akyildiz, Su, Sankarasubramaniam, and Cayirci. A survey on sensor networks. IEEE Communications Magazine.
Barros and Servetto. Network information flow with correlated sources. IEEE Transactions on Information Theory.
Boudol. Asynchrony and the pi-calculus. Technical report, INRIA (Institut National de Recherche en Informatique et en Automatique).
Boudol. A generic membrane model. In Global Computing Workshop, LNCS, Springer-Verlag.
Cardelli. Brane calculi: interactions of biological membranes. In Proceedings of CMSB, LNCS.
Culler and Mulder. Smart sensors to network the world. Scientific American.
Fok, Roman, and Lu. Rapid development and flexible deployment of adaptive wireless sensor network applications. In Proceedings of the International Conference on Distributed Computing Systems (ICDCS), IEEE.
Fournet and Gonthier. The reflexive chemical abstract machine and the join-calculus. In ACM Symposium on Principles of Programming Languages (POPL), ACM.
Gay, Levis, von Behren, Welsh, Brewer, and Culler. The nesC language: a holistic approach to networked embedded systems. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI).
Gorla, Hennessy, and Sassone. Security policies as membranes in systems for global computing. In Proceedings of FGUC, ENTCS, Elsevier Science.
Honda and Tokoro. An object calculus for asynchronous communication. In Proceedings of ECOOP (European Conference on Object-Oriented Programming), LNCS.
Hu and Li. On the fundamental capacity and lifetime limits of wireless sensor networks. In Proceedings of the IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), Toronto, Canada.
Hui and Culler. The dynamic behavior of a data dissemination protocol for network programming at scale. In Proceedings of the International Conference on Embedded Networked Sensor Systems, ACM Press.
Pugliese, Bettini, and De Nicola. Klava: programming mobile code. TOSCA, Electronic Notes in Theoretical Computer Science, Elsevier.
Levis and Culler. A tiny virtual machine for sensor networks. In International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS X).
Martins, Salvador, Vasconcelos, and Lopes. MiKO: Mikado Koncurrent Objects. Technical report, Dagstuhl Seminar.
Mezzetti and Sangiorgi. Towards a calculus for wireless systems. In Proc. MFPS, ENTCS, Elsevier.
Milner. A Calculus of Communicating Systems.
Milner, Parrow, and Walker. A calculus of mobile processes, parts I and II. Information and Computation.
Prasad and Taha. Towards a primitive higher order calculus of broadcasting systems. In PPDP: International Conference on Principles and Practice of Declarative Programming.
Prasad. A calculus of broadcasting systems. In TAPSOFT.
Scaglione and Servetto. On the interdependence of routing and data compression in sensor networks. In Proc. ACM MobiCom, Atlanta, GA.
Schmitt and Stefani. The M-calculus: a distributed process calculus. In Proceedings of POPL, ACM Press.
Stefani. A calculus of kells. In Proceedings of FGC, Elsevier Science.
Vasconcelos, Lopes, and Silva. Distribution and mobility with lexical scoping in process calculi. In Workshop on High Level Programming Languages (HLCL), ENTCS, Elsevier Science.
Wojciechowski and Sewell. Nomadic Pict: language and infrastructure design for mobile agents. IEEE Concurrency.
networks such as sensor state sampling strategies and network security keywords sensor networks networks ubiquitous computing programming languages i ntroduction a the sensor network challenge sensor networks made of tiny devices capable of sensing the physical world and communicating over radio links are significantly different from other wireless networks a the design of a sensor network is strongly driven by its particular application b sensor nodes are highly constrained in terms of power consumption and computational resources cpu memory and c sensor applications require and distributed software updates without human intervention previous work on fundamental aspects of wireless sensor networks has mostly focused on models in which the sensor nodes are assumed to store and process the data coordinate their transmissions organize the routing of messages within the network and relay the data to a remote receiver see and references therein although some of these models provide useful insights into the draft fig a wireless sensor network is a collection of small devices that once deployed on a target area organize themselves in an network collect measurements of a physical process and transmit the data over the wireless medium to a data fusion center for further processing connectivity characteristics or the overall power efficiency of sensor networks there is a strong need for formal methods that capture the inherent processing and memory constraints and illuminate the massively parallel nature of the sensor nodes processing if well adapted to the specific characteristics of sensor networks a formalism of this kind specifically a process calculus is likely to have a strong impact on the design of operating systems communication protocols and programming languages for this class of distributed systems in terms of hardware development the is well represented by a class of sensor nodes called which were originally developed at uc berkeley and are being deployed and tested by several research groups and companies in most of the currently available implementations the sensor nodes are controlled by operating systems such as tinyos and programming languages like nesc or in our view the programming models trademark of crossbow technology inc draft underlying most of these tools have one or more of the following drawbacks they do not provide a rigorous model or a calculus of the sensor network at the programming level which would allow for a formal verification of the correctness of programs among other useful analysis they do not provide a global vision of a sensor network application as a specific distributed application making it less intuitive and error prone for programmers they require the programs to be installed on each sensor individually something unrealistic for large sensor networks they do not allow for dynamic of the network recent middleware developments such as deluge and agilla address a few of these drawbacks by providing higher level programming abstractions on top of tinyos including massive code deployment nevertheless we are still far from a comprehensive programming solution with strong formal support and analytical capabilities the previous observation motivates us to design a sensor network programming model from scratch beyond meeting the challenges of programming and code deployment the model should be capable of producing quantitative information on the amount of resources required by sensor network programs and protocols and also of providing the necessary tools to 
prove their correctness b related work given the distributed and concurrent nature of sensor network operations we build our sensor network calculus on thirty years of experience gathered by concurrency theorists and programming language designers in pursuit of an adequate formalism and theory for concurrent systems the first steps towards this goal were given by milner with the development of ccs calculus of communicating systems ccs describes computations in which concurrent processes may interact through simple synchronization without otherwise exchanging information allowing processes to exchange resources links memory references sockets code besides synchronizing considerably increases the expressive power of the formal systems such systems known as are able to model the mobility patterns of the resources and thus constitute valuable tools to reason about concurrent distributed systems the first such system built on milner s work was the later developments of this initial proposal allowed for further simplification an provided an asynchronous form of the calculus since then several calculi have been proposed to model concurrent distributed systems and for many there are draft prototype implementations of programming languages and systems join tyco and nomadic pict previous work by prasad established the first process calculus approach to modeling broadcast based systems later work by prasad and taha established the basis for a calculus for broadcasting systems the focus of this line of work lies in the protocol layer of the networks trying to establish an operational semantics and associated theory that allows assertions to be made about the networks more recently mezzetti and sangiorgi discuss the use of process calculi to model wireless systems again focusing on the details of the lower layers of the protocol stack collision avoidance and establishing an operational semantics for the networks our contributions our main contribution is a sensor network programming model based on a process calculus which we name calculus of sensor networks csn our calculus offers the following features that are specifically tailored for sensor networks approach csn focuses on programming and managing sensor networks and so it assumes that collisions losses and errors have been dealt with at the lower layers of the protocol stack and system architecture this distinguishes csn from the generic wireless network calculus presented in scalability csn offers the means to provide the sensor nodes with and abilities thus meeting the challenges of programming and managing a sensor network broadcast communication instead of the unicast communication of typical process calculi csn captures the properties of broadcast communication as favored by sensor networks with strong impact on their energy consumption topology network topology is not required to be programmed in the processes which would be unrealistic in the case of sensor networks communication constraints due to the power limitations of their wireless interface the sensor nodes can only communicate with their direct neighbors in the network and thus the notion of neighborhood of a sensor node the set of sensor nodes within its communication range is introduced directly in the calculus memory and processing constraints the typical limitations of sensor networks in terms of memory and processing capabilities are captured by explicitly modeling the internal processing or the intelligence of individual sensors draft local sensing naturally the sensors are 
only able to pick up local measurements of their environment and thus have geographically limited sensitivity to provide these features we devise csn as a calculus offering abstractions for data acquisition communication and processing the top layer is formed by a network of sensor nodes immersed in a scalar or vector field representing the physical process captured by the sensor nodes the sensor nodes are assumed to be running in parallel each sensor node is composed of a collection of labeled methods which we call a module and that represents the code that can be executed in the device a process is executed in the sensor node as a result of a remote procedure call on a module by some other sensor or seen from the point of view of the callee as a result of the reception of a message sensor nodes are multithreaded and may share state for example in a finally by adding the notions of position and range we are able to capture the nature of broadcast communication and the geographical limits of the sensor network applications the remainder of this paper is structured as follows the next section describes the syntax and semantics of the csn calculus section iii presents several examples of functionalities that can be implemented using csn and that are commonly required in sensor networks in section iv we discuss some design options we made and how we can extend csn to model other aspects of sensor networks finally section v presents some conclusions and directions for future work ii t he c alculus this section addresses the syntax and the semantics of the calculus for sensor networks for simplicity in the remainder of the paper we will refer to a sensor node or a sensor device in a network as a sensor the syntax is provided by the grammar in figure and the operational semantics is given by the reduction relation depicted in figures and syntax let denote a possible empty sequence of elements of the syntactic category assume a countable set of labels ranged over by letter l used to name methods within modules and a countable set of variables disjoint from the set of labels and ranged over by letter variables stand for communicated values battery capacity position field measures modules in a given program context the syntax for cns is found in figure we explain the syntactic constructs along with their informal intuitive semantics refer to the next section for a precise semantics of the calculus draft n s f p sensors and field idle programs idle p parallel composition sensors p p sequential composition off termination method invocation n composition install v module update p m p r b sensor sense in p field sensing p m p r b s broadcast sensor if v then p else p conditional execution s m li pi t net this fig network modules v method collection values x variable m field measure targets p position broadcast b battery capacity local m module the syntax of csn networks n denote the composition of sensor networks s with a scalar or vector field f a field is a set of pairs position measure describing the distribution of some physical quantity temperature pressure humidity in space the position is given in some coordinate system sensors can measure the intensity of the field in their respective positions sensor networks s are flat unstructured collections of sensors combined using the parallel composition operator a sensor p m p r b represents an abstraction of a physical sensing device and is parametric in its position p describing the location of the sensor in some coordinate system its transmission 
range specified by the radius r of a circle centered at position p and its battery capacity b the position of the sensors may draft vary with time if the sensor is mobile in some way the transmission range on the other hand usually remains constant over time a sensor with the battery exhausted is designated by off inside a sensor there exists a running program p and a module m a module is a collection of methods defined as l p that the sensor makes available for internal and for external usage a method is identified by label l and defined by an abstraction p a program p with parameters method names are pairwise distinct within a module mutually recursive method definitions make it possible to represent infinite behavior intuitively the collection of methods of a sensor may be interpreted as the function calls of some tiny operating system installed in the sensor communication in the sensor network only happens via broadcasting values from one sensor to its neighborhood the sensors inside a circle centered at position p the position of the sensor with radius r a broadcast sensor p m p r b s stands for a sensor during the broadcast phase having already communicated with sensors s while broadcasting it is fundamental to keep track of the sensors engaged in communication so far thus preventing the delivery of the same message to the same sensor during one broadcasting operation target sensors are collected in the bag of the sensor emitting the message upon finishing the broadcast the bag is emptied out and the target sensors are released into the network this construct is a construct and is available to the programmer programs are ranged over by p the idle program denotes a terminated thread method invocation selects a method v with arguments either in the local module or broadcasts the request to the neighborhood sensors depending whether t is the keyword this or the keyword net respectively program sense in p reads a measure from the surrounding field and binds it to within p installing or replacing methods in the sensor s module is performed using the construct install v the calculus also offers a standard form of branching through the if v then p else p construct programs p and q may be combined in sequence p q or in parallel p q the sequential composition p q designates a program that first executes p and then proceeds with the execution of q in contrast p q represents the simultaneous execution of p and q values are the data exchanged between sensors and comprise field measures m positions p battery capacities b and modules m notice that this is not a calculus communicating a module means the ability to transfer its code to to retransmit it from or to install it in a remote sensor b examples our first example illustrates a network of sensors that sample the field and broadcast the measured values to a special node known as the sink the sink node may be no different from the other sensors draft in the network except that it usually possesses a distinct software module that allows it to collect and process the values broadcasted in the network the behavior we want to program is the following the sink issues a request to the network to sample the field upon reception of the request each sensor samples the field at its position and broadcasts the measured value back to the sink the sink receives and processes the values an extended version of this example may be found in section the code for the modules of the sensors msensor p r and for the sink msink p r is given below both modules are 
parametric in the position and in the broadcasting range of each sensor as for the module equipping the sensors it has a method sample that when invoked propagates the call to its neighborhood samples the field sense x in and forwards the value to the network p x notice that each sensor propagates the original request from the sink this is required since in general most of the sensors in the network will be out of broadcasting range from the sink therefore each sensor echos the request hopefully covering all the network message forwarding will be a recurrent pattern found in our examples another method of the sensors module is forward that simply forwards the values from other sensors through the network the module for the sink contains a different implementation of the forward method since the sink will gather the values sent by the sensors and will log them here we leave unspecified the processing done by the log position and value program the network with all sensors idle except for the sink that requests a sampling msensor p r sample net sample sense x i n net f o r w a r d p x f o r w a r d x y net f o r w a r d x y msink p r forward x y l o g p o s i t i o n a n d v a l u e x y net sample msink p r i d l e msensor p r b i d l e msensor pn rn pn rn bn the next example illustrates the broadcast the deployment and the installation of code the example runs as follows the sink node deploys some module in the network m and then seals the sensors henceforth preventing any dynamic of the network an extended version of the current example may be found in section the code for the modules of the sensors and of the sink is given below the module m is the one we wish to deploy to the network it carries the method seal that forwards the call to the network and installs a new version of deploy that does nothing when executed draft p idle p monoid rogram s off s monoid ensor p r m p r b m b idle p p p r p m p r b p m b off fig b max cin cout p m p r b off idle seq program stru broadcast exhausted structural congruence for processes and sensors msensor p r deploy x net deploy x i n s t a l l x msink p r m seal net deploy m net s e a l msink p r i d l e msensor i n s t a l l deploy i d l e net s e a l p r b i d l e msensor pn rn pn rn bn semantics the calculus has two name bindings field sensing and method definitions the displayed occurrence of name xi is a binding with scope p both in sense xi xn in p and in l xi xn p an occurrence of a name is free if it is not in the scope of a binding otherwise the occurrence of the name is bound the set of free names of a sensor s is referred as fn s following milner we present the reduction relation with the help of a structural congruence relation the structural congruence relation depicted in figure allows for the manipulation of term structure adjusting to reduce the relation is defined as the smallest congruence relation on sensors and programs closed under the rules given in figure the parallel composition operators for programs and for sensors are taken to be commutative and associative with idle and off as their neutral elements respectively vide rules monoid rogram and monoid ensor rule idle seq asserts that idle is also neutral with respect to sequential composition of programs rule program stru incorporates structural congruence for programs into sensors when a sensor is broadcasting a message it uses a bag to collect the sensors as they become engaged in communication rule broadcast allows for a sensor to start the broadcasting operation draft m li 
pi b cin this m p r p v xi m p r f i b method li dom m this m p r v m p r b this b d p r no method b cout broadcast p r net m p r f b s p m net m p r v m r b s p this p r net m p r b s m s release b cin p r m p r b m m install install sense in p s s s s s s b cin p f p m p r s s s s parallel structural s s s f s f fig sense m p r b network reduction semantics for processes and networks a terminated sensor is a sensor with insufficient battery capacity for performing an internal or an external reduction step vide rule exhausted the reduction relation on networks notation s f s f describes how sensors s can evolve reduce to sensors s sensing the field f the reduction is defined on top of a reduction relation for sensors notation s s inductively defined by the rules in figure the reduction for sensors is parametric on field f and on two constants cin and cout that represent the amount of energy consumed when performing internal computation steps cin and when broadcasting messages cout computation inside sensors proceeds by invoking a method either method and rno method broadcast and release by sensing values rule sense and by updating the method collection of the sensor rule install draft the invocation of a local method li with arguments evolves differently depending on whether or not the definition for li is part of the method collection of the sensor rule method describes the invocation of a method from module m defined as m li pi the result is the program pi where the values are bound to the variables in when the definition for li is not present in m we have decided to actively wait for the definition see rule no method usually invoking an undefined method causes a program to get stuck typed programming languages use a type system to ensure that there are no invocations to undefined methods ruling out all other programs at compile time at runtime another possible choice would be to simply discard invocations to undefined methods our choice provides more resilient applications when coupled with the procedure for deploying code in a sensor network we envision that if we invoke a method in the network after some code has been deployed see example there may be some sensors where the method invocation arrives before the deployed code with the semantics we propose the call actively waits for the code to be installed sensors communicate with the network by broadcasting messages a message consists of a remote method invocation on unspecified sensors in the neighborhood of the emitting sensor in other words the messages are not targeted to a particular sensor there is no communication the neighborhood of a sensor is defined by its communication radius but there is no guarantee that a message broadcasted by a given sensor arrives at all surrounding sensors there might be for instance landscape obstacles that prevent two sensors otherwise within range from communicating with each other also during a broadcast operation the message must only reach each neighborhood sensor once notice that we are not saying that the same message can not reach the same sensor multiple times in fact it might but as the result of the echoing of the message in subsequent broadcast operations we model the broadcasting of messages in two stages rule broadcast invokes method li in the remote sensor provided that the distance between the emitting and the receiving sensors is less that the transmission radius d p r the sensor receiving the message is put in the bag of the emitting sensor thus preventing multiple deliveries of 
the same message while broadcasting observe that the rule does not enforce the interaction with all sensors in the neighborhood rule release finishes the broadcast by consuming the operation net and by emptying out the contents of the emitting sensor s bag a broadcast operation starts with the application of rule broadcast proceeds with multiple eventually none applications of rule broadcast one for each target sensor and terminates with the application of rule release installing module m in a sensor with a module m rule install amounts to add to m the methods in m absent in m and to replace in m the methods common to both m and m rigorously the operation of installing module m on top of m denoted m m may be defined as m m draft m m m the operator is reminiscent of abadi and cardelli s operator for updating methods in their imperative object calculus a sensor senses the field in which it is immersed rule sense by sampling the value of the field f in its position p and continues the computation replacing this value for the bound variables in program p rule parallel allows reduction to happen in networks of sensors and rule structural brings structural congruence into the reduction relation the operational semantics illustrated to illustrate the operational semantics of cns we present the reduction steps for the examples discussed at the end of section during reduction we suppress the side annotations when writing the sensors due to space constraints we consider a rather simple network with just the sink and another sensor net sample msink p r i d l e msensor we assume that the sensor is within range from the sink and this network may reduce as follows draft msink p r idle msensor broadcast msink p r off idle msensor d p r broadcast monoid ensor msink p r idle msensor release monoid rogram idle msink p r msensor method idle msink p r sense x in x msensor broadcast idle msink p r sense x in x msensor off release idle msink p r sense x in x msensor idle msink p r f msensor sense monoid ensor f msensor idle msink p r broadcast f msensor off idle msink p r d p broadcast monoid ensor f msensor f idle msink p r release monoid rogram idle msensor f msink p r idle msensor log position and value f msink p r method monoid ensor log position and value f idle msensor msink p r so after these reduction steps the sink gets the field values from the sensor at position and logs them the sensor at is idle waiting for further interaction following we present the reduction step for our second and last example of section where we illustrate the broadcast the deployment and the installation of code again due to space restrictions we use a very simple network with just the sink and another sensor both within reach of each other net deploy m net s e a l msink p r i d l e msensor draft this network may reduce as follows m msink p r idle msensor broadcast m msink p r off idle msensor d p r broadcast monoid ensor m msink p r m idle msensor release monoid rogram msink p r m msensor msink p r m install m msensor method broadcast msink p r m install m msensor off release monoid ensor msink p r install m msensor install msink p r idle msensor broadcast msink p r off idle msensor d p r broadcast monoid ensor msink p r idle msensor release monoid rogram idle msink p r msensor method idle msink p r install deploy idle msensor broadcast idle msink p r install deploy idle msensor off release monoid ensor idle msink p r install deploy idle msensor install idle msink p r idle msensor deploy idle after these reductions the sink 
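has finished its part of the exchange.

The install operation used in the trace, which the text denotes by an update operator on modules, has a very direct reading: methods of the new module that are absent from the old one are added, and methods common to both are replaced by the new definitions. A minimal sketch, assuming modules can be modelled as dictionaries from method names to their bodies (the dictionary representation and the example method bodies are illustrative, not part of the calculus):

```python
def install(m, m_new):
    """Install module m_new on top of module m: keep the methods only in m,
    add the methods only in m_new, and let m_new win on the common names."""
    updated = dict(m)        # start from the existing module
    updated.update(m_new)    # add new methods and overwrite the shared ones
    return updated

# example: a new 'sample' definition replaces the old one, 'forward' is kept
m_old = {"forward": "net.forward(x, y)", "sample": "old sampling code"}
m_dep = {"sample": "sense x in net.forward(p, x); this.sample()"}
m_ins = install(m_old, m_dep)
assert m_ins["forward"] == m_old["forward"] and m_ins["sample"] == m_dep["sample"]
```

This right-biased merge is what makes dynamic re-programming safe to repeat: installing the same module twice has the same effect as installing it once. In the trace above, the sink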
is idle after deploying the code to the sensor at the sensor at p is also idle waiting for interaction but with the code for the module m installed and with the deploy method disabled draft iii p rogramming e xamples in this section we present some examples programmed in csn of typical operations performed on networks of sensors our goal is to show the expressiveness of the csn calculus just presented and also to identify some other aspects of these networks that may be interesting to model in the following examples we denote as msensor and msink the modules installed in any of the anonymous sensors in the network and the modules installed in the sink respectively note also that all sensors are assumed to have a builtin method deploy that is responsible for installing new modules the intuition is that this method is part of the tiny operating system that allows sensors to react when first placed in the field finally we assume in these small examples that the network layer supports scoped flooding we shall see in the next section that this can be supported via software with the inclusion of state in sensors ping we start with a very simple ping program each sensor has a ping method that when invoked calls a method forward in the network with its position and battery charge as arguments when the method forward is invoked by a sensor in the network it just triggers another call to forward in the network the sink has a distinct implementation of this method any incomming invocation logs the position and battery values given as arguments so the overall result of the call in the sink is that all reachable sensors in the network will in principle receive this call and will flood the network with their positions and battery charge values these values eventually reach the sink and get logged msensor p b ping net f o r w a r d p b net ping forward x y net f o r w a r d x y msink p b forward x y log position and power x y net ping msink p b i d l e msensor p r b i d l e msensor pn bn pn rn bn querying this example shows how we can program a network with a sink that periodically queries the network for the readings of the sensors each sensor has a sample method that samples the field using the sense draft construct and calls the method forward in the neighbourhood with its position and the value sampled as arguments the call then queries the neighbourhood recursively with a replica of the original call the original call is of course made from the sink which has a method start sample that calls the method sample in the network within a cycle note that if the sink had a method named sample instead of start sample it might get a call to sample from elsewhere in the network that could interfere with the sampling control cycle msensor p sample sense x i n net f o r w a r d p x net sample forward x y net f o r w a r d x y msink p start sample forward x y log position and value x y net sample t h i s s t a r t s a m p l e t h i s s t a r t s a m p l e msink p i d l e msensor p r b i d l e msensor pn pn rn bn polling in this example the cycle of the sampling is done in each sensor instead of in the sink as in the previous example the sink just invokes the method start sample once this method propagates the call through the network and invokes sample for each sensor this method samples the field within a cycle and forwards the result to the network this implementation requires less broadcasts than the previous one as the sink only has to call start sample on the network once on the other hand it increases the 
amount of processing per sensor msensor p start sample net s t a r t s a m p l e t h i s sample sample sense x i n net f o r w a r d p x t h i s sample forward x y net f o r w a r d x y msink p forward x y log position and value x y net s t a r t e x a m p l e msink p p r b draft i d l e msensor i d l e msensor pn pn rn bn code deployment the above examples assume we have some means of deploying the code to the sensors in this example we address this problem and show how it can be programmed in csn the code we wish to deploy and execute is the same as the one in the previous example to achieve this goal the sink first calls the deploy method on the network to install the new module with the methods start sample sample and forward as above this call recursively deploys the code to the sensors in the network the sink then calls start sample to start the sampling again as above and waits for the forwarded results on the method forward msensor p deploy x i n s t a l l x net deploy x x y log position and value x y start sample net s t a r t s a m p l e t h i s sample sample sense x i n net f o r w a r d p x t h i s sample forward x y net f o r w a r d x y msink p forward net deploy net s t a r t s a m p l e msink p i d l e msensor p r b i d l e msensor pn pn rn bn a refined version of this code one that avoids the start sample method completely can be programmed here we deploy the code for all sensors by sending methods sample and forward to all the sensors in the network by invoking deploy once deployed the code is activated with a call to sample in the sink instead of using the start sample method as above msensor p deploy x i n s t a l l x net deploy x msink p forward x y log position and value x y draft net deploy sample net sample i n s t a l l sample sense x i n net f o r w a r d p x t h i s sample t h i s sample forward x y net f o r w a r d x y net sample msink p i d l e msensor p r b i d l e msensor pn pn rn bn notice that the implementation of the method sample has changed here when the method is executed for the first time at each sensor it starts by propagating the call to its neighborhood and then it changes itself through an install call the newly installed code of sample is the same as the one in the first implementation of the example the method then continues to execute and calls the new version of sample which starts sampling the field and forwarding values sealing sensors this example shows how we can install a sensor network with a module that contains a method seal that prevents any further dynamic of the sensors preventing anyone from tampering with the installed code the module also contains a method unseal that restores the original deploy method thus allowing dynamic again the sink just installs the module containning these methods in the network by broadcasting a method call to deploy each sensor that receives the call installs the module and floods the neighborhood with a replica of the call another message by the sink then replaces the deploy method itself and it to idle this prevents any further instalation of software in the sensors and thus effectively seals the network from external interaction other than the one allowed by the remainder of the methods in the modules of the sensors msensor deploy x i n s t a l l x net deploy x msink net deploy seal i n s t a l l deploy idle uns eal i n s t a l l deploy x i n s t a l l x net deploy x draft net s e a l msink i d l e msensor p r b i d l e msensor pn rn bn iv d iscussion in the previous sections we focused our 
attention on the programming issues of a sensor network and presented a core calculus that is expressive enough to model fundamental operations such as local broadcast of messages local sensing of the environment and software module updates csn allows the global modeling of sensor networks in the sense that it allows us to design and implement sensor network applications as distributed applications rather than giving the programmer a view of the programming task it also provides the tools to manage running sensor networks namely through the use of the software deployment capabilities there are other important features of sensor networks that we consciously left out of csn in the sequel we discuss some of these features and sketch some ideas of how we would include support for them a state from a programming point of view adding state to sensors is essential sensors have some limited computational capabilities and may perform some data processing before sending it to the sink this processing assumes that the sensor is capable of buffering data and thus maintain some state in a way csn sensors have state indeed the atributes p b and r may be viewed as sensor state since these are characteristic of each sensor and are usually controlled at the hardware level we chose to represent this state as parameters of the sensors the programmer may read these values at any time through builtin method calls but any change to this data is performed transparently for the programmer by the hardware or operating system as we mentioned before it is clear that the value of b changes with time the position p may also change with time if we envision our sensors endowed with some form of mobility sensors dropped in the atmosphere or flowing in the ocean to allow for a more systematic extension of our sensors with state variables we can assume that each sensor has a heap h where the values of these variables are stored h p m p r b the model chosen for this heap is orthogonal to our sensor calculus and for this discussion we assume that we enrich the values v of the language with a set of keys ranged over by our heap may thus be defined as a map h from keys into values intuitively we can think of it as an associative memory with the usual operations put get lookup and hash programs running in the sensors may share state by exchanging keys we assume also that these operations are atomic and thus no race conditions can arise draft with this basic model for a heap we can the ping example from section with scoped flooding thus eliminating echos by software we do this by associating a unique key to each remote procedure call broadcast to the network this key is created through the hash function that takes as arguments the position p and the battery b of the sensor each sensor after receiving a call to ping propagates the call to its neighborhood and generates a new key to send with its position and battery charge in a forward call then it stores the key in its heap to avoid forwarding its own forward call on the other hand each time a sensor receives a call to forward it checks whether it has the key associated to the call in its heap if so it does nothing if not it forwards the call and stores the key in the heap to avoid future msensor p b ping net ping l e t k hash p b i n net f o r w a r d p b k put k forward x y k i f lookup k then net f o r w a r d x y k put k msink p b p r b msensor net ping msink idle i d l e msensor pn bn pn rn bn b events another characteristic of sensors is their modus operandi some sensors 
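are driven purely by the programs installed on them, while others react autonomously to external events.

Before turning to events, it may help to see the scoped-flooding ping sketched above as executable code. The sketch below is an illustrative Python reading of that example, not part of the calculus: the Node class, the recursive delivery of forward calls, and the counter standing in for the hash(p, b) key generator are our own simplifications.

```python
import itertools, math

_fresh_key = itertools.count()       # stand-in for hash(p, b): any fresh token will do

class Node:
    def __init__(self, pos, radius):
        self.pos, self.radius = pos, radius
        self.heap = set()            # the sensor's heap, used here only to remember seen keys
        self.log = []                # in the example only the sink logs; here every node records

    def neighbours(self, network):
        return [n for n in network if n is not self
                and math.dist(self.pos, n.pos) <= self.radius]

    def ping(self, network):
        key = next(_fresh_key)
        self.heap.add(key)           # remember our own key so the echoed call is ignored
        for n in self.neighbours(network):
            n.forward(self.pos, key, network)

    def forward(self, origin, key, network):
        if key in self.heap:         # scoped flooding: a key seen before is dropped
            return
        self.heap.add(key)
        self.log.append(origin)
        for n in self.neighbours(network):
            n.forward(origin, key, network)
```

Without the key set the forward calls would echo between neighbours indefinitely; the heap entries are precisely the extra state introduced by the stateful ping above. Returning to how the sensing activity is triggered, the sensors considered so far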
sample the field as a result of instructions implemented in the software that controls them such is the case with csn sensors the programmer is responsible for controlling the sensing activity of the sensor network it is of course possible for sensor nodes to be activated in different ways for example some may have their sensing routines implemented at hardware or operating system level and thus not directly controllable by the programmer such classes of sensor nodes tipically sample the field periodically and are activated when a given condition arises a temperature above or below a given threshold the detection of above a given threshold the detection of a strong source of infrared light the way in which certain environmental conditions or events can activate the sensor is by triggering the execution of a handler procedure that processes the event support for this kind of sensors in csn could be achieved by assuming that each sensor has a builtin handler procedure say handle for such events the handler procedure when activated receives the value of the field that triggered the event note that draft from the point of view of the sensor the occurrence of such an event is equivalent to the deployment of a method invocation v in its processing core where v is the field value associated with the event the sensor has no control over this deployment but may be programmed to react in different ways to these calls by providing adequate implementations of the handle routine the events could be included in the semantics given in section with the following rule p r p m p r b this f p p m b event as in the case of the builtin method for code deployment the handler could be programmed to change the behavior of the network in the presence of events one could envision the default handler as handle x idle which ignores all events then we could change this default behavior so that an event triggers an alarm that gets sent to the sink a possible implementation of such a dynamic of the network default handlers can be seen in the code below msensor p handle x idle msink p handle x idle net deploy handle x alarm net alarm p x x y net alarm x y i n s t a l l alarm msink p p r b x y sing bell x y i d l e msensor i d l e msensor pn pn rn bn where the default implementation of the handle procedure is superseded by one that eventualy triggers an alarm in the sink more complex behavior could be modeled for sensors that take multiple readings with a handler associated with each event c security finally another issue that is of outmost importance in the management of sensor networks is security it is important to note that many potential applications of sensor networks are in high risk situations examples may be the monitorization of ecological disaster areas volcanic or sismic activity and radiation levels in contaminated areas secure access to data is fundamental to establish its credibility and for correctly assessing risks in the management of such episodes in csn we have not taken security issues into consideration this was not our goal at this time however one feature of the calculus may provide interesting solutions for the future in fact in csn all computation within a sensor draft results from an invocation of methods in the modules of a sensor either originating in the network or from within the sensor in a sense the modules m of the sensor work as a firewall that can be used to control incomming messages and implement security protocols thus all remote method invocations and software updates might first be 
validated locally with methods of the sensor's modules, and only then would the actions be performed. The idea of equipping sensors, or in general domains, with some kind of membrane that filters all the interactions with the surrounding network has been explored, for instance, in the Kell calculus, in the brane calculi, and in MIKO; one possible development is to incorporate some features of the membrane model into CSN. The current formulation of the calculus also assumes that all methods in the module M of a sensor are visible from the network. It is possible to implement an access policy to methods in such a way that some methods are private to the sensor, that is, they can only be invoked from within the sensor. This allows, for example, the complete encapsulation of the state of the sensor.

V. Conclusions and Future Work

Aiming at providing sensor networks with a rigorous and adequate programming model upon which operating systems and programming languages can be built, we presented CSN, a calculus for sensor networks developed specifically for this class of distributed systems. After identifying the necessary sensing, processing, and wireless broadcasting features of the calculus, we opted to base our work on an abstraction of physical and link layer communication issues, in contrast with previous work on wireless network calculi, thus focusing on the system requirements for programming applications. This approach resulted in the CSN syntax and semantics, whose expressiveness we illustrated through a series of implementations of typical operations in sensor networks. Also included was a detailed discussion of possible extensions to CSN to account for other important properties of sensors, such as state, sampling strategies, and security. As part of our ongoing efforts, we are currently using CSN to establish a mathematical framework for reasoning about sensor networks. One major objective of this work consists in providing formal proofs of correctness for data gathering protocols that are commonly used in current sensor networks and whose performance and reliability have so far only been evaluated through computer simulations and experiments. From a more practical point of view, the focus will be set on the development of a prototype implementation of CSN. This prototype will be used to emulate the behavior of sensor networks by software and, ultimately, to port the programming model to a natural development architecture for sensor network applications.

Acknowledgements. The authors gratefully acknowledge insightful discussions with Gerhard Maierbacher, Departamento de Ciência de Computadores, Faculdade de Ciências, Universidade do Porto.
Skew-t Filter and Smoother with Improved Covariance Matrix Approximation

Henri Nurminen, Tohid Ardeshiri, Robert Piché, and Fredrik Gustafsson

Abstract: Filtering and smoothing algorithms for linear discrete-time models with skew-t-distributed measurement noise are presented. The presented algorithms use a variational Bayes based posterior approximation with coupled location and skewness variables to reduce the error caused by the variational approximation. Although the variational update is done suboptimally, our simulations show that the proposed method gives a more accurate approximation of the posterior covariance matrix than an earlier proposed variational algorithm. Consequently, the novel filter and smoother outperform the earlier proposed robust filter and smoother and other existing alternatives in accuracy and speed. We present both simulations and tests based on navigation data, in particular GPS data in an urban area, to demonstrate the performance of the novel methods. Moreover, the extension of the proposed algorithms to cover the case where the distribution of the measurement noise is multivariate skew-t is outlined. Finally, the paper presents a study of theoretical performance bounds for the proposed algorithms.

[Fig. 1. The error histogram in an UWB ranging experiment shows positive skewness; the edge bars show the errors outside the figure limits. Horizontal axis: error (m).]

[Fig. 2 legend entries: anchor, range, true position, likelihood.]

Index Terms: skew t, skewness, robust filtering, Kalman filter, variational Bayes, RTS smoother, truncated normal distribution, Cramér-Rao lower bound.

I. Introduction

Asymmetric and heavy-tailed noise processes are present in many inference problems. In radio signal based distance estimation, for example, obstacles cause large positive errors that dominate over symmetrically distributed errors from other sources. An example of this is the error histogram of distance measurements collected in an indoor environment, given in Fig. 1. The asymmetric outlier distributions cannot be predicted by the normal distribution that is equivalent in second-order moments, because normal distributions are symmetric distributions. The skew t-distribution is a generalization of the t-distribution that has the modeling flexibility to capture both the skewness and the heavy tails of such noise processes. To illustrate this, Fig. 2 shows the contours of the likelihood function for three range measurements where some of the measurements are positive outliers. In this example, skew-t, t, and normal measurement noise models are compared. Due to the additional modeling flexibility, the skew-t based likelihood provides a more apposite spread of the probability mass than the normal and t based likelihoods.

(H. Nurminen and R. Piché are with the Department of Automation Science and Engineering, Tampere University of Technology (TUT), Tampere, Finland. H. Nurminen receives funding from TUT Graduate School, the Foundation of Nokia Corporation, and Tekniikan edistämissäätiö. T. Ardeshiri was with the Division of Automatic Control, Department of Electrical Engineering, Linköping University, Sweden, and received funding from the Swedish Research Council (VR) project Scalable Kalman Filters for this work; he is currently with the Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, UK. F. Gustafsson is with the Division of Automatic Control, Department of Electrical Engineering, Linköping University, Sweden.)

Fig. 2. The contours of the likelihood function for three range measurements for the normal (left), t (middle), and skew-t (right) measurement noise models. The t and skew-t based likelihoods handle one outlier (upper row), while only the skew-t model handles the two positive outlier measurements (bottom row), due to its asymmetry. The measurement model parameters are
selected such that the values and the first two moments coincide the applications of the skew distributions are not limited to radio signal based localization in biostatistics skewed distributions are used as a modeling tool for handling heterogeneous data involving asymmetric behaviors across subpopulations in psychiatric research skew normal distribution is used to model asymmetric data further in economics skew normal and skew are used as models for describing claims in insurance more examples describing approaches for analysis and modeling using multivariate skew normal and skew in econometrics and environmetrics are presented in there are various algorithms dedicated to statistical inference of time series when the data exhibit asymmetric distribution particle filters can easily be adapted to skew noise distributions but the computational complexity of these filters increases rapidly as the state dimension increases a skew kalman filter is proposed in and in this filter is extended to a robust filter using monte carlo integration these solutions are based on models where the measurement noise is a dependent process with skewed marginals the article proposes filtering of independent skew measurement and process noises with the cost of increasing the filter state s dimension over time in all the skew filters of sequential processing requires numerical evaluation of multidimensional integrals the inference problem with skew likelihood distributions can also be cast into an optimization problem proposes an approach to model the measurement noise in an uwb based positioning problem using a tailored distribution skewness can also be modeled by a mixture of normal distributions gaussian mixtures gm there are many filtering algorithms for gm distributions such as gaussian sum filter and interactive multiple model imm filter however gms have exponentially decaying tails and can thus be too sensitive to outlier measurements furthermore in order to keep the computational cost of a gaussian sum filter practicable a mixture reduction algorithm mra is required and these mras can be computationally expensive and involve approximations to the posterior density filtering and smoothing algorithms for linear discretetime models with measurement noise using a variational bayes vb method are presented in in tests with real uwb indoor localization data this filter was shown to be accurate and computationally inexpensive this paper proposes improvements to the robust filter and smoother proposed in analogous to the measurement noise is modeled by the skew and the proposed filter and smoother use a vb approximation of the filtering and smoothing posteriors however the main contributions of this paper are a new factorization of the approximate posterior distribution derivation of rao lower bound crlb for the proposed filter and smoother the application of an existing method for approximating the statistics of a truncated multivariate normal distribution tmnd and a proof of optimality for a truncation ordering in approximation of the moments of the tmnd a tmnd is a multivariate normal distribution whose support is restricted truncated by linear constraints and that is renormalized to integrate to unity the aforementioned contributions improve the estimation performance of the filter and smoother by reducing the covariance underestimation common to most vb inference algorithms chapter to our knowledge vb approximations have been applied to the skew only in our earlier works and by wand et al the rest of this 
paper is structured as follows in section ii the filtering and smoothing problem involving the univariate skew is posed in section iii a solution based on vb for the formulated problem is proposed the proposed solution is evaluated using simulated data as well as realworld data in sections iv and v respectively the essential expressions to extend the proposed filtering and smoothing algorithms to problems involving multivariate mvst distribution are given in section vi a performance bound for time series data with measurement noise is derived and evaluated in simulation in section vii the concluding remarks are given in section viii ii i nference p roblem formulation consider the linear and gaussian state evolution model p n axk wk iid wk n q where n denotes the probability density function pdf of the multivariate normal distribution with mean and covariance matrix a rnx is the state transition matrix xk rnx indexed by k k is the state to be estimated with initial prior distribution where the subscript is read at time a using measurements up to time b and wk rnx is the process noise further the measurements yk rny are assumed to be governed by the measurement equation yk cxk ek ny where c r is the measurement matrix and the measurement noise ek is independent of the process noise and has the product of independent univariate skew as the pdf iid ek i st rii the model can also be nonstationary but for the sake of lighter notation the k subscripts on a q c rii and are omitted the univariate skew st is parametrized by its location parameter r spread parameter shape parameter r and degrees of freedom and has the pdf st z t z t e z where t z z is the pdf of student s is the gamma tion and ze also t denotes the cumulative distribution function cdf of student s with degrees of freedom expressions for the first two moments of the univariate skew can be found in the model with independent univariate measurement noise components is justified in applications where data from different sensors can be assumed to have statistically independent noise extension and comparison to multivariate noise will be discussed in section vi the independent univariate noise model induces the hierarchical representation of the measurement likelihood yk uk n cxk k r uk ii k g where r rny is a diagonal matrix whose diagonal elements square roots rii are the spread parameters of the skew in rny is a diagonal matrix whose diagonal elements are the shape parameters ny is a vector whose elements are the degrees of freedom the operator ij gives the i j entry of its argument is a diagonal matrix with a priori independent random diagonal elements ii also is the tmnd with closed positive orthant as support location parameter and matrix furthermore g is the gamma distribution with shape parameter and rate parameter bayesian smoothing means finding the smoothing posterior p k k k k in the smoothing posterior is approximated by a factorized distribution of the form q qx k qu k k subsequently the approximate posterior distributions are computed using the vb approach the vb approach minimizes the r q x dx of divergence kld dkl q x log p x the true posterior from the factorized approximation that is dkl q k k k k is minimized in the numerical simulations in manifest covariance matrix underestimation which is a known weakness of the vb approach chapter one of the contributions of this paper is to reduce the covariance underestimation of the filter and smoother proposed in by removing independence approximations of the posterior 
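approximation, in particular the independence between the state and the skewness variables.

The hierarchical form of the likelihood given above is also the easiest way to see where the skewness comes from: conditionally on the mixing variable the noise is Gaussian with a shifted mean, and marginally it is skew-t. A small generative sketch for one univariate noise component, under the usual Gamma(nu/2, rate nu/2) convention for the mixing variable (the function name and the zero location parameter are our own choices, and the paper's exact parameterization may differ in details):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_skew_t_noise(n, r, delta, nu):
    """Draw n samples of univariate skew-t noise from the conditionally Gaussian
    hierarchy: lambda ~ Gamma(nu/2, rate nu/2), u | lambda ~ N_+(0, 1/lambda),
    e | u, lambda ~ N(delta * u, r / lambda)."""
    lam = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)  # numpy's scale is 1/rate
    u = np.abs(rng.normal(0.0, 1.0 / np.sqrt(lam)))          # half-normal equals N_+(0, 1/lambda)
    return delta * u + rng.normal(0.0, np.sqrt(r / lam))

e = sample_skew_t_noise(200_000, r=1.0, delta=2.0, nu=4.0)
z = (e - e.mean()) / e.std()
print(e.mean(), (z ** 3).mean())   # positive location shift and positive sample skewness
```

With delta = 0 the samples reduce to ordinary Student's t noise, which is the sense in which the skew t generalizes the t-distribution. These modelling choices concern only the form of the variational posterior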
approximation the proposed filter and smoother are presented in section iii iii p roposed f ilter and s moother using bayes theorem the state evolution model and the likelihood the joint smoothing posterior pdf can be derived as in this posterior is not analytically tractable we propose to seek an approximation in the form p k k k k qxu k k k where the factors in are specified by argmin dkl qn k k k k qxu and where qn qxu k k k hence k and k are not approximated as independent as in because they can be highly correlated a posteriori the analytical solutions for and are obtained by cyclic iteration of log qxu e log p k k k k cxu log e log p k k k k qxu where the expected values on the right hand sides are taken with respect to the current qxu and chapter also cxu and are constants with respect to the variables k k and k respectively furthermore the joint pdf p k k k k can be written as p k k k k p y p k y p yk uk p uk p y n axl q k y n yk cxk k r uk ny k y y g ii computation of the expectation in which is relegated to appendix a requires the first two moments of a tmnd because the support of k is the orthant these moments can be computed using the formulas presented in they require evaluating the cdf of general multivariate normal distributions the m atlab function mvncdf implements the numerical quadrature of in and dimensional cases and the carlo method of for the dimensionalities however these methods can be prohibitively slow therefore we approximate the tmnd s moments using the fast sequential algorithm suggested in the method is initialized with the original normal density whose parameters are then updated by applying one linear constraint at a time for each constraint the mean and covariance matrix of the normal distribution are computed analytically and the distribution is approximated by a normal with the updated moments this method is illustrated in fig where a bivariate normal distribution truncated into the positive quadrant is approximated with a normal distribution the result of the sequential truncation depends on the order in which the constraints are applied finding the optimal order of applying the truncations is a problem that has combinatorial complexity hence we adopt a greedy approach whereby the constraint to be applied is chosen from among the remaining constraints so that the resulting normal is closest to the true tmnd by lemma the optimal constraint to select is the one that truncates the most probability the optimality is with respect to a kld as the measure for example in fig the vertical constraint truncates more probability so it is applied first lemma let p z be a tmnd with the support z and q z n z then argmin dkl p z q z zi argmin i i ii where is the ith element of is the ith r diagonal element of is the iverson bracket and ci q z zi dz proof dkl p z q z zi z p z log q z zi dz z log ci p z log q z dz log ci where means equality up to an additive constant since ci is an increasing function of the proof follows ii the obtained algorithm with the optimal processing sequence for computing the mean and covariance matrix of a given multivariate normal distribution truncated to the positive orthant is given in table i in many programming languages a numerically robust method to implement the line of the algorithm in table i is using the scaled complementary error function erfcx through p erfcx the recursion is convergent to a local optimum chapter however there is no proof of convergence available when the moments of the tmnd are approximated in spite of lack of a 
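proof, the sequential scheme remains attractive because every individual truncation step is available in closed form.

A compact sketch of the sequential truncation with the greedy ordering of the lemma is given below. It restricts a multivariate normal to the positive orthant one component at a time, always applying first the constraint that truncates the most probability; for clarity it uses scipy's normal pdf/cdf rather than the erfcx formulation mentioned above, and the function name and example numbers are our own, so this is a minimal reading of the algorithm rather than a verbatim transcription of Table I.

```python
import numpy as np
from scipy.stats import norm

def truncate_to_positive_orthant(mu, Sigma, idx):
    """Approximate the mean and covariance of N(mu, Sigma) restricted to
    x[i] >= 0 for all i in idx, applying one linear constraint at a time."""
    mu, Sigma = np.array(mu, float), np.array(Sigma, float)
    remaining = list(idx)
    while remaining:
        # greedy ordering: pick the component whose positive half-line carries
        # the least probability, i.e. the constraint that truncates the most
        k = min(remaining, key=lambda i: norm.cdf(mu[i] / np.sqrt(Sigma[i, i])))
        remaining.remove(k)
        s = np.sqrt(Sigma[k, k])
        alpha = -mu[k] / s                          # standardized lower bound for x_k >= 0
        lam = norm.pdf(alpha) / norm.sf(alpha)      # inverse Mills ratio (erfcx is the robust form)
        m = mu[k] + s * lam                         # mean of the truncated k-th marginal
        v = Sigma[k, k] * (1.0 - lam * (lam - alpha))   # its variance
        c = Sigma[:, k] / Sigma[k, k]
        mu = mu + c * (m - mu[k])                   # propagate the moment change to all components
        Sigma = Sigma + np.outer(c, c) * (v - Sigma[k, k])
    return mu, Sigma

m_t, S_t = truncate_to_positive_orthant([0.5, -1.0], [[1.0, 0.6], [0.6, 2.0]], [0, 1])
```

Each pass replaces the current normal by a normal with the exact first two moments of its singly truncated version, which is where the approximation error enters. Indeed, even without a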
convergence proof the iterations did not diverge in the numerical simulations presented in section iv in the smoother the update includes a forward filtering step of the smoother rtss to compute an approximate filtering posterior for k k this posterior is a tmnd where only the variables k are restricted to the positive orthant the tmnd is approximated as a multivariate normal distribution whose parameters contour of normal under truncation contour of normal approximation linear constraint truncated area a b c d fig the sequential truncation method for approximating a truncated normal distribution with a normal distribution a the original normal distribution s contour ellipse that contains of the probability and the truncated area in gray b the first applied truncation in gray and the contour of the resulting normal approximation c the second applied truncation in gray and the contour of the normal approximation d the final normal approximation table ii s moothing for measurement noise table i o ptimal sequential truncation to the positive orthant inputs and the set of the truncated components indices t while t do k argmin i i t if does not underflow to then is the pdf of n its cdf else end if t t k end while outputs and t are obtained using the sequential truncation this approximation enables recursive forward filtering and the use of rtss s backward smoothing step that gives normal approximations to the marginal smoothing posteriors qxu xk uk n uxkk after the iterations converge the variables k are integrated out to get the approximate smoothing posteriors n xk where the parameters and are the output of the smoother sts algorithm in table ii sts can be restricted to an online recursive algorithm to synthesize a filter which is summarized in table iii in the filter the output of a filtering step is also a tmnd which in analogy to sts is approximated by a multivariate normal distribution to have a recursive algorithm using sequential truncation the tmnd qxu xk uk is approximated by a normal distribution and the parameters of the marginals n xk are the outputs of the filter stf algorithm in table iii iv s imulations our numerical simulations use satellite navigation pseudorange measurements of the model iid yk i ksi xk xk ek i ek i st m m where si is the ith satellite s position xk r is bias with prior n m and r is a parameter the model is linearized using the first order taylor polynomial approximation and the linearization error is negligible because the satellites are far the satellite constellation of the global positioning system from the first second of the year provided by the international gnss service is used with visible satellites the error rmse is computed for xk the computations are made with m atlab inputs a c q r and k az a cz c initialization iny for k k repeat update qxu k k given k for k to k do blockdiag kz czt c t r h i kz yk i kz cz z nx nx ny e z nx nx nx at q end for for k k down to do gk az gk az gk gt k nx nx nx nx ny nx ny nx ny end for update k given qxu k k for k to k do yk yk t czt ut end for for i to ny do ii i k ii end for until converged outputs and for k k computation of tmnd statistics in this subsection we study the computation of the moments of the untruncated components of a tmnd one state and one measurement vector per monte carlo replication are generated from the model with degrees of freedom corresponding to likelihood prior x n diag m m m m and replications the compared methods are sequential truncations with the optimal truncation order topt and 
with random order trand the variational bayes vb and the analytical formulas of using m atlab function mvncdf mvncdf in trand any of the constraints is chosen randomly at each truncation vb is an update of the skew t variational bayes filter stvbf where i and the vb iteration is terminated when the position estimate changes less than m or at the iteration the reference solution for the expectation value is a bootstrap particle filter pf update with samples table iii f iltering for measurement noise mvncdf vb trand topt inputs a c q r and k cz c for k to k do initialization iny repeat update qxu xk uk given blockdiag kz czt c t r h i kz yk e i kz cz nx nx ny e z nx nx nx nx ny nx ny nx ny update given qxu xk uk yk cz yk cz t cz czt ut end for for i to ny do ii i k ii until converged at q end for outputs and for k k c fig topt outperforms trand when one negative outlier is added to the measurement noise vector because there is one truncation that truncates much more probability than the rest fig shows the distributions of the estimates differences from the pf estimate the errors are given per cent of the pf s estimation error the box levels are and quantiles and the asterisks show minimum and maximum values topt outperforms trand in the cases with high skewness which reflects the result of lemma mvncdf is more accurate than topt in the cases with high skewness but mvncdf s computational load is roughly times that of the topt this justifies the use of the sequential truncation approximation the order of the truncations in the sequential truncation algorithm affects the performance only when there are clear differences in the amounts of probability mass that each truncation truncates fig presents an example where and each measurement noise realization e has been generated from the skew normal distribution and then modified by p ej min min ny c where j is a random index and c is a parameter a large c generates one negative outlier to each measurement vector which results in one truncation with significantly larger truncated probability mass than the rest of the truncations fig shows the difference of trand error from topt error a positive difference means that topt is more accurate the errors here refer to distance from the pf estimate the figure shows that with large c topt is more accurate than trand thus the effect of the truncation ordering on the accuracy of the sequential truncation approximation is more pronounced when there is one truncation that truncates much more than the rest this justifies our greedy approach and the result of lemma for ordering the truncations the approximation of the posterior covariance matrix is tested by studying the normalized estimation error squared nees values ch nees values are shown by fig if the covariance matrix is correct the expected value of nees is the dimensionality ch vb gets large nees values when is large which indicates that vb underestimates the covariance matrix apart from mvncdf topt and trand give nees values closest to so the sequential truncation provides a more accurate covariance matrix approximation than vb inference in this section the proposed filter stf is compared with filters using numerical simulations of a trajectory the state model is a random walk with process noise covariance q diag q m q m m where q is a parameter the compared methods are a bootstraptype pf stvbf t variational bayes filter tvbf and kalman filter kf with measurement validation gating ch that discards the measurement components whose normalized innovation 
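squared exceeds a chi-square gate.

Both diagnostics used in this section are simple to state in code. A minimal sketch, in which the variable names and the 0.99 gate probability are illustrative choices rather than values taken from the text:

```python
import numpy as np
from scipy.stats import chi2

def nees(x_true, x_est, P):
    """Normalized estimation error squared; its expected value equals the state
    dimension when the reported covariance P is consistent."""
    e = x_true - x_est
    return float(e @ np.linalg.solve(P, e))

def validation_gate(y, y_pred, S, prob=0.99):
    """Per-component measurement validation gating for the baseline KF: keep
    component i only if its normalized innovation squared is below the
    chi-square(1) quantile."""
    nis = (y - y_pred) ** 2 / np.diag(S)
    return nis < chi2.ppf(prob, df=1)
```

In the experiments, the baseline filter discards any measurement component whose normalized innovation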
squared is larger than the distribution s quantile the used kf parameters are the mean and variance of the used skew and the tvbf parameters are obtained by matching the degrees of freedom with that of the skew and computing the maximum likelihood location and scale parameters for a set of numbers generated from the skew the results are based on monte carlo replications mvncdf vb trand topt nees xk t xk fig with large values topt outperforms trand mvncdf is accurate but computationally heavy average nees trand error topt error m error from pf of pf error where and are the filter s output mean and covariance matrix and xk is the true state the distributions of the filters fig topt s nees is closer to the optimal value than that of vb so sequential truncation gives a more accurate posterior covariance matrix corresponds to the symmetric hence the comparison methods become identical pf kf tvbf stvbf stf rmse difference kf tvbf stvbf median rmse m median rmse m np nvb fig stf converges in five iterations the required number of pf particles can be more than left q right q fig stf outperforms the comparison methods with noise rmse differences from stf s rmse per cent of the stf s rmse the differences increase as is increased upper q lower q the used state evolution model is the velocity model for both the user position lk and the receiver clock error bk r used in section iv thus the filter t state is xk lkt bk and the state evolution model is dk xk wk dk where q dk wk n q dk q dk sb dk sf dk sf dk sf dk sf dk and dk is the time difference of the measurements the used parameter values are q m sb ms and sf the initial prior is a normal distribution with mean given by the method with the first measurement and a large covariance matrix the measurement model is the same pseudorange model that is used in the simulations of section iv yk i ksi k xk k xk ek i median rmse m t ests with real data pseudorange positioning two gnss global navigation satellite system positioning data sets were collected in central london uk to test the filters performance in a challenging satellite positioning scenario with numerous measurements the data include toa based pseudorange measurements from gps global positioning system satellites each set contains a trajectory that was collected by car using a gnss receiver the lengths of the tracks are about km and km the durations are about an hour for each and measurements are received at about onesecond intervals the first track is used for fitting the filter parameters while the second track is used for studying the filters positioning errors a ground truth was measured using an applanix system that improves the gnss solution with tactical grade inertial measurement units the gps satellites locations were obtained from the broadcast ephemerides provided by the international gnss service the algorithms are computed with m atlab np nvb fig illustrates the filter iterations convergence when the measurement noise components ek i in are generated independently from the univariate skew the figure shows that the proposed stf converges within five vb iterations and outperforms the other filters already with two vb iterations except for pf that is the solution furthermore fig shows that stf s converged state is close to the pf s converged state in rmse and pf can require as many as particles to outperform stf stf also converges faster than stvbf when the process noise variance parameter q is large fig shows the distributions of the rmse differences from the stf s rmse as percentages 
of the stf s rmse stf and tvbf use five vb iterations and stvbf uses vb iterations stf clearly has the smallest rmse when unlike stvbf the new stf improves accuracy even with small q which can be explained by the improved covariance matrix approximation the proposed smoother is also tested with measurements generated from the compared smoothers are the proposed smoother sts variational bayes smoother stvbs t variational bayes smoother tvbs and the rtss with measurement validation gating fig shows that sts has lower rmse than the smoothers based on symmetric distributions furthermore sts s vb iteration converges in five iterations or less so it is faster than stvbs rtss tvbs stvbs sts median rmse m rmse difference vb vb fig five sts iterations give the converged state s rmse left q right q normal t rmse m rmse m p ek ekf tvbf stf nvb ek m fig measurement error distributions fitted to the real gnss data for normal t and error models the modes are fixed to zero normal m m empirical cdf f ilter parameters for real gnss data t m m ekf tvbf stf error m where si k is the position of the ith satellite at the time of transmission the measurement model is linearized with respect to xk at each prior mean using the first order taylor series approximation the compared filters are based on three different models for the measurement noise ek where ek i st ek i t ek i n fig rmse of horizontal left and vertical right position for real gnss data as a function of the number of vb iterations table iv m m m nvb empirical cdf the model is the basis for stf and stvbf the t model is the basis for tvbf and the normal model is the basis for the extended kf ekf with measurement validation gating the pseudoranges are unbiased in the case so the location parameters are fixed to furthermore the degrees of freedom are fixed to to compromise between outlier robustness and performance based on inlier measurements the deviation parameter of the normal model was then fitted to the data using the algorithm ch and the parameter of the t model as well as the parameters and of the model were fitted with the metropolis algorithm ch the location parameter was obtained by numerically finding the point that sets the mode of the noise distribution to zero these three error distributions parameters are given in table iv and the pdfs are plotted in fig fig shows the filter rmses as a function of the number of vb iterations both stf and tvbf converge within five vb iterations the empirical cdf graphs of the user position errors with five vb iterations are shown in fig and the rmses as well as the relative running times are given in table the results show that modelling the skewness improves the positioning accuracy and is important especially for the accuracy in vertical direction this can be explained by the sensitivity of the vertical direction to large measurement errors due to bad measurement geometry the accuracy in the vertical direction is low even with measurements so correct downweighting of erroneous altitude information requires careful modelling of the noise distribution s tails the computational burden of our stf implementation with five vb iterations is about three times that of tvbf but fig shows that two stf iterations would already be enough to match tvbf s average rmse error m fig empirical error cdfs for the real gnss data for the horizontal error left and the vertical error right band indoor positioning another application for stf is indoor localization using the toa measurements of band uwb radios we collected five 
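data sets in two indoor environments.

Both the GNSS experiment above and the UWB experiment below rely on linearizing a range-type measurement model at the prior mean with a first-order Taylor expansion. A minimal sketch of that linearization for the pseudorange model with a receiver clock bias; the state ordering [position, bias], the function name, and the example satellite coordinates are our own illustrative choices, and the UWB range model is the same without the bias column.

```python
import numpy as np

def pseudorange_jacobian(p, b, sat_positions):
    """Predicted pseudoranges y_i = ||s_i - p|| + b and the Jacobian of the
    model with respect to the state [p, b], evaluated at (p, b)."""
    diffs = p[None, :] - sat_positions              # rows contain p - s_i
    ranges = np.linalg.norm(diffs, axis=1)
    y_pred = ranges + b
    H = np.hstack([diffs / ranges[:, None],         # unit line-of-sight vectors
                   np.ones((len(sat_positions), 1))])   # derivative w.r.t. the clock bias
    return y_pred, H

sats = np.array([[2.0e7, 0.0, 1.0e7], [0.0, 2.2e7, 5.0e6], [1.5e7, 1.5e7, 8.0e6]])
y_hat, H = pseudorange_jacobian(np.zeros(3), 0.0, sats)
```

As noted earlier, the linearization error is negligible in the satellite case because the satellites are far away. In more detail, we collected five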
test tracks in a laboratory environment with a optical reference positioning and three test tracks in a real university campus indoor environment the measurement equipment is a spoonphone smartphone with android operating system and uwb channel pulse radio mhz mhz bandwidth and six bespoon uwb tags the system uses toa ranging thus no clock synchronization is required the uwb measurement update and localization error computation is done with hz frequency the algorithms are computed with m atlab the novelty of this article compared to our previous article is the stf algorithm here only toa measurements are used and the state evolution model is the random walk xk wk wk n blockdiag q where the state xk is position in coordinates and q s and qa s are the process noise parameters the initial position is assumed to be known scaled by the speed of light the toa gives a direct measurement of the distance between the uwb beacon and the user thus the measurement model is yk i kbi xk k ek i ny where yk r is the distance vector bi is the position of the ith uwb beacon and ek is measurement noise the measurement function is linearized at each prior mean we test the three alternative models for the measurement noise the estimation algorithm for the skewt model is the stf in table iii the filter for the model is the tvbf and for the normally distributed high accuracy reference measurements are provided through the use of the vicon tracking system courtesy of the uas technologies lab artificial intelligence and integrated computer systems division aiics at the department of computer and information science ida http rmsehorizontal m rmsevertical m running time ekf tvbf rmse m table v t he rmse s and relative running times for real gnss data stf ekf tvbf stf histogram distr normal t p ek nvb fig filter rmses for uwb indoor positioning as a function of the number of vb iterations table vii t he average rmse s in meters for uwb indoor positioning route laboratory campus campus campus ek m ekf tvbf stf fig the real uwb ranging s measurement error histogram distribution and the distributions fitted to the data for normal t and error models noise the ekf with measurement validation gating the degrees of freedom parameters and are fixed to and the parameters are optimized to maximize the likelihood of the laboratory measurements the maximum likelihood parameter values are given in table vi and the pdfs of the fitted distributions are compared with the error histogram in fig the laboratory data are used for both parameter calibration and positioning tests to obtain a fair comparison of the optimal of each filtering algorithm this eliminates the effect of possible differences in calibration and positioning data to evaluate the performances at an independent data set we also measured three test tracks in the campus of tampere university of technology with a rough reference position based on interpolation between timestamped locations table vi distribution families which include the multivariate canonical fundamental skew cfust and the multivariate unified skew a comprehensive review on the different variants of the mvst is given in the mvst variant used in this article is based on the cfust discussed in and it is the most general variant of the mvst in this variant the parameter matrix r rnz is a square matrix and rnz is an arbitrary matrix the pdf is mvst z r t z t z l nz where l inz r t z nz det z z t z t he maximum likelihood parameter values of uwb positioning based on the laboratory data set m m m t m m normal m 
m the filters also have the number of vb iterations nvb as a parameter fig shows the filters rmses averaged over all the data sets with different values of nvb five iterations is sufficient for stf in uwb indoor positioning but stf matches tvbf s accuracy already with two vb iterations the rmses of the filters are given in table vii tvbf and stf use five vb iterations and show significantly lower rmse that ekf with measurement validation gating furthermore the proposed stf has a lower rmse than the tvbf the campus track was measured avoiding condition so the difference between stf and tvbf is small at this track vi e xtension to mvst the skew has several multivariate versions in the pdf of the multivariate skew mvst involves the cdf of a univariate while the definition of skew given in involves the cdf of a multivariate these versions of mvst are special cases of more general multivariate is the pdf of the nz and t z its cdf and q z z z t the inference algorithms proposed in this paper can be extended to cover the case where the elements of the measurement noise vector are not statistically independent but jointly multivariate when the measurement noise follows a mvst ek mvst r the filtering and smoothing algorithms presented in tables iii and ii apply with slight modifications at the core of this convenient extension is the fact that the mvst can be represented by a similar hierarchical model as in however the shape matrices and r are not required to be diagonal and the matrix has the form iny where is a scalar with the prior g notice that when admits a small value all the measurement components can potentially be outliers simultaneously unlike with the independent univariate components model this difference is illustrated by the pdf contour plots in fig this coincides with the covariance matrix update of smoother s backward recursion the fisher information matrix for the multivariate measurement noise of is the specific modification required by mvst measurement noise to the sts algorithm in table ii is replacing the line by iny tr i ny tr vii p erformance b ound lower bound the bayesian lower bound crlb b is a lower bound for the mse matrix of the state estimator of the random variable x using the observations y t e x x p x y in the sense that the matrix difference m b is positive semidefinite for any state estimator ch the regularity conditions necessary for the to hold ch are the integrability of the first two partial derivatives of the joint distribution p k k for an asymptotically unbiased estimator these conditions are satisfied by the likelihood and the normal prior distribution even though they do not hold for p k k k k of the hierarchical model used in the proposed variational estimator due to restriction of k to the positive orthant the filtering crlb for the model follows the recursion at q e p xk e h p r t r i ny t t r rr i xk er r ert with r mvst iny r a is a matrix such that a a t a t a a t and q er t r in ny r y r iny t t r rr t u iny ny similarly the specific modification required by mvst measurement noise to the stf algorithm in table iii is replacing the line by where a b fig pdf of bivariate measurement noise from a independent univariate components model with r and b mvst model with r t i x c t r e r c r r t r where is the gradient with respect to u the derivation is given in appendix b the evaluation of the expectation in is challenging with measurements due to the requirement to evaluate the cdf of the multivariate tdistribution and its partial derivatives by the 
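available numerical routines.

The equivalence invoked below, between an information-form update of the form inv(inv(P) + C' J C) and a Kalman-type covariance update with an effective measurement covariance equal to inv(J), is easy to verify numerically. A small sketch with randomly generated matrices standing in for the CRLB quantities; J plays the role of the expected measurement-noise Fisher information, whose exact evaluation is the hard part discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

P = random_spd(4)               # prior covariance term of the recursion
C = rng.normal(size=(3, 4))     # measurement matrix
J = random_spd(3)               # stand-in for the expected Fisher information of the noise

info_form = np.linalg.inv(np.linalg.inv(P) + C.T @ J @ C)

R_tilde = np.linalg.inv(J)      # effective measurement noise covariance
S = C @ P @ C.T + R_tilde
kf_form = P - P @ C.T @ np.linalg.solve(S, C @ P)

assert np.allclose(info_form, kf_form)   # the Woodbury matrix identity
```

Returning to the structure of the bound: by the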
woodbury matrix identity the recursion is equivalent to the covariance matrix update of the kalman filter with the measurement noise covariance r e r t in the model the measurement noise components are independently univariate in this case the fisher information is obtained by applying to each conditionally independent measurement component and summing the resulting formula matches with the matrix e now being a diagonal matrix with the diagonal entries eii e p ri i i ri q where ri st is a univariate distributed random variable rii and x t x x substituted into this formula matches the fisher information formula obtained for the univariate skew in in this case only integrals with respect to one scalar variable are to be evaluated numerically simulation where i ek is the fisher information matrix of the measurement noise distribution furthermore the smoothing crlb for the model follows the recursion we study the crlb in of a linear model with measurement noise by generating realizations of the model gk gt k xk wk wk n q yk xk ek ek st h i where x is the state q is the process noise covariance matrix yk r is the measurement and where gk at t a q and are parameters that determine other parameters by the formulas q thus the measurement noise distribution is and has the variance we generate realizations of a process and compute the crlb and mse of the bootstrap pf with particles and the stf the crlb and the mses were computed for the first component of the state at the last time instant fig shows the crlb of the model the figure shows that increase in the skewness as well as can decrease the crlb significantly which suggests that a nonlinear filter can be significantly better than the kf which gives mse for all and fig shows the mses of pf and stf as expected when and the pf s mse approaches the crlb stf is only slightly worse than pf the figures also show that although the crlb becomes looser when the distribution becomes more skewed it correctly indicates that modeling the skewness still improves the filtering performance crlb fig the crlb of the time instant for the model with a fixed measurement noise variance skewness and decreases the crlb significantly fig mse the mses of pf left and stf s right are close to each other viii c onclusions we have proposed a novel approximate filter and smoother for linear models with and skewed measurement noise distribution and derived the rao lower bounds for the filtering and smoothing estimators the algorithms are based on the variational bayes approximation where some posterior independence approximations are removed from the earlier versions of the algorithms to avoid significant underestimation of the posterior covariance matrix removal of independence approximations is enabled by the sequential truncation algorithm for approximating the mean and covariance matrix of truncated multivariate normal distribution an optimal processing sequence is given for the sequential truncation simulations and tests with gnss outdoor positioning and uwb indoor positioning data show that the proposed algorithms outperform the methods r eferences gustafsson and gunnarsson mobile positioning using wireless networks possibilities and fundamental limitations based on available wireless network measurements ieee signal processing magazine vol no pp july chen yang liao and liao mobile location estimator in a rough wireless environment using extended imm and data fusion ieee transactions on vehicular technology vol no pp march kok hol and indoor positioning using 
ultrawideband and inertial measurements ieee transactions on vehicular technology vol no kaemarungsi and krishnamurthy analysis of wlan s received signal strength indication for indoor location fingerprinting pervasive and mobile computing vol no pp special issue vehicular sensor networks and mobile sensing branco and dey a general class of multivariate skewelliptical distributions journal of multivariate analysis vol no pp october azzalini and capitanio distributions generated by perturbation of symmetry with emphasis on a multivariate skew journal of the royal statistical society series b statistical methodology vol no pp gupta multivariate skew statistics vol no pp nurminen ardeshiri and gustafsson a toa positioning filter based on a measurement noise model in international conference on indoor positioning and indoor navigation ipin october pp and pyne bayesian inference for finite mixtures of univariate and multivariate and distributions biostatistics vol no pp counsell lehtonen and stein modelling psychiatric measures using distributions european psychiatry vol no pp eling fitting insurance claims to skewed distributions are the and good models insurance mathematics and economics vol no pp marchenko multivariate distributions in econometrics and environmetrics dissertation texas a m university december doucet godsill and andrieu on sequential monte carlo sampling methods for bayesian filtering statistics and computing vol no pp july naveau genton and shen a skewed kalman filter journal of multivariate analysis vol pp kim ryu mallick and genton mixtures of skewed kalman filters journal of multivariate analysis vol pp rezaie and eidsvik kalman filter variants in the closed skew normal setting computational statistics and data analysis vol pp alspach and sorenson nonlinear bayesian estimation using gaussian sum approximations ieee transactions on automatic control vol no pp and fortmann tracking and data association ser mathematics in science and engineering series academic press williams and maybeck hypothesis control techniques for multiple hypothesis tracking mathematical and computer modelling vol no pp may nurminen ardeshiri piche and gustafsson robust inference for models with skewed measurement noise ieee signal processing letters vol no pp november bishop pattern recognition and machine learning springer wand ormerod padoan and mean field variational bayes for elaborate distributions bayesian analysis vol no pp lee and mclachlan finite mixtures of canonical fundamental skew the unification of the restricted and unrestricted skew models statistics and computing no pp cover and thomas elements of information theory john wiley and sons tzikas likas and galatsanos the variational approximation for bayesian inference ieee signal processing magazine vol no pp beal variational algorithms for approximate bayesian inference dissertation gatsby computational neuroscience unit university college london tallis the moment generating function of the truncated multinormal distribution journal of the royal statistical society series b methodological vol no pp genz numerical computation of rectangular bivariate and trivariate normal and t probabilities statistics and computing vol pp genz and bretz comparison of methods for the computation of multivariate t probabilities journal of computational and graphical statistics vol no pp and positioning filters with floor plan information in international conference on advances in mobile computing and multimedia momm new york ny usa acm pp simon and 
simon constrained kalman filtering via density function truncation for turbofan engine health estimation international journal of systems science vol no pp rauch striebel and tung maximum likelihood estimates of linear dynamic systems journal of the american institute of aeronautics and astronautics vol no pp august dow neilan and rizos the international gnss service in a changing landscape of global navigation satellite systems journal of geodesy vol no february li and kirubarajan estimation with applications to tracking and navigation theory algorithms and software john wiley sons and hartikainen recursive filtering and smoothing for nonlinear systems using the multivariate distribution in ieee international workshop on machine learning for signal processing mlsp september axelrad and brown gps navigation algorithms in global positioning system theory and applications i parkinson and spilker eds washington aiaa ch and hartikainen on gaussian optimal smoothing of nonlinear state space models ieee transactions on automatic control vol no pp august the spoonphone online available http sahu dey and branco a new class of multivariate skew distributions with applications to bayesian regression models canadian journal of statistics vol no pp and genton on fundamental skew distributions journal of multivariate analysis no pp multivariate extended distributions and related families metron international journal of statistics vol no pp van trees detection estimation and modulation theory part i detection estimation and linear modulation theory new york john wiley sons and filtering predictive and smoothing bounds for nonlinear dynamic systems automatica vol pp j di ciccio and monti inferential aspects of the skew tdistribution quaderni di statistica vol pp bayesian filtering and smoothing cambridge uk cambridge university press lower bound for linear filtering with measurements in international conference on information fusion fusion july pp a ppendix a d erivations for the smoother derivations for qxu eq gives log qxu k k log n x k x log n axl q e log n yk cxk r log uk k c log n x log n axl q k t e yk r yk ut k uk c log n x log n axl q k yk t yk ut k uk c log n x log n axl q k x log n yk axk r log n uk c o x log n o x q o a o xl log n o o o ul x log n yk c k r c k uk where is derived in section and k means that all the components of all uk are required to be nonnegative for each k up to the truncation of the u components qxu k k has thus the same form as the joint smoothing posterior of a linear model with the e a o process noise covariance state transitionh matrix a oo i q o e fk measurement model matrix c matrix q o e c and measurement noise covariance matrix r we denote the pdfs related to this model with pe it would be possible to compute the truncated multivariate normal posterior of the joint smoothing distribution pe k k and account for the truncation of k to k the positive orthant using the sequential truncation however this would be impractical with large k due to the large dimensionality k nx ny a feasible solution is to approximate each filtering distribution in the striebel smoother s rtss forward filtering step with a multivariate normal distribution by xk uk pe xk uk k n uk c xk uk for each k k where uk is the iverson bracket notation c is the normalization factor and x x epe ukk k and varpe ukk k are approximated using the sequential truncation given the multivariate normal approximations of the filtering posteriors pe xk uk k by lemma the backward recursion of the rtss gives 
multivariate normal approximations of the smoothing posteriors pe xk uk k the quantities required in the derivations of section are the expectations of the smoother posteriors eqxu xk eqxu uk and the covariance matrices varqxu uxkk and varqxu uk lemma let zk k be a process and yk k a measurement process such that n zk n q yk a known distribution with the standard markovianity assumptions then if the filtering posterior p zk k is a multivariate normal distribution for each k then for each k k zk k n zk because zk k n and because implies the statement holds by the induction argument derivations for eq gives k where gk gk at q gt k t t gk a a q and and are the mean and covariance matrix of the filtering posterior p zk k proof the proof is mostly similar to the proof of theorem first assume that k n for some k the joint conditional distribution of zk and is then p zk k p p zk k markovianity n azk q n zk at zk at q log e tr yk yk t qxu ny x t uk uk log ii ii c tr yk yk t h t t c c r t gk a q gt k we use this formula in p zk k zk k p k zk k p k markovianity zk gk log ii ii c ny x ii log ii ii c where yk yk t h ti c c ut t therefore ny y k ii g ii gk at q gt k n gk zk t t gk gt k gk a q gk z gk zk gk at q gt k so p zk k zk gk gk at q gt k ny x e log p yk uk log p uk qxu log p qk thus k in the model with independent univariate measurement noise components the diagonal entries of are separate random variables as given in therefore at at q zk gk k x so by the conditioning rule of the multivariate normal distribution p zk k zk gk in the derivations of section is required is a diagonal matrix with the diagonal elements ii ii in the model with multivariate measurement noise is of the form there is a scalar random variable and there is just one parameter as given in therefore log e tr yk yk t qxu e tr uk ut log c k qxu log where is given in thus k g y so the required expectation is iny a ppendix b d erivation for the f isher information of mvst consider the multivariate measurement model mvst cx r where c rny r rny rny and the logarithm of the pdf of is log p log det log t r iny q t log t r l ny where r y cx is a function of x and y r l iny and t and t denote the pdf and cdf of the scaled multivariate with degrees of freedom a is a root matrix such that a a t a a and t a t the hessian matrix of the term log t r iny is derived in and it is log t r iny t t c t c t rt r i ny rrt r t r c t r rrt c q t the term log t r l ny can be differentiated twice using the chain rule log f df df d f dx f dx f dx dx which gives q t log t r l ny q t t r l ny g r q t t t dr pr pr dr t r l ny where the function g is antisymmetric because it is the second derivative of a function that is antisymmetric up to an additive constant pr d du t u l ny t r r t r rregularity conditions given in ch the integral g r t r dy exists because the terms of g are products of positive powers of rational expressions where the denominator is of a higher degree than the nominator and q derivat tives of t u ny evaluated at r which is a bounded continuous function of y the integral z q t t t dr pr pr dr t r l ny det q t also exists because t r l ny and pr are bounded and continuous and dr is a positive power of a rational expression where the denominator is of a higher degree than the nominator similar arguments show the integrability of the first and second derivative of the likelihood p which guarantees that the regularity conditions of the crlb are satisfied thus the expectation of is q t r r l ny e log t p z z t r i dy g r t r iny ny det det q 
t t r l ny drt prt pr dr dy z z r t r iny dr t r iny q t t t t r l ny dr pr pr dr dr q t t e t r l ny dr pr pr dr p q t d dr dx r q t t r rrt iny r because the function g is antisymmetric g r p r dy for any symmetric function p for which the integral exists we now outline the proof of integrability of certain functions to show that the crlb exists and fulfils the t where and mvst iny because z mvst r implies az mvst arat this gives q t r r l ny e log t p h i t e e t c t e r r rr c p where er t r r q t r l d du t u l and t r iny dy ny iny ny r r t t r rr t t r where l iny thus the fisher information for the measurement model mvst cx r is h i i x e dx log p p h i t er r ert c t e iny r rrt r p t where mvst iny er is defined in r and r
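To make the filtering comparison of the simulation section concrete, the sketch below simulates a scalar random-walk state observed through skew-t measurement noise (generated with the standard skew-normal/gamma scale-mixture construction) and compares an ordinary Kalman filter against a bootstrap particle filter that uses the skew-t likelihood. The dimensions, parameter values (q, r, nu, alpha), particle count, and the rough variance tuning of the Kalman filter are illustrative assumptions; this is a minimal sketch, not the authors' exact simulation setup and not the proposed variational skew-t filter itself.

```python
# Minimal sketch: linear random-walk state observed through skew-t noise,
# Kalman filter vs. bootstrap particle filter.  All parameter values are
# assumed for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T, N = 100, 2000            # time steps, particles
q, r = 1.0, 1.0             # process / measurement scale (assumed)
nu, alpha = 4.0, 3.0        # skew-t degrees of freedom and shape (assumed)
delta = alpha / np.sqrt(1.0 + alpha**2)

def rvs_skew_t(size):
    """Skew-t draws via the scale mixture: skew-normal / sqrt(Gamma)."""
    u0 = np.abs(rng.standard_normal(size))
    u1 = rng.standard_normal(size)
    sn = delta * u0 + np.sqrt(1.0 - delta**2) * u1   # skew-normal(0, 1, alpha)
    w = rng.gamma(nu / 2.0, 2.0 / nu, size)          # Gamma(nu/2, rate nu/2)
    return r * sn / np.sqrt(w)

def logpdf_skew_t(x):
    """Azzalini-type univariate skew-t log-density, location 0, scale r."""
    z = x / r
    core = np.log(2.0) + stats.t.logpdf(z, nu) - np.log(r)
    tail = stats.t.logcdf(alpha * z * np.sqrt((nu + 1.0) / (nu + z**2)), nu + 1.0)
    return core + tail

# simulate the model
x = np.cumsum(rng.normal(0.0, q, T))
y = x + rvs_skew_t(T)

# Kalman filter with a crude Gaussian proxy for the measurement noise
# (variance r^2 * nu/(nu-2); this ignores the skewness contribution)
kf_m, kf_P, kf_est = 0.0, 10.0, []
R_eff = r**2 * nu / (nu - 2.0)
for yk in y:
    kf_P += q**2
    K = kf_P / (kf_P + R_eff)
    kf_m += K * (yk - kf_m)
    kf_P *= (1.0 - K)
    kf_est.append(kf_m)

# bootstrap particle filter with the skew-t likelihood
parts = rng.normal(0.0, 10.0, N)
pf_est = []
for yk in y:
    parts = parts + rng.normal(0.0, q, N)
    logw = logpdf_skew_t(yk - parts)
    w = np.exp(logw - logw.max()); w /= w.sum()
    pf_est.append(np.sum(w * parts))
    parts = rng.choice(parts, N, p=w)                # multinomial resampling

print("KF RMSE:", np.sqrt(np.mean((np.array(kf_est) - x) ** 2)))
print("PF RMSE:", np.sqrt(np.mean((np.array(pf_est) - x) ** 2)))
```

On typical runs the particle filter attains a clearly lower RMSE, in line with the observation above that modeling the skewness improves filtering performance.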
3
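The appendix above relies on a sequential truncation step for approximating the mean and covariance of a multivariate normal restricted to the positive orthant. The following is a minimal sketch of one common variant of that idea (truncate one coordinate at a time using the univariate truncated-normal moment formulas, then re-fit a joint Gaussian by moment matching); the processing order is exposed as a parameter because the paper notes that the sequence matters, but the update below is illustrative and not necessarily the paper's exact recursion.

```python
# Sequential truncation sketch: approximate mean/covariance of N(m, P)
# restricted to the positive orthant, one coordinate at a time.
import numpy as np
from scipy.stats import norm

def truncate_coordinate(m, P, i):
    """Impose x_i >= 0 on N(m, P) and return moment-matched (m, P)."""
    mi, vi = m[i], P[i, i]
    s = np.sqrt(vi)
    a = -mi / s                               # standardized truncation point
    lam = norm.pdf(a) / norm.sf(a)            # inverse Mills ratio
    mi_new = mi + s * lam                     # truncated-normal mean
    vi_new = vi * (1.0 - lam * (lam - a))     # truncated-normal variance
    g = P[:, i] / vi                          # Gaussian conditioning gain
    m_new = m + g * (mi_new - mi)
    P_new = P + np.outer(g, g) * (vi_new - vi)
    return m_new, P_new

def sequential_truncation(m, P, order=None):
    m, P = m.copy(), P.copy()
    for i in (order if order is not None else range(len(m))):
        m, P = truncate_coordinate(m, P, i)
    return m, P

# rough sanity comparison against rejection sampling (assumed test values)
rng = np.random.default_rng(1)
m0 = np.array([0.3, -0.2])
P0 = np.array([[1.0, 0.6], [0.6, 1.5]])
m_st, P_st = sequential_truncation(m0, P0)

z = rng.multivariate_normal(m0, P0, size=200_000)
z = z[(z >= 0).all(axis=1)]
print("sequential truncation mean:", m_st)
print("Monte Carlo mean          :", z.mean(axis=0))
```

The rejection-sampling check at the end is only a rough sanity comparison; the one-pass moment-matched update is an approximation, which is why the choice of processing order discussed in the paper can matter.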
based consensus analysis of networks with link failures may xue lin yuanshi zheng and long wang abstract in this paper a system is presented which is formulated in terms of the delta operator the proposed system can unify and systems in a network in practice the communication among agents is acted upon by various factors the communication network among faulty agents may cause link failures which is modeled by randomly switching graphs first we show that the delta representation of system reaches consensus in mean in probability and almost surely if the expected graph is strongly connected the results induce that the system with random networks can also reach consensus in the same sense second the influence of faulty agents on consensus value is quantified under original network by using matrix perturbation theory the error bound is also presented in this paper finally a simulation example is provided to demonstrate the effectiveness of our theoretical results index terms consensus systems delta operator link failures error bound i ntroduction distributed cooperative control problem of systems has captured great attention this interest is motivated by its diverse applications in various fields from biology and sociology this work was supported by nsfc grant nos and the fundamental research funds for the central universities grant no and the young talent fund of university association for science and technology in shaanxi of china grant no corresponding author long wang lin and zheng are with the center for complex systems school of engineering xidian university xi an china xuelinxd wang is with center for systems and control college of engineering peking university beijing china email longwang may draft to control engineering and computer science in order to finish different cooperative tasks a variety of protocols have been established for systems lots of criteria concerning coordination have been provided etc as a fundamental problem of coordination consensus characterizes a phenomenon that multiple agents achieve a common decision or agreement for consensus problem it has been studied for a long time in management science the rise of consensus problem in control filed is influenced by vicsek model which is a model of multiple agents and each agent updates its state by using average of its own state as well as its neighbors the theoretical analysis of consensus for vicsek model was finished in and then in the authors proposed classical consensus protocols for systems and provided several sufficient conditions to solve the consensus problem inspired by these results many researchers devoted themselves to studying consensus problems for a system it can be analyzed from two perspectives one is dynamic model and the other is interaction network from the viewpoint of dynamic model the related researches include dynamics dynamics hybrid dynamics switched dynamics heterogeneous dynamics etc from the viewpoint of interaction network the related researches have fixed networks switching networks antagonistic networks random networks and so on with the development of digital controller in many cases a system only obtains input signal at the discrete sampling instants according to actual factor researchers investigated the sampled control and the control for systems it is well known that some systems are obtained directly from systems based on sampled control which are described by the shift operator however some applications may possess higher sampling rate which will lead to problems when the shift 
operator is applied to represent the system and the shift operator can t show the intuitive connection between the system and the system to overcome these limitations goodwin et al used the delta operator to represent the dynamics of sampled data system compared with shift operator approach delta operator has several advantages such as superior finite world length coefficient representation and convergence to its counterpart as the sampling period tends to zero it is worth pointing out that the delta operator makes may draft the smooth transition from the representation to the underlying system as sampled period tends to zero therefore it can be used to unify and systems due to these advantages of the delta operator there have existed many related research results inspired by these researches we apply the delta operator to describe the system with sampled data a representation is proposed for systems it is well known that the communication may be destroyed in realistic network due to link failures node failures etc thus the consensus of systems with random networks was also studied in based on the delta operator we consider the consensus of systems with random networks in this paper we assume that there exist faulty agents that only receive information or send information which lead to link failures of the network the original network without faulty agents is an undirected connected graph this phenomenon often occurs in practice for instance the receiver emitter of the agent is failure which leads to the link failure of the communication network different from we consider the consensus of system with directed random networks due to the variation of networks however the consensus value is changed therefore we analyze the influence of faulty agents on the original network the main contribution of this paper is twofold first we show that the delta representation of system reaches consensus in different sense in mean in probability and almost surely if the expected graph is strongly connected based on the delta operator we get that the consensus conditions are also appropriate for the system with random networks second we analyze the influence of faulty agents on the consensus value under original network by using matrix perturbation theory the error bound between consensus values under network with link failures and original network is presented the structure of this paper is given as follows in section based on the delta operator a system is established in section consensus in different sense is studied in section we provide the error bound caused by faulty agents in section a simulation example is presented finally we give a short conclusion in section notation let r and denote the column vector of all ones the column vector of all zeros the set of real numbers and the n n real matrices respectively the ith eigenvalue of matrix a can be denoted as a k denotes the standard euclidean norm kx t may draft p xt t x t we write kx t k xt t x t for the vector the induced matrix norm p at a we write kak at a b bij is max kaxk b if all bij we say that b is a nonnegative matrix if b moreover if all its row sums are b is said to be a row stochastic matrix for a given vector or matrix a at denotes its transpose let max dii h tk max xi tk h tk min xi tk a max a i and max a denotes the group inverse of matrix a ii p reliminaries graph theory the communication relationship between agents is modeled as a graph g v e a with vertex set v edge set e eij v v and nonnegative matrix a aij if ei agents i and j are 
adjacent and aij the set of neighbors of agent i is denoted by ni the degree matrix d dij is a diagonal matrix p with dii aij the laplacian matrix of the graph is defined as l lij d a with i p aij and lij the eigenvalues of l can be denoted as l l lii l graph g is said to be strongly connected if there exists a path between any two distinct vertices a path that connects vi and vj in the directed graph g is a sequence of distinct vertices vim where vi vim vj and vir e r m when g is an undirected connected graph then l is positive and has a simple zero eigenvalue moreover there exists min t l for any rn throughout this paper we always assume that g is a undirected connected graph if there does not exist the faulty agent agent not be able to receive or send information b problem statement in this paper we consider a system which consists of n agents the continuoustime dynamics of the ith agent is described by t ui t i n where xi t r and ui t r are the state and control input of ith agent respectively may draft for system a representation can be obtained by using a traditional shift operator it is given by xi tk h xi tk hui tk i n where h is the sampling period it is worth noting that as sampling period h we lose all information about the underlying system moreover it is difficult to describe the next value of xi tk this difficulty can be avoided using the delta operator introduced in the delta operator is defined as follows t h t x t h x t h h then by using the delta operator the representation of system is described by tk ui tk it can be seen that tk tk as h we know that tk tk when h hence there is a smooth transition from tk to tk as h which ensures that discretetime system converges to system as h p for system we apply the classic consensus protocol ui t aij xj t xi t by using hold the protocol is given as x aij xj tk xi tk t tk tk h ui tk denote x t t t xn t t system with protocol can be represented by tk tk based on above discussion and analysis we know that system converges to the system t t as h in this paper the original network is undirected connected we know that each agent is influenced by the information of its neighbours however there may exist the agent that may draft is unable to receive or send information in the network we call this type of agent as faulty agent in this paper without loss of generality we assume that agents and are faulty agent while other agents are normal that is they can always receive and send information in the network two scenarios are considered scenario i four cases are considered only agent can t receive information only agent can t receive information agents and can not receive information simultaneously all agents are normal networks and correspond to the four cases and respectively we assume that gi randomly switches among distinct networks gi networks and correspond to the occurrence probabilities and respectively moreover scenario ii four cases are considered only agent can t send information only agent can t send information agents and can not send information simultaneously all agents are normal networks and correspond to the four cases and respectively we assume that randomly switches among distinct networks networks and correspond to the occurrence probabilities and respectively moreover system under scenario i or ii can be written as tk x tk where ltk is the laplacian matrix at time point tk note that the graph gtk is invariant during the time interval corresponding adjacent matrix at time point tk is atk throughout this paper the sampling 
period satisfies when h two main objectives are considered in this paper first the consensus of system is considered second the error bound between consensus values of system and system is presented remark for simplicity we focus on two faulty agents however the analytical methods concerning error bound in this paper can be extended to the scenario of more than two faulty agents which is left to the interested readers as an exercise definition system reaches consensus a in mean if for any rn it holds that lim e x t v tk may draft b in probability if and any rn it holds that lim p h tk h tk tk c almost surely if for any rn it holds that p lim h tk h tk tk definition let w denote the transition matrix of markov chain then the markov chain is called the regular chain if there exist k such that w k has only positive elements p lemma if t is the transition matrix of a regular chain then t k t where a i t lemma if c and are ergodic chains with transition matrices t and t e and limiting probability vectors s and respectively then s i where and a i t lemma the property of delta operator for any time function x tk and y tk can be represented as x tk y tk x tk y tk x tk y tk x tk y tk lemma assume that the sampling period h dmax then system can reach average consensus if the graph is undirected connected proof let tk x tk n x due to one has tk tk consider v tk tk k as a lyapunov function by lemma it holds that tk t tk tk t tk tk t tk tk t tk hlt l tk t tk tk since the graph is undirected connected which implies that the laplacian matrix l is positive hence the eigenvalues of are repsented by l l from gersgorin disk theorem we get l then l min t dmax owing to l then tk l l tk k may draft where l l min l l l l due to l and l this proves that tk therefore tk is converge to that is system can achieve average consensus asymptotically remark from lemma there exists tk v tk h v tk h which implies that v tk h v tk t h lim tk lim it can be seen that tk can be reduced to the t as h note that system converges to system as h consequently system reaches average consensus under undirected connected graph iii c onsensus analysis in this section it is shown that system reaches consensus despite the existence of faulty agents supposed that scenario i and scenario ii have the same expression pattern for the network hence the following results can be viewed as the unified conclusions of system under scenarios i and ii theorem assume that the sampling period h dmax then system reaches consensus in mean if the expected graph is strongly connected furthermore lim e x tk t x tk where t lim e i hltk t lim w k e h vectors and are left eigenvectors of the matrices and respectively such that and t proof as pointed out in the solution to system is x t i hl h x due to the invariance of graph gtk during the time interval it can be get that x tk i may draft hltk h x tk then lim e x tk lim e i hltk k x lim i i i i k x lim x according to h dmax we have i hli i hdi hai with positive diagonal elements since and it is immediate that e i hltk is also nonnegative matrix with positive diagonal elements it follows that e i hltk where is a positive diagonal matrix since the expected graph is strongly connected matrix e i hltk is a nonnegative irreducible with positive diagonal elements moreover it is easy to verify that e i hltk then by disc theorem one has e i hltk hence by theorem it can be deduced that e i hltk is an algebraically simple eigenvalue consequently matrix is a primitive by virtue of theorem in we obtain that lim e x tk t x 
hence system tk reaches consensus in mean next we give the consensus value of system as h by proposition in it follows that x tk i hltk h x tk as h hence lim e x tk tk lim e i hltk k x lim k x lim x let ltk dmax i then i for where atk hence e that is matrix e is a nonnegative irreducible with positive diagonal elements similar to the previous discussion we have lim e x tk t x as h tk theorem assume that the sampling period h dmax then system reaches consensus in probability if the expected graph is strongly connected may draft proof since the expected graph is strongly connected by theorem it follows that lim e h tk tk n p wij xj tk since matrix h tk let i hltk wij we have xi tk i hltk is a row stochastic matrix we get h tk h tk and h tk h tk it can be verified that the h tk h tk is nonincreasing let tk hence h h h tk h tk which yields e h h e h tk h tk h h lim e h h hence tk as a result of chebyshevs inequality for any it follows that p h tk h tk e h tk h tk therefore lim p h tk h tk tk it is shown from theorem that lim i h matrix is also a row stochastic matrix similar to the above proof it can be proved that also holds as h theorem assume that the sampling period h dmax then system reaches consensus almost surely if the expected graph is strongly connected proof it follows from theorem that h tk h tk in probability by theorem in there exists a subsequence of h tk h tk that converges almost surely to hence for any there exists tl such that for tl h h almost surely since h tk h tk is nonincreasing it holds that h h h h almost surely therefore for any tk there holds h tk h tk almost surely this implies that system reaches consensus almost surely remark as pointed out in theorem one has x tk x tk as h since the network is invariant during time interval partial state of system can be represented by x tk x tk it is shown from theorem that the sequence x tk achieves consensus in mean then using we can conclude that x t achieves consensus in mean therefore system with random networks reaches consensus in mean if the expected graph is strongly connected may draft this indicates that the consensus result of system with random networks reduces to the consensus result of system under random networks as h moreover theorems and are also appropriate for the system as h iv e rror analysis in this section we consider the error bound on the consensus value lim e x tk and the tk consensus value under original network n x lim e x tk n x t x tk n t n x x to solve this problem the matrix perturbation theory and the property of finite markov chains are applied on the analysis of the consensus problem we apply e x tk e x tk suppose that the expected graph is strongly connected and h dmax then theorem shows that is a row stochastic matrix such that by property of row stochastic matrix each element wij of matrix satisfies wij hence by definition can be regarded as the transition matrix of a regular chain moreover is a transition matrix of ergodic chain it is noteworthy that the following analysis results are appropriate for scenario i and scenario ii theorem assume that the sampling period h dmax and the expected graph is strongly connected then n k k t and where i p n k k t h i hli i pi p pi correspond to respectively proof it is pointed out that i i i i can be written as from theorem we can derive that lim n and is a row stochastic matrix hence it proves that the row sums of are all equal to moreover it follows from theorem that lim let e n we analyze the error bound of kek may draft using lemmas and it holds 
that where f p holds n n f i f by some algebraic manipulations for the following equation n f e f n f e f this implies that kek f k f due to and we get kek f p it follows from f p n that kek f k p hence we analyze to solve this problem we introduce a vector y such that ky k and y k y it is obvious that y k y therefore y k y k y k k t owing to n kk n kk k kki n n y k n y k kky t by theorem in we have t t there exists the eigenvector corresponding to and which n implies and n i moreover by a t t similar analysis we have i and i i due to h it can be deduced that we know that matrix consequently t n max t n y k t n t n n dmax is it follows that kky then p y k y k k on account of k p kd y ky ky max n and n n ky we obtain kek t may draft when h from theorem we know that lim e x tk x then we analyze tk kek n it is clear that e x tk e x tk for h where lim similar to can be regarded as the transition matrix of a regular chain moreover and when h by using lim n n we have lim k hence kek p k k n similar to the above analysis we get that k t n n therefore we have kek remark agent that can t receive information is considered in scenario i while agent that can t send information is considered in scenario ii we assume that there exist agents which can not receive or send information in scenario iii for this scenario similar to theorem error bound on consensus value can be calculated dmax theorem assume that the sampling period h and the expected graph is strongly connected in scenario i then n where i and c max max n p n p n proof due to h then we have i i i i matrix is expressed as where l l h similar to the analysis of theorem we have kek f k f k calculate p may p to k we introduce a vector y such that ky k and y k y draft by utilizing the following equation is obtained y k k k k yn k k k yn k due to n p n p and y k k k k k yn k k k k yn k let k n y substituting k into equation leads to that y k k k k k k k k k k n n n n p p p p yi k k yi k k n p ky k k k max ky k k n p ky k k k by using ky k k k k t n y k k we get ky k k k where n n hence n y k k max due to p y k p p ky w k y k it holds that w k y k p y k k therefore kek ky k p k max n max t corollary assume that the sampling period h ky dmax and the expected graph is strongly connected in scenario ii then may n max n draft where i and max n p n p proof due to h then we have i i i i matrix can be expressed as where l h by utilizing we have y k k yn yn yn yn n n n n p p p p yi k k yi k k max ky k k similar to the analysis of theorem we have x max k kek n s imulation in this section a simulation is presented to illustrate the effectiveness of our theoretical results example we consider that the communication network is chosen as in figure the interaction topology among agents randomly switches among and networks and correspond to the occurrence probabilities respectively by calculation we can get the sampling period h we choose h and initial value x t figure depicts the state trajectories of system with random networks it can be seen that all the agents reach consensus the may draft fig network topologies and state x t time fig state trajectories of system with random networks original network is denoted by graph the state trajectories of system under network are shown in figure it is shown that all the agents reach consensus by calculation we obtain k obtain t n t therefore based on theorem we can k it follows from that when tk ke x tk n x k t from figures and it is easy to verify that error bound ke x tk x k is less than may draft state x t time fig 
state trajectories of system with network vi c onclusion in this paper based on the delta operator a system is proposed it is pointed out that the proposed system can converge to the continuoustime system as the sampling period tends to zero we assume that there exist faulty agents that only send or receive information in the network the communication network is described by randomly switching networks under the random networks it is proved that the consensus in mean in probability and almost surely can be achieved when the expected graph is strong connected furthermore the influence of faulty agents on the consensus value is analyzed the error bound between consensus values under network with link failures and original network is presented in the future based on the delta operator we will consider the formation control and containment control of systems etc r eferences and murray consensus problems in networks of agents with switching topology and ieee transactions on automatic control vol no pp zheng and wang a novel group consensus protocol for heterogeneous systems international journal of control vol no pp gao and wang based consensus of systems with topology ieee transactions on automatic control vol no pp lin and zheng consensus of switched multiagent systems ieee transactions on systems man and cybernetics systems to be published doi may draft wang and xiao consensus problems for networks of dynamic agents ieee transactions on automatic control vol no pp zheng and wang containment control of heterogeneous systems international journal of control vol no pp xiao wang chen and gao formation control for systems automatica vol no pp degroot reaching a consensus journal of the american statistical association vol no pp vicsek czirok jacob cohen and schochet novel type of phase transition in a system of particles physical review letters vol no pp jadbabaie lin and morse coordination of groups of mobile autonomous agents using nearest neighbor rules ieee transactions on automatic control vol no pp ren and beard consensus seeking in multiagent systems under dynamically changing interaction topologies ieee transactions on automatic control vol no pp xiao and wang asynchronous consensus in systems with switching topology and delays ieee transactions on automatic control vol no pp xie and wang consensus control for a class of networks of dynamic agents international journal of robust and nonlinear control vol no pp zheng ma and wang consensus of hybrid systems ieee transactions on neural networks and learning systems to be published zheng and wang consensus of switched systems ieee transactions on circuits and systems ii express briefs vol no pp zheng zhu and wang consensus of heterogeneous systems iet control theory and applications vol no pp altafini consensus problems on networks with antagonistic interactions ieee transactions on automatic control vol no pp hatano and mesbahi agreement over random networks ieee transactions on automatic control vol no pp lin zheng and wang consensus of switched systems with random networks international journal of control vol no pp xie liu wang and jia consensus in networked systems via sampled control fixed topology case in proceedings of american control conference pp cao xiao and wang consensus control for systems via synchronous periodic event detection ieee transactions on automatic control vol no pp goodwin middleton and poor digital signal processing and control proceedings of the ieee vol no pp feuer and goodwin sampling in digital signal 
processing and control boston middleton and goodwin improved finite word length characteristics in digital control using delta operators ieee transactions on automatic control vol no pp may draft bayat and johansen explicit model predictive control formulation and approximation ieee transactions on automatic control vol no pp ginoya shendge and phadke extended disturbance observer and its applications ieee transactions on industrial electronics vol no pp kar and moura sensor networks with random links topology design for distributed consensus ieee transactions on signal processing vol no pp fagnani and zampieri randomized consensus algorithms over large scale networks ieee journal on selected areas in communications vol no pp meyer the role of the group generalized inverse in the theory of finite markov chains siam review vol no pp goodwin yuz and cea sampling and models in american control conference pp buckley and eslami fuzzy markov chains uncertain probabilities mathware and soft computing vol no pp meyer the condition of a finite markov chain and perturbation bounds for the limiting probabilities siam journal of algebraic discrete methods vol no pp horn and johnson matrix analysis cambridge university press bernstein matrix mathematics theory facts and formulas princeton university press ash and probability and measure theory academic press may draft
3
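Continuing the same assumed five-agent example, the next sketch mirrors the error analysis: it forms the expected one-step matrix E[I - h L_{t_k}], powers it so that its rows converge to the left Perron vector ν^T, and compares the resulting mean consensus value ν^T x(0) under link failures with the average-consensus value of the original fault-free network. Topology, probabilities, and the initial state are again illustrative assumptions.

```python
# Sketch of the consensus-value error caused by faulty agents:
# lim E[x(t_k)] = 1 * nu^T x(0) with nu the left Perron vector of E[I - h*L].
import numpy as np

n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0   # original ring network (assumed)

def L(adj):
    return np.diag(adj.sum(axis=1)) - adj

def faulty(adj, cannot_receive):
    B = adj.copy()
    B[list(cannot_receive), :] = 0.0              # agents that cannot receive
    return B

graphs = [faulty(A, [0]), faulty(A, [1]), faulty(A, [0, 1]), A]
probs = np.array([0.2, 0.2, 0.1, 0.5])            # assumed probabilities
h = 0.9 / max(L(G).diagonal().max() for G in graphs)

W_bar = sum(p * (np.eye(n) - h * L(G)) for p, G in zip(probs, graphs))
W_inf = np.linalg.matrix_power(W_bar, 10_000)     # rows converge to nu^T
nu = W_inf[0]

x0 = np.array([3.0, -1.0, 4.0, 0.5, -2.5])        # assumed initial state
print("consensus value with link failures  :", nu @ x0)
print("average consensus, original network :", x0.mean())
print("gap                                 :", abs(nu @ x0 - x0.mean()))
```

The printed gap is exactly the quantity that the paper's error bound controls for a concrete instance of the randomly failing network.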
identifiability of skew normal mixtures with one known component shantanu jaina michael levineb predrag radivojaca michael trossetc dec a department of computer science indiana university bloomington indiana b department of statistics purdue university west lafayette indiana c department of statistics indiana university bloomington indiana abstract we give sufficient identifiability conditions for estimating mixing proportions in mixtures of skew normal distributions with one known component we consider the univariate case as well as two multivariate extensions a multivariate skew normal distribution msn by azzalini and dalla valle and the canonical fundamental skew normal distribution cfusn by and genton the characteristic function of the cfusn distribution is additionally derived introduction we study identifiability of the mixing proportion for a mixture of two skew normal distributions when one of the components is known this problem has direct implications for the estimation of mixing proportions given a sample from the mixture and a sample from one of the components a sample from the mixture is typically collected for a set of objects under study whereas the component sample is collected for a set of objects verified to satisfy some property of interest this setting is common in domains where an absence of the property can not be easily verified due to practical or systemic constraints in social networks molecular biology etc in social networks for example users may only be allowed email addresses shajain shantanu jain mlevins michael levine predrag predrag radivojac mtrosset michael trosset preprint submitted to elsevier january to click like for a particular product and thus the data can be collected only for one of the component samples a sample from the users who clicked like and the mixture a sample from all users accurate estimation of mixing proportions in this setting has fundamental implications for false discovery rate estimation storey storey and tibshirani and in the context of classification for estimating posterior distributions ward et jain et a and recovering true classifier performance menon et jain et identifiability and estimation of mixing proportions have been extensively studied yakowitz and sprag dempster et tallis and chesson mclachlan and peel more recently the case with one known component has been considered in the nonparametric setting bordes et ward et blanchard et jain et patra and sen though the nonparametric formulation is highly flexible it can also be problematic due to the issues or when the irreducibility assumption is violated blanchard et jain et patra and sen in addition it is often reasonable in practice to require unimodality of density components which is difficult to ensure in a nonparametric formulation to guarantee unimodality of components and allow for skewness we model the components with a skew normal sn family a generalization of the gaussian family with good theoretical properties and tractability of inference genton although the sn family has been introduced only recently see azzalini and azzalini and dalla valle it has gained practical importance in econometrics and financial domains genton until recently the literature on identifiability of parametric mixture models emphasized identifiability with respect to a subset of parameters cases in which only a single location parameter or location and scale parameters can change furthermore most previous results only address the case of univariate mixture distributions few studies have 
considered identifiability of mixtures of general multivariate densities with respect to all of their parameters holzmann et browne and mcnicholas our work concerns identifiability with respect to mixing proportions in mixtures of two skew normal distributions with one known component we show in section that in this setting identifiability with respect to mixing proportions is equivalent to identifiability with respect to all parameters we consider both univariate and multivariate families of skewnormal distributions establishing identifiability with respect to all of their parameters we begin with a univariate skew normal family sn introduced by azzalini then extend our results to two forms of multivariate skew normal families msn and cfusn introduced by azzalini and dalla valle and and genton respectively these families are further discussed in section our main contribution is theorems which state sufficient conditions for identifiability of the mixing proportion of the mixture with sn msn and cfusn components respectively we also derive a concise formula for the characteristic function of cfusn in appendix a problem statement let and be families of probability density functions pdfs on rk let f be a family of pdfs having the form f where and densities and will be referred to as component pdfs f will be referred to as the mixture pdf and will be referred to as the mixing proportion f therefore is a family of mixtures in this setting we study identifiability of the density mixture with respect to the parameter when and are first univariate and then two different multivariate families all of these distribution families are defined in genton to do so we start first with studying some general identifiability conditions in section identifiabilty of mixtures with a known component in this section we discuss identifiability of the mixtures in the context of our problem we will show that the general notion of identifiability is equivalent to identifiability of the mixing proportion lemma however our main contribution in this section is lemma that gives a useful technique to prove identifiability tailored to this setting and will be applied to skew normal mixtures later in the paper lemma and lemma are restatements of results in jain et al in terms of densities instead of measures consider a mixture distribution f from equation and let be the known component distribution this is equivalent to restricting to a singleton set with a minor abuse of notation we denote the family of mixtures f as f note that f in equation can be treated as a pdf parametrized by and to reflect this parameterization we rewrite f as a function of and f f given by f a family of distributions g is said to be identifiable if the mapping from to is therefore f is identifiable if b and f a f b a b the lack of identifiability means that even if a and b are different the target density f contains no information to tell them apart if we are only interested in estimating we need f to be identifiable in that is b and f a f b a b identifiability of f in might seem to be a weaker requirement as compared to identifiability of f in however lemma shows that the two notions of identifiability are equivalent lemma f is identifiable if and only if f is identifiable in proof by definition identifiability of f in is a necessary condition for f to be identifiable now we prove that it is a sufficient condition as well let us assume that f is identifiable in also suppose that b and such that f a f b then from the definition of identifiability 
in it follows that a b therefore we have a b which implies that thus f is identifiable technically we require bijection but ignore the obvious onto requirement for simplicity consider now the largest possible that contains all pdfs in rk except or any pdf equal to almost everywhere then f contains all non trivial two component mixtures on rk with as one of the components lemma section shows that this family is not identifiable next we establish the necessary and sufficient condition for identifiability of f lemma f is identifiable if and only if f proof first we will prove that f f is identif iable we give a proof by contradiction suppose f but that f is not identifiable by lemma f is not identifiable in thus b and such that f a f b but a b now without loss of generality we can assume a b therefore from the equality of f a and f b we obtain using simple algebra that this means in turn that f because and it follows that f since has been selected from we conclude that f contains and is not empty this completes the proof of statement now we will prove that f is identif iable f we give a proof by contradiction suppose that f is identifiable but f let be a common member of and f as f it follows that and such that f c let a c and b as a b and it follows that both f a and f b belong in f we will show that f a f b indeed f b b b f c b c this immediately implies that f b f a where a b c c b and is therefore greater than b thus a b it follows that f is not identifiable the lemma follows from statements and the next lemma gives a sufficient condition for identifiability that is mathematically convenient it relies on the notion of span of a set of functions p denoted by span p that contains all finite linear combinations of functions in that is k x span p ai fi k n ai r fi p lemma consider the family of pdfs f assume that for any pair of pdfs there exists a linear transformation possibly depending on the choice of that maps any function f span to a or function on some domain we denote the value of transformation of the function f span thus is a function we denote t the value of this function for any t then if there exists a sequence tn in s s s such that tn tn and lim tn tn lim f is identifiable proof we give a proof by contradiction suppose conditions of the theorem are satisfied but f is not identifiable from lemma it follows that f there exists a common element in f and say because is in f there exists such that f a for some a since and are in there exists a linear transform and a sequence tn satisfying condition it follows that f a a and so t t a t now for all t t t a t we have t t and consequently tnn contradiction because tn satisfies tn tn from condition we will invoke this lemma later in this paper with two linear transforms namely the moment generating function mgf transform and the characteristic function cf transform we observe that s rk for both transforms the main ideas in this lemma linear transforms and limits come from theorem in teicher on identifiability of finite mixtures skew normal mixtures when contains all pdfs on rk with or without the family f is not identifiable lemma section it is therefore desirable to choose a smaller family that makes the mixture model identifiable and that is rich enough to model real life data in this paper we take a parametric approach the normal family presents a limited option since normal mixtures typically require a large number of components to capture asymmetry in real life data the skew normal asymmetric a convenient alternative in both 
univatiate and multivariate settings thus we restrict our attention to the mixture families where both unknown and known components are skew normal our contribution in this section are theorem theorem and theorem that give a rather large identifiable family of skew normal mixtures a similar approach has been reported by ghosal and roy for mixtures of normal and skew normal distributions our result however results in a much more extensive family f before giving these results we first introduce the univariate skew normal family as well as its two most common multivariate generalizations univariate skew normal family azzalini introduced the skew normal sn family of distributions as a generalization of the normal family that allows for skewness it has a location a scale and a shape parameter where controls for skewness the distribution is right skewed when left skewed when and reduces to a normal distribution when for x sn its pdf is given by x fx x x r where r and are the probability density function pdf and the cumulative distribution function cdf of the standard normal distribution n respectively pewsey genton kim and genton derived the cf and mgf of the sn family table multivariate skew normal families azzalini and dalla valle proposed an extension of the skew normal family to the multivariate case this particular generalization has a very useful property in that its marginals are skew normal as well more recently several other families of multivariate skew normal distributions have been proposed as discussed by lee and mclachlan in this paper we consider an alternate parametrization of azzalini s multivariate skew normal family denoted by msn for x msn pdf of x is fx x x x rk where is a k k covariance matrix rk is the location parameter rk is the parameter is the pdf of a normal distribution with mean zero and covariance and is the cdf of a standard univariate normal azzalini and dalla valle kim and genton derived the mgf and cf of this distribution table table alternate parametrization the identifiability results and the algorithms are better formulated in terms of the alternate parameters the table gives the relationship between the alternate and the canonical parameters as well as some other related quantities here ik is a k k identity matrix family sn msn cfusn alternate parametrization canonical alternate alternate canonical sign related quantities ik ik lin studied maximum likelihood estimation of finite multivariate skew normal mixtures with another family the canonical fundamental skew normal distribution cfusn introduced by and genton for x cfusn the pdf of x is fx x x x rk where is a is a k k covariance matrix is defined in table rk is the location parameter is a k k and is the cdf of a multivariate normal distribution with zero mean and covariance the cf and mgf are given in table the mgf was obtained from lin to the best of our knowledge the expression for the cf was not available in the literature we derived it in theorem appendix for the purposes of this study identifiability when is some subset of the family of univariate skew normal pdfs and is also a univariate skew normal pdf written as sn r and sn contains only univariate skew normal mixtures theorem gives a sufficient condition for such a family to be identifiable and genton define a more general form of the cfusn family that allows nonsquare matrices table skew normal families expression for the characteristic function and moment generating function the parameters are defined in table here denotes the imaginary number 
and rxp x exp u du and denote the cdfs of the standard univariate and multivariate normal distributions respectively the in the expression for cfusn cf is the ith column of family sn msn cfusn mgf t exp exp t t exp t cf t exp t exp qk exp t notation notation for theorem lemma lemma let p u denote a partition defined on a multiset of column vectors u such that column vectors that are in the same direction are in the same set this relationship is formally defined by the following equivalence relationship t l ct l for some c in r let pc denote the canonical vector direction of the vectors in p p u defined as pc when a t p is not and pc when a t p is let be the space orthogonal to a vector let null m be the null space of matrix let be the complement of a set theorem the family of pdfs f with sn and sn is defined in table is identifiable proof consider a partition of by sets defined as follows sn sn we now show that for a given pair of pdfs from the conditions of lemma be the parameters corresponding to as defined in are satisfied let table if is from we use lemma statements and to prove our statement first select some t in applying lemma statements we ct cf ct obtain cf and cf therefore the sequence cf ct ct t tn tn nt satisfies the conditions of lemma if is from we use choose lemma statement as the basis of our proof first we select some t in r with t applying lemma statement m gf ct moreover owing to the fact that an mgf is always we obtain m gf ct m gf ct the sequence t tn tn nt positive we know that m gf ct satisfies the conditions of lemma thus all the conditions of lemma are satisfied and consequently f is identifiable theorem the family of pdfs f with msn and msn is defined in table is identifiable proof consider a partition of by sets defined as follows msn where is the standard partial order relationship on the space of matrices more specifically a b implies that a b is positive definite note that also contains pdfs whose matrix is unrelated to by the partial ordering we now show that for a given pair of pdfs from the conditions of lemma are be the parameters corresponding to as defined in table satisfied let if is from we choose the characteristic function transform as the linear transform we pick some t rk with t and t existence of such a t is guaranteed by lemma applying lemma statements and we cf ct cf ct and cf notice that the sequence obtain cf ct ct t tn tn nt satisfies the conditions of lemma if is from we choose the moment generating function transform as the linear transform we pick some l in rk such that l existence of such an l is guaranteed by if the scalar value l we choose t l otherwise we choose t it is easy to see that t and t applying gf ct lemma statement we obtain m moreover owing to the m gf ct gf ct the fact that an mgf is always positive we know that m m gf ct sequence t tn tn nt satisfies the conditions of lemma thus all the conditions of lemma are satisfied and consequently f is identifiable theorem let give a concise representation of the cfusn parameters the family of pdfs f with cfusn is identifiable when cfusn kvv for any v and any k where is defined in table here in addition to representing the skewness matrix also represents the multiset containing its column vectors exp t exp t where t i t i the set indexes containing entries of note that v c t v c t v c t for an arbitrary property used multiple times in the proof we also compute the limit of v c t as c note that the limit is primarily determined by the sign of the quadratic form t and is either or 
however if t then the limit is determined by the sign of t t and is still or if t t as well then v c t oscillates between and undefined limit unless t in which case the limit is we use notation throughout the proof we give a proof by contradiction supposing that the family is not identifiable then lemma implies that there exists and in such that with the characteristic function as the linear transform proof first we define v c t c t t ct ct a ct cf rk r and a we will show that equation leads to a contradiction for all possible values of and from consider a partition of by sets defined as follows cfusn where is the standard partial order relationship on the space of matrices precisely a b implies that a b is positive now consider the following cases which cover all the contingencies if is from we proceed as follows equation implies that for ct ct cf ct a ct ct ct cf ct v c t ct ct v c t ct cf ct v c t ct ct v c t v c t v c t v c t z z z b a c if v c t term c goes to as c since applying lemma statement the limit of term b as c exists in c so does the limit of the entire rhs and consequently the lhs it follows that since limit of term a as c exists in c v c t should also exist in c so that the limit of entire lhs can exist in c summarizing lim v c t lim c v c t now we pick some t rk with t and t existence of such a t is guaranteed by and as shown in lemma because t v c t but v c t is either or as t which contradicts equation if is from we proceed as follows if we use equation to get ct cf ct a ct ct ct cf ct v c t ct ct v c t v c t ct cf ct ct ct a v c t v c t v c t v c t z z b a z c if v c t term c goes to as c since the limit of term b exists in c by lemma statement since applying lemma statement the limit of rhs as c exists in c so does the limit of the entire lhs and consequently term a v c t c summarizing c v c t lim v c t lim and v c t v c t t lim v c t from lemma statement lim v c t lim ct cf ct lim now t t lim v c t c v c t lim from equation t t and t where the last step follows because v c t consequently exp t c t t when t v c t t from equation summarizing t t t a since from lemma statement it follows that kvv for some v and some k thus and hence the contradiction if equation implies that for ct ct cf ct a a ct ct ct cf ct ct a ct a v c t v c t v c t ct cf ct a b a ct ct a v c t v c t v c t v c t z z z c notice that if v c t then term c goes to since applying lemma statement the limit of term b as c exists in c so is the limit of the entire rhs and consequently the lhs it follows that since limit of term a as c exists in c v c t should also exist in c so that the limit of entire lhs exists in c summarizing c v c t lim v c t lim now then we pick some t rk with t and t existence of such an t is guaranteed by lemma t ensures that v c t but v c t is either or as t which contradicts equation comment extension of theorem we speculate that theorem can be further strengthened by removing the condition kvv for any v and any k from the definition of removal of this condition breaks the current proof only in the case when and notice that this case implies that for any t rk such that t satisfies v c t ck exp t for some integer k from the definition of v v c t and t as shown in equation these implications reduce equation to a ct ct v c t cf ct ct t v c t ck exp t t q c r c t n i exp t t c t rn c i t exp t i z v c t a using lemma statement p n where for a positive integer n rn c x n o exp o c as c looking at the definition of rn it seems that the term a should be for some negative integer except 
in a few special cases this would imply that the rhs is c exp t t which still goes to as c yet the lhs is in c which leads to a contradiction auxiliary results lemma if contains all pdfs on rk except then f f is not identifiable proof because contains all pdfs on rk except we have f note that f either since can not be let a and b a and f as is a mixture in f and f it follows that is also in consequently the mixture f b is in f therefore f b b b f the last expression is equivalent to f a thus we have f a f b however b a and hence f is not identifiable lemma for k k symmetric matrices a and b if either a or a then there exists a vector t rk such that bt and at proof suppose there does not exist any vector l rk such that al thus for all l rk al this immediately contradicts a hence a implies that their exists l rk such that al on the other hand a implies al for all l rk this in combination with al for all l rk implies that that al for all l rk this however is impossible since a summarizing there exists l rk such that al when a and either of a or a is true now we give a recipe to find t rk with bt and at let l be some vector in rk with al existence of l already proved if bl then choose t l else bl let rk be such that existence of such is guaranteed because b we choose t l where is picked so that bt and at to see that such an exists notice first that bt l b l bl bl bl for any bl second at l a l al for a small enough thus bl picking a small enough ensures bt and at lemma let u v be k k matrices and s be a k k symmetric positive semidefinite matrix let u v and s also denote the multiset containing the column vectors of u v and s respectively and using notation let p p u v let u v t q u t v t q v t where v t i t i is the imaginary number and u t u t i t rk assume u t v t s then the following statements are true p p p p has even number of elements with equal contribution from u and v y q rk u v l p pc if u v t r s and some constant r r then a r s kvv for some v v and some constant k b u v l null s proof first we partition the elements of p into three sets p p pc p p pc s for some s in s p notice that is either singleton or empty because all the vectors in u v are collected in a single component set in if then a vector w in is equivalent to all column vectors in s which implicitly means that all column vectors in s are in the same direction equivalent and consequently s is matrix having column vectors and row vectors as s is symmetric equivalent to in other words s can be expressed as s ww for some constant ensures s is positive summarizing s ww for a w from and moreover any other vector that can appear inside is equivalent to w and consequently is also singleton set if not empty these properties are implicitly used in the rest of the proof next we show the following result which will be used multiple times in the proof a for a given vector e and a finite multiset of vectors m in rk e m m rk such that t and t to prove a notice that choosing t from guarantees t choosing t from ensures t it follows that if the set d obtained by removing for all m m from is then any t d satisfies both t and t to see that d is indeed notice that removing s finite number of k dimensional linear spaces from either k dimensional when e or k dimensional when e reduces it only by lebesgue measure set provided does not coincide with any of the s guaranteed by e m for all m using result a we show the existence of two vectors let be a vector whose existence is shown by using result a with e and s m pc p p s for some s in it 
follows that and s for a given let s in s be such that s such an s exists by existence a definition of let be a vector whose s is shown by using result with e and m pc p s it follows that p and s to prove the statement we break the argument into three exhaustive cases picking p from or or as follows since s u v the only source of s in u and v are column vectors in and consequently u v follows since s u v there are two possibilities for the source of s in u and v a column vectors in only when thus to satisfy u v u v must be true b column vectors in and the only element in when is singleton we already know from case that u v is true and consequently to satisfy u v u v must be true as well since is the only element in all either or and are covered by cases and a consequence being the only remaining set because both u and v have the equal number of n o other sets p p belong to p p as u v must be true column vectors this proves statement to prove statement we rewrite the formula for u v l rk as follows q y vl p u v l q ul p pc p q y l p q l p pc q r y q pc l p pc q y q because of statement p pc which proves statement let and for a given be as defined earlier since and are in s r u v u v r q q p q q q p q q now u v l y q rk u v l rk u v t s from equation thus r s ww for a w from and some s kvv for some v v and some k from equation existence of v is justified by statement and the fact that is contains w this proves statement to prove statement notice that if then null s and for the only set l from the definition of it follows that null s q y rk u v l because either or l for the only notation landau s notation we use landau s asymptotic notation in the next few lemmas defined as follows for functions g and h defined on some subset of g c and g c h c as c if r g c o h c as c if lim h c lim g c h c lemma consider two univariate skew normal distributions sn and sn let be related to and as given in table let c r and t r let cf and cf be the characteristic functions corresponding to the two distributions refer table a cf ct cf ct lim b cf ct cf ct lim provided the limit exists in r the extended real number line let mgf and mgf be the moment generating functions corresponding to the two distributions refer table for mgf ct mgf ct lim proof here we use landau s o and notation defined in notation cf ct statement instead of working directly with cf which can be complex we circum ct vent the complication by working with the ratio s absolute value squared which is always real multiplying the ratio with its conjugate we obtain an expression of its absolute value squared as follows cf ct cf ct cf ct cf ct cf ct cf ct cf ct cf ct property of complex conjugate of a fraction cf ct cf ct exp exp t from the previous expression using the asymptotic upperconsider the ratio bound for the numerator and lower bound for the denominator obtained in lemma statement and we get thus o c exp c c o exp cf ct cf ct consequently exp o exp o exp o exp lim cf ct cf ct when and cf ct when cf ct lim follows statement similar to the derivation of the asymptotic for the ratio in equation we derive the asymptotic by using lemma statement cf ct cf ct exp consequently cf ct lim cf ct when and cf ct when cf ct lim follows provided the limit exists in combining the result with statement proves statement statement from the definition of sn mgf table we get mgf ct exp c t t mgf ct from the previous expression we apply the asymptotic upperconsider the ratio bound for the numerator and lower bound for the denominator obtained in lemma 
statement and because the asymptotic is applicable c o c exp c o c exp t thus c mgf ct exp c t t o c exp t mgf ct c o c exp c t t o c exp c t t because term dominates the c term in the exponential above the asymptotic goes to when irrespective of the relation between and consequently lim mgf ct when mgf ct lemma consider two skew normal distributions msn and msn let be related to and as given in table let c r and t rk let cf and cf be the characteristic functions corresponding to the two distributions refer table a cf ct cf ct t lim b cf ct cf ct t lim provided the limit exists in r the extended real number line let mgf and mgf be the moment generating functions corresponding to the two distributions refer table for t mgf ct mgf ct t lim proof here we use landau s o and notation defined in notation statement we use the approach in lemma the expression for the squared absolute value of the characteristic function ratio obtained by multiplying the ratio with its conjugate is given by exp t cf ct cf ct exp t t from the previous expression using the asymptotic upperconsider the ratio t bound for the numerator and lower bound for the denominator obtained in lemma statement and we get t thus o c exp c t c t t o exp t cf ct cf ct consequently exp t o exp t o exp t o exp t lim cf ct cf ct when t and cf ct when t cf ct lim follows statement similar to the derivation of the asymptotic for the ratio in equation we derive the asymptotic by using lemma statement and cf ct exp c t t cf ct c consequently cf ct lim cf ct when t and cf ct when t cf ct lim follows provided the limit exists in combining the result with statement proves statement statement from the definition of msn mgf table we get t mgf ct exp c t t t mgf ct t t consider the ratio from the previous expression we apply the asymptotic t bound for the numerator and lower bound for the denominator obtained in lemma statement because t the asymptotic is applicable c t o c exp t t t c o c exp t t thus c mgf ct exp c t t t o c exp t t mgf ct c o c exp c t t t o c exp c t t t because term dominates the c term in the exponential above the asymptotic goes to irrespective of the relation between and consequently lim mgf ct when t mgf ct lemma consider two skew normal distributions cfusn and cfusn let be related to and as given in table let c r and t rk let cf and cf be the characteristic functions corresponding to the two distribuexp t exp t tions refer table let v c t where t t t c q u t v t q v t where is the v i t i let u v t u t u t i nary number and ui vi is the ith column of u v then a lim cf ct cf ct v c t t b using landau s o notation defined in notation q rn c t c exp t t t t q c v c t t r c n i t exp t cf ct cf ct i where for a positive integer n rn c x as c pn n exp proof cf ct cf ct k y t t t exp t t i v c t exp t t t k t exp t y t t exp t t t k t y exp t c t t exp t c t t exp t i q q c t t exp t q c t exp t i q c t exp t c t t exp t t exp i using lemma statement we get t t lim v c t t t cf ct cf ct q i t this proves statement using equation q c t exp t i c t t t t q c c v c t t exp t exp t t p q c r c t n i t exp t t i p q t c r c t n i t exp t using lemma statement q c t rn c t exp t i t t q c t r c n i t exp t q c t rn c t exp t t q c t rn c t exp t cf ct cf ct exp i this proves statement rxp lemma let be the standard normal cdf and x exp du let x be finite then using landau s o and notation defined in notation as c a for all x r exp cx c b when x cx o exp a for all x p cx exp c x x lim b for all x cx exp c r n x o n o exp c where is 
defined recursively for an integer a as follows a a when a and a a when a or a c for all x r d for all x r cx o exp exp cx c proof statement consider the function g c c cx exp c to prove the statements we first derive the limits of g c as c to evaluate the limit when x we apply the l s rule because both the numerator and denominator go to to this end we take the limit of ratio of the derivative of the numerator and the denominator c and applying leibniz integral rule we get lim g c lim lim d dc d cx dc exp c c exp c exp c lim exp and consequently cx o exp which thus for x cx o c also holds true when x thus cx o exp when x exp ment moreover it follows from equation that for x cx c and since it is true for x well because and cx approaches when c x exp x cx for all x this proves statement c statement performing integration by parts on cx for x gives z x r exp du z x r z r exp du exp du x r z z x d exp du exp du u du r z z x exp exp exp du exp du x x r z z x exp d exp exp du exp du x x u du r z z x exp exp exp exp exp du exp du x x x r z x n exp x exp exp n du x u z n x exp exp du x r x exp r n x x exp du x x exp r n du x exp exp exp x x thus a z r r exp cx n x exp du cx cx cx exp b c z z r n x exp du exp cx exp notice that term a is of order o n since lim a n lim x lim x r cx exp exp r exp cx d du cx dc applying l s rule d exp dc c exp exp exp lim lim du applying leibniz integral rule n exp n exp lim x x term b is o exp and so is term c since c lim x lim exp x lim x lim x r d dc exp du exp r d dc c exp du exp c c exp exp x lim applying l s rule applying leibniz integral rule consequently cx exp r n x o n o exp x c which proves statement and consequently statement exp c statement implies that cx is o when x thus cx exp is o o and consequently o exp when x notice that the cx is trivially o exp when x as well which completes the proof of statement exp when x thus cx is statement also implies that cx is c exp exp and consequently when x notice that cx is exp when x as well which completes the proof of statement trivially conclusions we give meaningful sufficient conditions that ensure identifiability of mixtures with sn msn and cfusn components we proved identifiability in terms of the parameter that contains both the scale and the skewness information and has a consistent interpretation across the three skew normal families our results are strong in the sense that the set of parameter values not covered by the sufficient condition is a lebesgue measure set in the parameter space ghosal and roy study the identifiability of a mixture with the standard normal as one of the components and the second component itself given by a uncountable mixture of skew normals treating g from their work as a point distribution we can make a valid comparison between our identifiability result and theirs concluding the superiority of our results owing to a larger coverage of the parameter space by our conditions references references and genton on fundamental skew distributions j multivar anal azzalini a class of distributions which includes the normal ones scand j stat azzalini further results on a class of distributions which includes the normal ones statistica azzalini and dalla valle the multivariate distribution biometrika blanchard lee and scott novelty detection j mach learn res bordes delmas and vandekerkhove semiparametric estimation of a mixture model where one component is known scand j stat browne and mcnicholas a mixture of generalized hyperbolic distributions can j stat dempster laird and rubin maximum 
likelihood from incomplete data via the em algorithm j r stat soc b pages genton distributions and their applications a journey beyond normality crc press ghosal and roy identifiability of the proportion of null hypotheses in models for the distribution electron j statist holzmann munk and gneiting identifiability of finite mixtures of elliptical distributions scand j stat jain white and radivojac estimating the class prior and posterior from noisy positives and unlabeled data in advances in neural information processing systems nips pages jain white trosset and radivojac nonparametric learning of class proportions arxiv preprint url http jain white and radivojac recovering true classifier performance in learning in proceedings of the aaai conference on artificial intelligence aaai pages kim and genton characteristic functions of scale mixtures of multivariate distributions j multivar anal lee and mclachlan on mixtures of skew normal and skew adv data anal classif lin maximum likelihood estimation for multivariate skew normal mixture models j multivar anal mclachlan and peel finite mixture models john wiley sons menon van rooyen ong and williamson learning from corrupted binary labels via estimation in proceedings of the international conference on machine learning icml pages patra and estimation of a mixture model with applications to multiple testing j r statist soc b pewsey the characteristic functions of the and wrapped distributions in congreso nacional de estadistica e operativa pages storey a direct approach to false discovery rates j r stat soc b storey the positive false discovery rate a bayesian interpretation and the ann stat storey and tibshirani statistical significance for genomewide studies proc natl acad sci u s a tallis the moment generating function of the truncated distribution j r stat soc b pages tallis and chesson identifiability of mixtures j austral math soc ser a teicher identifiability of finite mixtures ann math stat ward hastie barry elith and leathwick data and the em algorithm biometrics yakowitz and spragins on the identifiability of finite mixtures ann math statist appendix cfusn characteristic function theorem the characteristic function of cfusn is given by k cf t exp t t k is the dimensionality of the cfusn distribution is the imaginary number x rwhere x th exp u du and is the i column of proof we use the stochastic representation of x cf usn obtained from lin k given by x where h tn ik the standard multivariate normal distribution truncated below in all the dimensions and g nk for symmetric positive matrix it follows that the cf of x can be expressed in terms of cf s of normal distribution and truncated normal distribution precisely cfx t cfg t t using the expression for the cf of multivariate normal cfx t exp t t basic properties of a cf and its connection with the corresponding mgf gives t cfh t mgfh t using the expression for mgfh t derived in tallis and replacing t by t and r the covariance matrix in tallis by ik k k identity matrix we get r w t w t dw rk r t exp u du rk k where is the pdf of n ik k x k wi t dw exp exp t k where w wi k k y k exp t t exp wi t dw rk k z y k exp wi t dwi exp t t applying the substitution ui t for the integral in the numerator changes the domain of the integration from the real line to the complex plane to define such an integral correctly one needs to specify the path in the complex plane across which the integration is performed using the path from t to t parallel to the real line we get k z y t t exp t t exp ui dui i 
k z y t k ui dui exp t t i k using lemma from kim and genton to simplify the integral term we get z t k t exp t exp dui z t r k v exp t exp i dvi substituting vi k y t exp t t substituting the expression for t in equation completes the proof
10
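As a small companion to the skew-normal identifiability results above, the following sketch illustrates numerically what identifiability of a finite SN mixture means: permuting the components of a mixture leaves its density unchanged, while genuinely changing a component parameter changes the density. It assumes that scipy's skewnorm(a, loc, scale) corresponds to the paper's univariate SN(xi, omega, lambda) family with a = lambda, loc = xi and scale = omega; the grid, weights and parameter values are arbitrary illustrations, and a pointwise check on a grid is of course no substitute for the proofs given above.

```python
# Minimal numerical sketch (not a proof) of identifiability of two-component
# skew-normal mixtures, assuming scipy's skewnorm(a, loc, scale) matches the
# paper's SN(xi, omega, lambda) with a = lambda, loc = xi, scale = omega.
import numpy as np
from scipy.stats import skewnorm

def sn_mixture_pdf(x, weights, params):
    """Density of sum_j weights[j] * SN(xi_j, omega_j, lambda_j) at the points x."""
    total = np.zeros_like(x, dtype=float)
    for w, (xi, omega, lam) in zip(weights, params):
        total += w * skewnorm.pdf(x, lam, loc=xi, scale=omega)
    return total

x = np.linspace(-10.0, 10.0, 2001)

f1 = sn_mixture_pdf(x, [0.4, 0.6], [(0.0, 1.0, 2.0), (3.0, 2.0, -1.0)])
# Same components listed in the other order: the mixture density is unchanged.
f2 = sn_mixture_pdf(x, [0.6, 0.4], [(3.0, 2.0, -1.0), (0.0, 1.0, 2.0)])
# A genuinely different skewness parameter in the first component.
f3 = sn_mixture_pdf(x, [0.4, 0.6], [(0.0, 1.0, 2.5), (3.0, 2.0, -1.0)])

print(np.max(np.abs(f1 - f2)))  # ~0: identical mixtures up to label switching
print(np.max(np.abs(f1 - f3)))  # clearly positive: different parameters, different density
```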
complexity of frictional interfaces a complex network perspective department of civil engineering and lassonde institute university of toronto canada sharifzadeh faculty of engineering kyushu university hakozaki japan evgin department of civil engineering university of ottawa ontario canada abstract the shear strength and behavior of a rough rock joint are analyzed using the complex network approach we develop a network approach on correlation patterns of void spaces of an evolvable rough fracture crack type ii correlation among networks properties with the hydro attributes obtained from experimental tests of fracture before and after slip is the direct result of the revealed networks joint distribution of locally and globally filtered correlation gives a close relation to the contact zones detachment sequences through the evolution of shear strength of the rock joint especially spread of node s degree rate to spread of clustering coefficient rate yielded possible stick and slip sequences during the displacements our method can be developed to investigate the complexity dept civil engineering u of t college of behavior of faults as well as energy localization on crumpled in which ridge networks are controlling the energy distribution key words frictional interface rock joint shear strength complex networks correlation contact areas introduction during the last decade complex networks have been used increasingly in different fields of science and technology initial applications of complex networks in geosciences were mostly related to earthquakes complexity of such recursive events has been the main objective of the related research understanding of topological complexity of events based on field measurements can disclose some other facets of these woven events characterization of spatial and temporal structural studies pertaining to the topological complexity and its application in some geoscience fields reveals that acquisition and gathering of direct information especially in temporal scale is difficult and in many cases are were impossible at least with current technologies in addition to complex earthquake networks recently the analysis of climate networks volcanic networks river networks and highway networks as the large scale measurements have been taken into account geoscience fields such as the gradation of soil particles fracture networks aperture of fractures and granular materials the initial step refers to organizational step which tries to find out possible dominant structures within the system next step in the most of the mentioned works is to provide a suitable and simple method to yield a similar structure such algorithm may support the evolution of structure in spatial temporal cases in small scales topological complexity has been evaluated in relation to may be the most important structural complexity in geological fields is related to fracture networks fracture networks with dilatancy joint networks in excavation damaged zones cracking in pavements or other structures and fault networks in large scale have been recognized in the analysis of these networks the characterization of fractures in a proper space such as space is an essential step furthermore with taking the direct relationship between void spaces and contact areas in to account one may interest in considering the induced topological complexity of the opening elements frictional contacts into the fracture behavior using linear elastic fracture mechanics we know aperture or aspect ratio is generally the index to 
available energy in growth of rupture crack like behavior of rupture in frictional interfaces also support the role of contact areas and equivalently apertures in addition the variations of fluid flow features such as permeability and tortuosity directly are controlled with aperture spaces in order to characterize the main attributes of the fractured systems mechanical and hydraulic properties several methods have been suggested in the literature recently the authors have proposed the implementation of a complex network analysis for the evolution of apertures in a rough rock fracture based on a euclidean measure the results confirmed the dependency of properties to the attributes of characterized aperture networks the present study is also related to the complex aperture networks however the current study presents the analysis of frictional forces during shearing based on the correlation of apertures in a rock joint the analysis is associated with set up a network on an attribute such as aperture distribution in an area the aforementioned method has also been employed in the analysis of the coupled partial differential equations which was related to flow with respect to behavior of collective motion of the ensemble discrete contacts in the vicinity of a phase transition step we try to characterize the collective behavior of aperture strings using networks in this paper we will answer the following three questions is there any hidden complex structure in the experimentally observed apertures what is the effect of specific structural complexity of apertures on mechanical response of a fracture how do apertures regulate with each other to show curve in other words can we relate the topological complexity of apertures to the evolution path of the fracture the organization of the paper is as follows section includes a brief description of networks and their characterization in addition the construction procedure of aperture networks is explained section covers a summary of the experimental procedure the last section presents the evaluation of the and behavior of a rock joint which is followed by the analysis of the constructed network network of evolving apertures in this section we describe a general method of setting up a network on a fracture surface while the surface property is a superposition of very narrow profiles ribbons of one attribute of the system in other words one attribute of the system is granulated over strings profiles or ribbons the relationship between the discrete strings from long range correlation or elastic results an interwoven network topological complexity of interactions the frictional behavior including the response of a joint is related to the sum of real contact areas which fluctuates with the changes in apertures it also occurs based on the collective motion and spatially coupled of contact zones it is shown that the structural complexity of the dynamic aperture changes is controlling and regulating the joint behavior and its unstable response in order to explain the details of our work we need to characterize the topological complexity a network consists of nodes and edges connecting the to set up a nondirected network we considered each string of measured aperture as a node each aperture string has n pixels where each pixel shows the void size of that cell depending on the direction of strings the length of the profiles varies the maximum numbers of strings in our cases are in the perpendicular direction to the shear while the minimum one is in the parallel 
direction to make an edge between two nodes a correlation measurement cij over the aperture profiles was used the main point in the selection of each space is to explore the explicit or implicit hidden relations among different distributed elements of a system for each pair of signals profiles vi and vj containing n elements pixels the correlation coefficient can be written as n cij v k v k v k i i n j j n v k v k v k i i k j j n where v i v k k i obviously it should be noted that n cij is restricted to cij where and are related to perfect correlations no correlations and perfect respectively selection of a threshold to make an edge can be seen from different views choosing a constant value may be associated with the current accuracy of accumulated data where after a maximum threshold the system loses its dominant order in fact there is not any unique way in the selection of a constant value however preservation of the general pattern of evolution must be considered while the hidden patterns can be related to the several characters of the network these characters can express different facets of the relations connectivity assortivity hubness centrality grouping and other properties of nodes edges generally it seems obtaining stable patterns of evolution not absolute over a variation of can give a suitable and reasonably formed network also different approaches have been used such as density of links the dominant correlation among nodes space and distribution of edges or clusters in this study we set cij considering with this definition we are filtering uncorrelated profiles over the metric space in the previous study the sensitivity of the observed patterns associated with the euclidean distance of profiles max has been distinguished the clustering coefficient describes the degree to which k neighbors of a particular node are connected to each other what we mean by neighbors is the connected nodes to a particular node the clustering coefficient shows the collaboration between the connected nodes assume that the i th node to have k i neighboring nodes there can exist at most k i k i edges between the neighbors we define ci actual number of ci as the ratio edges between the neighbors k i k i then the clustering coefficient is given by the average of network of the i th node ci over all the nodes in the c n n i i for k i we define c the closer c is to one the larger is the interconnectedness of the network the connectivity distribution or degree distribution p k is the probability of finding nodes with k edges in a network in large networks there will always be some fluctuations in the degree distribution the large fluctuations from the average value k refers to the highly heterogeneous networks while homogeneous networks display low fluctuations from another perspective clustering in networks is closely related to degree correlations vertex degree correlations are the measures of the statistical dependence of the degrees of neighbouring nodes in a network correlation is the criterion in complex networks as it can be related to network assortativity the concept of correlation can be included within the conditional probability distribution p k k that a node of degree k is connected to a node of degree k in other words the degrees of neighbouring nodes are not independent the meaning of degree correlation can also be defined by the average degree of nearest neighbours knn k high degree nodes hubs tend to make a link to high degree nodes otherwise if knn disassortative from the point of view of 
fractal complex networks the degree correlation may be used as a tool to distinguish the of network structures in fact in fractal networks large degree nodes hubs tend to connect to small degree nodes and not to each other fractality and disassortativity also the clustering nature of a network can be drawn as the average over all nodes of degree k giving a clustering distribution or spectrum in many k k if knn k increases with decreases with k high degree nodes hubs tend to make a link with low degree nodes networks such as the internet the clustering spectrum is a decreasing function of degree which may be interpreted as the hierarchical structures in a network in contrast some other networks such social networks and scientific collaborations and also we will see complex aperture networks are showing assortative behaviour it will be shown that spreading of crack like behaviour due to shearing a fracture can be followed with the patterns of proper spectrum similarly by using the degree correlation one may define the virtual weight of an edge as an average number of edges connected to the nodes the average characteristic path length l is the mean length of the shortest paths connecting any two nodes on the graph the shortest path between a pair i j of nodes in a network can be assumed as their geodesic distance gij with a mean geodesic distance l given as below gij n n i j where gij is the geodesic distance shortest distance between node i and j and n is the number of nodes we will use a well known algorithm in finding the shortest paths presented by dijkstra based on the mentioned characteristics of networks two lower and upper bounds of networks can be recognized regular networks and random networks or networks regular networks have a high clustering coefficient c and a long average path length random networks construction based on random connection of nodes have a low clustering coefficient and the shortest possible average path length however watts and strogatz introduced a new type of networks with high clustering coefficient and small much smaller than the regular ones average path length this is called small world property summary of laboratory tests to study the small world properties of rock joints the results of several laboratory tests were used the joint geometery consisting of two joint surfaces and the aperture between these two surfaces were measured the shear and flow tests were performed later on the rock was granite with a unit weight of and uniaxial compressive strength of mpa an artificial rock joint was made at mid height of the specimen by splitting and using special joint creating apparatus which has two horizontal jacks and a vertical jack the sides of the joint are cut down after creating the joint the final size of the sample is mm in length mm in width and mm in height using special mechanical units various mechanical parameters of this sample were measured a virtual mesh having a square element size of mm was spread on each surface and the height at each position was measured by a laser scanner the details of the procedure can be found in different cases of the normal stress mpa were used while the variation of surfaces were recorded shows the shear strength evolution under different normal loads in this study we focus on the patterns obtained from the test with a mpa normal stress implementation and analysis of complex aperture networks in this section we set up the designated complex network over the aperture profiles which are perpendicular to the shear 
direction by using the correlation measure the distribution of correlation values along profiles and during the successive shear displacements were obtained plotting the correlation distribution shows the transition from a near poisson distribution to a gaussian distribution the change in the type of distribution is followed by the phenomena of the tailing which is inducing the homogeneity of the correlation values towards high and correlation values in other words tailing procedure is tied with the residual part states of the joint thus this can be described by reducing the entropy of the system where the clusters of information over correlation space are formed from another point of view with considering the correlation patterns it can be inferred that throughout the shear procedure there is a relatively high correlation between each profile and the profiles at a certain neighborhood radius this radius of correlation is increasing anisotropic development during shear displacement by using the method described in the previous section a complex aperture network is developed from the correlation patterns as it can be seen in this figure the formation of highly correlated nodes clusters is distinguishable near the peak point it can be estimated that the controlling factor in the evolution path of the system is related to the formation of cliques communities we will show locality properties of the clusters intera structures are much more discriminated at last displacements rather than initial time steps while global variations of the structures are more sensitive to reduction in the shear stress in fact forming hubs in the constructed networks may give the key element of synchronization of aperture profiles or collective motion of discrete contact zones along the shear process in other words reaching to one or multiple attractors and the rate of this reaching after peak point are organized by the spreading and stabilizing the clusters unfortunately due to a low rate of data sampling the exact evolution of patterns before is not possible however during the discussion on the joint degree correlations a general concept will be proposed the three characteristics of the constructed networks namely total degree of nodes clustering coefficient and mean shortest path length are depicted as a function of shear displacement in figure it can be followed there is a nearly monotonic of the parameters a considerable sharp change in transition from shear displacement to mm is observed for all three illustrated parameters this transition is assumed as state transition from the to post peak state while with taking into account the rate of the variation of the parameters the transformation step is discriminated also despite of clustering coefficient trend which show a shape the number of edges and mean short length after a shear displacement of mm roughly exhibiting a trend these results provide the necessary information for the classification of the aperture networks in our rock joint the high clustering coefficient and low average characteristic path length clearly show that our aperture networks have properties the development of shear stress over the networks is much faster after the peak point than the states this feature can be explained by understanding the concept of the contact areas at interlocking of asperities maximum static the point correlation shows a relatively more uniform shape rather than former and later cases also the current configuration implies that the homogeneity of the revealed 
network where the nodes with high degree are tending to absorb nodes with low edges this indicates the property of similarity within the network structures the shear displacements immediately after or near peak figure point destroy the homogeneity of the network and spreading slow fronts and dropping of the frictional coefficient is accompanied with a trial to make stable cliques inducing the heterogeneity to the network structures using a microscopic analysis it can be proven that for homogenous topologies many small clusters spread over the network and merge together to form a giant synchronized cluster this event is predicted before reaching to the peak threshold in heterogeneous graphs however one or more central cores hubs are driving the evolutionary path and are figuring out the synchronization patterns by absorbing the small clusters as can be seen in figure and figure two giant groups are recognizable after mm displacement this shows the attractors states in a dynamic system however two discriminated clusters are not showing the structures within the proper networks hubs with high degree nodes are separated from the hubs with low degree nodes in general one may overestimate the of internal structures of the networks which means that in the entire steps at least a small branch of fractility can be followed the attributed weight distribution associated with the correlation concept shows as if the virtual heaviness of edges are increasing simultaneously the joint degree distribution is also growing which indicates the networks are assortative the distribution of the weights from unveiled hubs also clearly can be followed in figure while two general discriminated patterns are recognizable on the contrary if the patterns of correlation of clustering coefficients are drawn the eruption of local synchronization is generally closed out after or at least near peak point while again during and after dropping shear strength the variation of local clusters will continue especially at the point near to critical step the local clusters present much more uniform percolation rather than the other states while at final steps the stable state or regime of regional structures is not clear it is worth stressing the rate of variation of local joint clustering patterns at apparently are much higher than the global patterns joint degree distribution also it must be noticed that before peak point the structures of joint triangles density is approximately unchangeable then as a conclusion burst of much dense local hubs is scaled with disclosing of slow fronts spreading following the spectrum of the networks in a collective view shows a nearly uniform growing trend where a third degree polynomial may be fitted however with respect to individual analysis local analysis of ci k i a negative trend can be pursued the spectrum of the networks can be related to correlation concept which expresses the probability of selecting a node with a certain degree so that it is connected to other two nodes with the definite degrees the evolution of spectrum of aperture networks in a euclidean space and using a clustering analysis on the accumulated objects has come out the details of the fracture evolution either in the mechanical or analysis but in our case detecting such explicit scaling is difficult let us transfer all of the calculated network properties in a variation rate space depicting the clustering coefficient and mean degree rates shows a similar trend with the evolution of shear strength however after mm 
displacement the variation of edges and clustering coefficient unravels the different fluctuations the negative scaling for large anisotropy in dci dki space can be expressed by dt dt dki dc i as it can be followed in figure the congestion of objects makes a dt dt general elliptic which approximately covers all of points where the details of the correlation among two components presents how the expansion and contraction of patterns fall into the final attractors thus such emerged patterns related to the correlation of variation rate of edges and rate of clustering coefficient are proposing a certain core in each time step so that the absorbing of objects within a black hole at residual part is much more obvious rather dt than other states with definition of anisotropy by s deviation the rate changes of profiles in a new space and with reference to the pre and post peak behaviours are obtained transferring from interlocking step to coulomb threshold level is accompanying with the maximum anisotropy and immediate dropping and then dc is standard dt starting to fluctuate until reaching to a uniform decline the fluctuation of anisotropy from to mm may be associated with the behaviour of the rock joint as the main reason of shallow earthquakes it should be noticed that the results of the later new space is completely matching with the analysis of joint degree and joint clustering distribution in figure we have illustrated a new variable with regard to durability and entropy of the system dc d k dt dt while initiating the post behavior is scaled with the minus or zero variation of the parameter in we analyzed the structures and frequencies over parallel and perpendicular aperture networks also a directed network based on contact strings and preferentiality of possible energy flow in rupture tips has been introduced we also inspected the synchronization of strings using a kuromoto model in fact with definition of such parameter the fluctuation in anisotropy is filtered conclusions in this study we presented a special type of complex aperture network based on correlation measures the main purpose of the study was to make a connection between the apparent mechanical behavior of a rock joint and the characterized network the incorporation of the correlation of apertures and the evaluation of continuously changing contact areas growth of aperture within the networks showed the effects of structural complexity on the evolution path of a rock joint our results showed that the main characteristics of aperture networks are related to the shear strength behavior of a rock joint the residual shear strength corresponded to the formation of giant groups of nodes in the networks in addition based on the joint correlation upon edges and triangles the and post peak behaviour of a rock joint under shear were analyzed our results may be used as an approach to insert the complex aperture networks into the surface growth methods or general understanding of the conditions for a sudden movement shock in a fault references newman j the structure and function of complex networks siam review baiesi m paczuski free networks of earthquakes and aftershocks physical review e xie f levinson topological evolution of surface transportation networks computers environment and urban systems dorogovetev goltsev phenomena in complex networks modern boccaletti latora moreno chavez hwang networks structure and dynamics physics reports abe s suzuki geophys complex network description of seismicity nonlin process a tiampo posadas and 
donner analysis of complex networks associated to seismic clusters near the itoiz reservoir dam eur phys j special topics doering complex systems stochastic processes stochastic differential equations and equations lectures in complex systems sfi studies in the science of complexity ghaffari sharifzadeh m shahriar pedrycz application of soft granulation theory to permeability analysis international journal of rock mechanics and mining sciences volume issue pages tsonis a and roebber the architecture of the climate network physica tsonis and swanson topology and predictability of el and la networks physical review letters dell accio f and veltri approximations on the peano river network application of the hierarchy to the case of low connections phys rev e latora marchiori is the boston subway a network physica a mooney dean using complex networks to model and soil porous architecture soil sci soc am j walker tordesillas topological evolution in dense granular materials a complex networks perspective international journal of solids and structures albert barabasi statistical mechanics of complex networks review of modern physics alkan percolation model for permeability of the excavation damaged zone in rock salt international journal of rock mechanics and mining sciences pages f taraskin neri gilligan invasion in soil complex network analysis proceedings of the international conference on digital signal processing santorin greece karabacak t guclu h yuksel network behaviour in thin film growth dynamics phys rev b published may valentini l perugini d poli the topology of rock fracture networks physica a ghaffari sharifzadeh fall analysis of aperture evolution in a rock joint using a complex network approach international journal of rock mechanics and mining sciences volume issue january pages ghaffari sharifzadeh http evgin e complex aperture networks adler mp and thovert jf fractures and fracture networks kluwer academic alava j nukala zapperi s statistical models of fracture advances in physics volume issue pages knopoff the organization of seismicity on fault networks pnas april vol no zimmerman chen and cook the effect of contact area on the permeability of fractures j hydrol pp lanaro random field model for surface roughness and aperture of rock fractures int j rock mech min sci wilson rj introduction to graph theory fourth edition prentice hall harlow gao and jin identification and nonlinear dynamics of twophase flow in complex networks review e colizza flammini serrano vespignani detecting ordering in complex networks nature physics issue pages song c havlin s makse origins of fractality in the growth of complex networks nature physics pages kim js goh ki salvi g oh e kahng b kim d fractality in complex networks critical and supercritical skeletons phys rev e brown sr kranz rl bonner bp correlation between the surfaces of natural rock joints geophys res lett hakami e einstein h h genitier s iwano characterization of fracture aperturesmethods and parameters in proc of the int congr on rock mech tokyo lanaro f stephansson unified model for characterization and mechanical behavior of rock fractures pure appl ghaffari o complexity analysis of unsaturated flow in heterogeneous media using a complex network approach http newman assortative mixing in networks phys rev lett and sokolov changing correlations in networks assortativity and dissortativity acta phys pol b korniss g synchronization in weighted uncorrelated complex networks in a noisy environment optimization and connections with 
transport efficiency phys rev e dijkstra ew a note on two problems in connexion with graphs numerische mathematik pp newman barabasi watts structure and dynamics of princeton university press watts dj strogatz sh collective dynamics of networks nature mitani esaki zhou nakashima y experiments and simulation of shear flow coupling properties of rock joint in rock mechanics conference essen mitani esaki sharifzadeh vallier shear flow coupling properties of rock joint and its modeling geographical information system gis in isrm conference south african institute of mining and metallurgy sharifzadeh experimental and theoretical research on coupling properties of rock joint thesis kyushu university japan valentini perugini and poli the nature of networks next term possible implications for disequilibrium transport of magmas beneath ridges journal of volcanology and geothermal research pp gnecco enrico meyer ernst fundamentals of friction and wear springer rubinstein cohen fineberg j contact area measurements reveal dependence of static friction phys rev lett sharifzadeh m mitani y esaki t joint surfaces measurement and analysis of aperture distribution under different normal and shear loading using gis rock mechanics and rock engineering volume number april pages xia rosakis j kanamori laboratory earthquakes the rupture transition science rubinstein cohen fineberg detachment fronts and the onset of dynamic friction nature rubinstein cohen and fineberg visualizing experimental observations of processes governing the nucleation of frictional sliding phys d appl phys sornette d critical phenomena in natural sciences berlin heidelberg arenas kurths moreno zhou synchronization in complex networks reports strogatz exploring complex networks nature vol brace wf byerlee jd as a mechanism for earthquakes science volume issue pp scholz ch earthquakes and friction laws nature and http sharifzadeh and http and motifs of networks from shear fractures submitted to journal barahona m and pecora synchronization in systems phys rev lett figures ss mpa sd mm figure variation of shear strength for different cases normal stresses for mpa mpa and mpa without control of upper shear box figure evolution of correlation values of aperture profiles at shear displacement s and mm figure correlation patterns throughout the shear displacements figure visualization of adjacency matrix for the achieved networks k c a l x b c mean geodesic distance k c sd mm sd mm sd mm figure a clustering displacement sd b number of and c average path ki k j number of bins kj number of bins ki figure joint degree distribution from to mm top row is and top right row is shear slip ki k j ki ki figure attributed weight distribution of links related to joint degree distribution for and mm ci c j figure joint clustering coefficient distribution plus attributed weight histograms based on averages of triangles connected to a link sequence of figures are as well as figure k a y x x x data fit b k ci ci ki c ki c figure a spectrum of complex aperture networks ci ki and b evolution of mean degree of node against clustering coefficient and fitness of a polynomial function dki dt i dk dc dci dt dki dci space with respect to shear displacements dt dt figure data accumulation in data are related to shear displacements from mm dc d k dt dt dt a dc dt b std d k t d c t sd mm sd m m sd mm sd m m dc d k with shear displacements and b anisotropy dt dt figure variation of evolution at the rate of spectrum networks space
5
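To make the construction of the complex aperture network described above concrete, here is a minimal sketch under illustrative assumptions: aperture profiles are taken as the rows of a synthetic aperture field, an edge is created whenever the absolute correlation between two profiles exceeds a threshold (0.7 here; the threshold, the profile orientation and the synthetic data are stand-ins for the paper's laser-scanned measurements and its chosen cutoff), and the degree, clustering coefficient and mean shortest-path length discussed in the text are then read off with networkx.

```python
# Sketch of the correlation-threshold aperture network: one node per aperture
# profile, an edge when |c_ij| >= threshold.  Data, threshold and sizes are
# illustrative assumptions, not the experimental values used in the paper.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
base = np.cumsum(rng.standard_normal(120))               # shared large-scale trend
aperture = base + 0.5 * rng.standard_normal((60, 120))   # 60 correlated noisy profiles

corr = np.corrcoef(aperture)                             # pairwise correlations c_ij
threshold = 0.7

G = nx.Graph()
G.add_nodes_from(range(len(corr)))
for i in range(len(corr)):
    for j in range(i + 1, len(corr)):
        if abs(corr[i, j]) >= threshold:
            G.add_edge(i, j)

# Network characteristics used in the analysis: mean degree, clustering
# coefficient, and mean shortest path length on the largest connected piece.
degrees = dict(G.degree())
c_bar = nx.average_clustering(G)
giant = G.subgraph(max(nx.connected_components(G), key=len))
l_bar = nx.average_shortest_path_length(giant)

print(f"mean degree = {np.mean(list(degrees.values())):.2f}, "
      f"clustering = {c_bar:.3f}, mean path length = {l_bar:.3f}")
```

Tracking how these three quantities change as the network is rebuilt at each shear displacement is what produces the evolution curves discussed in the paper.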
oct mapping class groups are not linear in positive characteristic button abstract for an orientable surface of finite topological type having genus at least possibly closed or possibly with any number of punctures or boundary components we show that the mapping class group m od has no faithful linear representation in any dimension over any field of positive characteristic introduction a common question to ask of a given infinite finitely generated group is whether it is linear for instance consider the braid groups bn the automorphism group aut fn of the free group fn and the mapping class group mod of the closed orientable surface with genus linearity in the first case was open for a while but is now known to hold by for n showed that aut fn is not linear whereas for g the third case is open however whereas the definition of linearity is that a group embeds in gl d f for some d n and some field f in practice one tends to concentrate on the case where f in fact a finitely generated group embeds in c if and only if it embeds in some field of characteristic zero so it is enough to restrict to this case if only characteristic zero representations are being considered however we can still ask about faithful linear representations in positive characteristic for instance in the three examples above it is unknown for n if the braid group bn admits a faithful linear representation in any dimension over any field of positive characteristic for aut fn with n the proof in applies to any field not just the characteristic zero case so that there are also no faithful representations in positive characteristic proof as for mapping class groups we show here that there are no faithful linear representations of mod in any dimension over any field of positive characteristic when is an orientable surface of finite topological type having genus g at least which might be closed or might have any number of punctures or boundary components the idea comes from considering the analogy between a finitely generated group being linear in positive characteristic and having a nice geometric action as we did in when showing that gersten s free by cyclic group has no faithful linear representation in any positive characteristic on looking more closely to see which definition of nice aligns most closely with linearity in positive characteristic we were struck by the similarities between that and the notion of a finitely generated group acting properly and semisimply more so than properly and cocompactly on a complete cat space in bridson shows that for all the surfaces mentioned above the mapping class group mod does not admit such an action this result is first credited to but the proof in consists of finding an obstruction to the existence of such an action by any one of these groups this obstruction involves taking an element of infinite order and its centraliser in said group then applying a condition on the abelianisation of this centraliser here we show that this condition holds verbatim for groups which are linear in positive characteristic thus obtaining the same obstruction we leave open the question of whether the mapping class group of the closed orientable surface of genus is linear in positive characteristic but we note that it was shown in and using the braid group results that this group is linear in characteristic zero anyway proof the following is the crucial point which distinguishes our treatment of linear groups in positive characteristic from the classical case proposition if f is an algebraically 
closed field of positive characteristic and d n then there exists k n such that for all elements g gl d f the matrix g k is diagonalisable proof if f has characteristic p then we take k to be any power of p which is at least proof we put g into jordan normal form or indeed any form where the matrix splits up into blocks corresponding to the generalised eigenspaces of g and such that we are upper triangular in each block then on taking the eigenvalue f of g the block of g corresponding to will be of the form n where n is upper triangular with all zeros on the diagonal so that k k k k n k n n i but n k because k d and ki modulo p for i k as k is a power of thus in this block we have that g k is equal to i but we can do this in each block making g k a diagonal matrix as for the mapping class group mod of the surface we have proposition proposition if is an orientable surface of finite type having genus at least with any number of boundary components and punctures and if t is the dehn twist about any simple closed curve in then the abelianisation of the centraliser in mod of t is finite this is in contrast to theorem suppose that g is a linear group over a field of positive characteristic and cg g is the centraliser in g of the infinite order element then the image of g in the abelianisation of cg g also has infinite order proof as the abelianisation is the universal abelian quotient of a group it is enough to find some homomorphism of cg g to an abelian group where g maps to an element of infinite order so we use the determinant we first replace our field by its algebraic closure then proposition tells us that we have the diagonalisable element g k whereupon showing that g k has infinite order in the abelianisation of cg g which could of course be smaller than the centraliser in g of g k will establish the same for take a basis so that g k is actually diagonal and group together repeated eigenvalues so that we have gk idk references this means that any element in cg g k and thus also in cg g is of the form ak with the same block structure consequently we have as homomorphisms from cg g to the multiplicative abelian group not just the determinant itself but also the subdeterminant functions detk here for h cg g we define deti h as the determinant of the ith block of h when expressed with respect to our basis above which diagonalises g k and this is indeed a homomorphism now it could be that deti g k has finite order which implies that i and thus also has finite multiplicative order in however if this is true for all i k then all have finite order this means that g k and so g does too which is a contradiction thus for some i we know g k and g map to elements of infinite order in the abelian group under the homomorphism deti corollary if is an orientable surface of finite type having genus at least with any number of boundary components and punctures then mod is not linear over any field of positive characteristic proof we can combine proposition and theorem to get a contradiction because dehn twists have infinite order references bigelow braid groups are linear amer math soc bigelow and budney the mapping class group of a genus two surface is linear algebr geom topol references bridson semisimple actions of mapping class groups on cat spaces geometry of riemann surfaces london math soc lecture note cambridge univ press cambridge button minimal dimension faithful linear representations of common finitely presented groups http formanek and procesi the automorphism group of a free group is not 
linear, j. algebra; kapovich and leeb, actions of discrete groups on nonpositively curved spaces, math. ann.; korkmaz, on the linearity of certain mapping class groups, turkish j. math.; krammer, braid groups are linear, ann. of math. selwyn college, university of cambridge, cambridge, uk. e-mail address
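To make the proposition above concrete, here is a short numerical check (not part of the original paper; the prime p, the block size d and the eigenvalue are illustrative choices) of the fact it rests on: over a field of characteristic p, a Jordan block J = lambda I + N satisfies J^k = lambda^k I once k is a power of p with k >= d, since the binomial coefficients C(k, i) vanish modulo p for 0 < i < k and N^d = 0.

```python
# Minimal sketch: verify that a d x d Jordan block over F_p becomes diagonal
# when raised to a power of p that is at least d.  Values are illustrative only.
import numpy as np

def jordan_block(lam, d):
    """Upper-triangular d x d Jordan block with eigenvalue lam (integer entries)."""
    return lam * np.eye(d, dtype=np.int64) + np.diag(np.ones(d - 1, dtype=np.int64), k=1)

def mat_pow_mod(a, k, p):
    """Compute a**k with entries reduced modulo p (square-and-multiply)."""
    result = np.eye(a.shape[0], dtype=np.int64)
    base = a % p
    while k > 0:
        if k & 1:
            result = (result @ base) % p
        base = (base @ base) % p
        k >>= 1
    return result

p, d = 5, 4            # characteristic and matrix size
k = p ** 2             # smallest power of p with k >= d here
jk = mat_pow_mod(jordan_block(3, d), k, p)
print(jk)
print("diagonal:", not (jk - np.diag(np.diag(jk))).any())   # expect True
```

Applying the same computation block by block to a matrix in Jordan normal form is exactly the argument in the proof: each generalised eigenspace block becomes scalar, so g^k is diagonalisable.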
4
packing circles within circular containers a new heuristic algorithm for the balance constraints case washington alves de luiz leduino de salles antonio carlos and ednei felix corresponding author de aplicadas universidade estadual de campinas limeira sp brazil instituto de e tecnologia universidade federal de paulo dos campos sp brazil departamento de universidade federal do ponta grossa pr brazil oliveira salles neto moretti moretti edneif reis faculdade abstract in this work we propose a heuristic algorithm for the layout optimization for disks installed in a rotating circular container this is a unequal circle packing problem with additional balance constraints it proved to be an problem which heuristics methods for its resolution in larger instances the main feature of our heuristic is based on the selection of the next circle to be placed inside the container according to the position of the system s center of mass our approach has been tested on a series of instances up to circles and compared with the literature computational results show good performance in terms of solution quality and computational time for the proposed algorithm keywords packing problem layout optimization problem nonidentical circles heuristic algorithm introduction we study how to install unequal disks in a rotating circular container which is an adaptation of the model for the unequal circle packing problem with balance behavioral constraints this problem arises in some engineering applications development of satellites and rockets multiple spindle box rotating structure and so on the low cost and high performance of the equipment require the best internal among different geometric devices this problem is known as layout optimization problem lop and consists in placing a set of circles in a circular container of minimum envelopment radius without overlap and with minimum imbalance each circle is characterized by its radius and mass there the original threedimensional case the equipment must rotate around its own axis is different circles see figure c represent cylindrical objects to be placed inside the circular container figure illustrates the physical problem figure a shows a rotating cylindrical container the symbol and the arrow illustrate the rotation around the axis of the equipment is the angular velocity in another viewpoint figure b shows the interior of the equipment where distinct circular devices need to be placed in this example six cylinders are placed in which the radii masses and heights are not necessarily equal research on packing circles into a circular container has been documented and used to obtain good solutions heuristic metaheuristic and hybrid methods are used in most of them there are only a few publications discussing the disk problems with balance constraints lop is a combinatorial problem and has been proved to be lenstra rinnooy kan this problem was proposed by teng et al where a mathematical model and a series of intuitive algorithms combining the method of constructing the initial objects topomodels with the iteration method are described and the validity of the proposed algorithms is by numerical examples tang teng presented a genetic algorithm called decimal coded adaptive genetic algorithm to solve the lop xu et al developed a version of genetic algorithm called positioning technique which the best ordering for placing the circles in the container and compare it with two existing natureinspired methods qian et al extended the work tang teng by introducing a genetic 
algorithm based on intervention in which a human expert examines the best solution obtained through the loops of many generations and designs new solutions methods based on particle swarm optimization pso have been applied to the lop li et al developed a pso method operator this approach can escape from the local minima maintaining the characteristic of fast speed of convergence zhou et al proposed a hybrid approach based on constraint handling strategy suit for pso where improvement is made by using direct search to increase the local search ability of the algorithm xiao et al presented two approaches based on gradient search the hybrid with simulated annealing sa method and the second hybrid with pso method lei presented an adaptive pso with a better search performance which employs strategies to plan space global search and local search to obtain global optimum huang chen proposed an improved version of the algorithm proposed by wang et al for solving the disk packing problem with equilibrium constraints an strategy of accelerating the search process is introduced in the steepest descends method to shorten the execution time in liu li the lop is converted into an unconstrained optimization problem which is solved by the basin algorithm presented by them together with the improved energy landscape paving method the gradient method based on local search and the heuristic update mechanism liu et al presented a simulated annealing heuristic for solving the lop by incorporating the neighborhood search mechanism and the adaptive gradient method the neighborhood search mechanism avoids the disadvantage of blind search in the simulated annealing algorithm and the adaptive gradient method is used to speed up the search for the best solution liu et al developed a tabu search algorithm for solving the lop the algorithm begins with a random initial and applies the gradient method with an adaptive step length to search for the minimum energy he et al proposed a hybrid approach based on quasiphysical optimization method where improvement is made by adapting the descent and the tabu search procedures the algorithm approach takes into account the diversity of the search space to facilitate the global search and it also does search to the corresponding best solution in a promising local area liu et al presented a heuristic based on energy landscape paving the lop is converted into the unconstrained optimization problem by using strategy and penalty function method subsequently the heuristic approach combines a new updating mechanism of the histogram function in an improved energy landscape paving and a local search for solving the lop in this paper we propose a new heuristic to solve the lop the basic idea of our approach called placing technique cmpt is to place each circle according to the current position of the center of mass of the system results for a selected set of instances are found in huang chen xiao et al lei liu li liu et al and he et al to validate our approach we compare the results of our heuristic with these instances computational results show good performance in terms of solution quality and computational time the paper is organized as follows section presents a formal of the unequal circle packing problem with balance constraints and some are established section describes our heuristic in section we present and analyze the experimental results and section concludes the paper problem formulation we consider the following layout optimization for the disks installed in a rotating 
circular container given a set of circles not necessarily equal the minimal radius of a circular container in which all circles can be packed without overlap and the shift of the dynamic equilibrium of the system should be minimized the decision problem is stated as follows consider a circular container of radius r a set of n circles i of radii ri and mass mi i n n let x y t be the coordinates of the container center and xi yi t the center coordinates of the circle i let z r be the objective function and n n z mi xi x mi yi y the second objective function which measures the shift in the dynamic equilibrium of the system caused by the rotation of the container without loss a c r b figure circular devices inside a rotating circular container and a feasible solution of generality we can consider the problem is to determine if there exists a dimensional vector z r x y xn yn t that the following mathematical formulation lop minimize subject to r max f z z z ri xi x yi y xi x j yi y j ri r j i j n where are a pair of preset weights constraint states that circle i placed inside the container should not extend outside the container while constraints require that two any circles placed inside the container do not overlap each other figure c illustrates a typical feasible solution to the lop the circles are numbered from to is the radius of the circle there is no overlap between the circles and the seven circles are completely placed into the larger circle of radius r radius of the container we develop a constructive heuristic guided by a simple strategy a suboptimal solution is reached after gradually placing a circle at a time inside the container each circle is placed in an euclidean coordinates system on the following evaluation criteria select as the new position of the circle according to the current center of mass of the system without overlapping with the circles placed earlier attempt to the wasted spaces after placing this circle and in the end select as the new coordinates of the container center that completely eliminate the dynamic imbalance of the system to perform the above criteria we need some notations and denote by x i xi yi t the center coordinates of the circle i by d i j d x i x j xi x j yi y j the euclidean distance between the center coordinates of the circles i and j and by i j x i j the set of points on the line segment whose endpoints are x i and x j figure a illustrates the set contact pair if d i j ri r j we say that i j is a contact pair of circles layout a partial layout denoted by l is a partial pattern layout formed by a subset of the m of the circle centers which have already been placed inside the container without overlap assume in addition that the container itself is in if m n then l is a complete layout or solution figure b illustrates a partial layout formed by circles placed inside the container among others and are contact pairs placed cyclic order let c it i p n p t be a cyclic order of circles which have already been placed inside the container without overlap in addition the intersection of any two sets i and i have at most one endpoint in common t we say that c is a placed cyclic order contact cyclic order let c be a placed cyclic order if the circles are two by two contact pairs in c we say that c is a contact cyclic order given a c i i p i it we say that the circles i are in counterclockwise order in relation to circle i p and the circles i it are in clockwise order in relation to circle i p main area let c be a placed cyclic order we say that the area 
bounded by the union of the line segments i p iq where i p iq are in c is the main area of c which is denoted by a c figure a illustrates a contact cyclic order formed by circles placed inside the container note that all i j in are two by two contact pairs in figure b in addition to it is illustrated a contact cyclic order c formed by circles note that all circles in c dashed lines are completely placed on a this is a feature of our approach since several contact cyclic order are obtained by circling each other this approach is an important requirement since it can yield a more compact layout l xcm c c c a b figure two contact cyclic orders and a partial layout let n be a subset of circles placed inside the container and be the cardinality of we denote the centroid of by the coordinates xc x i border let l be a partial layout and c be a contact cyclic order if the center of each circle in l belongs to a c we say in addition that c is the border of the partial layout figure b illustrates the border circles of the partial layout l circles note that all circle centers in l belong to a on the other hand c is not a border of l since x a c we consider two cases of inclusion for placing circles in the case we require that the circle k to be included must touch at least two previously placed circles after this in the second case we require that another circle to be included occupies the wasted spaces after placing the circle this is a reasonable requirement since it will generally yield a more compact layout than one by separate circles these two cases of inclusion can be explained by a partial layout of the lop example with seven existing circles illustrated in figure in figure a it is shown the case of inclusion there are two positions to place the circle dashed lines touching the contact pair and two positions to place the circle dashed lines touching the contact pair each position can be obtained by the solutions of the following particular case of the problems of apolonio coxeter x xi y yi rk ri p p p x xi y yi rk ri q q q we denote by st k i p iq the coordinates of the solution of the system which does not belong to a c note that the system has two real solutions whenever d i p iq ri p riq in figure a by choosing st as the coordinates of the circle we obtain a feasible layout however it is not enough to choose st as the coordinates of the circle because the circle overlaps the circles and in our approach we always select the coordinates st k i p iq a c in order to place the new circle k touching the contact pair i p iq in the border c in figure a we have k and i p iq but due to the potentially large differences in the radii it is possible to occur overlap with the circles in the border as it is illustrated in figure a we get around this situation by repositioning the circle k to the coordinates of the solution of new system for k i and where now the circle k touches the circles i and in figure b we have k i and this case of inclusion and the possible reposition the following placement approach external placement let l be a partial layout and c be the border of an external placement is the placement of a circle k inside the container so that there is no overlap its center does not belong to a c and it becomes contact pair with at least two circles in we denoted an external placement by pe k the external placement is always selected outside a c however if there is overlap on c the repositioning of the new circle k as explained above is done in the following routine procedure external placement 
routine input a circle k a contact pair i p iq a partial layout l and the border c output an external placement pe k p and q step calculate st k i p iq by system and pe k st k i p iq if the circle k does not overlap any circles in c stop otherwise go to step step while there is overlap between the circle k and the circles in c repeat if the circle k overlaps the circle i furthest with respect to the counterclockwise order of the border c p and if the circle k overlaps the circle furthest with respect to the clockwise order of the border c q and choose pe k as the solution of the system that is furthest from the centroid of the circles in l with respect to the euclidean distance first if the new circle k does not overlap any circles in border c the external placement routine selects pe k st k i p iq in our approach this case is the most convenient way to place the next circle however if there is overlap in step the routine such circles i and in order to reposition the circle k further from the centroid of the partial layout eventually avoiding any kind of overlap to obtain a more compact layout after including the circle k it is checked the possibility of including another circle to occupy the wasted spaces after placing the circle we check among the remaining circles outside the container preferably the largest one if there is a circle that can be placed into the container in a centralized position without overlap each centralized position is the centroid coordinates of a certain set of circles which includes the circle k and the two circles touching the circle figure b illustrates the second case of inclusion which we can investigate the possibility of positioning a circle in the wasted space after placing the circle touching the contact pair centroid of the circles and in the wasted space after placing the circle touching the circles and centroid of the circles internal placement let l be a partial layout c be the border of l and i p k k iq be contact pairs in c where k is the previous circle included an internal placement is the placement of a circle inside the container so that its center belongs to a c does not overlap with any other circle and the center of is placed at the centroid coordinates of the subset where k i p iq we denote an internal placement by pi meaning that is to be placed at the centroid coordinates xc of let l be a partial layout and c be the border of in our algorithm each positioning in the case of inclusion is always done by looking at the contact pairs in the border suppose that the remaining circle k is selected to be placed touching the contact pair i p iq in the placement of the circle k causes the addition of one element in l and one index in c and perhaps the removal of some indices from this will be represented by the following operation k i p i c c k i i i p k i it a a c c b xc x c a c c figure two cases of inclusion a external placement and b internal placement where s t with this choice for s there are fewer indices between p and p s than p s and the operation k i p i c applied to c means that the circle k was placed inside the container touching the circles i p and i without overlap then the index k is added to c the subset of indices i i between p and p s is removed from c and the coordinates x k are added to the partial layout note that if s there is no removal of indices from c and the index k is inserted in c between the indices i p and i the possible placement of the circle after the placement of the circle k only causes the possible addition of 
the coordinates x to the partial layout in our approach we require the imbalance of the system be zero it seems intuitive that this requirement may result in a good solution the center of mass of the system by n n n xcm xcm ycm mi mi xi mi yi then one can shift the center of the rotating circular container to the center of mass of the system to have zero imbalance this shift is made at each outer iteration and at the end of the algorithm but it may increase the envelopment radius thus if the layout l represents a complete solution of the lop we denote the radius r of the container by r r l max rik d xcm x ik moreover the index where r l is reached is denoted by kmax arg r l placing technique cmpt we present a new placing technique which yields compact layouts and quality solutions in an manner let n be a permutation of n we place the circles in the partial layout l one by one according to the order by this permutation given a order of inclusion the circles and must be positioned as follows procedure initial layout routine input the circles and output an initial layout l and the initial border c place the circle at coordinates x choose the coordinates x such that touches for each circle and solve the system and place them at coordinates x st and x st without overlap l x x x x and c figure b illustrates the initial l x x x x and the initial border c among many optional positions we can choose x for example at the with the coordinates suppose we have already placed the circles k we describe our approach for placing the circle k and after that we verify the possibility of placing another circle k when we place the circle k where k see procedure we require that the circle touches at least two previously placed circles see figure a and procedure this will generally yield a more compact layout however we can increase the compactness of the layout if the wasted spaces after placing the circle k can be occupied by another circle see b and procedure k k xcm xcm k c k c a b figure example of the cmpt routine we observe that for each additional circle the envelopment radius of a layout is generally enlarged in order to minimize the rate of growth of this radius during inclusions we must properly choose a new position for circle k which yields a smaller envelopment radius our strategy cmpt attempts to reduce the rate of growth of the envelopment radius by including every circle around the coordinates of the center of mass of the system which is updated during each outer iteration this strategy consists of shifting the origin of the euclidean plane to the current center of mass of the system then we require that the circle k touches the circles of a contact pair arbitrarily chosen among the elements of the border c taking into consideration the quadrants of the euclidean plane this approach is performed according to the following routine procedure cmpt routine input a partial layout l and the border c output the sets and step calculate the coordinates of the center of mass xcm of the circles in l and translate the origin of the euclidean plane to xcm step include each contact pair i p i of c in the set qh if the center of i p belongs to the quadrant h of the euclidean plane for h given the border c the procedure only separates the contact pairs in c according to the quadrants of the euclidean plane with origin shifted to the current center of mass of the system figure illustrates the procedure in figure a we observe that the coordinates of the center of mass xcm of the system do not coincide with the 
coordinates of the origin x we wish to place the next circle k around the coordinates xcm in order to mitigate the growth of the envelopment radius by dividing the plane into quadrants we can obtain a border c with a more circular shape we see in figure b that if we place each new circle k at a different quadrant of the euclidean plane with the origin shifted to xcm then the layout is more evenly distributed the choice of different quadrants a contact pair in qh h to position the next circle k and the operation on the border c lead to a updated border c more similar to a circular shape this will generally yield a more compact layout because the wasted space between the main area a c and envelopment radius is minimized see the example in figure next we describe the two cases of inclusion in the following routine procedure inclusion routine input a circle k a permutation the sets qh h a contact pair i p iq in a set qh a partial layout l and the border c output a partial layout l the border c and the sets qh h step obtain an external placement pe k and the new values for p and q by procedure if there are fewer indices in the border c between q and p than those between p and q then p p q and q step c iq c x k pe k l l x k and qh qh k i i p i iq h note that q p s where s t step if it is possible to obtain an internal placement pi for the set k i p i iq and a circle preferably the largest in the permutation k n then x pi l l x and exclude from the inclusion routine attempts to place the new circles in a more compact layout first it computes an external placement for the next circle k by procedure and updates the values for p and q in order to obtain fewer indices between p and q than those between q and in step the border c is updated by the operation where the indices between p and q are removed from c by step those indices correspond to the internal circles from the border when compared to the indices between q and p and k is added to the circle k is placed inside the container and all contact pairs between p and q including i p i and iq are removed from the sets qh h this choice guarantees that the operation will exclude the internal circles from the border finally a search to place another circle internal placement is performed a is performed after the algorithm builds a complete solution represented by layout l which contemplates improvements via circle repositioning at the border c of this process causes changes in c where an index is removed and then it is repositioned in c by operation the removal of the index from c will be represented by the following operation d i p c i i it the operation d i p c applied to c means that the circle i p is deleted from its position we delete the current x i p from the layout l and we test if a new position pe i p for i p improves the radius r l of the container main routine we choose to position each circle inside the container according to the following main procedure main routine input a permutation n output a layout l complete solution step initialization obtain the initial layout l and the initial border c by procedure k step cmpt obtain the sets qh h by procedure step layout construction while there are contact pairs in any qh and circles outside the container repeat for each h choose an arbitrary i p iq qh and include the circle k and the possible if qh circle k n by procedure and k k step if there are circles outside the container return to step otherwise go to step step l c r compute kmax in and k kmax step x ik delete the current x ik and 
repeat step for each contact pair i p iq of excluding ik and ik step obtain an external placement pe ik and the new values for p and q by procedure and x ik pe ik if the radius of the container is improved then d ik ik i p iq l c and return to step step x ik and the routine with the complete solution l whose container center is the center of mass of the system given a permutation the main routine builds an initial layout l in step by placing the four circles as in procedure next in step the main aspect of our approach is performed by procedure cmpt routine where the euclidean plane is divided into four parts and the subsets qh h of contact pairs are obtained next step is repeated by looking at each subset qh and while there are circles remaining to be placed in this step an arbitrary contact pair in qh is chosen and the two cases of inclusions are performed by procedure after we placing all circles inside the container we obtain a complete solution l and its border then in step a is performed via circle repositioning at the border c which attempts improvements in the envelopment radius in the end the center of the container is shifted to center of mass of the system which achieves zero imbalance order of placement of the circles as previously described a permutation n of n is used as an input in our algorithm to generate a layout by specifying the order in which the circles are placed since there exist n possible permutations for n circles we need an appropriate technique in order to search in such a large space preliminary tests show that the wasted spaces after placing circles are minimized with greater when the order of addition of the circles favors those of larger radii let n be a sequence obtained by considering their radii in descending order of the circles k j k j choose an integer b b n and subdivide the terms of the sequence in blocks thus it is possible to obtain a subsequence of to be used as an input to the algorithm by permuting the positions of the elements of the elements of and so on until we permute the positions of n n last elements of with this procedure several subsequences to place the different circles may be generated actually there are b n possibilities so that b n n thus when b n we only obtain the sequence and xcm ikmax figure a suboptimal solution circles inside the container when b we can generate at most n distinct subsequences in our numerical experiments for each instance of dimension greater than or equal to we chose b to generate such sequences complexity the analysis of the real computational time of the main routine is because it does not depend only on the number of circles but also on the diversity of the circle radii and the number of circles in a current border c as well as the implementation here we analyze the upper bound of the complexity of the main routine when it a complete solution l with border c such that where including the process recall that before the circles in border c are two by two contact pair given a partial layout l with m circles already placed inside the container and n m circles outside let be the number of circles in the border c of the partial layout the strategy cmpt in procedure checks the position of circles in the euclidean plane which is done in o when we position the circle ik where k see procedure touching two circles in border c existing circles can positions since two existing circles two possible positions for the third ik to determine an external placement for ik we must check the overlap with a c or with any 
circles in this is the same that we check the overlap with each circle in l that is m circles a good implementation can reduce the number of checks because we assess positions when we place the circle ik each time checking for overlaps m times then the complexity to obtain an external placement is about m after placing the circle ik we must check if there is a circle outside the container to be placed in an internal placement then we must check the overlaps among n m circles and a subset in c which is done in about n m n m in the process we select one circle in c and assess at most contact pairs in c to try to improve of the envelopment radius by checking at most external placements thus the complexity of the is bounded by n therefore the complexity of placing n circles during the main routine is bounded by o after placing the new circle ik the operation the border this operation controls the size of c during the iterations since n the theoretical upper bound is o experimental results in this section we measure the quality and performance of our algorithm on a series of instances up to circles from literature we tested three sets of instances from the literature we compare our approach with a series of hybrid approaches based on simulated annealing and particle swarm optimization xiao et al lei a hybrid approach based on simulated annealing neighborhood search mechanism and the adaptive gradient method liu et al a hybrid tabu search algorithm and gradient method liu et al a series of heuristics based on energy landscape paving gradient method and local search liu li liu et al and a series of algorithms based on approaches gradient method and local search huang chen he et al liu et al size size size table data of each instance first set of instances liu li radii mass size radii mass second set of instances huang chen radii mass size radii mass third of instances xiao et al radii mass size radii mass these methods search for the optimal layout by directly evolving the positions of every circle as well as considering imbalance we use the benchmark suite of instances of the problem table numerical results for the set of instances instance n instance n instance n huang chen lei liu li time s time s time s liu et al liu et al time s he et al liu et al times s our algorithm time s time s time s described in table to test our algorithm for each instance we present the range for ri and mi a more detailed description of the instances can be found in huang chen liu li and xiao et al the routines were implemented in matlab language and executed on a pc with an intel core gb of ram and linux operating system except for the instances and both of them with circles we decide to generate distinct permutations as input for the algorithm in each instance we in the number of executions of the main routine for each instance and the best solution found was selected this amount of tests proved adequate for our comparisons the results from the second and third sets of instances are presented in tables and respectively where we compare our approach with those described in each indicated reference the results are shown for the size of the instances the best radius of the container obtained objective function the imbalance obtained second objective function and the running time t in seconds table shows that our approach proved to be competitive we obtained the best value for on instance tied in instance while obtained results and worse than the best result on instances and respectively similar results were obtained 
in the second set test as can be seen in table our algorithm tied in instance and while obtained results at most worse than the best result on instances and in table we compare our approach with three other algorithms the set of data are from a version of simulated annealing sa the second set of data are from the same reference but one of them is a version of particle swarm optimization pso while the third set of data is a table numerical results for the second set of instances instance n instance n huang chen liu et al liu et al time s time s time s he et al liu et al our algorithm time s time s time s heuristic based on energy landscape paving again our approach proved to be competitive in relation to the envelopment radius we obtained better results in out of the instances we only obtained worse results in cases but they were on average approximately worse than the results from literature for such instances overall running time obtained by our algorithm can be considered good since the center of the rotating circular container is shifted to the center of mass of the system we always have making our solutions more interesting than the others for this set of instances figure illustrates a typical solution obtained by our algorithm for an instance of circles note that the large border have circles of the size and when we carefully read the cmpt routine we can see that the initial border is iteratively transformed in the border and the latter is iteratively transformed in the border in this example there were only two inclusions by internal placement the computational results show that the proposed algorithm is an effective method for solving the circular packing problem with additional balance constraints conclusions we have presented a new heuristic called placing technique for packing unequal circles into a circular container with additional balance constraints the main feature of our algorithm is the use of the euclidean plane with origin in the center of mass of the system to select a new circle to be placed inside the container we evaluate our approach on a series of instances from the literature and compare with existing algorithms the computational results show that our approach is competitive and outperforms some published methods for solving this problem we conclude that our approach is simple but with high performance future work will focus on the problem of packing spheres table numerical results for the third set of instances instance n instance n xiao et al sa xiao et al pso time s time s liu et al our algorithm time s time s acknowledgements the authors are indebted to the anonymous reviewers for their helpful comments the author wishes to thank capes and grant the second author is grateful to fapesp the third author thank cnpq references coxeter the problem of apollonius the american mathematical monthly he mo ye huang a optimization method for solving the circle packing problem with equilibrium constraints computers industrial engineering huang chen note on an improved algorithm for the packing of unequal circles within a larger containing circle computers industrial engineering lei constrained layout optimization based on adaptive particle swarm optimizer berlin heidelberg lenstra rinnooy kan complexity of packing covering and partitioning problems in schrijver a ed packing and covering in combinatorics econometric institute mathematisch centrum amsterdam pp li liu sun a study on the particle swarm optimization with mutation operator constrained layout optimization chinese 
journal of computers in chinese apud in xiao xu amos liu jiang li xue liu z zhang z energy landscape paving for the circular packing problem with performance constraints of equilibrium physica a statistical mechanics and its applications liu j li basin algorithm for the circular packing problem with equilibrium behavioral constraints science china information sciences liu li chen liu wang y equilibrium constraint layout using simulated annealing computers industrial engineering liu li geng a new heuristic algorithm for the circular packing problem with equilibrium constraints science china information sciences qian teng sun z interactive genetic algorithm and its application to constrained layout optimization chinese journal of computers apud in xiao xu amos tang teng a genetic algorithm and its application to layout optimization journal of software in chinese apud in xiao xu amos teng sun ge zhong x layout optimization for the dishes installed on rotating table the packing problem with equilibrium behavioural constraints science in china mathematics series a wang huang zhang q xu an improved algorithm for the packing of unequal circles within a larger containing circle european journal of operational research xiao xu amos two hybrid compaction algorithms for the layout optimization problem biosystems xu xiao b amos a novel genetic algorithm for the layout optimization problem in evolutionary computation cec ieee congress on ieee pp zhou gao gao particle swarm optimization based algorithm for constrained layout optimization control and decision
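The geometric core of the external placement step described earlier, placing a new circle of radius rk so that it touches two circles already in the layout, amounts to intersecting two auxiliary circles: one of radius ri + rk about the first placed centre and one of radius rj + rk about the second. The sketch below is not the authors' implementation; it is a minimal illustration in which the requirement that the new centre lie outside the main area is approximated by taking the candidate farther from the centroid of the placed centres (the same distance criterion the routine uses when repositioning), together with the centre-of-mass shift that zeroes the final imbalance. All function names and the toy values are assumptions made for illustration.

```python
# Minimal sketch of the external-placement geometry and the centre-of-mass shift;
# a simplified illustration, not the paper's full CMPT algorithm.
import math

def tangent_positions(ci, ri, cj, rj, rk):
    """Candidate centres for a circle of radius rk touching circles (ci, ri) and (cj, rj).

    These are the intersection points of the circle of radius ri + rk about ci
    with the circle of radius rj + rk about cj (0, 1 or 2 points).
    """
    (xi, yi), (xj, yj) = ci, cj
    r1, r2 = ri + rk, rj + rk
    dist = math.hypot(xj - xi, yj - yi)
    if dist == 0 or dist > r1 + r2 or dist < abs(r1 - r2):
        return []                                   # no tangent position exists
    a = (r1 ** 2 - r2 ** 2 + dist ** 2) / (2 * dist)
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    mx, my = xi + a * (xj - xi) / dist, yi + a * (yj - yi) / dist
    ux, uy = (yj - yi) / dist, -(xj - xi) / dist    # unit normal to the line ci-cj
    pts = [(mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)]
    return pts if h > 0 else pts[:1]

def external_placement(ci, ri, cj, rj, rk, placed_centres):
    """Pick the candidate centre farther from the centroid of the placed centres."""
    cx = sum(x for x, _ in placed_centres) / len(placed_centres)
    cy = sum(y for _, y in placed_centres) / len(placed_centres)
    cands = tangent_positions(ci, ri, cj, rj, rk)
    return max(cands, key=lambda q: math.hypot(q[0] - cx, q[1] - cy), default=None)

def centre_of_mass(centres, masses):
    """Container-centre shift that zeroes the imbalance of the final layout."""
    total = sum(masses)
    return (sum(m * x for (x, _), m in zip(centres, masses)) / total,
            sum(m * y for (_, y), m in zip(centres, masses)) / total)

# Toy usage: place a unit circle against two placed unit circles, then recentre.
placed = [(0.0, 0.0), (2.0, 0.0)]
new_centre = external_placement(placed[0], 1.0, placed[1], 1.0, 1.0, placed)
print(new_centre)                                   # roughly (1.0, +/-1.732)
print(centre_of_mass(placed + [new_centre], [1.0, 1.0, 1.0]))
```

The overlap test against the rest of the border and the repositioning loop of the routine would sit on top of these primitives; both only require checking, for each placed circle, that the distance between centres is at least the sum of the radii.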
5
counting racks of order n may matthew ashford and oliver may abstract a rack on n can be thought of as a set of maps fx n where each fx is a permutation of n such that f x fy fx fy for all x and y in blackburn showed that the number of isomorphism classes of racks on n is at least n and at most n where c in this paper we improve the upper bound to n matching the lower bound the proof involves considering racks as loopless directed multigraphs on n where we have an edge of colour y between x and z if and only if x fy z and applying various combinatorial tools introduction a rack is a pair x where x is a set and x x x is a binary operation such that for any y z x there exists x x such that z x y whenever we have x y z x such that x y z y then x z for any x y z x x y z x z y z if x is we call the order of the rack note that conditions and above are equivalent to the statement that for each y the map x x y is a bijection on x as mentioned by blackburn in racks originally developed from correspondence between conway and wraith in while more structures known as quandles which are racks such that x x x for all x were introduced independently by joyce and matveev in as invariants of knots fenn and rourke provide a history of these concepts while nelson gives an overview of how these structures relate to other areas of mathematics mathematical institute university of oxford andrew wiles building radcliffe observatory quarter woodstock road oxford uk riordan as a example note that for any set x if we x y x for all x y x then we obtain a rack known as the trivial rack tx if g is a group and g g g is by x y y xy then the resulting quandle g is known as a conjugation quandle for a further example let a be an abelian group and aut a be an automorphism if we a binary operation a a a by x y x y y x y y then g a is an alexander quandle or affine let x and x be racks a map x x is a rack homomorphism if x y x y for all x y x a bijective homomorphism is an isomorphism we will only be concerned with racks up to isomorphism if x is a rack of order n then it is clearly isomorphic to a rack on n so we will take n to be our underlying ground set we will denote the set of all racks on n by rn and the set of all isomorphism classes of racks of order n by so there have been several published results concerning the enumeration of quandles of small order ho and nelson and henderson macedo and nelson enumerated the isomorphism classes of quandles of order at most while work of clauwens and vendramin gives an enumeration of isomorphism classes of quandles of order at most whose operator group is transitive the operator group is in section recently pilitowska and gave an enumeration of medial quandles a type of quandle of order at most as far as we are aware the only previous asymptotic enumeration result for general racks was due to blackburn giving lower and upper bounds for of n and n respectively where c theorem of improves the upper bound to n in the case of dial quandles the authors then conjecture an upper bound of n under the same restriction the main result of this paper proves this upper bound for general racks and hence for medial quandles theorem let then for all sufficiently large integers n n n the lower bound follows from the construction in theorem of our focus is on the upper bound the bound given in theorem of was obtained by applying group theoretic results to the operator group associated with a rack in our arguments we apply combinatorial results to a graph associated with a rack this graph is in the 
next section graphical representations of racks for any rack x we can a set of bijections fy by setting x fy for all x and y the following result see for example gives the correct conditions for a collection of maps fy to a rack throughout the paper we write maps on the right proposition let x be a set and let fx be a collection of functions each with domain and x define a binary operation x x x by x y x fy then x is a rack if and only if fy is a bijection for each y x and for all y z x we have f y fz fy fz proof as noted earlier conditions and in the of a rack hold if and only if each fy is a bijection so it remains to show that condition is equivalent to this is essentially a reworking of the we omit the simple details in the light of proposition we can just as well a rack on a set x by the set of maps fy providing they are all bijections and satisfy we will move freely between the two with x y x fy for all x y x unless otherwise stated the operator group of a rack is the subgroup of sym x generated by fy the following standard lemma see for example lemma of shows that proposition can be extended to elements of the operator group lemma let x be a rack and let g be its operator group then for any y x and g g f y g g fy any rack on x can be represented by a directed multigraph on x we give each vertex a colour and then put a directed edge of colour i from vertex j to vertex k if and only if j fi we then remove all loops from the graph if j fi j we don t have an edge of colour i incident to j it will be helpful to recast the representation of racks by directed multigraphs in a slightly setting let v be a set and let sym v then we can a simple loopless directed graph on v by setting uv e if and only if u v and u by considering the decomposition of into disjoint cycles we see that consists of a collection of disjoint directed cycles isolated double edges corresponding to cycles of length two and some isolated vertices we can now extend this to the case of multiple permutations in a natural way definition suppose sym v is a set of permutations on a set v a directed loopless multigraph v e with a by putting a directed edge of colour i from u to v if and only if u v and u we also the reduced graph to be the directed graph on v obtained by letting e uv e if and only if there is at least one directed edge from u to v in note that the reduced graph contains at most two edges between any u v v and also observe that if then g is a subgraph of g namely uv vu before continuing let us clarify some terminology a path in a directed multigraph g need not respect the orientation of the edges so for x y v g there is an in g if and only if there is an in the underlying undirected graph a component of a directed graph g is to be a component of the underlying undirected graph for x y v g a directed is a sequence of vertices x xr y such that i e g for all i now let us return to racks definition let r x be a rack and let fy be the associated maps for any s x fy y s then by gs we mean the directed multigraph in the sense of gs thus has an associated although if we may not necessarily consider gs as being coloured we will also write gr gx indicating the graph for the whole rack when describing racks on n in a graphical context we may refer to elements of n as vertices the following two observations are straightforward but crucial lemma let sym v be a family of permutations and let u v v be distinct then there is a in if and only if there is a directed in proof we need only prove the only if statement let u ut v be 
a path in any edge i is part of a directed cycle and thus can be replaced with a directed ui replacing each such edge gives a directed the shortest such walk is a path lemma let sym v be a family of permutations and u v then u is an orbit of the natural action of on v if and only if u spans a component in proof let u v v then u and v are in the same orbit of the natural action if and only if there exists a sequence of elements from and a sequence m m such that v u but this is exactly equivalent to there being a in with edges successively coloured im and the value of indicating the direction of the edge thus the partition of v into orbits of coincides with the partition of v into components of applying this last result to a rack r on n shows that the orbits of the operator group in its natural action on n coincide with the components of gr we can illustrate these notions with a simple example let x be a rack then a subrack of x is a rack y where y x thus a subset y x forms a subrack if and only if for all y z y z fy y as each fy is a bijection we then also have that z y for all z and thus y and x y are separated in the graph gy the notation in the above context will always be abbreviated to with the restriction to the subset y left implicit outline of the proof in this short section we give a brief outline of how we will count the number of racks on n we shall reveal information about an unknown rack on n in several steps counting the number of possibilities for the revealed information at each step at the end the rack will have been determined completely so we obtain an upper bound on the number of racks the principle behind the argument is as follows we choose a set t n and reveal the maps fj we then consider the components of the graph gt a key lemma shows that if v is a set of vertices such that each component contains precisely one element from v then revealing the maps fv determines the entire rack the is in a set t which is not too big but such that gt has relatively few components we will actually need to consider two sets t and w we choose a threshold and consider the set of vertices s that have degree strictly greater than in the underlying graph we will choose probabilistically a relatively small set w n such that any vertex with high degree in also has high degree in because the degree of any vertex in s is so high the number of components of contained in s is small this allows us to determine the maps fs for the vertices in those with degree at most in we will construct greedily a set t of a given size by adding vertices one at a time and revealing their corresponding maps each time choosing the vertex whose map joins up the most components it will follow that for every j t fj can only join up a limited number of components of gt we will reveal the restriction of fj to these components because of the complex nature of this argument we will store these revealed maps in a i i r and then count the racks consistent with i the main term in the argument comes from considering maps in t acting within components of gt we can control the action of these maps by revealing some extra information corresponding to the neighbours of t in which can themselves be controlled as t consists of low degree vertices in section we formally the information i r which requires some straightforward graph theory in section we show that the number of possibilities for i is at most n in section we will complete the proof of theorem important information in a rack degrees in graphical representations of 
racks let r n be a rack and t n for each v n t v v fj j t v fj v the set of vertices w such that vw e gt if v n we so t v is s v v t t definition with notation as above the of v with respect to t is t v v so dt v is the of v in the simple graph gt we can of course the t v similarly we now show that when s is a subrack then all components of are lemma let n be a rack and s be a subrack and let c span a component of gs and hence also of then for any u v c s u ds v proof first suppose that v is an of u so that there is a directed edge uv e gs of some colour i s u fi take an arbitrary w s u so w u and there exists a j s such that w u fj as s is a subrack k j fi now observe that v fk v f j fi v fj fi u fj fi w fi suppose for a contradiction that v fk v then w fi v u fi and thus w u contradicting the fact that w s u hence w fi v fk v and as w u was arbitrary we have that u fi v as fi is a bijection s u u v ds v now let u v c be arbitrary from lemma there is a directed path u ur v in gs c from above we have that s u ds v by instead considering a directed we have that ds v ds u and the result follows some multigraph theory the construction of the information i r requires some straightforward graph theory here a multigraph g v e is by a vertex set v v g and an edge multiset e e g of unordered pairs from v for multisets a and b a b is the multiset obtained by including each element e with multiplicity m where m is the sum of the multiplicity of e in a and the multiplicity of e in b if f is a multiset of unordered pairs from v g g f is the multigraph with vertex set v g and edge multiset e g f in this subsection we will consider only undirected multigraphs for clarity as paths and components of a directed multigraph are by the underlying undirected multigraph all of these results remain true for directed multigraphs we will write cp g for the number of components of a multigraph let g be a then for distinct u v v g we have that cp g uv cp g if and only if there is no in the following result is standard while the following results can be formulated using just simple graphs we will use multigraphs to be consistent with the definition of the graph of a rack proposition let g be a multigraph and and be multisets of unordered pairs of elements of v g then cp g cp proof the case where follows from the above observation since there is a in g if there is a in the general case now follows by induction definition let g be a multigraph and e be a multiset of unordered pairs of elements of v g let c v g span a of g we say that c is merged by e if there exist u c v v g c such that uv is an edge from we denote by m g e the set of vertex sets of components of g merged by note that for multisets of edges e and f m g e f m g e m g f as a single edge can merge at most two components g e for any unordered pair lemma let g be a multigraph and e and e be multisets of unordered pairs of elements of v g if cp g e e cp g e then m g e e m g e proof write e uv if u and v are in the same component of g then e is not a merging edge and m g e so suppose that u and v are in components of write c for the vertex set of the component containing u and d for that containing v so that m g e c d as cp g e e cp g e we have that c and d are both contained a single component of g e it follows easily that c d m g e it follows that in all cases we have m g e m g e and thus that m g e e m g e corollary let g be a multigraph and e be a multiset of unordered pairs of elements of v g then g e cp g cp g e proof order e as el and write a cp g then 
there are precisely a edges eia such that cp eij cp eij for each j write ek ek for each k so we always have m g ek m g ek m g m g ek now consider adding the edges of e in the order given to g if k ij for any j then cp g ek cp g and so from lemma we have that m g ek m g while if k ij for some j we have that g ek g as there are only a such edges it follows that g e cp g cp g e in other words c is the vertex set of a component of g the component itself is a multigraph not just a set of vertices the information i r we introduce the following terminology for any rack r n and any with n let r v n r v denote the set of all vertices with in at most write s r n r for the set of all vertices with strictly greater than we now show that this partition is actually a partition into subracks lemma let r n be a rack and n then s r and r are both subracks of proof by lemma with s r two vertices in the same component of gr have the same hence s r and r are separated in gr and thus both s r and r are subracks now n so for large given a rack r we will construct a set t r r with r n by the following procedure the subgraph ga where a n is described in and construction if r then t r otherwise order the vertices of r as follows choose so that cp g cp g v for any v r given a partial ordering uk choose the next vertex such that cp g uk cp g uk v for any v r uk now take l n and t r ul if r l r otherwise we now introduce some more notation for any j n write e j e g j for the set of edges of gr of colour j and mj m gt r e j for the set of vertex sets of components of gt r merged by the edges of colour j note that if j t r then e j e gt r and so mj the key property of the set t r is given in the next lemma lemma let r n be a rack then for any j r t r n proof note that if r l n then t r r and the statement is trivial we will thus assume that s r l and so r for i s write hi g ui and xi cp cp hi where is the empty graph on n note that cp hi cp e ui cp and so xi for each i we also have that s x xi s x cp cp hi cp cp hs cp now an i and suppose that xi then cp cp hi cp hi cp cp e ui cp e ui e cp cp e from proposition but then cp g ui cp g contradicting our ordering of the vertices hence xi and as i was arbitrary we conclude that xi is a decreasing sequence from this and the fact that p s xi n it follows that l n n now take j r t r so in our ordering of r we have j uk for some k by our construction cp g ul j cp g ul noting that gt r g ul hl we may rewrite this as cp gt r e j cp and thus cp gt r cp gt r e j cp hl cp n n we can combine this with corollary to see that gt r e j cp gt r cp gt r e j n showing the result before formally the information i r associated with a rack r we will need some more notation firstly write t r t r r t r for any j r t r yj c to be the set of vertices in components of gt r merged by e j and write zj n yj yj j figure a representation of the components of gt r with the edges of colour j in blue here precisely five components shaded light blue are merged by the edges of colour j so yj is the set of vertices in these shaded components if j r t r then only the restriction fj is included in the information i r see figure the following lemma gives the key property of the set zj in a slightly more general setting if u v n are such that v fj v for all j u we say that v is u we will write instead of j lemma let r n be a rack w n and j n let c n span a component of gw with c m gw e j then c is proof take a set c as described and let x c be arbitrary if x fj y x then is an edge of colour j as c is not merged 
by xy e j we must have x fj y c and as x was arbitrary and fj is a bijection it follows that c fj now apply this lemma with w t r for any c n spanning a component of gt r with c mj c fj as zj consists of all vertices in components of gt r not merged by e j we have that zj fj zj and thus that yj fj yj we will now formally the information associated with a rack definition let r n be a rack and let n then with notation as above let m be the r r m mj r r where we order the vertices in some arbitrary way and let y yj j n r r we i r r n t r r n n r m as i j i fj the second entry of this is equivalent to the set of maps fj r or alternatively the graph gs r the fourth entry is equivalent to the set of maps fj r n from lemma the image of each of these maps is contained within r knowing the fourth entry determines r t r which is necessary for the entry note also that the entry is equivalent to the set of maps fj r and thus the graph gt r while the seventh entry is equivalent to fj r r we will think of i as a map from rn to a set of the form of this set will be considered in more detail in section in the next section we show that the image i rn is small we will do this by considering the map i where i r r r and then i where i r r r determining the information i r random subsets the part of the argument relating to the vertices of high degree requires some probabilistic tools in particular we will require a result of known as the chernoff bounds we will use the following more workable version see for example theorems and and corollary of theorem let xn be pindependent random variables each taking values in the range let x xi then for any p x e x and p x e x e x e x let r n be a rack to ease notation we write v dr v for any v n lemma let r n be a rack and let p construct a random subset x of n by retaining each element with probability p independently for all elements then p np for any v n and v p p dx v e proof for each j let xj sop that each xj ber p and the variables xj are independent then xj bi n p and e x np so we can apply theorem to showing the statement for the second statement take a vertex v n and let be the v of v in gr for each i dv choose an element jvi n such that v fjvi vi and put jv jvi i v the elements are clearly distinct and so v now v dv x xjvi bi v p so e v p and we can apply theorem to see that for any dv p v p p vp since x v the second result follows the high degree part we will need the following crucial lemma lemma let r n be a rack and t n let c span a component of gt and let v let a n be n then knowledge of the maps fi and fv is sufficient to determine the maps fu and further the maps fu are conjugate in sym a proof let u c so from lemma there is a directed in gt let the colours of the edges of this path be il so v fil u and thus from fv fil as a is n a fij a for any lemma fu l j and thus f f fil fu a v a a l but as each ij t all of these maps are determined and thus fu is determined also note that fu is conjugate to fv by the map fil proving the result we will now show the main result of this section proposition let r n be a rack then for n sufficiently large there exists a set w n with n n n such that i r r r is determined by the sets r and w and the maps fi proof we will construct the set w by a mixture of probabilistic and deterministic arguments let p n and consider a random subset x of n as described in lemma let e be the event that from item of the lemma with we have that p e c p since np as n it follows that p e c if n is large enough and now for each v n and u n 
call v u if u v log n let bv be the event that v is so that p bv p dx v n let np n x denote the number of vertices in s s r so that for each v s n now hence from v p n item of lemma with and x x e n p bv n p x v log n let f be the event that n that every vertex in s is then from markov s inequality p f c p n e n n n as n hence p f c if n is large enough if n is large enough then p e c f c and thus p e f hence we can a set u n such that and each vertex in s is whenever v s u this means that or in graphical u v log n terms that each vertex v s is adjacent to at least n vertices in now from lemma there are no edges from s to n s so s is a disjoint union of vertex sets of components of each component of contained within s has size at least n and hence there are at most n such components write the vertex sets of these components as cr and take a set of vertices v vr such that vk ck for each k we have shown that r n now from lemma with t u and a n knowledge of the maps fi and fvk is to determine the maps fu applying this to each component in turn shows that knowledge of the maps fi and fv is to determine fu the second entry of i r so put w u v as n we have the result corollary let there exists some positive integer such that for all n rn proof take n large for the previous result to hold then i r is determined by r or equivalently s r w and the maps fi for a suitable set w depending on r with n now such an n it follows that rn is at most the number of distinct triples r w fi arising from all racks in rn there are clearly at most possibilities for each of r and w as there are n nn choices for any map fi and n there are at most n possibilities for the maps fi hence rn n and from we have that n n n n n n n n n o hence for any there exists a positive integer such that for n rn components of the graph gt r in order to prove that there are few choices for i r r r we will need the following lemma recall that t r r is in construction and that t r t r r t r lemma let r n be a rack let c span a component of gt r and let v then for any j n knowledge of the maps fl r n and fi r and the vertex v fj is sufficient to determine the map fj as in the maps fl r n determine the set r t r and thus the set t r proof let u c from lemma there is a directed in gt r note that this graph is determined from the maps fi r knowledge of which is assumed let d v u denote the length of the shortest directed we will show that u fj is determined by induction on the graph distance d d v u the base case d is true by assumption so take d and suppose the result is true for smaller take a shortest directed path of length d from v to u and let w be the is an edge in g penultimate vertex on the path then wu t r so there exists some i t r such that w fi u as we know fj r k i fj is determined as k t r r t r t r the map fk is also determined further d v w d so by the inductive hypothesis the vertex w fj is determined now fk f i fj fi fj and so fj fj fk hence u fj u fj fk w fj fk which is determined as we know the vertex w fj and the map fk the result follows by induction we can now show the second main result of this section proposition let then there exists a positive integer such that for any n rn proof as in corollary rn is equal to the number of distinct values of the i r as r ranges over all racks of order n we will produce bounds for each of these entries there are clearly at most choices for the set t r recall that by construction r l n and that t r r so r v v for any v t r it follows that t r d v r r r r and thus that r l there are 
at most n possibilities for the vertex u fi for any i u n so there are at most nl n possibilities for the maps fj r n and at most nn possibilities for the maps fi r hence there are at most n ln possible values for the three entries of i r as r ranges over all racks of order n now consider a rack r rn and suppose that the three entries of i r have been determined fix a j r r we must consider the possibilities for the set mj of components of gt r merged by e j and the restricted map fj for each c mj if aj then there are crudely at most naj possibilities for mj as there are at most n components of gt r from lemma aj n so the number of possibilities for mj is at most n x aj n aj n n n for large now take a c mj and choose an arbitrary v c as the maps fl r n and fi r have been determined already we have from lemma that the restriction fj is determined entirely by v fj there are at most n possibilities for this vertex and so considering the components making up yj there are at most n possibilities for the restriction fj now note that m and are determined by mj and fj for each j r t r as there are at most n such elements regardless of the set r there are at most n n possibilities for m and at most n possibilities for combining these bounds there are at most n ln n log n n n log n possibilities for i r as r ranges over all racks of order n for n large and thus rn we have that n so log n for large thus for n large n n ln n n n n o n hence for any there exists a positive integer such that for n rn proving the result maps acting within components of gt r some preparatory results we will need the following easy claim claim for real numbers x y x y proof simply observe that x y x the above claim is used to prove the following key technical lemma the notation is chosen to match the quantities in the next subsection lemma let npbe a positive integer and be a sequence of integers such that set then n x q p q n x proof by expanding the product we have that n n x q x x p q q q pq and n n x x x so if we set then n x dq x n x cp q where dq q and cp q pq since r for all positive integers r cp q for p q similarly r for all positive integers r and thus dq for all q hence we can bound this sum from below by using claim with x and y we have that a more elaborate version of this argument gives a corresponding stability result saying informally speaking that is close to if and only if is close to n and is close to for all q for full details see proof of theorem at the end of section we introduced the information i r in a rack r on n and explained how i can be thought of as a map from rn to a set of let us call this set in in section we showed that the image i rn in has size at most n in this section we will an i in this image and consider all racks r rn such that i r i we will show that once the information corresponding to i is known there are not too many possibilities for we will consider the set in of in more detail from and the subsequent discussion each i in has the form described below is a set s i n is an i of elements of sym n indexed by i n i in some arbitrary order is a subset t i of n such that t i s i is an of injective maps from t i to s i is a sequence of elements of sym n indexed by the set t i t i t i t i in some arbitrary order the last two entries of i r are graphical in nature to relate them to an abstract i in we will formally the graph associated with such a i definition let i in write i t i and gi in the sense of so gi is a i multigraph on n we write ci for cp gi and c i for the set of vertex 
sets of components of gi so i ci we can now describe the form of and namely is a sequence mji of subsets of c i indexed by s i t i in some arbitrary order s is a sequence indexed by s i t i where yji i c and j sym yji for all j to avoid later inconvenience we will extend the of mji and yji to all j n as follows for each j i t i a set of ordered pairs from n which we can think of as edges of colour j by setting x i y x y e j xy j s then we mji m gi e ij and yji i j now an i in and suppose r rn is a rack such that i r i recall that n and i r r n t r r n n r m where each entry by can also be determined in terms of the maps fn corresponding to comparing i r with i we see that r s i s r i and t r t i we also have that fj for j i t i and thus gt r gi finally mji mj and yji yj for all j n and fj for j s i t i this means that if the information i i r is known we need only determine the maps fj r r to determine the entire rack r noting that for each j n the set zj n yj n yji is determined by i it follows that an upper bound on the number of possibilities for these maps is also an upper bound for the number of racks r such that i r i we can further reduce the number of maps left to determine by considering components of the graph gi lemma let i in be fixed and let r rn be a rack such that i r i let c i cci and let v vci be a set of vertices such that vj cj for each j then r is determined by i and the maps fv proof as i r i we have that gt r gi and so the set of vertex sets of components of t r is c i take any j ci the knowledge of i determines the maps fi r i and thus from lemma with a n knowledge of the map fvj is to determine the maps fu applying this over all the components of gt r shows that we can determine the entire rack r by determining the set of maps fv as the restrictions fv are determined by i fv fv is equal to if v s i t i and to otherwise we need only determine the restrictions fv before proving an upper bound we will need some more notation for i in and q n let denote the number of vertices in components of gi of size exactly q then there are components of size exactly q and thus n x i and c n x q proposition let i in and define n n i x x q i p q i then there are at most racks r rn such that i r i proof write c i cci and let v vci be a set of vertices such that vj cj for each j let r rn be a rack such that i r i from lemma r is determined by i and the maps fv it follows that an upper bound on the number of possibilities for the maps fv is also an upper bound for the number of racks r such that i r i now a v v let c n span a component of gt r gi with c mv and let x c from lemma with w t r c fv c and so in particular there are at most possibilities for x fv now fj r n n and fi r t i are determined by i i r so from lemma fv is determined by x fv thus there are at most possibilities for fv as there are components of gt r gi of size q there are at most n qn possibilities for the map fv considering all of the ci components q i together there are at most n c possibilities for the maps fv it follows i that there are at most n c racks r such that i r i now n x n c ci q q n n x x q p q proving the result this proposition allows us to prove the main result proof of theorem recall that denotes the set of isomorphism classes of racks of order n as it to an upper bound on let we have from corollary and proposition that for n large rn and rn now take such a large n as i r i r i r for any r rn rn rn rn and so there are at most possibilities for the i i rn from proposition i and lemma there are at 
most racks r such that i r i and thus an extremal result for each positive integer n let pn be a partition of n and let rpn denote the set of racks r on n such that the components of gr are exactly the parts of pn let n denote the number of parts of pn of size exactly two an extension of the methods used in this paper can be used to prove that unless n there exists a constant such that for many in other words informally speaking almost all in an exponentially strong sense racks r on n are such that almost all components of gr have size the idea of the proof is to a function similar to from proposition but taking into account the size of the components of gr rather than gt r the components of size two are a special case as the symmetric group on two elements is small and abelian for a full proof see references matthew ashford graphs of algebraic objects phd thesis university of oxford simon r blackburn enumerating racks quandles and kei the electronic journal of combinatorics clauwens small connected quandles arxiv preprint roger fenn and colin rourke racks and links in codimension two journal of knot theory and its ramifications richard henderson todd macedo and sam nelson symbolic computation with quandles journal of symbolic computation benita ho and sam nelson matrices and quandles homology homotopy and applications wassily probability inequalities for sums of bounded random variables journal of the american statistical association svante janson tomasz luczak and andrzej random graphs john wiley sons agata pilitowska david and anna zamojskadzienio the structure of medial quandles journal of algebra david joyce a classifying invariant of knots the knot quandle journal of pure and applied algebra sergei vladimirovich matveev distributive groupoids in knot theory sbornik mathematics sam nelson the combinatorial revolution in knot theory notices of the ams leandro vendramin on the of quandles of low order journal of knot theory and its ramifications pages
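The counting arguments above repeatedly use two quantities attached to a multigraph on the vertex set {1, ..., n}: cp(G), the number of connected components, and the set of components of G that are merged once the edges of a given colour j are added. The following small Python sketch (illustrative only, not part of the paper) computes both; the reading of "merged" used here, namely a component of G whose vertex set is strictly contained in a component of G together with the extra edges, is our interpretation of the definition in the text, and the example graph at the bottom is hypothetical.

```python
# Illustrative sketch of cp(G) and M(G, E'): components of a multigraph,
# and the components that get merged when an extra edge set is added.

def components(n, edges):
    """Connected components of the multigraph ({1..n}, edges), via union-find."""
    parent = list(range(n + 1))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)

    comps = {}
    for v in range(1, n + 1):
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())

def cp(n, edges):
    """cp(G): the number of connected components (isolated vertices count)."""
    return len(components(n, edges))

def merged(n, edges, extra):
    """M(G, E'): vertex sets of components of G that are joined to another
    component when the edges in E' are added (our reading of 'merged')."""
    before = components(n, edges)
    after = components(n, edges + extra)
    return [c for c in before if any(c < d for d in after)]

if __name__ == "__main__":
    n = 6
    g = [(1, 2), (3, 4)]        # components {1,2}, {3,4}, {5}, {6}
    e_j = [(2, 3), (5, 5)]      # colour-j edges; the loop at 5 merges nothing
    print(cp(n, g))             # 4
    print(merged(n, g, e_j))    # [{1, 2}, {3, 4}]
```

With this reading, a component C is left untouched by the colour-j edges exactly when it is not in M(G, E_j), which is consistent with how the invariance lemma for such components is applied in the text.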
4
generalized minimum distance estimators in linear regression with dependent errors jan jiwoong kim university of notre dame abstract this paper discusses minimum distance estimation method in the linear regression model with dependent errors which are strongly mixing the regression parameters are estimated through the minimum distance estimation method and asymptotic distributional properties of the estimators are discussed a simulation study compares the performance of the minimum distance estimator with other well celebrated estimator this simulation study shows the superiority of the minimum distance estimator over another estimator koulmde r package which was used for the simulation study is available online see section for the detail keywords dependent errors linear regression minimum distance estimation strongly mixing introduction consider the linear regression model yi where xi xip rp with xij j p i n being non random design variables and where rp is the parameter vector of interest the methodology where the estimators are obtained by minimizing some dispersions or pseudo distances between the data and the underlying model is referred to as the minimum distance estimation method in this paper we estimate regression parameter vector by the estimation method when the collection of in the model is a dependent process let tn be independent identically distributed random variables s with distribution function where is unknown the classical estimator of is obtained by minimizing following mises cvm type z gn y y dh y where gn is an empirical of ti s and h is a integrating measure there are multiple reasons as to why cvm type distance is preferred including the asymptotic normality of the corresponding estimator see parr and schucany parr and millar many researchers have tried various h s to obtain the estimators anderson and darling proposed estimator obtained by using for dh another important example includes h y y giving a rise to hodges lehmann type estimators if gn and of the integrand are replaced with kernel density estimator and assumed density function of ti the hellinger distance estimators will be obtained see beran departing from one sample setup koul and dewet extended the domain of the application of the estimation to the regression setup on the assumption that s are s with a known f they proposed a class of the estimators by minimizing between a weighted empirical and the error f koul extended this methodology to the case where error distribution is unknown but symmetric around zero furthermore it was shown therein that when the regression model has independent nongaussian errors the estimators of the regression parameters obtained by minimizing with various integrating measures have the least asymptotic variance among other estimators including wilcoxon rank the least absolute deviation lad the ordinary least squares ols and normal scores estimators of the estimators obtained with a degenerate integrating measure display the least asymptotic variance when errors are independent laplace s however the efficiency of the estimators depends on the assumption that errors are independent with the errors being dependent the estimation method will be less efficient than other estimators examples of the more efficient methods include the generalized least squares gls gls is nothing but regression of transformed yi on transformed the most prominent advantage of using the gls method is decorrelation of errors as a result of the transformation motivated by efficiency of estimators 
which was demonstrated in the case of independent errors and the desirable property of the gls decorrelation of the dependent errors the author proposes generalized estimation method which is a mixture of the and the gls methods the estimation will be applied to the transformed variables generalized means the domain of the application of the method covers the case of dependent errors to some extent the main result of this paper generalizes the work of koul as the efficiency of the method is demonstrated in the case of independent errors the main goal of this paper is to show that the generalized estimation method is still competitive when the linear regression model has dependent errors indeed the simulation study empirically shows that the main goal is achieved the rest of this article is organized as follows in the next section characteristics of dependent errors used through this paper is studied also the cvm type distance and various processes which we need in order to obtain the estimators of will be introduced section describes the asymptotic distributions and some optimal properties of the estimators findings of a finite sample simulations are described in section all the proofs are deferred until appendix in the remainder of the paper an italic and boldfaced variable denotes a vector while a and boldfaced variable denotes a matrix an identity matrix will carry a suffix showing its dimension denotes a n n identity matrix r for a function f r r let denote f y dh y for a real vector u rp kuk denotes euclidean norm for any y ky kp denotes for a real matrix w and y r w y means that its entries are functions of y strongly mixing process cvm type distance l be the generated by m the sequence j z is said let fm to satisfy the strongly mixing condition if k sup a b p a p b a b as k is referred to as mixing number chanda gorodetskii koul and withers investigated the decay rate of the mixing number having roots in their works section defines the decay rate assumed in this paper see the assumption hereinafter the errors s are assumed to be strongly mixing with mixing number in addition is assumed to be stationary and symmetric around zero next we introduce the basic processes and the distance which are required to obtain desired result recall the model let x denote the n p design matrix whose ith row vector is then the model can be expressed as y where y yn rn and rn let q be any n n real matrix so that the inverse of is a positive definite symmetric matrix note that the diagonalization of positive definite symmetric matrix guarantees the existence of q which is also a symmetric matrix let q qin for i n denote the ith row vector of q define transformed variables yei q y e q x x q i as in the gls method q obtained from covariance matrix of transforms dependent errors into uncorrelated ones decorrelates the errors however the gls obtains q in a slightly different manner instead of using the gls equates q to the inverse of the covariance matrix the gls uses cholesky decomposition the empirical result in section describes that q from the diagonalization yields better estimators here we propose the class of the generalized estimators of the regression parameter upon varying q we impose noether condition on qx now let a x and aj denote jth column of a let d dik i n k p be an n p matrix of real numbers and dj denote jth column of as stated in koul if d qxa dik q xak then under noether condition n x max o i for all k next define cvm type distance from which the generalized estimator are obtained let 
fi and fi denote the density function and the of respectively analogue of with gn and being replaced by empirical of and fi will be a reasonable candidate however the fi is rarely known since the original regression error s are assumed to be symmetric the transformed error s are also symmetric therefore we introduce as in koul definition uk y b q n x n dik i q y q xb y i q y q xb y u y b q y b q up y b q y r z l b q ku y b q dh y b rp n p z o x n x dik i q y q xb y i q y q xb y where i is an indicator function and h is a measure on r and symmetric around b as dh x x subsequently define b q inf l b q l p next define e i q q e q q q en q e i and q e are n n where q is the ith row vector of q and rn observe that q and n matrices respectively define a n matrix if y so that its i j th entry is fi y i j n i k k k th entry is fk y for all k n and all other entries are zeros finally define following matrices z e qxa e y if y dh y q f b let f h which are needed for the asymptotic properties of ij pp d d note that ik jk n n xx fijh q i q xa r fi fj dh and b asymptotic distribution of b under the current setup note in this section we investigate the asymptotic distribution of that minimizing l q does not have the closed form solutions only numerical solutions b to can be tried and hence it would be impracticable to derive asymptotic distribution of redress this issue define for b rp z l b q u y q y b dh y e where y if y qxa is a p p matrix next define e q inf b q p unlike l q minimizing q has the closed form solution therefore it is not unb by one of e if l q can be reasonable to approximate the asymptotic distribution of approximated by q this idea is plausible under certain conditions which are called uniformly locally asymptotically quadratic see koul for the detail under b and e converges to zero in probthese conditions it was shown that difference between b is ability see theorem the basic method of deriving the asymptotic properties of similar to that of sections of koul this method amounts to showing that l au q is uniformly locally asymptotically quadratic in u belonging to a bounded b k op to achieve these goals we need the following assumptions set and which in turn have roots in section of koul the matrix x is nonsingular and with a x satisfies lim sup n max kdj the integrating measure h is and symmetric around and z fi dh i for any real sequences an bn bn an lim sup z bn an z fi y x dh y dx i for a r define max a a let kq xak for all u rp kuk b for all and for all k p lim sup z hx n f y q xau f y q xau dh y i i i i i i ik where c does not depend on u and for each u rp and all k p z hx n dik fi y q xau fi y q xaufi y dh y o fi has a continuous density fi with respect to the lebesgue measure on r b for i fir dh for r and i the in the model is strongly mixing with mixing number satisfying lim sup x k k remark note that implies noether condition and implies dh from corollary of koul we note that in the case of errors the asymptotic r b was established under the weaker conditions noether condition and normality of fi dh the dependence of the errors now forces us to assume two stronger conditions and remark here we discuss examples of h and f that satisfy clearly it is satisfied by any finite measure next consider the measure h given by dh fi dfi f a continuous symmetric around zero then fi and z fi dh z fi dfi fi fi z u du another useful example of a measure h is given by h y y for this measure is satisfied by many symmetric error including normal logistic and laplace for example for normal we 
do not have a closed form of the integral but by using the well celebrated tail bound for normal distribution see theorem of durrett we obtain z z y exp dy fi y dy b corresponding to h y y is the extensions of the one recall from koul that the sample estimator of the location parameter to the above regression model r remark consider condition if fi s are bounded then fi dh implies the r other two conditions in for any measure for h y y fi y dy when fi s are normal logistic or laplace densities in particular when dh fi fi dfi and fi s are logistic s so that dh y dy this condition is also satisfied we are ready to state the needed results the first theorem establishes the needed uniformly locally asymptotically quadraticity while the corollary shows the boundedness of b theorem and corollary are counterparts of conditions a suitably standardized and in theorem of koul respectively note that condition in theorem is met by in the appendix condition in theorem is trivial theorem let yi i n be in the model assume that hold then for any c e sup kl b q b q k o proof see appendix corollary suppose that the assumptions of theorem hold then for any m there exists an and such that p inf l b q m n proof see appendix theorem under the assumptions of theorem z b a ax q y du y dh y op where is as in e therefore proof note that the first term in the side is nothing but the proof follows from theorem and corollary as in case illustrated in the theorem of koul next define z z x x fi y dh y fi y dh y z zn y du y dh y symmetry of the fi around yields e for i j let denote covariance matrix of ax q zn define a n n matrix and write i p j p where n x n x e q q observe that e e zn qxa qxa e q n b now we are ready to state the asymptotic distribution of lemma assume is positive definite for all n in addition assume that sup u o then e zn n q where rp and is the p p identity matrix e zn is proof to prove the claim it suffices to show that for any rp q asymptotically normally distributed note that n n x e zn q q i q which is the sum as in the theorem from mehra and rao with cni q i p and q note that n x zz e n x q i n q also observe that zz k kax q i k max q i zz max max by assumption finally we obtain lim inf lim sup zz by the assumption that the terms in the denominator is o hence the desired result follows from the theorem of mehra and rao corollary in addition to the assumptions of theorem let the assumption of lemma hold then b n proof claim follows from lemma upon noting that z zn y du y dh y b denote the asymptotic variance of b then we have remark let asym b a asym e qxa e e qxa e a q qxa q a observe that if all the transformed errors have the same distribution fn we have e qxa e q b will be simplified as therefore asym a qxa a moreover if all the transformed errors are uncorrelated as a result of the transformation b can be simplified further as asym x where v ar q simulation studies in this section the performance of the generalized estimator is compared with one of b denote covariance matrix of the errors and its the gls estimators let e and estimate respectively consequently we obtain the gls estimator of b b b gls x x x y in order to obtain the generalized estimator we try two different q s qs and qc where b qc b we refer to the generalized estimators corresponding to qs and qc as and estimators respectively in order to generate strongly mixing process for the dependent errors the several restrictive conditions are required so that the mixing number decays fast enough the assumption is met withers 
proposed the upperbound and the decay rate of the mixing number for the shake of completeness we reproduce theorem and corollary here lemma let be independent on r with characteristic functions such that max i z t and max for some i let gv v be a sequence of complex numbers such that n omax st min where st x assume that as t gv o v where max p then the sequence gv is strongly mixing with mixing number k o k where max to generate strongly mixing process by lemma we consider four independent s normal laplace logistic and mixture of the two normals mtn note that all the s have the finite second moments and hence we set at it can be easily seen that for any we have and hence the assumption is satisfied then for x v satisfies the strongly mixing condition with k o k we let or equivalently the has a laplace distribution if its density function is fla x exp while the density function of logistic innovation is given by flo x exp exp when we generate we set mean of normal laplace and logistic innovations at since we assumed the the sum of s is symmetric we set the standard deviation of normal at while both and are set at for laplace and logistic respectively for mtn we consider n where in each we subsequently generate using next we set the true p for each k we obtain xik in as a random sample from the uniform distribution on yi is subsequently generated using models we estimate by the generalized and the gls methods we report empirical bias standard error se and mean squared error mse of these estimators we use the lebesgue integrating measure h y y to obtain the generalized estimators the author used r package koulmde the package is available from comprehensive r archive network cran at https table and report biases se s and mse s of estimators for the sample sizes and each repeated times the author used high performance computing center hpcc to accelerate the simulations all of the simulations were done in the n la lo m bias gls se mse bias se mse bias se mse n la lo and m denote normal laplace logistic and mtn respectively table bias se and mse of estimators with n as we expected both biases and se s of all estimators decrease as n increases first we consider the normal s when s are normal the gls and estimators display the best performance gls and show similar biases se s and hence mse s estimators show slightly worse performance than aforementioned ones they display similar or smaller bias estimators corresponding to n and while they always have larger se s which in turn cause larger mse s therefore we conclude that gls and show similar performance to each other but better one than when s are normal for s we come up with a different conclusion the estimators outperform all other estimators while the gls and estimators display the similar n la lo m bias gls se mse bias se mse bias se mse table bias se and mse of estimators with n performance note that weighing the merits of the gls the and the estimators in terms of bias is hard for example for the laplace when n the gls and estimators of all s show the almost same biases the estimator of show smaller larger bias than the gls and the estimators when we consider the se the estimators display the least se s regardless of n s and s the gls and the estimators show somewhat similar se s when is laplace or logistic however the estimators have smaller se s than the gls ones when is mtn as a result the estimators display the least mse for all s and n s the and the gls corresponding to laplace or logistic s show similar mse s while the estimators show 
smaller mse than the gls ones when is mtn appendix proof of theorem section of koul illustrates holds for independent errors proof of the theorem therefore will be similar to the one of theorem in that section define for k p u rp y r jk y u n x dik fi y q xau yk y u n x wk y u yk y u jk y u dik i q y q xau rewrite l au q p z h x wk y u wk y wk u wk jk y u jk y n x jk u jk uk y n x dik q xaufi y n x dik q xaufi y dik q xaufi y dh y where rp note that the last term of the integrand is the kth coordinate of u y q y b vector in b q if we can show that suprema of norms of the first four terms of the integrand are op then applying inequality on the cross product terms in will complete the proof therefore to prove theorem it suffices to show that for all k p z e sup wk u wk dh y o sup z jk u jk e sup z uk y n x n x dik q xaufi y dh y o dik q xaufi y dh y o where sup is taken over kuk b here we consider the proof of the case only the similar facts will hold for the case observe that implies e z uk y dh y max kdi k max z fi dh therefore immediately follows from and the proof of does not involve the dependence of errors and hence it is the same as the proof of of koul thus we shall prove thereby completing the proof of theorem to begin with let jku yku and wku denote jk u yk u and wk u in when dik is replaced with dik so that jk yk and wk define for x rp u rp y r rewrite wku pi y u x fi y q xau f y bni i q y q xau i q y pi y u x n x ik i q i y q i xau i q i y pi y u x note that e bni fi y kuk fi y recall a lemma from deo lemma suppose for each n j n are strongly mixing random variables with mixing number suppose x and y are two random variables respectively measurable with respect to and m m k assume p q and r are such that q r and kxkp and ky kq then for each m k m n xy e x e y m kxkp ky kq consequently if b then for q and each m k m n xy e x e y m ky kq in addition consider following lemma lemma for r n x x k o proof for given r let p such that note that p r therefore by s inequality with p and r we have x x x n k p k np k x k k the last inequality follows from the assumption thereby completing the proof of lemma now we consider the cross product terms of e wku h z x n x n e dik djk i q y q xau i q y pi y u x i q y q xau i q y pj y u x dh y z n x n x dik djk bni bnj dh n n x x dik djk n max i j i max i z kbnj dh x x m z dh the second inequality follows from lemma and the convergence to zero follows from the lemma with r and consequently by fubini s theorem together with we obtain for every fixed kuk b lim sup e lim sup z x n fi y q xau f y dh y lim sup n max i z an z fi y s dh y ds where an b maxi to complete the proof of it suffices to show that for all there exists a such that for all v rp ku vk lim sup e sup kkv where kku u rp k follows from of koul thereby completing the proof of theorem proof of corollary the proof of for independent errors can again be found in the section of koul the difference between the proof in the section and one here arises only in the part which involves the dependence of the error thus we present only the proof of an analogue of in koul let lk z h n i x wk y wk jk y jk dik y dh y z hx n i dik i q y i q y y dh y l lp r note that lk uk y q y dh y by the symmetry of h and fubini s theorem we obtain z z e i q i y i qi y dh y fi dh i in addition lemma yields for j i z e i q y i q y dh y i q y i q y z fi dh j i max together with the fact that fi fi by and lemma we obtain for some c eklk p x z n n x dik c p max z fi dh n max n x n x j i max z fi dh o fi dh using e lk for k 
p and chebyshev inequality for all there exists and such that c p fi dh p klk n the rest of the proof will be the same as the proof of lemma of koul references beran j minimum hellinger distance estimates for parametric models ann deo a note on empirical processes of strong mixing sequences ann koul behavior of robust estimators in the regression model with dependent errors ann koul minimum distance estimation in linear regression with unknown error distributions statist probab koul minimum distance estimation and tests in autoregression ann koul weighted empirical process in nonlinear dynamic models springer berlin vol koul and de wet minimum distance estimation in a linear regression model ann mehra and rao weak convergence of generalized empirical processes relative to dq under strong mixing ann millar robust estimation via minimum distance methods zeit fur noether on a theorem by wald and wolfowitz ann math parr and schucany minimum distance and robust estimation amer statist prescitt and schucany theory ahead of business cycle measurement conference on public policy
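To make the simulation design of Section 4 concrete, the sketch below (Python, not the authors' code) generates dependent regression errors as a short moving average of i.i.d. innovations, one simple instance of the strongly mixing construction discussed above, and computes the GLS benchmark with Q obtained from the spectral decomposition (diagonalization) of the error covariance, together with the empirical bias, standard error and MSE reported in the tables. The minimum distance estimators themselves are computed in the paper with the koulmde R package, whose interface is not reproduced here; the sample size, true coefficients and moving-average weight below are illustrative rather than the values used in the paper.

```python
# Simulation sketch under stated assumptions: MA(1)-type strongly mixing
# errors, normal innovations, known error covariance, GLS benchmark only.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, p, beta, rho=0.5):
    """y = X beta + eps with dependent errors eps_i = z_i + rho * z_{i-1}."""
    X = rng.uniform(0.0, 1.0, size=(n, p))   # design entries drawn from U(0, 1)
    z = rng.normal(0.0, 1.0, size=n + 1)     # i.i.d. normal innovations
    eps = z[1:] + rho * z[:-1]               # short moving average => strongly mixing
    return X, X @ beta + eps

def ma1_cov(n, rho):
    """Covariance matrix of the MA(1) errors above (assumed known here)."""
    S = np.zeros((n, n))
    np.fill_diagonal(S, 1.0 + rho ** 2)
    idx = np.arange(n - 1)
    S[idx, idx + 1] = S[idx + 1, idx] = rho
    return S

def gls(X, y, Sigma):
    """OLS on the transformed variables Q y, Q X with Q = Sigma^{-1/2} taken
    as the symmetric square root from the eigendecomposition of Sigma."""
    w, V = np.linalg.eigh(Sigma)
    Q = V @ np.diag(w ** -0.5) @ V.T
    return np.linalg.lstsq(Q @ X, Q @ y, rcond=None)[0]

def summarize(estimates, beta):
    """Empirical bias, standard error and MSE over replications."""
    est = np.asarray(estimates)
    bias = est.mean(axis=0) - beta
    se = est.std(axis=0, ddof=1)
    mse = ((est - beta) ** 2).mean(axis=0)
    return bias, se, mse

if __name__ == "__main__":
    n, beta, reps = 100, np.array([1.0, -1.0, 0.5]), 500   # illustrative values
    Sigma = ma1_cov(n, 0.5)
    fits = [gls(*make_data(n, len(beta), beta), Sigma) for _ in range(reps)]
    print(summarize(fits, beta))
```

Using the symmetric square root of the inverse covariance rather than its Cholesky factor mirrors the distinction the paper draws between the proposed transformation, obtained by diagonalization, and the usual GLS decorrelation.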
10
static analysis of programs using file format specifications raveendra raghavan and apr indian institute of science bangalore and tata consultancy services indian institute of science bangalore raghavan narendran abstract programs that process data that reside in files are widely used in varied domains such as banking healthcare and analysis precise static analysis of these programs in the context of software verification and transformation tasks is a challenging problem our key insight is that static analysis of programs can be made more useful if knowledge of the input file formats of these programs is made available to the analysis we propose a generic framework that is able to perform any given underlying abstract interpretation on the program while restricting the attention of the analysis to program paths that are potentially feasible when the program s input conforms to the given file format specification we describe an implementation of our approach and present empirical results using real and realistic programs that show how our approach enables novel verification and transformation tasks and also improves the precision of standard analysis problems introduction processing data that resides in files or documents is a central aspect of computing in many organizations and enterprises standard file formats or document formats have been developed or evolved in various domains to facilitate storage and interchange of data in banking enterpriseresource planning erp billing and analysis the wide adoption of such standard formats has led to extensive development of software that reads processes and writes data in these formats however there is a lack of tool support for developers working in these domains that specifically targets the idioms commonly present in programs we address this issue by proposing a generic approach for static analysis of programs that takes a program as well as a specification of the input file format of the program as input and analyzes the program in the context of behaviors of the program that are compatible with the specification motivating example our work has been motivated in particular by batch programs in the context of enterprise legacy systems such programs are typically executed periodically and in each run process an input file that contains transaction records that have accumulated since the last run in order to motivate the challenges in analyzing programs we introduce as a running example a small batch program as well as a sample file that it is meant to process in figure data division input file buffer output file buffer char digit procedure division open read into at end move to while if itm record processing move to if s itm record processing for same batch header else itm record processing for diff batch header write from else if record processing move to if same move s to else move d to rest of header record processing else if trl record processing trl record processing else terminate program with error read into at end move to close goback hdr itm itm itm trl hdr itm itm trl same diff b typ pyr tot src hdr same typ rcv amt itm typ pyr tot src rcv amt main type payer account number total batch amount source bank receiver account num item amount c a fig a example program b sample input file c input file record layouts input file format although our example is a toy one the sample file shown in figure b adheres to a simplified version of a real banking format each record is shown as a row with fields being demarcated by vertical lines in this 
file format the records are grouped logically into batches with each batch representing a group of payments from one customer to other customers each batch consists of a header record value hdr in the first field which contains information about the paying customer followed by one or more item or payment records itm in the first field which identify the recipients followed by a trailer record trl figure c gives the names of the fields of header as well as item records other than the first field typ which we have discussed above another field of particular relevance to our discussions is the src field in header records which identifies whether the paying customer is a customer of the bank that s running the program same or of a different bank diff the meanings of the other fields are explained in part c of the figure the code figure a shows our example program which is in a syntax the data division contains the declarations of the variables used in the program including the input file buffer and output file buffer is basically an overlay or union following the terminology of the c language of the two record layouts shown in figure c after any record is read into this buffer the program interprets its contents using the appropriate layout based on the value of the typ field the output buffer is assumed to have fields pyr rcv amt as well other fields that are not relevant to our discussions these field declarations have been elided in the figure for brevity the statements of the program appear within the procedure division the program has a main loop in lines a record is read from the input file first outside the main loop line and then once at the end of each iteration of the loop line in each iteration the most recent record that was read is processed according to whether it is a header record lines item record lines or trailer record lines the sole write statement in the program is in the processing block line and writes out a processed payment record using information in the current item record as well as in the previously seen header record lines and represent code details elided that populates certain fields of in distinct ways depending on whether the paying customer is from the same bank or a different bank analysis issues and challenges programs typically employ certain idioms that distinguish them from programs in other domains these programs read an unbounded number of input records rather than have a fixed input size furthermore typically a program is designed not to process arbitrary inputs but only input files that adhere to a known domain related state variables are used in the program to keep track of the types of the records read until the current point of execution these state variables are used to decide how to process any new record that is read from the file for instance in the program in figure a the variable is set in lines to s for same or to d for different based on src field of the header record that has just been read this variable is then used in line to decide how to process item records in the same batch that are subsequently read in certain cases the state variables could also be used to identify unexpected or sequences and to reject them analyzing understanding and transforming programs in precise ways requires a unique form of path sensitive analysis in which at each program point distinct information about the program s state needs to be tracked corresponding to each distinct pattern of record types that could have been read so far before control reaches the 
point we illustrate this using example tions about the program in figure answers to which would enable various verification and transformation activities does the program silently accept inputs this is a natural and important verification problem in the context of programs in our running example if an input file happens to contain an item record as the first record without a preceding header record the variable would be uninitialized after this item record is read and when control reaches line this is because this variable is initialized only when a header record is seen in lines therefore the condition in line could evaluate nondeterministically furthermore the output buffer which will be written out in line could contain garbage in its pyr field because this field also is initialized only when a header record is seen in line in other words programs could silently write out garbage values into output files or databases when given inputs which is undesirable ideally in the running example the programmer ought to have employed an additional state variable to keep track of whether a header record is seen before every item record and ought to have emitted a warning or aborted the program upon identifying any violation of this requirement in other words state tracking in programs is complex and prone to being done erroneously therefore there is a need for an automated analysis that can check whether a program over accepts bad files files that don t adhere to a specification of files what program behaviors are possible with inputs in other situations we are interested only in information on program states that can arise after prefixes of files have been read for instance a developer might be interested in knowing about possible uses of unitialized variables during runs on files only without the clutter caused by warning reports pertaining to runs on files intuitively only the first category of warnings mentioned above signifies genuine errors in the program this is because in many cases developers do not try to ensure meaningful outputs for corrupted input files in our example program there are in fact no instances of uninitialized variables being used during runs on files on a related note one might want to know if a program can falsely issue an input warning even when run on a file this sort of under acceptance problem could happen either due to a programming error or due to misunderstanding on the developer s part as to what inputs are to be expected this could be checked by asking whether statements in the program that issue these warnings such as line in the example program are reachable during runs on inputs in the example program it turns out that this can not happen what program behaviors are possible under restricted scenarios of interest in some situations there is a need to identify paths in a program that are taken during runs on certain narrower of files for instance in the running example we might be interested only in the parts of the program that are required when input files contain batches whose header records always have same in the src field these parts constitute all the lines in the program except lines and variable will no longer need to be set or used because all input batches are guaranteed to be same batches this is essentially a classical program specialization problem but with a specialization criterion rather than a standard criterion on the parameters to the program program specialization has various applications for example in program comprehension decomposition 
of monolithic programs to collections of smaller programs that have internally cohesive functionality and reducing overhead our approach and contributions approach static analysis based on file formats the primary contribution of this paper is a generic approach to perform any given underlying abstract interpretation of interest u based on an abstract lattice l in a manner by maintaining at each point a distinct abstract fact element of l for distinct patterns of record types that could have been read so far typically to lift the analysis u to a domain a finite set of predicates of p would be required the analysis domain would then essentially be p for our example program a set of six predicates each one formed by conjuncting one of the three predicates hdr itm or trl with one of the two predicates s or d would be natural candidates however coming up with this set of predicates manually would be tedious because it requires detailed knowledge of the state variables in the program and their usage automated predicate refinement might be able to generate these predicates but is a complex iterative process and might potentially generate many additional predicates which would increase the running time of the analysis qsh shdr itm qs shdr itm qi itm dhdr qdh trl qt eof qe record type shdr dhdr itm trl constraint typ typ typ typ hdr src same hdr src diff itm trl dhdr a b fig a input automaton b input record types specifications which are usually readily available because they are or even standards have been used by previous programming languages researchers in the context of tasks such as parser and validator generation and testing our key insight is that if a specification can be represented as a input automaton whose transitions are labeled with record types then the set of states q of this automaton which we call file states can be directly used to lift the analysis u by using the domain q the intuition is that if an abstract fact l l is mapped to a file state q q at a program point then l all possible concrete states that can arise at that point during runs that consume a sequence of records such that the concatenation of the types of these records is accepted by the file state q of the automaton figure a shows the input automaton for the used in our running example with figure b showing the associated input record type descriptions as dependent types the sample input file in figure b is a file as per this automaton this is because the sequence that consists of the types of the records in this file namely shdr itm itm itm trl dhdr itm itm trl is accepted by the automaton intuitively statements other than read statements do not affect the that a program is in during execution therefore the lifted transfer functions for these are straightforward and use the underlying the transfer functions from the l analysis the transfer function for read plays the key role of enforcing the ordering among record types in files for instance consider the qsh in figure a which represents the situation wherein a same header record has just been read therefore in the output of the read transfer function qsh is mapped to the join of the abstract facts that the predecessors of qsh namely qs and qt were mapped to in the input to the transfer function applications in addition to our basic approach above we propose two applications of it that address two natural problems in the analysis of programs that to our knowledge have not been explored previously in the literature the first application is a sound approach to 
check if a program potentially over accepts files or under accepts files the second is a sound technique to specialize a program wrt a given specialization criterion that represents a restriction of the full and that is itself represented as an input automaton program file state graph pfsg we propose a novel program representation the pfsg which is a graph derived from both the graph cfg of the program and the given input automaton for the program the pfsg is basically an exploded version of the cfg of the original program the controlflow paths in the pfsg are a subset of the paths in the cfg with certain paths that are infeasible under the given input automaton being omitted being itself a cfg any existing static analysis can be applied on the pfsg without any modifications with the benefit that the infeasible paths end up being ignored by the analysis we describe how to modify our basic approach to emit a pfsg and also discuss formally how the results from any analysis differ when performed on the pfsg when compared to being performed on the original cfg implementation and empirical results we have implemented our approach and applied it on several realistic as well as real cobol batch programs our approach found related conformance issues in certain real programs and was also able to verify the absence of such errors in other programs in the program specialization context we observed that our approach was surprisingly precise in being able to identify statements and conditionals that need not be retained in the specialized program we found that our analysis when used to identify references to possibly uninitialized variables and reaching definitions gave improved precision over the standard analysis in many cases the rest of this paper is structured as follows in section we introduce key assumptions and definitions in section we present our approach as well as the two applications mentioned above section introduces the pfsg section presents our implementation and result section discusses related work while section concludes the paper assumptions and definitions definitions records record types and files a record is a contiguous sequence of bytes in a file a field is a labeled of a record any record has zero or more fields if it has zero fields then the record is taken to be a leaflevel record a record type ri is intuitively a specification of the length of a record the names of its fields and their lengths and a constraint on the contents of the record for example consider the record types shown in figure b each row shows the name of a record type and then the associated constraint we say that a record r is of type ri iff r satisfies the length as well as value constraints of type ri for instance the first record in the file in figure b is of type shdr see figure b note that in general a record r could be of multiple types definitions files and read operations a file is a sequence of records of possibly different lengths successive records in a file are assumed to be demarcated explicitly either by markers or by other that captures the length of each record at run time there is a file pointer associated with each open file a read statement upon execution retrieves the record pointed to by this pointer copies it into the file buffer in the program associated with this file and advances the file pointer definition input automaton an input automaton s is a tuple q qs qe where q is a finite set of states which we refer to as file states t eof where t is a set of record types is a set of 
transitions between the file states with each transition labeled with an element of qs is the designated start state of s and qe is the set of designated final states of a transition is labeled with eof iff the transition is to a final state there are no outgoing transitions from final states note that an input automaton may be in two different senses multiple transitions out of a file state could have the same label also it is possible for a record r to be of two distinct types and and for these two types to be the labels of two outgoing transitions from a file state let q be any state of q we define lt q the type language of q as the set of sequences of types elements of t that take the automaton from its start state to q for a final state qe lt qe is defined to be the union of type languages of the states from which there are transitions to qe we define lr q the record language of any q as follows lr q consists of sequences of records r such that there exists a sequence of types t in lt q such that a the sequences r and t are of equal length and b for each j record r j is of type t j recall that a file is nothing but a sequence of records we say that a file f conforms to an input automaton s or that s accepts f if f is in lr qe for some final state qe of let r be some sequence of records possibly empty say r is in lr q for some file state q of an input automaton if there exists an execution trace t of the given program p that starts at the program s entry consumes the records in r via the read statements that it passes through and reaches a program point p then we say that a trace t is due to the prefix r and b trace t reaches point p while being in q of the input automaton we define ns t as the sequence of nodes of the graph cfg of given program that are visited by the trace t the sequence always begins with the entry node of the cfg and contains at least two nodes the trace ends at the point before the last node in the sequence a input automaton which we often abbreviate to automaton is an input automaton that accepts all files that are expected to be given as input to a program a specialization automaton is an input automaton that accepts a subset of files as a automaton while a full automaton is an input automaton that accepts every possible file if a program accesses multiple sequential input files this situation could be handled using two alternative approaches by concurrently using multiple input automatons in the analysis one per input file or by modeling one of the input files as the primary input file with an associated automaton and by modeling reads from the remaining files as always returning an undefined record we adopt the latter of these approaches in our experimental evaluation our approach in this section we describe our primary contribution which is a generic approach for lifting a given abstract interpretation wrt a specification we then discuss its soundness and precision following this we present the details of the two applications of our generic approach that were mentioned in section finally we present an extension to our approach which enables the specification of data integrity constraints on the contents of input files in relation to the contents of persistent tables abstract interpretation lifted using input automatons the inputs to our approach are a program p an input automaton s q qs qe and an arbitrary underlying abstract interpretation u l vl fl where l is a join and fl is a set of transfer functions with signature l l associated with statements and 
conditionals our objective as described in the introduction is to use the provided input automaton to compute a least solution considering only paths in the program that are potentially feasible wrt the given input automaton the lattice that we use in our lifted analysis is d q the partial ordering for this lattice is a point wise ordering based on the underlying lattice l vd q vl q the initial value that we supply at the entry of the program is qs il where il l is an input to our approach and is the initial value to be used in the context of the underlying analysis we now discuss our transfer functions on the lattice we consider the following three categories of cfg nodes statements other than read statements conditionals and read statements let n be any node that is neither a read statement nor a conditional let fln l l fl be the underlying transfer function for node since the file state that any trace is in at the point before node n can not change after the trace executes node n our transfer function for node n is n fd d d fln d q let c be a conditional node with a true successor and a false successor let c c ft l fl and ff l fl be the underlying and transfer functions of since a conditional node can not modify the that a trace is in either our transfer function for conditionals is c c fb d d d fb l d q where b stands for t or f finally we consider the case where a node r is a read node this is the most interesting case because executing a read statement can change the file state that a trace is in firstly a note on terminology a dataflow value l l is said to in read qsh qdh qi qt lsh ldh li lt itm itm qs shdr out itm dhdr eof trl shdr qdh qsh qi qt qe f ls shdr f lt shdr f lsh itm f ldh itm f li itm f li trl f lt eof fig illustration of transfer function for read statements represent a concrete state s if s is an element of the concretization of l which is written as l secondly we make the following assumption on the underlying transfer function flr fl for read statements rather than simply have the signature l l the function flr ought to have the signature l if t is some record type element of t then intuitively flr t should return a dataflow fact that represents the set of concrete states that can result after the execution of the read assuming the concrete state just before the execution of the read is some state that is represented by and the read statement retrieves a record of type t from the input file and places it in the input buffer correspondingly flr eof should return a dataflow fact that represents the set of concrete states that can result after the execution of the read assuming the concrete state just before the execution of the read is some state that is represented by and the input buffer in the program gets populated with an undefined value and the statement within the at end clause if any executes after the read operation as an illustration say the underlying analysis u is the cp constant propagation analysis flr t would return a fact that is obtained by performing the following transformations on remove all existing cp facts associated with the input buffer and obtain suitable new cp facts for the input buffer using the constraints associated with the type on the other hand flr eof would perform only step above r we are now ready to present our transfer fd for a read node the transfer function is g r fd d q flr d qi label s qi qj qi where label s qi qj returns the label which is a type or eof of the transition qi qj in the intuition behind this transfer function 
is as follows for qsh qdh qi qt qe h s h d i t s o s o while qsh t f qdh qi qt qe h s h d i t s o s o if itm close f t qi qsh i qdh qt h s s o h d s o t if hdr qsh qdh qi qt h s s h d d i t read qsh qdh h s h d qi qt qe i t fig fix point solution for program in figure a using cp analysis abbreviations used e s i o h hdr s same d diff i itm t trl any file state qj a trace can be in qj after executing the read if the trace is in any one of the predecessor states of qj in the automaton just before executing the read therefore the fact from lattice l that is to be associated with qj at the point after the read statement can be obtained as follows for each file state qi such that there is a transition qi qj labeled s in the automaton transfer the fact flr s where l is the fact that qi is mapped to at the point before the read statement take a join of all these transferred facts figure sketches this transfer function schematically each edge from a column before the read statement to a column after the read denotes a transfer that happens due to step above the label on the edge denotes the label on the corresponding transition in the automaton we have abbreviated each instance of flr in this figure as f we have also omitted some of the columns for compactness our presentation above was limited to the setting however our analysis can be extended to the setting using standard techniques some details of which we discuss in section illustration figure shows the solution at certain program points for the example program in figure as computed by our analysis using the wellformed automaton shown in figure we assume an underlying lattice l that is a product of the cp and possibly uninitialized variables uninit lattices each table in the figure denotes the solution a function from q to l at the program point that precedes the statement that is indicated below the table each column of a table shows the underlying dataflow value associated with a file state columns in which the underlying dataflow value is which represents unreachability basically are omitted from the tables for brevity the first component of each dataflow value within angle brackets indicates the constant values of variables while the second component within curly braces indicates the set of variables that are possibly uninitialized empty sets are omitted from the figure for brevity we abbreviate the variable names as well as constant values for the sake of compactness as described in the caption of the figure in the interest of space we focus our attention on just one of the program points the point just before the if condition in line any execution trace reaching this point can be in any one of the following four file states qsh qdh qi or qt these file states have associated cp facts that indicate that has value hdr hdr itm and trl respectively additionally the state variable is possibly uninitialized under columns qsh and qdh because lines which initialize this variable may not have been visited yet whereas is initialized under the other two columns now only the fact associated with qi flows down the true branch of the conditional in line this is because this conditional tests that contains itm therefore is inferred to be definitely initialized by the time it is referenced in line which is the desired precise result soundness precision and complexity of our approach our soundness result intuitively is that if the underlying analysis u is sound then so is our lifted analysis modulo the assumption that any input file given to the program 
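The tabulated solution discussed here is a least fixpoint over the lifted lattice D = Q -> L. The following is a textbook worklist (chaotic-iteration) sketch of how such a solution could be computed; it is not the authors' implementation, and the encoding of program points, per-edge transfer functions, and D-values as dictionaries is assumed purely for illustration. The initial value at the program entry maps qs to il and every other file state to bottom, as described earlier.

    from collections import deque
    from typing import Any, Callable, Dict, List, Tuple

    DValue = Dict[str, Any]               # file state -> underlying L-value (None = bottom)

    def join_d(d1: DValue, d2: DValue, join_l) -> DValue:
        """Point-wise join of two D-values."""
        out: DValue = {}
        for q in set(d1) | set(d2):
            a, b = d1.get(q), d2.get(q)
            out[q] = b if a is None else a if b is None else join_l(a, b)
        return out

    def solve(entry: int,
              successors: Dict[int, List[int]],
              edge_transfer: Dict[Tuple[int, int], Callable[[DValue], DValue]],
              init: DValue,
              join_l) -> Dict[int, DValue]:
        """Least solution: solution[n] is the D-value holding just before node n."""
        solution: Dict[int, DValue] = {entry: init}
        work = deque([entry])
        while work:
            n = work.popleft()
            for m in successors.get(n, []):
                propagated = edge_transfer[(n, m)](solution[n])
                merged = join_d(solution.get(m, {}), propagated, join_l)
                if merged != solution.get(m):
                    solution[m] = merged
                    work.append(m)
        return solution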
p conforms to the given input automaton u itself is said to be sound if at any program point p of any program p the solution l computed by u represents all concrete states that can result at p due to all possible execution traces of p that begin in any concrete state that is represented by the given initial value il the following theorem states the soundness characterization of our analysis more formally theorem assuming u is a sound abstract interpretation if d is the solution produced by our approach at a program point p of the given program p starting with initial value qs il then l d q represents all concrete states that can result at point p due to all possible executions that begin in any concrete state that is represented by il and that are on an input file that conforms to the given automaton the proof of the theorem above is straightforward the following observations follow from the theorem above if the given input automaton is a automaton then the solution at a program point p represents all concrete states that can result at point p during executions on files if the given input automaton is a full automaton then the solution at a program point p represents all concrete states that can result at point p during all possible executions including on files given any input automaton s our approach will produce a solution that is at least as precise as the one that would be produced directly by the underlying analysis u however the choice of the input automaton does impact the precision of the our approach intuitively if input automaton is a subautomaton of input automaton is obtained by deleting certain states and transitions or by constraining further some of the types that label some of the transitions then will result in a more precise solution than also if and accept the same language but structurally refines then will result in a more precise solution we formalize these notions in the appendix the time complexity of our analysis when used with an automaton s that has a set of states q is in the worst case times the time complexity of the underlying analysis u two applications of our analysis in this section we describe how we use our analysis described in section to address the two new problems that we mentioned in the introduction file format conformance checking and program specialization file format conformance checking as mentioned in the introduction a verification question that developers would like an answer to is whether a program can silently accept an input file and possibly write out a corrupted output file over acceptance or conversely could the program reject a file via an abort or a warning message under acceptance different programs use different kinds of idioms to reject an input file generating a warning message and then continuing processing as usual ignoring an erroneous part of the input file and processing the remaining records and aborting the program via an exception in order to target all these modes in a generic manner our approach relies on the developer to identify related rejection points in a program these are the statements in a program where format violations are flagged using warnings aborts etc detecting we detect warnings by applying our analysis using a automaton and using any given programstate abstraction domain cp or interval analysis as u issuing an warning if the fact associated with any is at any rejection point the intuition is simply that rejection points should be unreachable when the program is run on files since our analysis is 
conservative in that it never produces dataflow facts this approach will not miss any issues as long as the developer does not fail to mark any actual rejection point as a rejection point as an illustration say line in the program in figure is marked as a rejection point using the automaton in figure a and using cp as the underlying analysis u our analysis will find this line to be unreachable therefore no warnings will be issued detecting intuitively a program has over acceptance errors with respect to a given file format if the program can reach the end of the main procedure without going through any rejection point when run on an input file that does not conform to the automaton we check this property as follows we first extend the given automaton s to a full automaton which accepts all input files systematically by adding a new final state qx a few other new states and new transitions that lead to these new states from the original states the intent is for these new states to accept record sequences that are not accepted by any file state in the original automaton we provide the full details of this construction in the appendix secondly we modify the transfer functions of our lifted analysis d at rejection points such that they map all file states to in their output intuitively the idea behind this is to block paths that go through rejection points we then apply our analysis using this full automaton and using any programstate abstraction domain as u and flag an warning if the dataflow value associated with any file state that is not a final state in the original automaton is at the final point of the main procedure clearly since our analysis dataflow facts at all program points we will not miss any scenarios as long as the developer does not wrongly mark a point as a rejection point program specialization based on file formats as mentioned in the introduction it would be natural for developers to want to specify specialization criteria for programs as patterns on sequences of record types in an input file we propose the use of input automatons for this purpose for example if the automaton in figure a were to be modified by removing the file state qdh as well as all transitions incident on it what would be obtained would be a specialization automaton that accepts files in which all batches begin with same headers only our approach to program specialization using a specialization automaton is as follows we apply our analysis of section using the given specialization automaton as the input automaton and using any abstraction domain as u we identify program points p at which every file state is mapped to as per the solution computed in step basically these program points are unreachable during executions on input files that conform to the specialization criterion the that immediately follow these points can be projected out of the program to yield the specialized program the details of this projection operation are not a focus of this paper it is easy to see that our approach is sound in that it marks a point as unreachable only if it is definitely unreachable during all runs on input files that adhere to the given criterion illustration using the specialization automaton mentioned above and using cp as the underlying analysis lines and in the code in figure a are marked as unreachable it is worthwhile noting that if one did not use the specialization automaton as criterion and instead simply specified that all header records have value same in their src field then line would not be 
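All three uses described in this part reduce to simple queries over an already-computed solution: under-acceptance checking looks for reachable rejection points under the well-formed automaton, over-acceptance checking looks for non-final file states of the full automaton that reach the end of the main procedure (with rejection points blocked while computing that solution), and specialization looks for program points at which every file state is bottom. The Python sketch below phrases these queries under an assumed solution encoding (program point -> file state -> L-value, with None for bottom); the function names are ours, not the tool's.

    from typing import Any, Dict, Iterable, Set

    Solution = Dict[int, Dict[str, Any]]   # program point -> (file state -> L-value, None = bottom)

    def under_acceptance_warnings(sol_wellformed: Solution,
                                  rejection_points: Iterable[int]) -> Set[int]:
        """Rejection points that are reachable (some file state non-bottom) under the
        well-formed automaton; each is a candidate under-acceptance warning."""
        return {p for p in rejection_points
                if any(v is not None for v in sol_wellformed.get(p, {}).values())}

    def over_acceptance_warnings(sol_full: Solution,
                                 exit_point: int,
                                 original_final_states: Set[str]) -> Set[str]:
        """File states of the full automaton, outside the original automaton's final
        states, that reach the end of the main procedure with a non-bottom value
        (assuming rejection points were blocked while computing sol_full)."""
        at_exit = sol_full.get(exit_point, {})
        return {q for q, v in at_exit.items()
                if v is not None and q not in original_final_states}

    def specialization_unreachable_points(sol_special: Solution) -> Set[int]:
        """Program points at which every file state is bottom under the specialization
        automaton; the statements immediately following them can be projected out."""
        return {p for p, per_state in sol_special.items()
                if all(v is None for v in per_state.values())}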
identified as unreachable intuuitively this is because the path consisting of lines along which is uninitialized would not be found infeasible as discussed in section subsequently as a step which is not a part of our core approach the following further simplifications could be done to the program i make lines and unconditional and remove the respective controlling if conditions this would be safe because the else branches of these two if conditions have become empty ii remove line entirely this would be safe because after the conditional in line is removed the variable is not used anywhere in the program imposing data integrity constraints on input files the core of our approach which was discussed in section used input automatons that constrain the sequences of types of records that can appear in an input file however in many situations a file also needs to satisfy certain data integrity constraints wrt the contents of certain persistent tables if these constraints can also be specified in conjunction with the input automaton then certain paths in the program that execute only upon the violation of these constraints can be identified and pruned out during analysis time this has the potential to further improve the precision and usefulness of our approach in our running example say there is a requirement that the receiver of any payment in the input file represented by the field in the item record necessarily be an account holder in the bank such a requirement could be enforced in the code in figure by adding logic right after line to check if the value in appears as a primary key in the accounts database table of the bank and to not execute lines if the check fails however if a user of our approach wishes to assert that input files will never contain items that refer to invalid account numbers then the logic mentioned above could be identified as redundant to enable users to give such specifications we allow predicates of the form isintable tab field and isnotintable tab field to be associated with record type definitions where tab is the name of a persistent table and field is the name of a field in the record type for example the itm type in figure b could be augmented as typ itm isintable accounts rcv where accounts is the master accounts table the semantics of this is that the value in the rcv field is guaranteed to be a primary key of some row in the table accounts similarly isnotintable tab field asserts that the value in field is guaranteed not to be a primary key of any row in tab we assume our programming language has the following construct for keybased lookup into a table tab read tab into buffer key variable invalid key the semantics of this statement is as follows if a table row with a key matching the value in variable is found in the table tab then it will be copied into buffer and control is given to if no matching key is found then the buffer content is undefined and control is given to with this enhancement of record type specifications we extend our analysis framework as follows the new lattice we use will be d q s l where l is the given original underlying lattice s and c is the set of all possible predicates of the two kinds mentioned above we now describe the changes required to the transfer functions the transfer function of normal read statements that read from input files that we described in section is to be augmented as follows whenever a record of a certain type t t is read in any predicates in the incoming fact that refer to fields of the input buffer are 
removed and the predicates associated with t are included in the outgoing fact the transfer function of the statement move x to y copies to the outgoing fact all predicates in the incoming fact that refer to variables other than y further for each predicate in the incoming fact that refers to x it creates a copy of this predicate makes it refer to y instead of x and adds it to the outgoing fact transfer functions of conditionals do not need any change finally we need to handle lookups which is the most interesting case consider once again the statement read t into buffer key v invalid key the transfer function first checks if a predicate of the form isintable t v is present in the incoming dataflow fact if it does it essentially treats statementsn as unreachable else if a predicate of the form isnotintable t v is present in the incoming fact it essentially treats as unreachable otherwise it treats both and as reachable a more formal presentation of these transfer functions is omitted from this paper in the interest of space the program file state graph pfsg in this section we introduce our program representation for programs the program file state graph pfsg we then formalize the properties of the pfsg finally we discuss how the pfsg serves as a basis for performing other program analyses without any modifications while enabling them to ignore certain cfg paths that are infeasible as per the given input automaton structure and construction of the pfsg the pfsg is a representation that is based on a cfg g of a program p as well as on a given input automaton s for p the pfsg is basically an exploded cfg if the set of states in s file states is qs qe then for each node m in the cfg we have nodes m qs m m m qe in the pfsg in other words the pfsg has n nodes where n is the number of nodes in g and q is the set of of a structural property of the pfsg is that an edge is present between nodes m qi and n qj in the pfsg only if there is an edge from m to n in the cfg in other words any path m qi n qj r ql in the pfsg corresponds to a path m n r in the cfg let sg be the entry node of the cfg the node sg qs is regarded as the entry node of the pfsg the pfsg can be constructed in a straightforward manner using our basic approach that was described in section the precision of the pfsg is linked to the precision of the underlying analysis u that is selected for example the standard cp analysis could be used as u if more precision is required a more powerful lattice then for instance a relational domain wherein each lattice value represents a set of possible valuations of variables such as the octagon domain could be used once the solution is obtained from the approach edges are added to the pfsg as per the following procedure for each edge m n in the cfg rule applicable if m is a read node for each transition qi qj in the automaton s add an edge m qi n qj in the pfsg rule applicable if m is not a read node for each q q add an edge m q n q in the pfsg in both the rules above we add an edge m qk n ql only if dm qk and dn ql where dm and dn are the solutions at m and n respectively we follow this restriction because dm qk resp dn ql is only when there is no execution trace that can reach m resp n due to a sequence of records that is in lr qk resp lr ql the intuition behind the first rule above is that when a read statement executes it modifies the that the program could be in intuitively this is a that the input automaton could be in were we to start simulating the automaton from qs when the program starts 
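One plausible reading of the transfer functions for the data-integrity extension is sketched below: a fact pairs a set of isintable/isnotintable predicates with an underlying L-value, the keyed lookup prunes whichever continuation the predicates rule out, and move retargets predicates from the source variable to the destination. The representation and the names are assumptions made here for illustration; the paper omits the formal definitions in the interest of space.

    from typing import Any, FrozenSet, Optional, Tuple

    Pred = Tuple[str, str, str]          # ("isintable" | "isnotintable", table, variable)
    Fact = Tuple[FrozenSet[Pred], Any]   # (predicate set, underlying L-value); None means bottom

    def lookup_transfer(fact: Optional[Fact], table: str, key_var: str,
                        branch: str) -> Optional[Fact]:
        """Per-branch transfer for `read table ... key key_var; invalid key ...`;
        branch is 'found' (a matching row exists) or 'invalid' (no matching key)."""
        if fact is None:
            return None
        preds, l_value = fact
        if ("isintable", table, key_var) in preds and branch == "invalid":
            return None              # the invalid-key continuation is unreachable
        if ("isnotintable", table, key_var) in preds and branch == "found":
            return None              # the found continuation is unreachable
        return (preds, l_value)      # otherwise both continuations remain reachable

    def move_transfer(fact: Optional[Fact], src: str, dst: str) -> Optional[Fact]:
        """`move src to dst`: drop predicates about dst, copy predicates about src onto dst."""
        if fact is None:
            return None
        preds, l_value = fact
        kept = {p for p in preds if p[2] != dst}
        kept |= {(kind, table, dst) for (kind, table, var) in preds if var == src}
        return (frozenset(kept), l_value)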
executing and transition to an appropriate target state upon the execution of each read statement based on the type of the record read when executing a read statement a program could transition from a qi to qj only if such a transition is present in the input automaton the second rule above does not switch the of the program because statements other than read statements affect the program s internal state valuation of variables but not the that the program is in qs qsh qdh qi dhdr itm qt qe shdr eof eof trl fig pfsg for program in fig a and input automaton in fig illustration of pfsg for illustration consider the pfsg shown in figure corresponding to the program in figure a and input automaton in figure recall that this automaton describes all files where headers item records and trailers appear in their correct positions visually the figure is laid out in six columns corresponding to the six file states in the input automaton the nodes in the pfsg are labeled with the corresponding line numbers from the program therefore the node labeled in the qs column is actually node qs where represents the open statement in line of the program on a related note qs being the start state of the input automaton and line being the entry node of the program the node mentioned above is in fact the entry node of the pfsg certain parts of the pfsg are elided for brevity and are represented using the cloud patterns this pfsg was generated using a solution from our approach of section using cp constant propagation as the underlying analysis u a fragment of this solution was shown in figure the sets of possibly uninitialized variables in that figure can be ignored in the current context we now discuss in more detail a portion of the pfsg in figure with emphasis on how it elides certain infeasible paths that are present in the original cfg line in the program is a read statement as per the given input automaton the outgoing transitions from qs go to qsh and to qdh therefore as per rule of our pfsg approach see section above there are outgoing edges from qs to copies of node in the qsh and qdh columns for clarity we have labeled these edges with the types on the corresponding transitions the qsh column the second column essentially consists of a copy of the loop body specialized to the situation wherein the last record read was of type shdr shdr being the type on all transitions coming into qsh in particular note that the true edge out of to in the qsh column is elided this is because in the solution see figure the cp constant propagation fact associated with the qsh file state at the point before indicates that has value hdr this fact is abbreviated as h in the figure therefore in the solution the underlying fact associated with the qsh file state out of this edge ends up being which results in rule adding only the false edge from to line in the program being a read statement there is an edge from node qsh at the bottom of the qsh column to the entry of column qi qi being the sole successor of qsh in the input automaton the qi column consists of a copy of the loop body specialized to the situation wherein the previous record read is of type itm this being the type on transitions coming into qi from the end of the qi column control goes to the qt column and from the end of that column back to the beginning of the qsh and qdh columns it is notable that the structure of the pfsg is inherited both from the cfg and from the input automaton as was mentioned in the discussion above control transfers from one column to 
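The two edge-construction rules for the pfsg, together with the restriction to non-bottom endpoints, can be stated compactly. The sketch below does so in Python over an assumed encoding of the cfg edges, the automaton transitions, and the solution d computed by the lifted analysis; it mirrors the rules as written above and is illustrative only.

    from typing import Any, Dict, Iterable, List, Set, Tuple

    PfsgNode = Tuple[int, str]           # (cfg node, file state)

    def build_pfsg_edges(cfg_edges: Iterable[Tuple[int, int]],
                         read_nodes: Set[int],
                         automaton_transitions: Iterable[Tuple[str, str, str]],  # (qi, label, qj)
                         file_states: Iterable[str],
                         d: Dict[int, Dict[str, Any]]) -> List[Tuple[PfsgNode, PfsgNode]]:
        """Apply the two rules above, keeping an edge only if both endpoints are non-bottom in d."""
        def live(node: int, q: str) -> bool:
            return d.get(node, {}).get(q) is not None
        file_states = list(file_states)
        transitions = list(automaton_transitions)
        edges: List[Tuple[PfsgNode, PfsgNode]] = []
        for m, n in cfg_edges:
            if m in read_nodes:
                # rule 1: a read statement may switch the file state along an automaton transition
                for qi, _label, qj in transitions:
                    if live(m, qi) and live(n, qj):
                        edges.append(((m, qi), (n, qj)))
            else:
                # rule 2: any other statement leaves the file state unchanged
                for q in file_states:
                    if live(m, q) and live(n, q):
                        edges.append(((m, q), (n, q)))
        return edges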
another mirror the transitions in the input automaton while paths within a column are inherited from the cfg but specialized wrt the type of record that was last read program analysis using pfsg any program analysis that can be performed using a cfg can naturally be performed unmodified using a pfsg by simply letting the analysis treat each node n qi as being the same as the underlying node such an analysis will be no less precise than with the original cfg of the program this is because by construction every path in the pfsg corresponds to a path in the original cfg in other words there are no extra paths in the pfsg to the contrary certain cfg paths that are infeasible as per the given input automaton could be omitted from the pfsg in other words precision of the analysis is improved by ignoring executions due to certain infeasible inputs for instance in the example that was discussed above due to the omitted edge from to in the qsh column there is no path in the pfsg that visits copies of the nodes and in that order even though such a path exists in the original cfg in other words the pfsg encodes the fact that under the given input file format an item record can not occur as the first record in an input file however in general due to possible imprecision in the given underlying analysis u not all paths that are infeasible as per the given input automaton would necessarily be excluded from the pfsg to illustrate the benefits of program analysis using the pfsg we discuss two example analyses below say we wish to perform possibly uninitialized variables analysis due to the path in the original cfg the use of in line would be declared as possibly uninitialized however under the given input automaton since every path that reaches line in the pfsg reaches it via lines or which both defined the use mentioned above would be declared as definitely initialized when the analysis is performed on the pfsg a cp analysis when done on the pfsg in figure would indicate that at the point before line would not have a constant value however when the same analysis is done on the pfsg the same analysis would indicate that if a if is hdr and is same then has value s b if is hdr and is diff then has value d and c is not a constant otherwise these correlations are identified because the pfsg is exploded hence segregates cfg paths that end at the same program point but are due to record sequences that are accepted by different of the input automaton correlations such as the one mentioned above can not be identified in general using the cfg unless very expensive domains such as relational domains are used the two instances of precision improvement mentioned above can also be obtained using our approach of section if we use cp uninit as the underlying domain u where uninit is the analysis for the first instance and if we simply use cp as the underlying domain u for the second instance however in general there are several scenarios where the pfsg serves better as a foundation for performing program analysis than the approach of section the approach of section applies only to forward dataflow analysis whereas the pfsg can be used for forward as well as backward dataflow analysis problems the pfsg as a basis for applying static analysis techniques other than dataflow analysis such as symbolic execution assertional reasoning etc implementations of these techniques that are designed for cfgs can be applied unmodified on the pfsg all these analyses are likely to benefit from the pruning of paths from the pfsg that are 
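Reusing an existing cfg-based analysis on the pfsg amounts to ignoring the file-state component when selecting transfer functions, as in the small assumed-name sketch below.

    from typing import Any, Callable, Dict, Tuple

    def pfsg_transfer(cfg_transfer: Dict[int, Callable[[Any], Any]]):
        """Wrap a cfg transfer-function table for use on exploded pfsg nodes (n, q)."""
        def transfer(node: Tuple[int, str], fact: Any) -> Any:
            n, _q = node                 # the file-state component is invisible to the client analysis
            return cfg_transfer[n](fact)
        return transfer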
infeasible as per the given input automaton formal properties of the pfsg soundness we now characterize the paths in the original cfg that are necessarily present in the pfsg this result forms the basis for the soundness of any static analysis that is applied on the pfsg theorem let u l vl fl be a given underlying sound abstract interpretation consider any execution trace t of the program p that begins with a concrete state that is represented by the given initial dataflow fact il let l be the sequence of records due to which t executes and t be the number of nodes in ns t if a l is in lr q for some q of s and t did not encounter upon a read or b l is in lr q for some final q of s and the last read in t encountered then there is a path in the pfsg such that a the first node of is sg qs b for all i t if the ith node of ns t is some node m then the ith node of is of the form m qj for some qj and c the last node of is of the form m q for some intuitively the theorem above states that for all execution traces that are due to record sequences that are accepted by the given input automaton paths taken by these traces are present in the pfsg in the specific scenario where the pfsg is used to perform a dataflow analysis then the theorem above can be instantiated as follows corollary let d be any sound dataflow analysis framework based on a lattice let be a given dataflow fact at program entry is an element of d s lattice for any node n of the original cfg g let s n denote the final solution at n computed using d on the cfg using initial value consider a pfsg for g obtained using a given input automaton for any node n qi of the pfsg let s n qi denote the final solution at n computed byfd but applied on the pfsg using the same initial value let n qi is a of s s n qi a n v s n precision b n the set of concrete states that can arise at node n when the program is run on input files that conform to soundness precision ordering among pfsgs as was clear from the discussion in this section the pfsg produced by our approach for a given cfg g and input automaton s is not fixed but depends on the selected underlying abstract interpretation u the theorem given above states that no matter what abstract interpretation is used as u the pfsg is sound does not elide any paths that can be executed due to record sequences that are accepted by s as long as u is sound however the precision of the pfsg depends the precision of u given a cfg g and an input automaton s we can define a precision ordering on the set of pfsgs for g and s that can be obtained using different sound underlying domains etc a pfsg can be said to be as precise as another pfsg if every edge m qi n qj in is also present in note that this implies that every path in is also present in theorem if an underlying domain is a consistent abstraction of another underlying domain then the pfsg obtained for g and s using is at least as precise as the pfsg obtained for g and s using implementation and evaluation prog name acctran dtap clieopp loc no of cfg nodes automaton full automaton fig benchmark program details we have targeted our implementation at cobol batch programs these are very prevalent in large enterprises and are based on a variety of standard as well as proprietary file formats another motivating factor for this choice is that one of the authors of this paper has extensive professional experience with developing and maintaining cobol batch applications we have implemented our analysis using a proprietary program analysis framework prism our 
implementation is in java we use the call strings approach for precise analysis cobol programs do not use recursion therefore we place no apriori bound on lengths our analysis code primarily consists of an implementation of our generic analysis framework as described in section we have currently not implemented our extension for data integrity constraints that was described in section nor have we implemented our pfsg construction approach section we also have some lightweight scripts that process the solution emitted by the analysis to compute results for the specialization problem as well as the file conformance problem see section we ran our tool on a laptop with an intel ghz cpu with gb ram benchmark programs we have used a set of eight programs as benchmarks for evaluation figure lists key statistics about these programs the second and third columns give the sizes of these programs in terms of lines of code including variable declarations and in terms of number of executable nodes in the cfg as constructed by prism the program acctran is a toy program that was used as a running example in a previous paper is an example inventory management program used in a textbook to showcase a typical sequential file processing program the program dtap has been developed by the authors of this paper it is a payments validation program the it uses and the validation rules it implements are both taken from a widely used standard specification the program clieopp is a payment validation and transformation program it was developed by a professional developer at a large it consulting services company for training purposes the format and the validation rules it uses are from another standard specification and are programs used in a bank for validating and reporting return payments sent from branches of the bank to the and are programs from major multinational financial services companies the program is a format translator which translates various kinds of input records to corresponding output records reads data from a sequential master file collects the data required for computing monthly interest and fee for each account and writes this data out to various output files the file formats used in these four programs are proprietary columns and in figure give statistics about the automaton for each program for the programs acctran and the respective original sources of these programs also give the expected input file formats for the real programs and we derived the record types as well as automatons by going through the programs and guessing the intended formats of the input files to these programs for program the maintainers have provided us the file format specification in the case of programs dtap and clieopp we constructed the record types as well as automatons from their respective standard specifications in all cases we employed a while creating the automatons namely that all incoming transitions into a file state be labeled with the same type for most of the programs we also constructed a full automaton to use in the context of over acceptance analysis we created each full automaton using the corresponding automaton as a basis following the basic procedure described in section statistics about these full automatons are presented in the last two columns in figure we did not create a full automaton for clieopp and because the full automatons for these program turn out to be large and unwieldy to specify instead we used the automatons in place of the full automatons in analysis which can cause potential 
unsoundness we evaluate our approach in three different contexts its effectiveness in detecting conformance violations in programs its usefulness in specializing programs and its ability to improve the precision of a standard dataflow analysis file format conformance checking as a first step in this experiment we manually identified the rejection points for each program this was actually a task because each program had its own idioms for rejecting files some programs wrote warnings messages into log files others used system routines for terminating the program while others used cobol keywords such as goback and stop run furthermore since not every instance of a warning output or termination is necessarily due to file format related issues we had to exercise care in selecting the instances that were prog file format conformance warnings name under acceptance over acceptance acctran dtap clieopp fig conformance checking results due to these issues we also manually added summary functions in our analysis for calls to certain system routines that terminate the program our summary functions treat these calls as returning a dataflow value for all file states thus simulating termination in this experiment we use cp constant propagation as the underlying analysis u for our lifted approach figure summarizes the results of this analysis for each program the second column captures the number instances of a file state of the automaton having a value at a rejection point these are basically the warnings the third column depicts the number of file states of the full automaton excluding the final states of the original automaton that reach the final point of the main procedure with a value these are basically the warnings the running time of the analysis was a few seconds or less on all programs except on this very large program the analysis took seconds discussion of results a noteworthy aspect of these results is that four of the eight programs namely acctran dtap and have been verified as having no errors in the case of clieop some of the warnings turned out to be true positives during manual examination in that the code contained programming errors that cause rejection of files we also manually examined one other program for which there were warnings although this program is a textbook program it follows a a complex idiom certain fields in certain record types in the input file format for this program are supposed to contain values that appear as primary keys in a sorted persistent table that is accessed by the program however the automaton that we created does not capture this constraint and is hence overapproximated this caused false warnings to be reported discussion of results as is clear from the table our implementation reports warnings on all the programs the numbers marked with a are potentially lower than they should really be because as mentioned in section we did not actually use a full automaton for these this program uses sequential lookup on the persistent table which is an idiom that our extension section does not support two programs we manually examined four of these programs and report our findings below warnings reported for two of the programs dtap and turned out genuine the input for dtap is similar to the one shown in figure a the difference is that it uses single state qh in place of qsh and qdh this program happens to accept files that contain batches in which a header record and a trailer record occur without any intervening item records which is a violation of the 
specification in the case of when we discussed the warnings with the maintainers of the program they agreed that some of them were genuine however at present there is another program that runs before in their standard workflow that ensures that files are not supplied to in the case of and the automatons were overapproximated there is one other challenging idiom in which also contributes to imprecision some of the routines that emit warnings emit fileconformance warnings when called from certain and other kinds of warnings when called from other since we currently do not have a scheme to mark rejection points we left these routines unmarked as a conservative gesture program specialization no program name acctran acctran dtap dtap dtap dtap clieopp clieopp criterion name deposit withdraw add change delete ddbank ddcust ctbank ctcust payments directdebit edit update form telex modified trancopy daccts maccts criterionspecific common nodes nodes fig specialization criteria and results the objective of this experiment is to evaluate the effectiveness of our approach in identifying program statements that are relevant to given criteria that are specified as specialization automatons in this experiment we used cp as the underlying analysis u we ran our tool multiple times on each program each time with a different specialization criterion that we identified which represents a meaningful functionality from the perspective for instance consider the program the input file to this program consists of a sequence of request records with each request being either to add an item to the inventory which is stored in a persistent table to change the details of an item in the inventory or to delete an item from the inventory a meaningful criterion for this program would be one that is concerned only about add requests similarly change and delete are meaningful criteria figure summarizes the results from this experiment each row in the figure corresponds to a programcriterion pair the third column in the figure indicates the mnemonic name that we have given to each of our criteria note that for the criterion trancopy specializes the program to process one of twelve kinds of input record types while we have done the specialization with all criteria for brevity we report only one of them in the figure trancopy results for any criterion the sum of the numbers in the fourth and fifth columns in the figure is the number of cfg nodes that were determined by our analysis as being relevant to the criterion were reached with a value under some file state with the specialization automaton for instance for acctrandeposit the number of relevant cfg nodes is out of a total of nodes in the program see figure the fifth column indicates the number of common nodes that were relevant to all of the criteria supplied while the fourth column indicates the number of nodes that were relevant to the corresponding individual criterion but are not common to all the criteria note that in the case of dtap we show commonality not across all four criteria but within two subgroups each of which contains two related criteria also in the case of the common nodes depicted are across all twelve criteria it is notable that in most of the programs the commonality among the statements that are relevant to the different criteria is high while statements that are specific to individual criteria are fewer in number our belief is that in a program comprehension setting the ability of a developer to separately view common code and code would let them 
appreciate in a better way the processing logic that underlies each of these criteria manual examination we manually examined the output of the tool to determine its precision we did this for all programs except and which had difficult as well as logic which made manual evaluation difficult to our surprise the tool was precise on every criterion for four of the remaining programs acctran clieop dtap and that is it did not fail to mark as unreachable any cfg node that was actually unreachable as per our human judgment during executions on input files that conformed to the given specialization automaton this is basically evidence that specialization automatons in conjunction with cp as the underlying analysis are a sufficiently precise mechanism to specialize programs the remaining one program is for which as discussed in section we have an input automaton although the specialized program does contain extra statements that should ideally be removed the result actually turns out to be precise relative to the given automaton precision improvement of existing analyses as discussed in section there are scenarios where one is interested in performing standard analyses on a program but restricted to paths that can be taken during runs on files only to evaluate this scenario we implemented two analyses one is a possibly uninitialized variables analysis whose abstract domain we call uninit wherein one wishes to locate references to variables that have either not been initialized or have been initialized using computations that in turn refer to possibly uninitialized variables the second is a reaching definitions analysis whose abstract domain we call rd we ran each of these two analyses in two modes a direct mode where the analysis is run and a lifted mode where the analysis is done by lifting it with a automaton in the lifted mode for uninit we used cp uninit as the underlying analysis u while for rd we used cp rd as the underlying analysis the cp component is required to enable as was illustrated in figure in the interest of space we summarize the results with uninit of all variable references in were labeled as uninitialized in the direct mode whereas only were labeled so in the lifted mode for dtap the analogous numbers are and in other programs the lifted mode performed only marginally better than the direct mode in the case of rd the total number of edges computed by the lifted mode were below those computed by the direct mode for dtap below for clieopp and below for in the other programs the reduction was marginal we do not have numbers for these experiments on the large program as the domains mentioned above do not yet scale to programs of these sizes on the other programs the direct analyses took anywhere from a few hundreds of a second to up to seconds while the lifted analyses took anywhere from a few tenths of a second to seconds we did a limited study of some programs where the lifted mode did not give significant benefit some of the causes of imprecision that we observed were array references and calls to external programs both of which we handle only conservatively these confounding factors in these programs could not be offset by the precision improvement afforded by the input automatons discussion in summary we are very encouraged by our experimental results except the two smaller programs acctran and our benchmark programs are either real or work on real formats and implement real specifications conformance checking and program specialization are two novel problems in whose context 
we have evaluated our tool the tool verified four programs as not rejecting any files and found genuine related errors in several other programs the tool was very precise in the program specialization context finally it enabled improvement in precision in the context of uninitialized variables or reaching definitions analysis on four of the eight programs related work we discuss related work broadly in several categories analysis of and programs there exists a body of literature of which the work of godefroid et al and saxena et al are representatives on testing of programs whose inputs are described by grammars or regular expressions via concolic execution their approaches are more suited for bug detection with high precision while our approach is aimed at conservative verification as well as program understanding and transformation tasks various approaches have been proposed in the literature to recover record types and file types from programs by program analysis these approaches complement ours by being potentially able to infer input automatons from programs in situations where file formats are not available a report by auguston shows the decidability of verifying certain kinds of assertions in programs program specialization blazy et al describe an approach to specialize fortran programs using constant propagation there is a significant body of literature on the technique of partial evaluation which is a sophisticated form of program specialization involving loop unrolling to arbitrary depths simplification of expressions etc these approaches typically support only criteria on fixed sized program inputs launchbury et al extend partial evaluation to allow criteria on data structures consel et al provide an interesting variant of partial evaluation wherein they propose an based framework to specialize functional programs with abstract values such as signs types and ranges our approach could potentially be framed as an instantiation of their approach with an lifted lattice and corresponding lifted transfer functions program slicing program slicing is widely applicable in software engineering tasks usually to locate the portion of a program that is relevant to a criterion the constrained variants of program slicing provide good precision in general at the cost of being potentially expensive existing approaches for constrained slicing do not specifically support constraints on the record sequences that may appear in input files of programs our lifted analysis which we described in section enables this sort of slicing typestates there is a rich body of literature in specifying and using type states with the seminal work being that of strom et al in the context of analyzing programs automatons have been used to capture the state of a file open closed error to our knowledge ours is the first work in this space to use automatons to encode properties of the prefix of records read from a file shape analysis shape analysis is a precise but technique for verifying shapes and other properties of data structures while at a high level a data file is similar to an list the operations used to traverse files and data structures are very different to our knowledge shape analysis has not been used in the literature to model the contents and states of files as they are being read in programs it would be an interesting topic of future work to explore the feasibility of such an approach conclusions and future work we presented in this paper a novel approach to apply any given abstract interpretation on a 
program that has an associated input the basically enables our approach to elide certain paths in the program that are infeasible as per the file format and hence enhance the precision and usefulness of the underlying analysis we have demonstrated the value of our approach using experiments especially in the context of two novel applications file format conformance checking and program specialization a key item of future work is to allow richer constraints on the data in the input file and persistent tables for instance general logical constraints constraints expressing sortedness would be useful in many settings to obtain enhanced precision and usefulness also we would like to investigate our techniques on domains other than batch programs to programs xmlprocessing programs and applications references auguston decidability of program verification can be achieved by replacing the equality predicate by the constructive one technical report new mexico state university blazy and facon sfac a tool for program comprehension by specialization in proc ieee workshop on program pages nov caballero yin liang and song polyglot automatic extraction of protocol message format using dynamic binary analysis in proc acm conf on computer and comm security ccs pages canfora cimitile and lucia conditioned program slicing information and software technology apache common log format http html brain drain where cobol systems go from here computerworld may http consel hornof marlet muller thibault volanschi lawall and partial evaluation for software engineering acm comput consel and khoo parameterized partial evaluation acm transactions on programming languages and systems toplas cousot and cousot abstract interpretation a unified lattice model for static analysis of programs by construction or approximation of fixpoints in proc acm symp on principles of programming languages popl pages cui peinado chen wang and tupni automatic reverse engineering of input formats in proc acm conf on comp and comm security ccs pages das lerner and seigle esp program verification in polynomial time in proc conf prog langs design and impl pldi pages david krop clieop client orders file description http devaki and kanade static analysis for checking data format compatibility of programs in proc foundations of softw tech and theor comput science fsttcs pages driscoll burton and reps checking conformance of a producer and a consumer in proc foundations of softw engg fse pages retail payment system rps deutsche bundesbank http j field ramalingam and tip parametric program slicing in proc sym on principles of prog langs popl pages fischer jhala and majumdar joining dataflow with predicates in acm sigsoft int symp on foundations of softw engg fse pages fisher and walker the pads project an overview in proc int conf on database theory icdt pages godefroid kiezun and levin whitebox fuzzing in proc acm conf on prog lang design and impl pldi pages harman hierons fox danicic and howroyd conditioned slicing in proc int conf on software maintenance icsm pages introduction to standards http jones gomard and sestoft partial evaluation and automatic program generation prentice hall international khare saraswat and kumar static program analysis of large embedded code base an experience in proc india software engg conf isec komondoor and ramalingam recovering data models via guarded dependences in working conf on reverse engg wcre pages launchbury project factorisations in partial evaluation volume cambridge university press a mine the octagon 
abstract domain higher order symbol march murach a prince and menendez how to work with sequential files in murach s structured cobol chapter murach sagiv reps and wilhelm parametric shape analysis via logic in proc symp on principles of prog langs popl pages saxena poosankam mccamant and song symbolic execution on binary programs in proc int symposium on softw testing and analysis issta pages sharir and pnueli two approaches to interprocedural data flow analysis in muchnick and jones editors program flow analysis theory and application prentice hall professional technical reference sinha ramalingam and komondoor parametric process model inference in working conf on reverse engg wcre pages strom and yemini typestate a programming language concept for enhancing software reliability software engineering ieee transactions on se the united nations rules for electronic data interchange for administration commerce and transport draft directory http xi dependent types in practical programming phd thesis university xu qian zhang wu and chen a brief survey of program slicing sigsoft softw eng notes mar a appendix precision of our approach we discuss here the precision of our approach which was alluded to in section in more detail an is a function from program points in the given program p to dataflow values from the lattice a q l is a function from program points to functions in the domain q l where q is the set of file states in an input automaton given a q l solution g and an f for the same program p we say that g is more precise than f iff at each program point p g p q vl f p note that we are actually using more precise as shorthand for equally precise or more precise let be the obtained for program p by directly using the underlying analysis u our first key result is as follows theorem if s is any input automaton for p with set of files states q then the q l solution computed by our approach using s and using the underlying analysis u is more precise than solution computed by u directly intuitively the above theorem captures the fact that the that results from tracking different dataflow values from lattice l for different file states causes increase in precision note that the theorem above does not touch upon soundness in order to ensure soundness s would additionally need to accept all files or all files depending on the notion of soundness that is sought a different question that naturally arises is if there are multiple candidate automatons that all accept the same set of files will they all give equally precise results when used as part of our analysis the answer in general turns out to be no it can also be shown that if an input automaton accepts a smaller set of files than another automaton then the first automaton need not necessarily give more precise results than the second one on all programs in fact precision is linked both to the set of files accepted as well as to the structure of the automatons themselves in order to formalize the above intuition we first define formally the notion of a precision ordering on different solutions for a program p using different input automatons a l solution is said to be more precise than another l solution iff at each program point p for each file state there exists a file state such that p vl p we then define a notion of refinement among input automatons for the same program p we say that automaton is a refinement of automaton iff there exists a mapping function m such that m and for each transition in labeled with some symbol there exists one or 
more transitions from m to m in furthermore if is the label on any of these transitions then either and are both eof or and are both types and s constraint implies s constraint if an input automaton is a refinement of an input automaton then the following two properties can be shown to hold each accepting state of is mapped by m to an accepting state of and the set of files accepted by is a subset of files accepted by now our main result on precision ordering of input automatons is as follows theorem for any program p and for any given underlying analysis u if an input automaton for p is a refinement of an input automaton for p then the solution computed by our approach using and u is more precise than the solution computed by our approach using and u an important take away from the above theorem is when there is a choice between two input automatons that accept the same set of files two different automatons accepting the same set of files if one of them is a refinement of the other then the refined automaton will give more precision than one of which it is a refinement checking errors we discuss here a procedure to extend a given automaton s q t eof qs qe into a full automaton we first create a new type named na none of the above and associate with it a constraint that lets it cover all records that are not covered by any of the types in the original set of types t we also add a new file state to the automaton which we denote as qy in this discussion let t t na and q qy for every state q in and for every type t in t if there is no transition labeled t out of q we add a transition from q to qy labeled finally we add one more new file state qx to the automaton make it a final state add eof transitions from all states to this state the intuition behind this construction is that qx accepts all files while qy accepts all record sequences that are not prefixes of files
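One plausible reading of the completion procedure just described is sketched below in Python, reusing the automaton encoding of the earlier sketches: a catch-all type na, a new state qy that absorbs every (state, type) combination missing from the original automaton, and a new final state qx reachable via eof so that every record-sequence prefix can be accepted. Restricting the added transitions to non-final source states is an interpretation made here, since the definition of an input automaton gives final states no outgoing transitions.

    from typing import Dict, Set, Tuple

    def to_full_automaton(states: Set[str],
                          types: Set[str],
                          transitions: Dict[Tuple[str, str], Set[str]],
                          finals: Set[str]):
        states = set(states) | {"qy"}
        types = set(types) | {"na"}           # "na" covers every record matched by no original type
        transitions = {k: set(v) for k, v in transitions.items()}
        finals = set(finals)
        # route every missing (non-final state, type) combination to the new state qy
        for q in states - finals:
            for t in types:
                if not transitions.get((q, t)):
                    transitions.setdefault((q, t), set()).add("qy")
        # a fresh final state qx, reachable from every non-final state via eof,
        # so that every possible record-sequence prefix is accepted
        states.add("qx")
        finals.add("qx")
        for q in states - finals:
            transitions.setdefault((q, "eof"), set()).add("qx")
        return states, types, transitions, finals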
6
achievability performance bounds for integer-forcing source coding elad domanovitz and uri erez dec abstract integer-forcing if source coding has been proposed as a method for compression of distributed correlated gaussian sources in this scheme each encoder quantizes its observation using the same fine lattice and reduces the result modulo a coarse lattice rather than directly recovering the individual quantized signals the decoder first recovers a set of judiciously chosen integer linear combinations of the quantized signals and then inverts it it has been observed that the method works very well for most but not all source covariance matrices the present work quantifies the measure of bad covariance matrices by studying the probability that if source coding fails as a function of the allocated rate where the probability is with respect to a random orthonormal transformation that is applied to the sources prior to quantization for the important case where the signals to be compressed correspond to the antenna inputs of relays in an rayleigh fading environment this orthonormal transformation can be viewed as being performed by nature hence the results provide performance guarantees for distributed source coding via integer forcing in this scenario i introduction if source coding proposed in is a scheme for distributed lossy compression of correlated gaussian sources under a minimum mean squared error distortion measure similar to its channel coding counterpart in this scheme all encoders use the same nested lattice codebook each encoder quantizes its observation using the fine lattice as a quantizer and reduces the result modulo the coarse lattice which plays the role of binning rather than directly recovering the individual quantized signals the decoder first recovers a set of judiciously chosen integer linear combinations of the quantized signals and then inverts it an appealing feature of if source coding not shared by previously proposed practical coding methods for the distributed source coding problem is its inherent symmetry supporting equal distortion and quantization rates a potential application of if source coding is to distributed compression of signals received at several relays as suggested in and further explored in similar to if channel coding if source coding works well for most but not all gaussian vector sources following the approach of in the present work we quantify the measure of bad source covariance matrices by considering a randomized version of if source coding where a random unitary transformation is applied to the sources prior to quantization while in general such a transformation implies joint processing at the encoders we note that in some natural scenarios including that of distributed compression of signals received at relays in an rayleigh fading environment the random transformation is actually performed by nature in fact it was already empirically observed in that if source coding performs very well in the latter scenario the rest of this paper is organized as follows section ii formulates the problem of distributed compression of gaussian sources in a compound vector source setting and provides some relevant background on if source coding section iii describes randomly precoded if source coding and its empirical performance section iv derives upper bounds on the probability of failure of if as a function of the excess rate in section v deterministic linear precoding is considered a bound on the excess rate needed is derived for any number of sources with any correlation matrix when spacetime
precoding derived from determinant codes is used further we show that this bound can be significantly tightened for the case of uncorrelated sources concluding remarks appear in section ii p roblem f ormulation and background in this section we provide the problem formulation and briefly recall the achievable rates of if source coding as developed in the work of domanovitz and erez was supported in part by the israel science foundation under grant no and by the heron consortium via the israel ministry of economy and industry domanovitz and erez are with the department of electrical engineering systems tel aviv university tel aviv israel email domanovi uri this follows since the left and right singular vector matrices of an gaussian matrix k are equal to the eigenvector matrices of the wishart ensembles kkt and kt k respectively the latter are known to be uniformly haar distributed see chapter in a distributed compression of gaussian sources we start by recalling the classical problem of distributed lossy compression of jointly gaussian real random variables under a quadratic distortion measure specifically we consider a distributed source coding setting with k encoding terminals and one decoder each of the k encoders has access to a vector of n realizations of the random variable xk k the random vector x xk corresponding to the different sources is assumed to be gaussian with zero mean and covariance matrix kxx e xxt each encoder maps its observation xk to an index using the encoding function ek rn and sends the index to the decoder the decoder is equipped with k decoding functions dk rn where k upon receiving k indices one from each terminal it generates the estimates dk ek xk k a vector rk dk achievable if there exist encoding functions ek and decoding functions dk such that e kxk dk for all k we focus on the symmetric case where dk d and rk where we denote the sum rate by the best known achievable scheme for this symmetric setting is that of berger and tung for which the following in general suboptimal sum rate is achievable k x rk log det i kxx d rbt as shown in rbt is a lower bound on the achievable rate of if source coding we will refer to rbt as the benchmark to simplify notation we note that d can be absorbed into kxx hence without loss of generality we assume throughout that d b compound source model and scheme outage formulation consider distributed lossy compression of a vector of gaussian sources x n kxx we define the following compound class of gaussian sources having the same value of rbt via their covariance matrix k rbt kxx log det i kxx rbt we quantify the measure of the set of source covariance matrices by considering outage events those events sources where integer forcing fails to achieve the desired level of distortion even though the rate exceeds rbt more broadly for a given quantization scheme denote the necessary rate to achieve d for a given covariance matrix kxx as rscheme kxx then given a target rate r rbt and a covariance matrix kxx k rbt a scheme outage occurs when rscheme kxx to quantify the measure of bad covariance matrices we follow and apply a random orthonormal precoding matrix to the vector of source samples prior to encoding as mentioned above this amounts to joint processing of the samples and hence the problem is no longer distributed in general nonetheless as in the scenario described in section in certain statistical settings this precoding operation is redundant as it can be viewed as being performed by nature applying a precoding matrix to the 
source vector we obtain a transformed source vector px pkxx pt with covariance matrix it follows that the achievable rate of a quantization scheme for the precoded source is rscheme when p is drawn at random the latter rate is also random the wc scheme outage probability is defined in turn as wc pout scheme rbt the time axis will be suppressed in the sequel and vector notation will be reserved to describe samples taken from different sources pr rscheme rbt sup kxx rbt where the probability is over the ensemble of unitary precoding matrices considered and is the gap to the benchmark in the sequel we quantify the tradeoff between the quantization rate r or equivalently between the excess rate wc r rbt and the outage probability pout if rbt as defined in source coding in a manner similar to if equalization for channel coding if can be applied to the problem of distributed lossy compression the approach is based on standard quantization followed by binning however in the if framework the decoder first uses the bin indices for recovering linear combinations with integer coefficients of the quantized signals and only then recovers the quantized signals themselves for our purposes it suffices to state only the achievable rates of if source coding we refer the reader to for the derivation and proofs we recall theorem from stating that for any covariance matrix kxx if source coding can achieve any sum rate satisfying r rif kxx k log min max atk i kxx ak k rif m kxx a atk i kxx ak det where atk is the kth row of the integer matrix a we denote by the effective rate that can be achieved at the m th equation the matrix i kxx is symmetric and positive definite and therefore it admits a cholesky decomposition i kxx fft k rif kxx log min max kft ak k with this notation we have det t denote by f the lattice spanned by the matrix ft ft ft a a zk then the problem of finding the optimal matrix a is equivalent to finding the shortest set of k linearly independent vectors in ft denoting the kth successive minimum of the lattice by ft we have k log ft rif kxx just as successive interference cancellation significantly improves the achievable rate of if equalization in channel coding an analogous scheme can be implemented in the case of if source coding specifically for a given integer matrix a let l be defined by the cholesky decomposition a i kxx at llt and denote the mth element of the diagonal of l by m then as shown in and the achievable rate of successive if source coding which we denote as for this choice of a is given by kxx a k max k m kxx where m kxx a finally by optimizing over the choice of a we obtain kxx min det log m kxx a while the gap between rif and even more so and rbt is quite small for most covariance matrices it can nevertheless be arbitrarily large we next quantify the measure of bad covariance matrices by considering if source coding iii recoded if s ource c oding and its e mpirical p erformance recalling and with a slight abuse of notation the rate of if source coding for a given precoding matrix p is denoted by rif kxx p rif pkxx pt k log min max atk i pkxx pt ak k i kxx udut i pkxx pt pudut pt det since kxx is symmetric it allows orthonormal diagonalization when unitary precoding is applied we have to quantify the measure of bad sources we consider precoding matrices that are uniformly haar distributed over the group of orthonormal matrices such a matrix ensemble is referred to as the circular real ensemble cre and is defined by the unique distribution on orthonormal matrices that is 
invariant under left and right orthonornal transformations that is given a random matrix p drawn from the cre for any orthonormal matrix both and are equal in distribution to since put is equal in distribution to p for cre precoding for the sake of computing outage probabilities we may simply assume that ut and also u is drawn from the cre for a specific choice of integer vector ak we define again with a slight abuse of notation rif d u ak log atk udut ak log ut ak and correspondingly rif d u k log min max ut ak k det let be the lattice spanned by g d t u then may be rewritten as k rif d u log let us denote the set of all diagonal matrices having the same value of rbt d rbt k d det d we may thus rewrite the outage probability of if source coding defined in as wc pout if rbt sup pr rif d u rbt rbt k where the probability is with respect to the random selection of u that is drawn from the cre to illustrate the performance of if we present its empirical performance for the case of a twodimensional compound gaussian source vector where the outage probability is computed via simulation figure depicts the results for different values of rbt rather than plotting the outage probability its complement is depicted we plot the probability that the rate of if falls below rbt as can be seen from the figure the wc outage probability as a function of converges to a limiting curve as rbt increases figure depicts the results for the single high rate rbt the required compression rate required to support a given outage probability constraint is marked for several outage probabilities we observe that for outage probability a gap of bits or bits per source is required out if bt r c empirical outage empirical outage empirical outage empirical outage empirical outage empirical outage empirical outage empirical outage fig empirical results for the complement of the outage probability of if source coding when applied to a compound gaussian source vector as a function of for various values of rbt for outage probability a gap of bits or bits per source is required for outage probability a gap of bits or bits per source is required iv u pper b ounds on the o utage p robability for recoded i nteger orcing s ource c oding in this section we develop achievability bounds for if source coding as the derivation is very much along the lines of the results for the analogous problem in channel coding as developed in we refer to results from the latter in many points the next lemma provides an upper bound on the outage probability of precoded if source coding as a function of rbt and the rate gap as well as the number of sources and dmax defined below denote r k a k a z kak lemma for any k gaussian sources such that d d rbt k and for u drawn from the cre we have pr rif d u rbt x k k rbt dmax k where dmax max di i the set a k is defined in and where k is defined in below i wc if rbt c empirical outage fig zoom in on the empirical outage probability for a compound gaussian source vector with rbt proof let denote the dual lattice of and note that it is spanned by the matrix gt ut the successive minima of and are related by theorem in k where max i k with denoting hermite s constant the tightest known bound for hermite s constant as derived in is k since this is an increasing function of k it follows that is smaller than the of combining the latter with the exact values of the hermite constant for dimensions for which it is known we define k k k otherwise therefore we may bound the achievable rates of if via the dual lattice as 
follows k log k rif d u hence we have pr rif d u rbt k pr rbt log k pr k k rbt denote k k rbt we wish to bound or equivalently we wish to bound p pr pr for a given matrix d d rbt k note that the event is equivalent to the event p ut applying the union bound yields p pr note that whenever dmax we have x p pr ut p pr ut therefore substituting in the set of relevant vectors a is n o p a k a zk it follows from and that p pr x k p pr ut ak the rest of the proof follows the footsteps of lemma in and is given in appendix a while lemma provides an explicit bound on the outage probability in order to calculate it one needs to go over all diagonal matrices in d rbt k and for each such diagonal matrix sum over all the relevant integer vectors in a k hence the bound can be evaluated only for moderate compression rates and for a small number of sources the following theorem that may be viewed as the counterpart of theorem in provides a looser yet very simple bound another advantage for this bound is that it does not depend on the achievable rate theorem for any k sources such that d d rbt k and for u drawn from the cre we have pr rif d u c c k where k c k k k cmax k k and cmax k k k note that c k is a constant that depends only on the number of sources proof see appendix similarly to the case of if channel coding section in analyzing theorem reveals that there are two main sources for looseness that may be further tightened union bound while there is an inherent loss in the union bound in fact some terms in the summation may be completely specifically using corollary in the set a k appearing in the summation in may be replaced by the smaller set b k where b d k k kak r and c ca zk d dual lattice bounding via the dual lattice induces a loss reflected in this may be circumvented for the case of a source vector by using as accomplished in lemma and theorem which we present next lemma for a gaussian source vector such that d d rbt k and for u drawn from the cre we have pr d u rbt x dmin k min where dmin min di i and a dmin k is defined in i theorem for a gaussian source vector such that d d rbt k and for u drawn from the cre we have pr d u c k k where proof see appendix figure depicts the bounds derived as well as results of a monte carlo evaluation of for the case of a gaussian and cre when calculating the empirical curves and the lemmas we assumed high quantization rates rbt the lemmas were calculated by going over a grid of values of and satisfying a application distributed compression for cloud radio access networks since we described if source coding as well as the precoding over the reals we outline the application of if source coding for the cloud radio access network scenario assuming a real channel model we then comment on the adaptation of the scheme to the more realistic scenario of a complex channel consider the scenario depicted in figure where m transmitters send their data that is modeled as an gaussian source vector over a k m mimo broadcast channel h the data is received at k receivers relays that wish to compress and forward it for processing decoding at a central node via noiseless bit pipes as we wish to minimize the distortion at the central node subject to the rate constraints this is a distributed lossy source coding problem see depiction in figure here the covariance matrix of the received signals at the relays is given by kxx snrhht i we note that we can absorb the snr into the channel and hence we set snr so that kxx hht i we further assume that the entries of the channel matrix h 
are gaussian j hi j n as mentioned in the introduction the svd of this matrix h similar to the derivation in section in a simple factor of can be deduced regardless of the rate and number of sources by noting that a and result in the same outcome and hence there is no need to account for both cases the bounds are computed after applying a factor of to the lemmas in accordance to footnote again rather than plotting the outage probability we plot its complement out if bt r c empirical outage of empirical outage of thm lemma thm lemma thm thm fig upper bounds on the outage probability of if source coding for various values of source dimension optical distribution network managing control unit fig cloud radio access network communication scenario two relays compress and forward the correlated signals they receive from several users satisfies that and belong to the cre we may therefore express the random covariance matrix as kxx i i t where is drawn from the cre it follows that the precoding matrix p is redundant as we assumed that p is also drawn from the cre thus the analysis above holds also for the considered scenario specifically assuming the encoders are subject to an equal rate constraint then for a given distortion level the relation between the compression rate of if source coding and the guaranteed outage probability for meeting the prescribed distortion is bounded using theorem above we note that just as precoded if channel coding can be applied to complex channels as described in so can precoded if source coding be extended to complex gaussian sources in describing an outage event in this case we assume that the precoding matrix is drawn from the circular unitary ensemble cue the bounds derived above replacing k with in all derivations for the relation between the compression rate of if source coding and the outage probability hold for the scenario over complex gaussian channels h where the cue precoding can be viewed as been performed by nature p erformance g uarantees for i nteger orcing s ource c oding with d eterministic p recoding in this section we consider the performance of if source coding when used in conjunction with judiciously chosen deterministic precoding performance will be measured in a stricter sense than in previous sections namely no outage is allowed achieving the goal of no outage for the case of general gaussian sources requires performing precoding whereas for the case of parallel independent sources precoding still suffices we note that while doing away with outage events is desirable it does come at a price namely the precoding assumed in this section requires joint processing of the different sources prior to quantization and is thus precluded in a distributed setting on the other hand the precoded if scheme considered in this section has several advantages with respect to traditional source coding of correlated sources namely the traditional approach requires utilizing the a statistical characterization of the source at the encoder side via a prediction filter or via applying a transformation such as dct or dft along with bit loading in contrast if as considered in this section is applied after applying a universal transformation a transformation that is independent of the source statistics in addition all samples are quantized at the same bit rate knowledge of the statistics of the source needs of course be utilized but only at the decoder side a propertry that may be advatageous in certain applications this is similar to the case of compression but 
whereas the latter requires in general the use of bit allocation unless the source is stationary if source coding does not a additive bound for general sources similar to the case of channel coding we can derive a additive bound for the gap to the benchmark achieving this guaranteed performance requires joint algebraic number theoretic based precoding at the encoders the following theorem is due to or ordentlich theorem ordentlich for any k sources with covariance matrix kxx and benchmark rbt the excess rate with respect to the benchmark normalized per the number of used of if source coding with an nvd precoding matrix with minimum determinant is bounded by rif rbt log k log proof see appendix we note that similarly to the case of if for channel coding the gap to the benchmark is large and thus it has limited applicability uncorrelated sources for the special case of uncorrelated gaussian sources much tighter bounds in comparison to theorem on the quantization rate of if for a given distortion level may be obtained first precoding may be replaced with precoding over space only this allows to obtain a tighter counterpart to theorem as derived in section of we next derive yet tighter performance guarantees also following ideas developed in by numerically evaluating the performance of if source coding over a densely quantized set of source diagonal covariance matrices belonging to the compound class and then bounding the excess rate to the evaluated ones for any possible source vector in the compound class in the case of uncorrelated sources the covariance matrix kxx is diagonal hence becomes kxx s where we denote s diag s the compound set of channels may be parameterized by k x s rbt s log rbt we note that we may associate with each diagonal element a rate corresponding to an individual source ri log thus the compound class of sources may equivalently be represented by the set of rates k x ri rbt r rbt rk rk we define a quantized set as follows the interval rbt is divided into n each of length rbt thus the resolution is determined by the parameter the quantized belong to the grid n k rbt rk rbt k x o rbt we may similarly define the quantized set s rbt of diagonal matrices such that the diagonal entries satisfy i k where rbt theorem for any gaussian vector of independent sources with covariance matrix s such that s s rbt the rate of if source coding with a given precoding matrix p is upper bound by rif s p max rif p k log rbt where rbt k rbt k proof assume we have a covariance matrix s in the compound class hence its associated satisfies rk r rbt assume without loss of generality that rk by the rate of if source coding associated with a specific integer linear combination vector a is rif s p a log at p i s pt a denote v at we will need the following two lemmas whose proofs appear in appendices e and f respectively lemma for any diagonal covariance matrix s associated with a rk r rbt there exists a rbt such that diagonal covariance matrix associated with a rk si for i k where is defined in lemma consider a gaussian vector with a diagonal covariance matrix s and let a be an integer vector then for any we have rif s p a log rif s p a using lemma and denoting by the covariance matrix associated with s and whose existence is guaranteed by the lemma it follows from that k x rif s p a log vi si k x vi si log rif s p a using lemma we further have that rif p a log rif p a combining and we obtain rif s p a rif p a log rif p a recalling we have rif p k log min max rif p ak k det denoting a as the 
optimal integer matrix for the quantized channel which is not necessarily the optimal matrix for s it follows that rif s p rif s p now assuming atk are the rows of by we have rif s p k max rif s p ak k k max k log rif p ak k log rif p rif s p rif p k log thus we conclude that and therefore for any s s rbt we have rif s p max rif p k log this concludes the proof of the theorem as an example for the achievable performance show fig gives the empirical performance for two and three real sources that is achieved when using if source coding and a fixed precoding matrix over the grid rbt for the precoding matrix was taken from the explicit precoding matrix used for two sources is p where and for three sources p where rif rather than plotting the gap from the benchmark we plot the efficiency r the ratio of the bt rate of if source coding and rbt we also plot the upper bound on rif s p given by theorem cyclo precoding if bound on the performance cyclo precoding if bound on the performance rbt fig empirical and guaranteed upper bound efficiency of if source coding for two and three uncorrelated gaussian sources when using the precoding matrices and given in respectively and taking for the calculation of theorem a ppendix a p roof of t heorem following the footsteps lemma in and adopting the same geometric interpretation described there we may interpret pr ut ak as the ratio of the surface area of an ellipsoid that is inside a ball with radius and the surface kak area of the entire ellipsoid the axes of this ellipsoid are defined by xi di denote the vector okak as a vector drawn from the cre with norm kak using lemma in and since we assume that ut is drawn from the cre we have that the right hand side of is equal to x p pr okak k x k capell xk l xk where capell xk k p capell and k kakk x di l xk q k kak di kak dmax substituting and in we obtain x k capell xk l xk capell l k x k d r max k bt x recalling see that k k rbt we obtain pr d u c x k k rbt dmax k a ppendix b p roof of t heorem to establish theorem we follow the footsteps of the proof of theorem in to obtain x p pr ut ak k x k k dmax where a k and are defined in and respectively noting that n jp k o a k a kak the summation in can be bounded as x x k k dmax we apply lemma in a bound for the number of integer vectors contained in a ball of a given radius using this bound while noting that when kak there are exactly k integer vectors the right hand side of may be further bounded as rbt x k vol bk dmax k k max k k where we note that trivially holds when the left hand side of evaluates to zero since a k is the empty set in this case and hence the right hand side of can further be rewritten as k k rbt x k vol bk k z dmax i x max k k k k k z iii we search for and independent of k such that k k z ii k k k for k k k and k k for k since it will then follow that ii iii we note that since again assuming k k k max x max k jp max it will thus follow that max x k i ii iii max x k max jp k k max an explicit derivation for and appears in appendix b of where should be replaced with k from which we obtain k k k k k in is also shown in appendix of that for k holds for k we observe that this is so since k implies that and hence indeed for k we have k k k k k k k k k k k recalling now and denoting cmax it follows that i ii iii k jp k cmax using and substituting the volume of a unit ball vol bk it follows that right hand side of is upper bounded by rbt p k k cmax dmax p k k k cmax finally we substitute as defined in into to obtain x p pr ut ak k k k k rbt k cmax rbt 
k cmax k cmax k k k k c k where c k is as defined in a ppendix c p roof of t heorem we first recall a theorem of minkowski theorem that upper bounds the product of the successive minima theorem minkowski for any lattice ft that is spanned by a full rank k k matrix ft k y ft k k det ft to prove theorem we further need the following two lemmas lemma for a gaussian source vector with covariance matrix kxx k rbt and for any integer matrix a the of satisfies k x m kxx a rbt log det a where m kxx a is defined in proof k x m kxx a k x log m log k y m log det llt log det a i kxx at d rbt log det a theorem in shows that for successive if used for channel coding there is no loss in terms of achievable rate in restricting a to the class of unimodular matrices we note that same claim holds also in our framework that of successive source coding by replacing g the matrix spanning the lattice which was defined in theorem as i snrhht ggt with f as defined in and noting that the optimal a can be expressed in both cases as aopt min max k k det where k are the diagonal elements of the corresponding l matrix derived from the cholesky decomposition in each case pk having established that the optimal a is unimodular it now follows that m kxx aopt suc rbt we are now ready to prove the following lemma that is analogous to theorem in lemma for a gaussian source vector with covariance matrix kxx k rbt and for the optimal integer matrix a the of if is upper bounded as k x rif m kxx rbt k log k where rm if kxx is the rate of the mth equation corresponding to the mth row of a as defined in proof k x k x log ft k y t log f log k k det f k rbt log k where the inequality is due to theorem minkowski s theorem rif m kxx now for the case of two sources we have by lemma kxx kxx rbt kxx rbt kxx kxx kxx rbt or equivalently we further have by lemma that we note that the optimal integer matrix a used for if in is in general different than the optimal matrix a used for in nonetheless when applying one decodes first the equation with the lowest rate since for this equation suc has no effect it follows that the first row of a is the same in both cases and hence kxx kxx since source is decoded first it follows that and hence kxx rbt therefore kxx max kxx kxx rbt max rbt kxx max rbt kxx henceforth we analyze the outage for rbt and target rates that are no smaller than rbt so that the inequality kxx rbt is satisfied thus we consider excess rate values satisfying our goal is to bound pr kxx rbt pr kxx rbt rbt pr kxx r bt t log f pr pr ft let we wish to bound or equivalently p pr ft pr ft for a given matrix d corresponding to kxx via the relation note that the event is equivalent to the event p ut applying the union bound yields p pr x p pr ut using the same derivation as in lemma in we get pr d u rbt x k dmin k min since we are analyzing the case of two sources we have pr d u rbt x dmin k min applying a similar argument as appears in appendix b as part of the proof of theorem and noting that k implies that cmax we get pr d u rbt rbt finally substituting as defined in we obtain pr d u rbt rbt a ppendix d a dditive w ase b ound for nvd s pace ime p recoded s ources combining precoding and integer forcing in the context of channel coding was suggested in as we next briefly recall we then present the necessary modifications for the case of source coding we derive below an additive bound using a unitary precoding matrix satisfying a determinant nvd property as the theory of nvd codes has been developed over the complex field it will prove 
convenient for us to employ complex precoding matrices to this end we may assume that we stack samples from two time slots where the samples stacked at the first time slot represent the real part of a complex number and the samples stacked at the second time slot represent the imaginary part of a complex number hence we have kxx where is the kronecker product we note that the benchmark of this stacked source vector is rbt kxx next in order to allow precoding we stack t times the k complex outputs of the k sources and let denote the effective source vector its corrlation matrix which we refer to as the effective covariance matrix takes the form k it we assume a precoding matrix that in principle can be either deterministic or random is applied to the effective source vector we analyze performance for the case where the precoding matrix pst c is deterministic specifically a precoding matrix induced by a perefect block code operating on the stacked source having covariance matrix as given in an explanation on how to extract the precoding matrix from a code can be found in section iv in we denote the corresponding precoding matrix over the reals as pst we denote t i pst kph st f f as we assume that the precoding matrix is unitary the benchmark normalized by the total number of time extensions used remains unchanged log det i pst kph log det f t st rbt as noted above we assume that the generating matrix of a perfect code is employed as a precoding matrix a code is called perfect if it is full rate it satisfies the determinant nvd condition the code s generating matrix is unitary let denote the minimal determinant of this code such codes further use the minimal number of time extensions possible t thus we have a total of k stacked complex samples subsisting as the dimension number of real samples jointly processsed in the rate of if source coding for the samples is given by rif k pst log f t to bound f t we note that for every lattice we have f t f t y f t using minkowski s theorem theorem in appendix c it follows that f t det f t f t hence the rate of if source coding normalized by the number of time extensions can be bounded as rif k pst log f t k t log det f f t k k log det f t log f t k log k log rbt k log f t we next use the results derived in for channel coding using nvd precoding we note that since the covariance matrix is positive it may be written as kxx hht the covariance matrix of the stacked source vector may be written as where we may take there are many such choices of h and any such choice corresponds to a channel matrix that can be viewed as the real representation of a complex channel matrix which in the present case is real has no imaginary part in the context of the effective covariance matrix can similarly be rewritten as k hht where h it using the channel coding terminology of we further define the minimum distance at the receiver dmin h l as dmin h l min l khak where qam l l l i l l setting snr and for which is the real representation of hc lemma in states that h min at i ph st h hpst a min snrdmin hpst l using corollary in we get hh hpst a min at i ph st h cwi k k l l min min cwi k k min min cwi k min where cwi log det i hh h is the mutual information of since cwi is the rate of a real matrix resulting from a k k complex matrix it equals rbt defined in hence we obtain min at i ph hh hph st st min which in turn yields f t min at i pst hht ptst a k min finally plugging the bound into we arrive at rif k pst rbt k log k log k k log rbt log k log a ppendix e p roof of l emma a 
gaussian source component with a specific rate ri can be transformed to a different gaussian source with rate by appropriate scaling specifically scaling each source component i by s results in parallel uncorrelated sources xi with variances by we therefore have log log s log i log log we associate with any rate tuple rm r rbt a rate tuple rm rbt according to the following transformation ri i k ri x ri rk rk for i k we have rk rk k it follows that the scaling factor needed to achieve rk is bounded by rk rbt k rbt k where follows since it is readily verified that the function denoting it follows from that rbt k is monotonically increasing in x for x and a rbt k now for i k we have by that ri hence for such i it trivially holds that since thus the lemma follows by observing that from it follows that as well a ppendix f p roof of l emma recalling we note that can be written as k x rif s p a log therefore when scaling the gaussian input vector by a factor of we have k x rif s p a log vi si k x log log rif s p a log rif s p a r eferences ordentlich and erez source coding ieee transactions on information theory vol no pp aguerri and guillaud integer forcing conversion for massive mimo systems in signals systems and computers asilomar conference on ieee pp domanovitz and erez outage behavior of integer forcing with random unitary ieee transactions on information theory vol pp no pp edelman and rao random matrix theory cambridge university press vol tung multiterminal source coding thesis information theory ieee transactions on vol no pp november fischler universal precoding for parallel gaussian channels master s thesis tel aviv university tel aviv online available https he and nazer source coding successive cancellation and duality in information theory isit ieee international symposium on ieee pp mehta random matrices and the statistical theory of energy level academic press lagarias lenstra and schnorr bases and successive minima of a lattice and its reciprocal lattice combinatorica vol no pp blichfeldt the minimum value of quadratic forms and the closest packing of spheres mathematische annalen vol no pp ordentlich private communication oggier and viterbo algebraic number theory and code design for rayleigh fading channels foundations and trends in communications and information theory vol no pp ordentlich and erez a simple proof for the existence of good pairs of nested lattices ieee transactions on information theory vol no pp micciancio and goldwasser complexity of lattice problems a cryptographic perspective springer science business media vol ordentlich erez and nazer successive and its optimality corr vol online available http the approximate sum capacity of the symmetric gaussian interference channel ieee transactions on information theory vol no pp domanovitz and erez combining block modulation with integer forcing receivers in electrical electronics engineers in israel ieeei ieee convention of nov pp ordentlich and erez precoded universally achieves the mimo capacity to within a constant gap information theory ieee transactions on vol no pp jan oggier rekaya belfiore and viterbo perfect block codes ieee transactions on information theory vol no pp elia kumar pawar kumar and feng lu explicit space time codes achieving the diversity multiplexing gain tradeoff information theory ieee transactions on vol no pp sept
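As a small numerical companion to the rate expressions recalled in section ii, the following sketch computes the benchmark rate rbt and the integer-forcing sum rate rif for a toy covariance matrix. It is an illustrative sketch only: it assumes base-2 logarithms, distortion d = 1 absorbed into kxx as in the paper, and a brute-force truncation amax of the integer search over a, which is a shortcut for small dimensions rather than part of the scheme (a lattice-reduction routine would be used in practice).

```python
import itertools

import numpy as np

def rate_bt(Kxx):
    """Berger-Tung benchmark sum rate (1/2) log2 det(I + Kxx), distortion absorbed into Kxx."""
    return 0.5 * np.log2(np.linalg.det(np.eye(len(Kxx)) + Kxx))

def rate_if(Kxx, amax=4):
    """Integer-forcing sum rate (K/2) log2 of the squared K-th successive minimum of the
    lattice spanned by F^T, found by brute force over integer vectors with entries in
    [-amax, amax]; amax is an illustrative truncation, not part of the scheme."""
    K = len(Kxx)
    F = np.linalg.cholesky(np.eye(K) + Kxx)          # I + Kxx = F F^T

    def qnorm(a):
        v = F.T @ a                                  # ||F^T a||^2 = a^T (I + Kxx) a
        return float(v @ v)

    cands = [np.array(a) for a in itertools.product(range(-amax, amax + 1), repeat=K)
             if any(a)]
    cands.sort(key=qnorm)
    # greedily collect the K shortest linearly independent lattice vectors
    basis = []
    for a in cands:
        if np.linalg.matrix_rank(np.stack(basis + [a])) == len(basis) + 1:
            basis.append(a)
            if len(basis) == K:
                break
    return 0.5 * K * np.log2(qnorm(basis[-1]))

# toy example: two correlated unit-variance sources
Kxx = np.array([[1.0, 0.8], [0.8, 1.0]])
print(rate_bt(Kxx), rate_if(Kxx))                    # rate_if is never below rate_bt
```

Under the same assumptions, the scheme-outage probability studied in sections iii and iv could be estimated by drawing an orthonormal precoding matrix p from the Haar measure (for instance with scipy.stats.ortho_group) and counting how often rate_if(p @ Kxx @ p.T) exceeds rate_bt(Kxx) plus the allowed excess rate.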
7
coding and convex splitting for private communication over quantum channels aug mark march abstract the cq wiretap channel is a communication model involving a classical sender x a legitimate quantum receiver b and a quantum eavesdropper the goal of a private communication protocol that uses such a channel is for the sender x to transmit a message in such a way that the legitimate receiver b can decode it reliably while the eavesdropper e learns essentially nothing about which message was transmitted the private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel such that the privacy error is no larger than the present paper provides a lower bound on the private classical capacity by exploiting the recently developed techniques of anshu devabathini jain and warsi called coding and convex splitting the lower bound is equal to a difference of the hypothesis testing mutual information between x and b and the alternate smooth between x and the lower bound then leads to a lower bound on the coding rate for private classical communication over a memoryless cq wiretap channel introduction among the many results of information theory the ability to use the noise in a wiretap channel for the purpose of private communication stands out as one of the great conceptual insights a classical wiretap channel is modeled as a conditional probability distribution py in which the sender alice has access to the input x of the channel the legitimate receiver bob has access to the output y and the eavesdropper eve has access to the output z the goal of private communication is for alice and bob to use the wiretap channel in such a way that alice communicates a message reliably to bob while at the same time eve should not be able to determine which message was transmitted the author of proved that the mutual information difference max i x y i x z px is an achievable rate for private communication over the wiretap channel when alice and bob are allowed to use it many independent times since then the interest in the wiretap channel has not waned and there have been many increasingly refined statements about achievable rates for private communication over wiretap channels hearne institute for theoretical physics department of physics and astronomy center for computation and technology louisiana state university baton rouge louisiana usa many years after the contribution of the protocol of quantum key distribution was developed as a proposal for private communication over a quantum channel quantum information theory started becoming a field in its own right during which many researchers revisited several of the known results of shannon s information theory under a quantum lens this was not merely an academic exercise doing so revealed that remarkable improvements in communication rates could be attained for physical channels of practical interest if strategies are exploited one important setting which was revisited is the wiretap channel and in the quantum case the simplest extension of the classical model is given by the wiretap channel abbreviated as cq wiretap channel it is described as the following map x where x is a classical symbol that alice can input to the channel and is the joint output quantum state of bob and eve s system represented as a density operator acting on the tensorproduct hilbert space of bob and eve s quantum systems the goal of private communication over the cq wiretap channel is similar to that for the classical wiretap channel 
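As a point of reference for the classical formula recalled above, the following sketch evaluates the mutual information difference i(x;y) - i(x;z) for a toy degraded wiretap channel in which bob observes the input through a less noisy binary symmetric channel than eve; the channels and crossover probabilities are illustrative choices, not taken from the paper.

```python
import numpy as np

def mutual_information(px, pygx):
    """I(X;Y) in bits, for input distribution px and channel matrix pygx[x, y] = p(y|x)."""
    pxy = px[:, None] * pygx
    py = pxy.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(pxy > 0, pxy / (px[:, None] * py[None, :]), 1.0)
    return float((pxy * np.log2(ratio)).sum())

def bsc(p):
    """binary symmetric channel with crossover probability p"""
    return np.array([[1 - p, p], [p, 1 - p]])

px = np.array([0.5, 0.5])                 # uniform input distribution
secrecy_rate = mutual_information(px, bsc(0.05)) - mutual_information(px, bsc(0.20))
print(secrecy_rate)                       # positive: private communication at this rate is possible
```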
however in this case bob is allowed to perform a collective quantum measurement over all of his output quantum systems in order to determine alice s message while at the same time we would like for it be difficult for eve to figure out anything about the transmitted message even if she has access to a quantum computer memory that can store all of the quantum systems that she receives from the channel output the authors of independently proved that a quantum generalization of the formula in is an achievable rate for private communication over a cq quantum wiretap channel if alice and bob are allowed to use it many independent times namely they proved that the following holevo information difference is an achievable rate max i x b i x e px where the information quantities in the above formula are the holevo information to bob and eve respectively and will be formally defined later in the present paper since the developments of there has been an increasing interest in the quantum information community to determine refined characterizations of communication tasks strongly motivated by the fact that it is experimentally difficult to control a large number of quantum systems and in practice one has access only to a finite number of quantum systems anyway one such scenario of interest as discussed above is the quantum wiretap channel hitherto the only work offering achievable oneshot rates for private communication over cq wiretap channels is however that work did not consider bounding the coding rate for private communication over the cq wiretap channel the main contribution of the present paper is a lower bound on the private capacity of a cq wiretap channel namely i prove that mpriv i h x b e x represents the maximum number of bits that can be sent from in the above mpriv alice to bob using a cq wiretap channel once such that the privacy error to be defined formally later does not exceed with the quantities on the side of the above inequality are particular generalizations of the holevo information to bob and eve which will be defined later it is worthwhile to note that the information quantities in can be computed using programming and the computational runtime is polynomial in the dimension of the channel thus for channels of reasonable dimension the quantities can be efficiently estimated numerically the constants and are chosen so that and by substituting an independent and identically distributed cq wiretap channel into the side of the above inequality using expansions for the holevo informations and picking n we find the following lower bound on the coding rate for private classical communication mpriv n n i x b i x e p p nv x b nv x e o log n n represents the maximum number of bits that can be sent in the above mpriv from alice to bob using a cq wiretap channel n times such that the privacy error does not exceed the holevo informations from make an appearance in the term proportional to the number n of channel uses on the side above while the second order term proportional to n consists of the quantum channel dispersion quantities v x b and v x e which will be defined later they additionally feature the inverse of the cumulative gaussian distribution function thus the bound in leads to a lower bound on the coding rate which is comparable to bounds that have appeared in the classical information theory literature to prove the bound in i use two recent and remarkable techniques coding and convex splitting the main idea of coding is conceptually simple to communicate a classical message 
from alice to bob we allow them to share a quantum state ra before communication begins where m is the number of messages bob possesses the r systems and alice the a systems if alice wishes to communicate message m then she sends the mth a system through the channel the reduced state of bob s systems is then b where b nam am and nam is the quantum channel for all m the reduced state for systems and b is the product state however the reduced state of systems rm b is the generally correlated state b so if bob has a binary measurement which can distinguish the joint state from the product state sufficiently well he can base a decoding strategy off of this and the scheme will be reliable as long as the number of bits m to be communicated is chosen to be roughly equal to a mutual information known as hypothesis testing mutual information this is exactly what is used in coding and the authors of thus forged a transparent and intuitive link between quantum hypothesis testing and communication for the case of communication convex splitting is rather intuitive as well and can be thought of as dual to the coding scenario mentioned above suppose instead that alice and bob have a means of generating the state in perhaps by the strategy mentioned above but now suppose that alice chooses the variable m uniformly at random so that the state from the perspective of someone ignorant of the choice of m is the following mixture m x b m the lemma guarantees that as long as m is roughly equal to a mutual information known as the alternate smooth information then the state above is nearly indistinguishable from the product state r both coding and convex splitting have been used recently and effectively to establish a variety of results in quantum information theory in the present paper i use the approaches in conjunction to construct codes for the cq wiretap channel the main underlying idea follows the original approach of by allowing for a message variable m m and a local key variable k k local randomness the latter of which is selected uniformly at random and used to confuse the eavesdropper eve before communication begins alice p bob and eve are allowed share to m k copies of the common randomness state xb xe x px x xb xe we can think of the m k copies of xb xe as being partitioned into m blocks each of which contain k copies of the state xb xe if alice wishes to send message m then she picks k uniformly at random and sends the m k xa system through the cq wiretap channel in as long as m k is roughly equal to the x b then bob can use a decoder to hypothesis testing mutual information ih figure out both m and as long as k is roughly equal to the alternate smooth e x then the lemma guarantees that the overall state of eve s information iemax systems regardless of which message m was chosen is nearly indistinguishable from the prodk uct state thus in such a scheme bob can figure out m while eve can not figure xe out anything about this is the intuition behind the coding scheme and gives a sense of why x b e x is an achievable number of bits that can be m m k k ih max sent privately from alice to bob the main purpose of the present paper is to develop the details of this argument and furthermore show how the scheme can be derandomized so that the m k copies of the common randomness state xb xe are in fact not necessary the rest of the paper proceeds as follows in section i review some preliminary material which includes several metrics for quantum states and pertinent information measures section develops the 
coding approach for communication channels coding was developed in to highlight a different approach to communication but i show in section how the approach can be used for shared communication i also show therein how to derandomize codes in this case the shared randomness is not actually necessary for classical communication over cq channels section represents the main contribution of the present paper which is a lower bound on the private classical capacity of a cq wiretap channel the last development in section is to show how the lower bound leads to a lower bound on the coding rate for private classical communication over a memoryless cq wiretap channel therein i also show how these lower bounds simplify for cq wiretap channels and when using binary keying as a coding strategy for private communication over a bosonic channel section concludes with a summary and some open questions for future work preliminaries i use notation and concepts that are standard in quantum information theory and point the reader to for background in the rest of this section i review concepts that are less standard and set some notation that will be used later in the paper trace distance fidelity and purified distance let d h denote the set of density operators acting on a hilbert space h let h denote the set of subnormalized density operators with trace not exceeding one acting on h and let h denote the set of positive operators acting on the trace distance between two quantum states d h is equal to where tr c c for any operator it has a direct operational interpretation in terms of the distinguishability of these states that is if or are prepared with equal probability and the task is to distinguish them via some quantum measurement then the optimal success probability in doing so is equal to the fidelity is defined as f and more generally we can use the same formula to define f p q if p q h uhlmann s theorem states that f max ur ia ira u where ira and ira are fixed purifications of and respectively and the optimization is with respect to all unitaries ur the same statement holds more generally for p q h the fidelity is invariant with respect to isometries and monotone with respect to channels the sine distance or between two quantum states d h was defined as p c f and proven to be a metric in it was later under the name purified distance shown to be a metric on subnormalized states h via the embedding p c tr tr the following inequality relates trace distance and purified distance p relative entropies and variances the quantum relative entropy of two states and is defined as d tr whenever supp supp and it is equal to otherwise the quantum relative entropy variance is defined as v tr d whenever supp supp the hypothesis testing relative entropy of states and is defined as dh inf tr i tr the entropy for states and is defined as n o dmax inf r the smooth entropy for states and and a parameter is defined as n o dmax inf r e p e and d the following expansions hold for dh max when evaluated for states p dh nd nv o log n p dmax nd nv o log n the above expansion features the cumulative distribution function for a standard normal random variable z a a dx exp and its inverse defined as sup a r a mutual informations and variances the quantum mutual information i x b and information variance v x b of a bipartite state are defined as i x b d v x b v in this paper we are exclusively interested in the case in which system x of is classical so that can be written as x px x x where px is a probability distribution x is an 
orthonormal basis and x is a set of quantum states the hypothesis testing mutual information is defined as follows for a bipartite state and a parameter ih x b dh from the smooth entropy one can define a mutual quantity for a state as follows dmax note that we have the following expansions as a direct consequence of and definitions q ih x n b n ni x b nv x b o log n q dmax nv x b o log n xb ni x b another quantity related to that in is as follows iemax b a inf p dmax we recall a relation lemma between the quantities in and giving a very slight modification of it which will be useful for our purposes here lemma for a state and the following inequality holds e imax b a dmax proof to see this recall claim for states there exists a state ab such that p ab and dmax ab b dmax let denote the optimizer for dmax then in taking a a we find that there exists a state such that p ab and dmax dmax by the triangle inequality for the purified distance we conclude that p p p ab since the quantity on the side includes an optimization over all states satisfying p ab we conclude the inequality in operator inequality a key tool in analyzing error probabilities in communication protocols is the operator inequality given operators s and t such that s i and t the following inequality holds for all c i s t s s t c i s c lemma the lemma from has been a key tool used in recent developments in quantum information theory we now state a variant of the lemma which is helpful for obtaining bounds for privacy and an ensuing lower bound on the coding rate its proof closely follows proofs available in but has some slight differences for completeness appendix a contains a proof of lemma lemma convex split let be a state and let b be the following state b k x b k let and if k iemax b a then p b for some state such that p public classical communication definition of the classical capacity we begin by defining the classical capacity of a cq channel x we can write the channel in fully quantum form as the following quantum channel x nx x where x is some orthonormal basis let m n and an m classical communication code consists of a collection of probability distributions m one for each m such that message m and a decoding positive measure povm b m m x m tr ib b b b m m we refer to the side of the above inequality as the decoding error in the above m as is an orthonormal basis we define the state b x b x and the measurement channel as x tr b m the equality in follows by direct calculation b x m tr b ihm x m m m tr b ihm tr x m m m tr b tr m tr ib b where for a given channel nx and the classical capacity is equal to mpub is the largest m such that can be satisfied for a fixed mpub one can allow for shared randomness between alice and bob before communication begins in which case one obtains the shared randomness assisted capacity of a cq channel m we could allow for a decoding povm to be b consisting of an extra operator ib needed pm b if lower bound on the classical capacity we first consider a protocol for randomness assisted public classical communication in which the goal is for alice to use the cq channel in once to send one of m messages with error probability no larger than the next section shows how to derandomize such that the shared randomness is not needed the main result of this section is that ih x b for all is a lower bound on the classical capacity of the cq channel in although this result is already known from the development in this section is an important building block for the wiretap channel result in section and 
so we go through it in full detail for the sake of completeness also the approach given here uses decoding for the cq channel fix a probability distribution px over the channel input alphabet consider the following state x px x x which we can think of as representing shared randomness let denote the following state which results from sending the x system of through the channel nx x nx px x x the coding scheme works as follows let alice and bob share m copies of the state so that their shared state is m x xm xx alice has the systems labeled by x and bob has the systems labeled by x if alice would like over the to communicate message m to bob then she simply sends system xm channel in such a case the reduced state for bob is as follows x m b b is related to the first one m b by a permutation m of the x m observe that each state xm b systems m m wx m m b wx m xm b m where wx m is a unitary representation of the permutation m if bob has a way of distinguishing the joint state from the product state then with high probability he will be able to figure out which message m was communicated let txb denote a test measurement operator satisfying txb ixb which we think of as identifying with high probability and for which the complementary operator ixb identifies with the highest probability subject to the constraint tr txb from such a test we form the following measurement operator x m b txm b ixm which we think of as a test to figure out whether the reduced state on systems xm b is or observe that each message operator is related to the first one m b by a xm b m permutation m of the x systems m m wx m m b wx m xm b if message m is transmitted and the measurement operator acts then the probability of it xm b accepting is m tr x m b m b tr txb if however the measurement operator acts where m then the probability of it accepting xm b is m tr x m b m b tr txb from these measurement operators we then form a measurement as follows m m x x xm b xm b xm b xm b again each message operator is related to the first one m b by a permutation of the x m xm b systems m m wx m m b wx m m x m wx m xm b m b xm b m wx m m x m wx m m x xm b m m m m wx m wx m m b wx m wx m m x xm b m wx m m x m m wx m x m b wx m xm b m x m x m m wx m m x m b wx m xm b xm b m x xm b this is called the decoder and was analyzed in for the case of entanglementassisted communication the error probability under this coding scheme is as follows for each message m m tr irm b rm b b the error probability is in fact the same for each message due to the observations in and m m m m tr irm b b b tr irm b b wx m wx m b wx m wx m m m m m wx m b wx m wx m b wx m m rm b b tr irm b tr irm b so let us analyze the error probability for the first message m applying the p operator inequality in with s m b t ci c and cii c xm b for c we find that this error probability can be bounded from above as tr irm b b b x ci tr ix m b m b m b cii tr x m b m b ci tr ixb txb cii x tr trb ci tr ixb txb cii m tr txb consider the hypothesis testing mutual information ih x b dh dh inf tr i tr where take the test txb in bob s decoder to be where is the optimal measurement operator x b for then the error probability is bounded as for ih tr ix m b m b m b ci tr ixb cii m tr ci cii m x b now pick c and we get that the last line above for x b m ih indeed consider that we would like to have c such that ci cii m x b rewriting this we find that m should satisfy m ih x b ci cii picking c then implies after some algebra that ci cii so the quantity ih x b represents a lower bound on 
the randomnessassisted classical capacity of the cq channel in the bound holds for both average error probability and maximal error probability and this coincidence is due to the protocol having the assistance of shared randomness lower bound on the classical capacity now i show how to derandomize the above code the main result of this section is the following lower bound on the shot classical capacity of the cq channel in holding for all mpub ih x b again note that although this result is already known from the development in this section is an important building block for the wiretap channel result in section as stated previously the approach given here uses decoding for the cq channel by the reasoning from the previous section we have the following bound on the average error probability for a code m x m tr ix m b x m b m b m m ih x b if so let us analyze the expression tr ix m b by definition it follows that xm b xm b x px px xm xm xm m x b xm x b which implies that also recall that is optimal for ih tr x b tr but consider that tr tr x px x px x tr px x tr qxb x x x x x where we define qxb similarly we have that tr tr x px x px x tr px x tr qxb x x x x x p this demonstrates that it suffices to take the optimal measurement operator to be x qxb with qxb defined as in and this will achieve the same optimal value as does taking as such now consider that x m b b ixm x ihxm qxbm ixm xm x xm xm m qxbm xm then this implies that m x xm b m x x x xm xm m qbm xm x xm xm m x qbm xm x m x xm xm m m x x qbm xm so that xm b m x xm b xm b x xm b m x xm xm m xm where m x x qbm m x qxbm x qbm p xm and can be completed to a of m observe that m is a povm on the support qb p povm on the full space by adding ib m q by employing and we b find that x m tr ix m b px px xm tr ib x m b m b xm so that the average error probability is as follows m x m tr ix m b x m b m b m m x x px px xm tr ib m xm m x x xm xm tr ib px px xm m x x m the last line above is the same as the usual shannon trick of exchanging the average over the messages with the expectation over a random choice of code by employing the bound in we find that m x x xm xm tr ib px px xm m x x m then there exists a particular set of values of xm such that m x tr ib m this sequence xm constitutes the codewords and m is a corresponding povm that can be used as a decoder the number of bits that the code can transmit is equal to m ih x b no shared randomness is required for this code it is now derandomized remark to achieve maximal error probability one can remove the worst half of the codewords and then a lower bound on the achievable number of bits is ih x b ih x b private classical communication definition of the private classical capacity now suppose that alice bob and eve are connected by a cqq channel of the following form x where bob has system b and eve system the fully quantum version of this channel is as follows x nx x where x is some orthonormal basis we define the private classical capacity in the following way let m n and an m private communication code consists of a collection of probability distributions m m m one for each message m and a decoding povm such that m be m we refer to the side of the above inequality as the privacy error in the above m is an orthonormal basis the state can be any state we define the state as be x be x and the measurement channel as x tr b m for a given channel nx and the private classical capacity is equal to mpriv is the largest m such that can be satisfied for a fixed where mpriv the condition in combines the 
reliable decoding and security conditions into a single average error criterion we can see how it represents a generalization of the error criterion in which was for public classical communication over a cq channel one could have a different definition of private capacity in which there are two separate criteria but the approach above will be beneficial for our purposes in any case a code satisfying satisfies the two separate criteria as well as is easily seen by invoking the monotonicity of trace having a single error criterion for private capacity is the same as the approach taken in and and in the latter paper it was shown that notions of asymptotic private capacity are equivalent when using either a single error criterion or two separate error criteria lower bound on the private classical capacity the main result of this section is the following lower bound on the shot private capacity of a cq wiretap channel holding for all such that and and mpriv ih x b e x to begin with we allow alice bob and eve shared randomness of the following form x x px x x where bob has the x system alice the x system and eve the x system it is natural here to let eve share the randomness as well and this amounts to giving her knowledge of the code to be used let be denote the state resulting from sending the x system through the channel nx in x be px x x the coding scheme that alice and bob use is as follows there is the message m m and a local key k k the local key k represents local uniform randomness that alice has but which is not accessible to bob or eve we assume that alice bob and eve share m k copies of the state in before communication begins and we denote this state as k x m k x k x k xx x m k xm k xm k indeed starting and applying monotonicity of trace distance under partial trace of the e system pm with m we get that m recalling we can interpret this as asserting that the decoding error p probability does not exceed doing the same but considering a partial trace over the b system m m implies that m which is a security criterion so we get that the conventional two separate criteria are satisfied if a code satisfies the single privacy error criterion in to send the message m alice picks k uniformly at random from the set k she then sends the m k th x system through the channel nx thus when m and k are chosen the reduced state on bob and eve s systems is be k xm k k xm k k m xm m xm x m k x k be and the state of bob s systems is k k b k xmk b for bob to decode he uses the decoder to decode both the message m and the local key let k denote his decoding povm by the reasoning from section as long x m k b m k as m k ih x b where and then we have the following bound holding for all m k tr ix m k b k k xmk b xmk b where k is defined as in sections and by the reasoning from section we can also xmk b write as m k x xx x x px px xm k tr ib k k m k x x m k x with k defined as in section define the following measurement channels x x tr k m k x x tr k m k with it being clear that consider that x k x x x tr k k ihk m k x x x x x tr k k ihk tr k k m k x x tr k k tr ib x x k k now averaging the above quantity over m k and xm k and applying the condition in we get that x x x k px px xm k mk x x m k m k applying convexity of the trace distance to bring the average over k inside and monotonicity with respect to partial trace over system to the side of we find that m k x x m k px px xm k m k xm k m k x x xm k px px xm k m k x x x m k let us define the state x x m k xm k k trb pk q k x x xm k tr k q k k k consider that 
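In symbols, the reliability side of the key-splitting argument above amounts to requiring that Bob can decode the pair (m, k), i.e. (suppressing the additive error-dependent terms carried along in the text)
\[
\log M + \log K \;\lesssim\; I_H^{\varepsilon}(X;B),
\]
where K is the size of the local key that Alice samples uniformly at random; its required size is fixed only by the privacy analysis that follows.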
x x q xm k x xm k k then we can write k x xm k k x so that k x xm k k x q xm x q using these observations we can finally write m k k x xm k x xm k m k k m m m m m x x x x x x x q m q m m xx xm xm q ihm m x x q ihm m m x x q m m k x xm k k combining with the above development implies that x px px xm k xm k m m k x xm k k k x xm k k now we consider the state on eve s systems and the analysis of privacy if m and k are fixed then her state is k k e k xmk e for simplicity of notation in the above and what follows we are labeling her systems x as x however k is chosen uniformly at random and so conditioned on the message m being fixed the state of eve s systems is as follows xmk e k x m k m k e k k k x k e k k k we would like to show for that m m k m k x e for some state by the invariance of the trace distance with respect to states we find that m m k m k x e m k k e k x k e k k k from lemma and the relation in between trace distance and purified distance we find that if we pick k such that k e x then we are guaranteed that m m k m k x e where is some state such that p e consider that we can rewrite m m k m k x e k x x x px px xm k xm k xm k m k k k x m k x px px xm k xm k xm k m k k k x xm k k k x xm k px px xm k k x k applying to we find that k x x xm k px px xm k k x m k putting together and we find that if h i m i x b e x h ih x b e x then we have by the triangle inequality that x px px xm k x x k x xm k k m k so this gives what is achievable with shared randomness again no difference between average and maximal error if shared randomness is allowed we now show how to derandomize the code we take the above and average over all messages we find that m x m x x xm k x px px xm k px px xm k xm k m m x k x xm k k k k x x m k so we can conclude that there exist particular values xm k such that m x m k x xm k k thus our final conclusion is that the number of achievable bits that can be sent such that the privacy error is no larger than is equal to m ih x b e x asymptotics for private classical communication in this section i show how the lower bound on private capacity leads to a lower bound on the coding rate of private communication over an cq wiretap channel i also show how the bounds simplify for cq wiretap channels and when using binary keying as a coding strategy for private communication over a bosonic channel applying lemma to with we can take m ih x b e x ih x b while still achieving the performance in substituting an cq wiretap channel into the bounds evaluating for such a case in and d and using the expansions for ih max in while taking n for sufficiently large n we get that mpriv n n i x b i x e q q nv x b nv x e o log n example cq wiretap channel let us consider applying the inequality in to a cq wiretap channel of the following form x x x in which the classical input x leads to a pure quantum state x x for bob and a pure quantum state for eve this channel may seem a bit particular but we discuss in the next section how one can induce such a channel from a practically relevant channel known as the bosonic channel in order to apply the inequality in to the channel in we fix a distribution px x over the input symbols leading to the following state x px x x x x it is well known and straightforward to calculate that the following simplifications occur i x b h b h i x e h e h where h tr denotes the quantum entropy of a state and x px x x x x x px x x proposition below demonstrates that a similar simplification occurs for the information variance quantities in in the special case of a cq 
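Collecting the two constraints gives a hedged summary of the rate accounting just described, with the precise epsilon-dependent additive terms suppressed (they appear explicitly in the text's statements):
\[
\log K \;\gtrsim\; \tilde I^{\,\varepsilon'}_{\max}(E;X) \ \ \text{(privacy via convex splitting)},
\qquad
\log M + \log K \;\lesssim\; I_H^{\varepsilon}(X;B) \ \ \text{(reliable decoding)},
\]
\[
\Longrightarrow\quad \log M_{\mathrm{priv}} \;\approx\; I_H^{\varepsilon}(X;B) - \tilde I^{\,\varepsilon'}_{\max}(E;X).
\]
For n independent uses of the channel, second-order expansions of the two one-shot quantities yield the normal-approximation form quoted above,
\[
\log M_{\mathrm{priv}}(n) \;\gtrsim\; n\big[I(X;B) - I(X;E)\big] - \sqrt{n\,V(X;B)}\;\Phi^{-1}(\cdot) - \sqrt{n\,V(X;E)}\;\Phi^{-1}(\cdot) + O(\log n),
\]
where the arguments of the inverse Gaussian cumulative distribution function depend on how the error budget is split between decoding and privacy (left unspecified here), and for the pure-state cq wiretap channel the mutual informations reduce to $I(X;B) = H(\rho_B)$ and $I(X;E) = H(\rho_E)$ because the conditional output states are pure.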
wiretap channel by employing it we find the following lower bound on the coding rate for a cq wiretap channel mpriv n n h h p p nv nv o log n where v and v are defined from below proposition let x px x x x x be a state corresponding to a ensemble px x x ib x then the holevo information variance p v x b v is equal to the entropy variance v of the expected state x px x x x where v tr h that is when takes the special form in the following equality holds v x b v proof for the cq state in consider that i x b h b h furthermore we have that x px x x x x x px x x x x which holds because the eigenvectors of are x ib x then v x b v tr i x b tr ib ix h b by direct calculation we have that ib ix x x x px x x x px x ib x x x x x x px x ib x observe that ib x x is the projection onto the space orthogonal to x ib then we find that ib ix x px x ib x x x x px x ib x x x furthermore we have that px x ib x x px x ib x x px x ib x x px x ib x x so then by direct calculation tr ib ix x x x x tr px x ihx px x ib x x n h io px x tr x x px x ib x x n o px x tr x x x x x tr in the equality we used the expansion in and the fact that x x and ib x x are orthogonal finally putting together and we conclude example bosonic channel we can induce a cq wiretap channel from a bosonic channel in what follows we consider a coding scheme called binary keying bpsk let us recall just the basic facts needed from gaussian quantum information to support the argument that follows a curious reader can consult for further details the channel of transmissivity is such that if the sender inputs a coherent state with c then the outputs for bob and eve are the coherent states and respectively note that the overlap of any two coherent states and is equal to and this is in fact the main quantity that we need to evaluate the information quantities in the average photon number of a coherent state is equal to a scheme induces the following cq wiretap channel from the channel p p that is if the sender would like to transmit the symbol then she prepares the coherent state at the input and the physical channel prepares the coherent state for bob and for eve a similar explanation holds for when the sender inputs the symbol a scheme is such that the distribution px x is unbiased there is an equal probability to pick or when selecting codewords thus the expected density operators at the output for bob and eve are respectively as follows p p p a straightforward computation reveals that the eigenvalues for are a function only of the overlap and are equal to pb pb similarly the eigenvalues of are given by pe pe we can then immediately plug in to to find a lower bound on the coding rate for private communication over the bosonic channel mpriv n n pb pe q q pb pe o log n where and respectively denote the binary entropy and binary entropy variance a benchmark against which we can compare the performance of a bpsk code with is the private capacity of a bosonic channel given by g g where g x x x x x figure plots the normal approximation of the lower bound on the coding rate of bpsk coding for various parameter choices for and comparing it against the asymptotic performance of bpsk and the actual private capacity in the normal approximation consists of all terms in besides the o log n term and typically serves as a good approximation for capacity even for small values of n when is not necessarily valid as previously observed in conclusion this paper establishes a lower bound on the private classical capacity of a cq wiretap channel which in turn leads to a 
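As an illustration of how the BPSK lower bound just described can be evaluated numerically, here is a small Python sketch. It assumes the standard facts quoted above (overlap of the two coherent states equal to exp(-2|alpha|^2), transmissivity eta to Bob and 1-eta to Eve) and, because the exact splitting of the error parameter inside the inverse Gaussian CDF terms is not restated here, it uses a single illustrative epsilon for both correction terms; the function names and that splitting are assumptions, not the paper's exact expression.

import numpy as np
from scipy.stats import norm

def h2(p):
    # binary entropy in bits
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def v2(p):
    # binary entropy variance in bits^2: p(1-p) * log2((1-p)/p)^2
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return p * (1 - p) * np.log2((1 - p) / p) ** 2

def bpsk_private_rate(n, eta, n_s, eps=1e-3):
    # eigenvalues of the equal mixture of |+a> and |-a> with |<a|-a>| = exp(-2|a|^2)
    p_b = (1 - np.exp(-2 * eta * n_s)) / 2          # Bob sees mean photon number eta * n_s
    p_e = (1 - np.exp(-2 * (1 - eta) * n_s)) / 2    # Eve sees the remaining (1 - eta) * n_s
    first_order = h2(p_b) - h2(p_e)                 # asymptotic BPSK private rate
    correction = (np.sqrt(v2(p_b) / n) + np.sqrt(v2(p_e) / n)) * norm.ppf(eps)
    return first_order + correction                 # norm.ppf(eps) < 0 for small eps, so this subtracts

print(bpsk_private_rate(n=10_000, eta=0.9, n_s=0.05))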
lower bound on the coding rate for private communication over an cq wiretap channel the main techniques used are decoding in order to guarantee that bob can decode reliably and convex splitting to guarantee that eve can not determine which message alice transmitted it is my opinion that these two methods represent a powerful approach to quantum information theory having already been used effectively in a variety of contexts in for future work it would be good to improve upon the lower bounds given here extensions of the methods of and might be helpful in this endeavor note after the completion of the results in the present paper naqueeb warsi informed the author of an unpublished result from which establishes a lower bound on the private capacity of a cq wiretap channel in terms of a difference of the hypothesis testing mutual information and a smooth information acknowledgements i am grateful to anurag anshu saikat guha rahul jain haoyu qi qingle wang and naqueeb warsi for discussions related to the topic of this paper i acknowledge support from the office of naval research and the national science foundation private communication rate private communication rate normal approximation bpsk asymptotic private capacity normal approximation bpsk asymptotic private capacity number of channel uses a normal approximation bpsk asymptotic private capacity normal approximation bpsk asymptotic private capacity b private communication rate private communication rate number of channel uses number of channel uses c number of channel uses d figure the figures plot the normal approximation for bpsk private communication using the asymptotic limit for bpsk and the asymptotic private capacity for various values of the channel transmissivity the mean photon number and a proof of lemma for the sake of completeness this appendix features a proof of lemma let be the optimizer for iemax b a inf dmax p we take as the marginal of we define the following state which we think of as an approximation to b b k x b k in fact it is a good approximation if is small consider from joint concavity of the root fidelity that f e b b k f b k b k f e b b k f e which in turn implies that f e b b f e so the inequality in the definition of the purified distance and the fact that p imply that p e b b p let y py y y for py a probability distribution and y y a set of states then the following property holds for quantum relative entropy and a state such that supp supp x d py y d y d y y applying it follows that d e b k x d b k k x d b ke b k the first term in on the side of the equality simplifies as k x d b k k x d b k d we now lower bound the last term in consider that a partial trace over systems ak gives b d b ke d b ke b d k where the equality follows because b thus averaging the inequality in over k implies that k x d b ke b k d k putting together and we find that d e b d d k by the definition of in we have that h i which means that an important property of quantum relative entropy is that d d if applying it to and the side of we get that d d k h i d d k h i d d h i then the well known inequality d f and imply that h i f e b which in turn implies that f e b k k k so if we pick k such that k inf p dmax then we are guaranteed that p e b by the triangle inequality for the purified distance we then get that p b p b b p e b this concludes the proof references anurag anshu vamsi krishna devabathini and rahul jain quantum message compression with applications february anurag anshu rahul jain and naqueeb ahmad warsi one shot entanglement 
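For orientation, the lemma whose proof is given in the appendix above is, in its commonly stated form (with the exact smoothing convention and additive constants of the paper's version not restated here, and with system labels adapted to the application): given a bipartite state rho_{AE} with marginals rho_A and rho_E, define the convex-split state
\[
\tau_{A_1\cdots A_K E} := \frac{1}{K}\sum_{k=1}^{K}
\rho_{A_1}\otimes\cdots\otimes\rho_{A_{k-1}}\otimes \rho_{A_k E}\otimes \rho_{A_{k+1}}\otimes\cdots\otimes\rho_{A_K}.
\]
Then choosing $\log K$ on the order of $\tilde I^{\,\varepsilon}_{\max}(E;A) + 2\log(1/\varepsilon)$ guarantees that $\tau$ is close in purified distance to a fully product state $\rho_{A_1}\otimes\cdots\otimes\rho_{A_K}\otimes\sigma_E$ with $\sigma_E$ close to $\rho_E$; this is the property invoked in the privacy analysis to hide which of the K positions carries the transmitted codeword.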
assisted classical and quantum communication over noisy quantum channels a hypothesis testing and convex split approach february charles bennett and gilles brassard quantum cryptography public key distribution and coin tossing in proceedings of ieee international conference on computers systems and signal processing pages bangalore india december francesco buscemi and nilanjana datta the quantum capacity of channels with arbitrarily correlated noise ieee transactions on information theory march salman beigi nilanjana datta and felix leditzky decoding quantum information via the petz recovery map journal of mathematical physics august imre and janos broadcast channels with confidential messages ieee transactions on information theory may ning cai andreas winter and raymond yeung quantum privacy and quantum wiretap channels problems of information transmission october nilanjana datta and entropies and a new entanglement monotone ieee transactions on information theory june igor devetak the private classical capacity and quantum capacity of a quantum channel ieee transactions on information theory january arxiv nilanjana datta hsieh and jonathan oppenheim an upper bound on the second order asymptotic expansion for the quantum communication cost of state redistribution journal of mathematical physics may nilanjana datta and felix leditzky asymptotics for source coding dense coding and entanglement conversions ieee transactions on information theory january nilanjana datta marco tomamichel and mark wilde on the asymptotics for communication quantum information processing june vittorio giovannetti saikat guha seth lloyd lorenzo maccone jeffrey shapiro and horace yuen classical capacity of the lossy bosonic channel the exact solution physical review letters january arxiv alexei gilchrist nathan langford and michael nielsen distance measures to compare real and ideal quantum processes physical review a june arxiv saikat guha and mark wilde polar coding to achieve the holevo capacity of a optical channel proceedings of the ieee international symposium on information theory pages masahito hayashi general nonasymptotic and asymptotic formulas in channel resolvability and identification capacity and their application to the wiretap channel ieee transactions on information theory april arxiv masahito hayashi tight exponential analysis of universally composable privacy amplification and its applications ieee transactions on information theory november karol horodecki michal horodecki pawel horodecki and jonathan oppenheim general paradigm for distilling classical key from quantum states ieee transactions on information theory april arxiv masahito hayashi and hiroshi nagaoka general formulas for capacity of classicalquantum channels ieee transactions on information theory july arxiv ke li second order asymptotics for quantum hypothesis testing annals of statistics february yury polyanskiy vincent poor and sergio channel coding rate in the finite blocklength regime ieee transactions on information theory may alexey rastegin relative error of cloning physical review a october alexey rastegin a lower bound on the relative error of cloning and related operations journal of optics b quantum and semiclassical optics december arxiv alexey rastegin sine distance for quantum states february arxiv joseph renes and renato renner noisy channel coding via privacy amplification and information reconciliation ieee transactions on information theory nov alessio serafini quantum continuous variables crc press vincent tan 
achievable coding rates for the wiretap channel in ieee international conference on communication systems iccs pages november mehrdad tahmasbi and matthieu bloch second order asymptotics for degraded wiretap channels how good are existing codes in annual allerton conference on communication control and computing allerton pages september marco tomamichel mario berta and joseph renes quantum coding with finite resources nature communications may marco tomamichel roger colbeck and renato renner a fully quantum asymptotic equipartition property ieee transactions on information theory december marco tomamichel and masahito hayashi a hierarchy of information quantities for finite block length analysis of quantum tasks ieee transactions on information theory november marco tomamichel and vincent tan asymptotics for the classical capacity of quantum channels communications in mathematical physics august armin uhlmann the transition probability in the state space of a reports on mathematical physics hisaharu umegaki conditional expectations in an operator algebra iv entropy and information kodai mathematical seminar reports naqueeb ahmad warsi bounds in classical and quantum information theory phd thesis tata institute of fundamental research mumbai india december not publicly available communicated by email on march mark wilde from classical to quantum shannon theory mark wilde and haoyu qi private and quantum capacities of quantum channels september ligong wang and renato renner capacity and hypothesis testing physical review letters may mark wilde marco tomamichel and mario berta converse bounds for private communication over quantum channels ieee transactions on information theory march aaron wyner the channel bell system technical journal october mohammad hossein yassaee mohammad reza aref and amin gohari nonasymptotic output statistics of random binning and its applications in ieee international symposium on information theory pages july wei yang rafael schaefer and vincent poor bounds for wiretap channels in ieee international symposium on information theory isit pages july march
7
implementation of travelling salesperson problem using genetic algorithm a comparative study of python php and ruby aryo novanto fajar program teknologi informasi dan ilmu universitas brawijaya malang indonesia aryo yudistira world is connected through the internet as the abundance of internet users connected into the web and the popularity of cloud computing research the need of artificial intelligence ai is demanding in this research genetic algorithm ga as ai optimization method through natural selection and genetic evolution is utilized there are many applications of ga such as web mining load balancing routing and scheduling or web service selection hence it is a challenging task to discover whether the code mainly server side and web based language technology affects the performance of travelling salesperson problem tsp as non problem is provided to be a problem domain to be solved by ga while many scientists prefer python in ga implementation another popular interpreter programming language such as php php hypertext preprocessor and ruby were benchmarked line of codes file sizes and performances based on ga implementation and runtime were found varies among these programming languages based on the result the use of ruby in ga implementation is recommended genetic algorithm language i introduction ai refers to intelligent behaviour unexceptionally in webbased application a combination of the web application and ai has been becoming the future trends of application moreover the trend of cloud computing has already risen there are many applications use ga for various purposes ga as a well known algorithm that use heuristic approach to gather fully optimized solutions has been used widely in various web applications such as search engine and web mining application ga has become the effective algorithm in terms of pattern recognition recently it is found that new trend like social graph technology with optimization is promising from the scientific point of view data processing and analysis scripts often time consuming and require many hours to be computed on a computer device so the iteration process along with its debugging process will be longer moreover scientists have a different focus on his work compared to professional programmers they are keen on methodology rather than the tools they are utilizing faster completion of programming task is surely dreamed by many scientists or even beginner programmers who have a this research supported by program teknologi informasi dan ilmu komputer ptiik universitas brawijaya consideration of being effective naturally they will choose that kind of programming language pl it is also based on a psychological review as narrated in to catch up with the rapidly progressive research in ai speed and simplicity become necessary in ai programming therefore it is needed to give scientists pls that could quickly iterate while preserve the tidiness and simplicity so that it can be easily used even though compiled pl is faster at run time than interpreter pl it is not as simple as recent emerging pl such as python php and ruby the advantages of using python basically lie in its ease of use interpreted and object oriented programming language that can bridge many scientists need without loosing the sense of object oriented style however the effectiveness of pl can be measured by how many lines of code should be written or how much syntaxes should be initiated to implement the same another drawback of using compiled pl is by looking at the denial of service 
type of attacks ii computational there are many researches use ga for benchmarking purposes this paper benchmarks interpreter pl in supporting ai problem domain to be solved by ga in this case is travelling salesperson problem tsp in which has become a benchmark for several heuristics in ga performance test the use of tsp varies based on its domain problems tsp is an problem which solution is optimized by ga based on natural selections and genetic evolutions to solve np complete domain problems at tsp it is the idea of finding a route of a given number of cities by visiting each city exactly once and return to the starting city where the length of the route is minimized a path that visits every city and returns to the starting city creating a closed circuit is called a route the simplest and direct method to solve tsp of any number of cities is by enumerating every possible route calculating every route length and choosing one route with the shortest length it is possible that every city may become the starting point of the route and one route may have the same length regardless of the direction of the route taken logically the problem can be set up by using integer n and the distance between every pair of n cities and represented by an matrix each possible route can be represented as a permutation of n where n is the number of cities thus the number of possible routes is a factorial of n n it turns out to be more computationally burdened and difficult to enumerate and find the length of every route as n may become larger in polynomial complexity hence tsp needs an algorithm that able to find a route that produces the minimum length without having to enumerate all possible routes from cities given genetic algorithm ga is basically a search algorithm that is used to find solutions of evolution in such search space to solve problems such as tsp to this end a solution is so called an individual while a set of solutions is normally called as population individuals are evolving through many generations in populations the important thing that must be initiated is the genetic representation of the solution domain and the fitness function in order to evaluate the quality of such individuals moreover selection is a stochastic function it selects individuals based on the fitness value of a generated population and yields so called the individual even though ga that utilizes crossover and mutation steps does not define the end of the iteration every generation has the probability to obtain better individuals than its ancestors in terms of their fitness cost it has been a that solving the tsp problem using ga is optimal but it depends on crossover and mutation method used in this research crossover crossover is the step to produce various chromosomes individuals which is chosen from the chromosomes in previous generation in this research selecting two best chromosomes that have the best fitness cost to crossover does the crossover steps then randomly picking a gene or point from both chromosomes and finally pick the rest of remaining genes randomly until all unique genes has been picked up a new population later will be generated from previously selected chromosomes therefore recombination process of two parent individuals will generate new chromosomes an example in producing an individual from two individuals as parents which represented using a and b by doing crossover is explained below a abcdefgh b efghabcd each gene of a new individual was taken from both parent genes the crossover process is 
done until it generates a new individual c like the following example c aebcgfdh mutation mutation is an extension of the crossover process that executed by such a probability rate it is used to avoid a local optimum if the mating process only depends on crossover it probably yields to a local optimum because the chance to approach the fitness value is relatively high mutation process is given to cover the problem in such a way to reach global optimum solution the mutation method used in this research was done by choosing a gene in such an index and switch the pointed gene to be the first index along with the rest of the genes that follows in the sequence for example if an offspring c produced by the crossover process is abcdefgh and the pointed gene for mutation is e then the resulting mutation offspring is efghabcd iii interpreted languages pls are related to programmable and dynamic environment in which components were bound together at a high level the emergence of compiled languages such as java or c has led to be world wide web www transformation however these pl leave drawbacks such as parsed codes produced by compilers are stored in scattered files hence in a compiled environment some of those files must be included such as the instruction codes parsed codes header files and the executable or linked codes altogether nonetheless the significant distinction of compilers and interpreters is that compilers parse and execute in different actions sequentially though its speed and standalone executable existence of compiled pl there is an increasing complexity in its process nevertheless many recent compilers are able to compile and execute the code directly in memory giving such an interpreted language il fashion the use of interpreted pl is beneficial since in an interpreted environment every instruction of pl is executed right away after being parsed this has eased the programmers since the result can be presented quickly moreover the interpreted code can be run without compilers and linkers to produce executable codes however there are disadvantages related to il such as its poor performance no executable program and interpreter dependence compared to a compiled pl after all ils such as php python and ruby were merely made by hobbyists without any research goal such as lisp but they are widely applicable interpreter support for ga there are many ga implementations built into python such as pyga the exact result of effectiveness between python php and ruby in ga implementation is still a domain of interest to be explored ga despite of its pros and cons is quite easy to be implemented but yields into a very slow process in its usage to solve such problems iv methodology tsp based ga in this research is implemented into three commonly used object oriented programming language and without any frameworks they are python php and ruby all of ga codes that represent each pl are written based on the given pseudo code they were implemented using the same variable names methods and initialization logic during implementation they were tested using one data source and the same parameter values for all pl ga codes of each pl are implemented as close as possible if any parts of pseudo code are implemented on one pl using multiple methods functions or variables then the other pl had to be implemented in the same manner keeping the code to be as close as possible supposedly yields to objective measurement results one of many important functions used in ga is a random number generator in ga random 
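To make the crossover and mutation rules concrete, here is a short Python sketch of one plausible reading of the description above: crossover repeatedly draws the next not-yet-used gene from a randomly chosen parent until a full permutation is built, and mutation rotates the chromosome so that a randomly chosen gene becomes the first element (abcdefgh with gene e selected becomes efghabcd). The function names and the exact random picking order are illustrative assumptions, not the paper's code, so a particular run need not reproduce the example offspring aebcgfdh.

import random

def crossover(parent_a, parent_b, rng=random):
    # repeatedly choose a parent at random and copy its first gene (city)
    # that is not yet part of the child route
    parents = (list(parent_a), list(parent_b))
    pos = [0, 0]                      # reading position within each parent
    child, used = [], set()
    while len(child) < len(parents[0]):
        p = rng.randrange(2)          # which parent supplies the next gene
        while parents[p][pos[p]] in used:
            pos[p] += 1               # skip genes already copied into the child
        child.append(parents[p][pos[p]])
        used.add(parents[p][pos[p]])
    return child

def mutate(individual):
    # rotate the route so that a randomly chosen gene becomes the first element,
    # e.g. abcdefgh with gene e selected becomes efghabcd
    cut = random.randrange(len(individual))
    return list(individual[cut:]) + list(individual[:cut])

a, b = list("abcdefgh"), list("efghabcd")
print("".join(crossover(a, b)))
print("".join(mutate(a)))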
number generator functions utilized to generate a random number in order to compensate the probability of doing a crossover copy or mutate the parents the random number generator is implemented using a number generator prng function prng is a random number generator function in which when it was given the same seed number it will always return the same random value this technique is used to ensure that all programs will go through the same method and loop inside the program when run under the same circumstances thus giving the same result the prng function used in this research is implemented using a separate script that is run by a system call by each of implemented pl in performance measurement of implemented pl a small modification was added to the scripts by adding current time or micro time function at several points in code while taking account into current generation best fitness cost value before carrying out any measurement values testing units were made using a specified value and data to ensure all implemented scripts are using the same seed number and random values therefore all implemented pl will mate the equal parents and generating the same candidate population in every generation they should return the same best individual in the same generation at the end of script execution the scripts were run and measured on a macbook pro computer running on mac os x ghz intel core processor gb mhz memory and intel hd graphics with mb the version of python ruby and php interpreter used in measurement processes are and respectively the main objective of this paper is to automatically infer more precise bounds on execution times and best fitness that depends on input data sizes of the three different pl thus a recommendation to a widely used pl in ga implementation is given a genetic algorithm pseudo code for tsp table i shows how ga can solve tsp it will be written in python php and ruby as of the initial population individuals are generated from a comma separated value csv file that contains names of cities along with its x and y coordinates table i genetic algorithm pseudo code ga pseudo code class city function initialize name x y initialize name of city along with its coordinate in x and y class individual function initialize route initialization of an individual which consists of cities or nodes function makechromosome file generate an individual chromosomes from a file function evaluate calculate the length of given population a set of cities using euclid distance formula function crossover other do a recombination to return a new offspring from given spouse class environment function initialize initialization to population data population size maximum number of generations crossover rate mutation rate and optimum number of generation related to the best fitness cost convergence value function makepopulation create an initial population consists of two parents from file as parent a other parent is generated from shuffled parent a function evaluate sorts all individual based on the its calculated length value starting from the least length best until the largest individual in current population do getbestindividual function getbestindividual if the best length obtained so far is less than the best individual in population obtained from crossover then the best fitness cost remains unchanged else the best fitness cost obtained is updated from the first individual order of population in the current crossover s offspring population function run while the goal is not achieved do a 
generation step increase generation number by function goal if current generation reached maximum generation or current generation reached optimum when current generation minus best individual generation is larger than optimum number given then the goal has been achieved otherwise continue iteration function step do a crossover evaluate fitness costs of all individual in the current population function crossover initialize candidate population select two best parent candidates parent a and parent b from current population while candidate population is still less than given size do randomise crossover rate if in crossover rate then offspring crossover parent a with parent b else offspring copy parent a randomise mutation rate if in mutation rate then offspring mutate current offspring evaluate offspring fitness cost if offspring not exists in candidate population then add offspring individual into candidate population candidate population become the new population function mutate individual mutating individual in the manner of switching half of given individual orderly run ga environment with parameters csv file containing population data number of population number of maximum generation crossover rate mutation rate and optimum best generation pl implementation of ga pseudo code ga pseudo code used to solve tsp in this research was implemented in python php and ruby every method and variables are ensuring they have the same results and values across all implementations of given the codes are utilizing the same population data it is impossible to implement methods and variables in the exact ways due to the different natures of programming style of each pl however all implemented codes of methods and variables across all pl are ensured to behave the same and having the same values under the same ga environment variables and population data random number generation as previously mentioned the random number generator such as prng is used in ga are implemented using a separate code it is called by a system call function from the implemented pl therefore all implemented scripts will have the same random number performance and value under the same circumstances the prng code is implemented in ruby because of its known good performance and it is shown in table ii native random number generator functions across different pl behave differently separation of prng code from main script is required to ensure all pl are using the same random number during running time and therefore the execution time measurement would be objective in real world ga implementation the use of native random function is recommended this research proved that the use of system call causes bottleneck in program execution table ii prng implementation in ruby prng code implementation in ruby if seed argv max argv srand seed if argv puts rand else puts rand argv end else puts end seed number generation the seed number which is used to generate random values is incremented by one from the previous seed value before calling the prng script the seed itself implemented as a global variable therefore its value will always be available before it is fed to the prng and its value will be maintained during runtime data in this experiment genome data for an individual that used in unit tests and measurements are shown on table iii they are stored in a csv file as a plain text values stored on each line in a csv file consist of the name of cities along with their coordinates in x and the file is read during initialization of ga environment 
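The pattern described above (a single external seeded PRNG script invoked through a system call, so that the Python, PHP and Ruby implementations all consume identical random numbers) can be sketched as follows in Python; the script name prng.rb, its argument order and this wrapper are illustrative assumptions rather than the paper's exact code. As the text notes, such a helper only serves to make the benchmark deterministic, and a native random function would be preferred in a real deployment because the system call is a bottleneck.

import subprocess

SEED = 0  # global seed, incremented before every call so each draw is reproducible

def prng(limit=None):
    # call the external seeded PRNG script and return its output
    global SEED
    SEED += 1
    args = ["ruby", "prng.rb", str(SEED)]   # hypothetical helper script
    if limit is not None:
        args.append(str(limit))             # ask for an integer in [0, limit)
    out = subprocess.run(args, capture_output=True, text=True).stdout.strip()
    return float(out) if limit is None else int(float(out))

# example use: crossover_rate = 0.9; do_crossover = prng() < crossover_rate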
and its values are used as population s initial genome data table iii initial populations of gnome data no city name x y balikpapan malang jayapura manado bandung banjarmasin pontianak jakarta medan makassar pl execution time measurement workarounds have to be used in order to do a portable time measurement where timing is difficult or possible to achieve synthetic benchmark approach were followed which on purpose repeatedly execute the instructions under estimation for a large enough time and later averaging the total execution time by the number of times it is being run generally it is not possible to run a single instruction repeatedly within the abstract machine since the resulting sequence would not be legal and may break the abstract machine run out of memory etc therefore more complex sequences of instructions must be constructed and be repeated instead as of previous measurement research conducted in the case of knapsack problem were measured by using each pl s native timing function the execution time measurements in our tsp problem were also doing so in getting program execution time the difference between the start time and end time of script execution is calculated these measurements are done several times under the same environment circumstances on all pl each measurement is done with different number of population data from csv file program execution times are measured using five six seven eight nine and ten cities in consecutive ways ga environmental parameters were set to maximum generations populations within each generation with a crossover rate mutation rate and generations limit of best fitness cost as a ga convergence termination limit each ga environment is measured times and then the average values and their standard deviations are estimated result and analysis implementation of ga pseudo code provided is resulting in python php and ruby codes python php and ruby codes were implemented in and lines of code respectively python php and ruby file size are and bytes respectively the shortest line of codes is python because of python scripting nature does not require closing tags on its method function or loop implementation as in php and ruby but when it comes to code file size python consumes more bytes to implement ruby programming style characteristic is shorter and simpler compared to the other pl thus resulting the smallest file size on the implementation of ga the results of program execution measurement in this research are shown in table iv python was used as the basis of performance measurement because it is the most widely used pl for research purposes therefore we compare the execution time of all pl to python when it comes to the web environment as of research conducted by jafar et al php outperform python based on last seed best fitness cost and best generation measurement results in table iv we can infer that all of the tests returning the same best individual on the same generation for the same number of cities therefore proving that all pl executions and their flow of run are exactly the same during tests ruby proved to outperform python and php in execution time of ruby s performance improvements vary from to while php performance are to slower over python table iv measurement result php coefficient of variant best fitness cost length python ruby php python ruby php python ruby php python ruby php python ruby php python ruby number of cities pl maximum ms minimum ms average ms standard deviation fig program execution time in milliseconds by number of 
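For completeness, a minimal sketch of the fitness evaluation implied by the pseudo code and the table III data format (one CSV line per city with name, x and y): the fitness cost of an individual is the total Euclidean length of the closed route. The CSV parsing details and the file name are assumptions for illustration, since the published coordinate values are not reproduced in this excerpt.

import csv
import math

def load_cities(path):
    # each CSV line: name, x, y  (format assumed from table III)
    with open(path, newline="") as f:
        return [(name, float(x), float(y)) for name, x, y in csv.reader(f)]

def route_length(route):
    # total Euclidean length of the closed tour (return to the starting city)
    total = 0.0
    for i in range(len(route)):
        _, x1, y1 = route[i]
        _, x2, y2 = route[(i + 1) % len(route)]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

cities = load_cities("cities.csv")   # hypothetical file in the table III layout
print(route_length(cities))          # fitness cost of the route in file order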
cities in tsp expected solution of fitness cost is the minimum the smaller the value the result will be better execution time grows longer when the number of cities in an individual increase as shown in fig the relation between fitness cost total distance on selected routes and number of cities in an individual is shown in fig because of all pl were implemented in the same way the resulting fitness cost value for the same number of cities will be the same regardless the pl used regression analysis using analysis of variance anova was conducted to the measurement result shown in table iv from the regression analysis result we can infer that script execution times are highly correlated to data sizes on all implemented pl under the same ga environment the more cities included in an individual which mean the higher the number of inputs the longer program execution time would be best generation last seed performance over python fig fitness cost route s total distance by number of cities it is proved that the fitness cost route length is highly correlated to data size on all implemented pl under the same ga environment the higher the number of inputs cities included in an individual the larger the fitness cost would be an individual which has larger fitness cost is worse vi conclusion based on measurement and analysis process program execution time and best fitness cost are highly relied on data size the number of cities in all pl the larger the data size the longer execution time will be and the worse the outcome of ga as tsp solution this research ga pseudo code implementation shows that ruby code has the smallest file size compared to python and php but python has the least line of codes in testing of the overall program execution time ruby is faster than python and php therefore the usage of ruby is recommended to gain performance in implementation of ga for tsp in a web environment over python and php acknowledgment the authors would like to acknowledge the assistance of program teknologi informasi dan ilmu komputer ptiik and universitas brawijaya ub for their research facilities and financial support the authors also acknowledge for the assistance of colleagues in ptiik ub for their great assistance in improving the manuscript references andaur rios roman and velasquez best web site structure for users based on a genetic algorithm approach department of industrial engineering university of chile santiago chile ren from cloud computing to language engineering effective computing and advanced intelligence international journal of advanced intelligence vol number pp punch iii using genetic algorithms for data mining optimization in an educational systems michigan state university gursel and improving search in social networks by agent based mining guo and engler toward practical incremental recomputation for scientists an implementation for python language stanford university python http php http ruby http branigan risk with web programming technologies lucent technologies dunlop varrette and pascal on the use of a genetic algorithm in high performance computer benchmark tuning university of luxembourg najera tsp three evolutionary approaches local search university of birmingham sarmady an investigation on genetic algorithm parameters universiti sains malaysia strachey fundamental concepts in programming languages higher order and symbolic computation pp kluwe academic publishers riley interpreted vs compiled languages internet http november august purer php python ruby the web scripting 
language shootout vienna university of technology pospichal parallel genetic algorithm solving knapsack problem running on the gpu brno university of technology koepke reasons python rocks for research and a few reasons it doesn t jafar anderson and abdullat comparison of dynamic web content processing language performance under a lamp architecture west texas a m university canyon
6
qualitative assessment of recurrent human motion andre ebert michael till beck andy mattausch lenz belzner and claudia nov mobile and distributed systems group institute for computer science munich germany email belzner linnhoff applications designed to track human motion in combination with wearable sensors during physical exercising raised huge attention recently commonly they provide quantitative services such as personalized training instructions or the counting of distances but qualitative monitoring and assessment is still missing to detect malpositions to prevent injuries or to optimize training success we address this issue by presenting a concept for qualitative as well as generic assessment of recurrent human motion by processing continuous time series tracked with motion sensors therefore our segmentation procedure extracts individual events of specific length and we propose expressive features to accomplish a qualitative motion assessment by supervised classification we verified our approach within a comprehensive study encompassing athletes undertaking different body weight exercises we are able to recognize six different exercise types with a success rate of and to assess them qualitatively with an average success rate of assessment activity recognition physical exercises segmentation i ntroduction regular physical exercising improves an athlete s health and sufferings from chronic diseases or even the alzheimer s disease are lowered in that context mobile phone applications for training support running crossfit etc became popular they provide customized workout plans detailed exercise instructions as well as quantitative and statistical functions but by providing about challenging exercises without supervision to athletes arises new problems wrong execution of exercises malpositions or the absence of sufficient warming up phases may lead to less training success or even to serious injuries especially athletes are likely to harm themselves during an unsupevised workout we believe that a and automated monitoring reduces such injuries drastically while a training s success could be improved significantly moreover a generic concept capable of recognizing and assessing various recurrent human motions is also applicable in other areas medical observations gait analysis or optimization of workflows to address this unsolved issue we previously introduced sensx which is a distributed sensor system for capturing and processing human motion we established a paradigm for qualitative analysis of human motion consisting of four fundamental steps see figure detection of a motion event its recognition its qualitative assessment and the characterization of personal use of this preprint copy is permitted republication redistribution and other uses require the permission of ieee this paper was published within the proceedings of the european signal processing conference eusipco kos greece doi c ieee fig four fundamental steps of human motion analysis within the logical layer the hardware layer functions as a sensor and feedback provider as proposed by reasons for a specific assessment step and were treated within while this paper focuses on step by using the existing sensx architecture as a basis thereby our contributions within this paper are as follows we propose a novel concept for qualitative assessment of complex recurrent human motion it covers the extraction of motion events into segments of individual length an expressive feature set as well as a system for supervised classification 
are selected and implemented to validate our concept we conducted an comprehensive exemplary study with athletes executing more than repetitions of six different types of body weight exercises we present results concerning the assessment of human motion as well as for human activity recognition on basis of motion sensor data ii r elated w ork in the following we provide a brief overview on related work concerning segmentation recognition and assessment of human motion thereby we are focusing on complex motion sequences which are described by multiple coordinated movements conducted by several extremities at the same time body weight exercises instead of more simple activities which have often been subject to activity recognition within previous research walking or sitting before analysis and assessment of reoccurring events within time series become feasible they need to be extracted into segments first bulling et al name sliding windows segmentation restposition based segmentation and the use of additional sensors or context resources as applicable procedures sliding window approaches move a window of static size sequentially across an incoming stream of data and extract the window s current content for further analysis authors of recofit and climbax used a sliding window which they moved in discrete steps across a motion data stream these approaches offer valuable ideas for our segmentation concept still due to the absence of a length adjustment to an events actual duration they do not cover all of our needs the actual start and endpoints of short events a pushup are not captured accurately which leads to noise within an event s segment fragments of preceding or following events this noise may disturb the qualitative assessment process of a specific event significantly energybased solutions perform well for segmentation of long term activities which are describable by different energy potentials sitting running we examine individual repetitions of short movements their energy potential is not diverse enough from each other and is not suitable for segmentation restposition based segmentation is also not feasible for there are no rest positions within a continuous event set the use of external information sources gps is not suitable for such movements targeted by us other approaches also facilitated manual segmentation which is not suitable for great numbers of events or realtime analysis the review of these procedures led us to the necessity of developing an individualized segmentation process which considers our requirements concerning dynamic and accurate extraction of complex and motion events quantitative counting of repetitive activities as well as its recognition and classification are fields of research which is why we do not present much work bound to that topic within this paper jiang et al recognize simple activities like lying walking and sitting while morris et al are dealing with more complex exercises facilitated techniques are neuronal networks as well as typical classifiers for supervised learning and combinations of both in contrast to that qualitative assessment of human motion data was examined only sparsely yet ladha et al as well as pansiot et al assessed the performance of climbers by extracting and analyzing features such as power control stability and speed without examining individual climbing moves their work provides valuable information concerning the handling and preprocessing of motion data still it does not allow the assessment of individual movements of 
specific extremities within a chain of multiple climbing features velloso et al assessed repetitions of recurrent motion by recording five wrongly executed exercises and classified them afterwards by template comparison though they were able to classify exercise mistakes with a success rate of their approach is only able to identify a fixed number of predefined error cases thus it is not suitable for generic assessment of human motion gymskill is a system for qualitative evaluation of exercises conducted on a balance board exercises are examined and assessed with an individualized principal component analysis but gymskill is bound to analysis in combination with a balance board therefore it is also not capable of generic motion analysis concluding we are not aware of a concept which enables qualitative assessment of human motion in a generic way and without being bound to predefined motions or equipment as a solution we now present a novel approach for tracking recognizing and assessing human motion iii a n approach for qualitative assessment of human motion in the following we explain our advance for extracting recurring events out of time series afterwards we describe the preprocessing and selection of expressive features to prepare a qualitative assessment via supervised learning data input of our analysis procedure are individual streams of motion data see for technical details therefore five sensor devices are tracking acceleration and rotation information in and two are fastened on the tracked person s ankles two are attached to the wrist and the fifth is worn on the chest in combination with a processing unit preprocessing and segmentation to extract individual events out of the incoming multidimensional signal set we developed a dynamic and segmentation algorithm which is capable of segmenting heterogeneous sets of motion events individually each signal in each segment may be of specific length and contains all information about exactly one rotation or acceleration axis of exactly one specific event all extracted event segments are also of individual temporal length in comparison to each other to identify individual motion events we first examine the most meaningful signal vector within a signal set typically this signal contains the highest dynamics and variance within its values and allows a distinct identification of a segment s start point and its end point for that we are calculating the standard deviation of all signals whereby x defines the current signal xi is the measured value and is the expectancy value n p xi p v ar x n the signal with the highest deviation is taken for further analysis we assume that every type of motion event can be described by an individual set of local extrema and we use these sets to identify distinct events figure depicts the whole segmentation process for a set of bicycle crunches within this setting the acceleration of the ankle sensors along the proved to provide the most meaningful signal as depicted in figure signals of individual repetitions contain a high amount of noise as well as unique peaks which are fig step by step procedure of segmenting recurrent motion events of individual length out of a set of bicycle crunches fig signal of a set of mountain climbers and determination of the ideal cutoff factor cf not representative for a specific class of movements these peaks may contain information which is critical for assessing a movement in terms of quality but they are irrelevant for segmentation that is why we designed and applied an 
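The per-signal standard deviation used to pick the most meaningful channel is the usual $\sigma(X)=\sqrt{\tfrac{1}{N}\sum_{i=1}^{N}(x_i-\mu)^2}$; a minimal Python sketch of this selection step is given below, assuming the recording is arranged as a samples-by-channels array (the array layout and the channel count in the demo are assumptions).

import numpy as np

def most_meaningful_signal(signals):
    # signals: array of shape (n_samples, n_channels); return the channel with highest std
    stds = np.std(signals, axis=0)   # per-channel standard deviation
    best = int(np.argmax(stds))      # channel with the highest dynamics and variance
    return best, signals[:, best]

# example: 30 channels (5 sensors x 3 acceleration + 3 rotation axes), 1000 samples
demo = np.random.randn(1000, 30)
channel, signal = most_meaningful_signal(demo)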
aggressive butterworth low pass filter to the signal see figure thus all information unnecessary for segmentation is extinguished and only essential periodicities are left the filter s fc is determined by multiplying the sampling frequency fs with a cutoff factor fc fs which is essential for the filter s effect onto the signal figure shows the results of the empirical determination of it indicates that our sensor setup cf must be within a range of in order to recognize all individual event occurrences within a set of repetitions thereby a different cf setup is used for each individual exercise due to the low pass filtering in combination with the usage of extrema patterns the identification of the individual duration of a segment as well as its starting point tsx and its ending point tex see figure becomes feasible this is achieved as follows a bicycle twist can be described by a set of one local minimum and one local maximum other movements may be characterized by differing combinations of multiple local extrema as depicted by the for a set of mountain climbers in figure at least two different signal parts are identifiable for this example our filtered signal is now scanned sequentially for this pattern when a new occurrence is detected a window of the size of the estimated event length is applied to the signal in a first phase the estimated event length is derived from the event sets as depicted in figure but since every repetition is of individual length and content we need to adjust the segments start and end points individually within a second phase precondition for the following is the assumption that origin and terminus of a repeated motion event is located at the signal s zero crossing in between the rest periods now we check if the segment window encompasses the demanded number and types of extrema if not we sequentially add sub segments of a predefined length l until the relevant extrema pattern is matched after this matching phase some fine tuning is undertaken to capture the exact segment ending if the last element within the segment is a positive value we wind forward and add single samples until we reach the next zero crossing otherwise if the last element within the segment is a negative value we wind back to the last zero crossing and remove all values on this way after determining the individual length of the current segment we only keep the timestamps of its starting and its endpoint tsx and tex subsequently these are used to cut out the specific segment from the slightly smoothed original signals see figure which still contain all important movement information output of this procedure is a quantity s sx of event segments of differing length whereby each segment has its exact borders and contains only information of exactly one motion event b feature selection and labeling commonly a feature vector within machine learning scenarios consists of a fixed number of features describing one instance due to the fact that all of our activity segments are of individual length and consist of individual signals this issue is challenging if we use the segmented time series directly for feature set creation their length would need to be trimmed or interpolated to match the fixed length of a feature vector interpolation would result in unwanted artificial noise trimming could lead to the loss of important information and finally all preceding efforts to extract each event into a segment of individual length would be worthless furthermore one event of the dataset that we recorded for 
evaluation see section iv consists of sample values in average roughly per signal building feature vectors of this length and greater leads to massive computational load during classification to overcome these challenges were exploited some observations we made during the examination of our dataset figure visualizes the standard deviations of the acceleration and rotation signals of randomly selected lunges labeled with quality class very good and lunges labeled with quality class poor in general lunges labeled class show a much higher deviation in rotation and acceleration values for the users feet bl and br while class values are significantly lower this is because a proper dataset fig standard deviations of all rotation and acceleration signals of a set of lunges labeled class and class fig components of the feature vector which is describing one individual motion event and the confusion matrix visualizing the results for qualitative assessment of bicycle crunches lunge is described by a big step forward as well as bringing one knee nearly to the ground which results in a greater movement energy while not decently conducted lunges result in less energy within these signals the same happens forwards and downwards for the wrist s acceleration tl and tr which are placed onto the users hip during the workout in contrast to that the rotation is low for proper lunges and higher for the improper ones this is related to a smooth movement conducted by skilled athletes and more unsteady movements conducted by unskilled athletes due to a relatively smooth and steady movement of the athletes torso the chest sensor ch did not provide significant information concerning this activity these observations show that even the individual signal s standard deviation contains enough information to assess an activity in a qualitative way based on this cognition we designed a feature vector to describe each individual activity instance it contains the standard deviations of all signals plus the time interval of the specific instance in milliseconds see figure all in all one event out of our evaluation data set see section iv consists of an average of sampling values by utilizing the procedure described above we are able to compress this information by a ratio of additionally we added a label r concerning the individual motion events quality rating from very good to very poor see section for evaluation we recorded six body weight exercises crunches cr lunges lu jumping jack mountain climber bicycle crunches and squats sq conducted by athletes of male and of female sex and aged from till each athlete had to complete sets with repetitions of each exercise between the individual sets we scheduled a mandatory break of an instruction video was shown to the athletes for each exercise and in prior of its execution all in all we tracked motion data of individual exercise repetitions additionally all conducted exercises were taped on video for later on labeling by experts the labeling range is very good to very bad the data is labeled as follows all exercises were labeled initially with class for each mistake each specific deviation from the video instructions steps are too small for a mountain climber the initial class gets added small deviation or severe deviation error points the final class is the rounded result of the overall error score hence completely different errors during the performance of a motion event may lead to the same error score and therefore the same quality class qualitative motion assessment all in 
all we used two different classification approaches for supervised learning one with manual and one with automated hyper parameter optimization within we manually configured four popular classification algorithms for human activity recognition see section ii the decision tree driven random forest rf and a support vector machine svm classifier and the naive bayes nb algorithm table i classifier rf svm nb cr lu sq average duration table i correct classification rates for qualitative assessment with manual classifier selection and configuration presents the performance of manual classifier selection within a cross validation rf provides the best results with an average correct classification rate of while taking for building its evaluation model nb performed way faster but with worse results cr success rate training time classifier rf lu rf k ibk sq rf k average table ii correct classification rates for facilitating a hyper parameter optimization with iv e valuation in this section we first describe the setup of our study subsequently we present the results of our evaluation and give insights into the performance of the segmentation algorithm approach facilitates as hyper parameter optimization layer for automated selection of appropriate classifiers and hyper parameter tuning table ii shows significantly improved results by facilitating rf neighbor ibk and k also for a cross validation four out of six exercise types are assessed correctly with a success rate of while the average rate is about despite varying time spans for different classifiers all test models except one were trained within less than a second for thousands of event instances this demonstrates the efficiency and scalability of our light weight feature vector design and offers promising chances for mobile and realtime usage volatile classification rates in between different exercise types may be explained with the discrete value domain of our labels as well as with the subjective labeling procedure this assumption is also indicated by figure which shows a confusion matrix for the qualitative assessment of bicycle crunches incorrectly classified events became assigned to neighboring quality classes because an event s label originates from its rounded error score it may occur that the label score is a border value the event gets the label although its quality is rated between and by contrast the classifier may now decide that the feature vector looks more like a member of class which finally leads to a wrong classification segmentation results all in all we were able to extract out of recorded motion events into individual segments of specific length using the segmentation approach described in section which makes a total of of extracted events activity recognition our preliminary study in evaluated the automated recognition of eight different body weight exercises on basis of acceleration data in this paper our feature vectors are built on basis of the individual acceleration and rotation signals standard deviations rather than of time series with a fixed and significantly bigger length we applied this new design to our dataset see section and achieved a correct recognition rate of for manual classifier configuration rf were reached for applying automated hyper parameter optimization within cross validation training of the evaluation model took for instances compared to our preliminary studies and related approaches this performance can be regarded as within the field of complex human motion recognition c onclusion and f uture 
w ork in this paper we presented a generic approach for dynamic and individual segmentation of recurrent human motion events as well as for qualitative assessment of complex human motion for evaluation we recorded an exemplary dataset containing repetitions of six different different body weight exercises and extracted them into segments of individual length additionally all segments were tagged with a quality label we are able to estimate a generic quality class of an individual event occurrence with an average correct classification rate of and up to for individual exercise types by adapting an expressive and heavily compressed feature vector for sheer recognition of activities we actually reach a correct classification rate of automatic hyper parameter optimization performed significantly better than manual approaches our concept features a generic analysis approach and we conjecture it is applicable to various recurrent human motions and transferable into multiple operational areas such as sports medical observation or even workflow optimization these results offer promising options for future work a more assessment process by adapting compressed feature vectors information which is valuable for identifying tangible errors or the positioning of malpositions gets lost thereby a conducted movement can be rated good or bad in a qualitative manner but neither can the exact reason for that be identified nor can we carve out concrete characteristics of a specific quality assessment new features and principal components may be crucial to explore these issues more dynamic and generic analysis approaches neural networks may bring new insights and are subject of ongoing research r eferences radak hart sarga koltai atalay ohno and boldogh exercise plays a preventive role against alzheimer s disease journal of alzheimer s disease vol no pp koplan siscovick and goldbaum the risks of exercise a public health view of injuries and public health reports vol no ebert kiermeier marouane and sensx about sensing and assessment of complex human motion in networking sensing and control icnsc ieee international conference on ieee bulling blanke and schiele a tutorial on human activity recognition using inertial sensors acm computing surveys csur vol no morris saponas guillory and kelner recofit using a wearable sensor to find recognize and count repetitive exercises in proceedings of the annual acm conference on human factors in computing systems acm pp ladha hammerla olivier and climbax skill assessment for climbing enthusiasts in proceedings of the acm international joint conference on pervasive and ubiquitous computing acm pp ding shangguan yang han zhou yang xi and zhao femo a platform for exercise monitoring with rfids in proceedings of the acm conference on embedded networked sensor systems acm pp mortazavi pourhomayoun alsheikh alshurafa lee and sarrafzadeh determining the single best axis for exercise repetition recognition and counting on smartwatches in wearable and implantable body sensor networks bsn international conference on ieee pp jiang and yin human activity recognition using wearable sensors by deep convolutional neural networks in proceedings of the acm international conference on multimedia acm pp pansiot king mcilwraith lo and yang climbsn climber performance monitoring with bsn in medical devices and biosensors international summer school and symposium on ieee pp velloso bulling gellersen ugulino and fuks qualitative activity recognition of weight lifting exercises in proceedings of the 
augmented human international conference acm pp roalter diewald scherr kranz hammerla olivier and gymskill a personal trainer for physical exercises in pervasive computing and communications percom ieee international conference on ieee pp thornton hutter hoos and combined selection and hyperparameter optimization of classification algorithms in proceedings of the acm sigkdd international conference on knowledge discovery and data mining acm pp
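As an appendix-style illustration of the assessment pipeline described above, the following sketch builds the compressed feature vector (the standard deviation of every signal of an event plus the event duration in milliseconds), trains a random forest, evaluates it with cross-validation, and uses a simple grid search as a stand-in for the automated hyper-parameter optimization layer mentioned above. All data, labels, array shapes, and parameter values are invented for illustration; only the feature layout follows the description in the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, GridSearchCV

def event_features(segments):
    """One row per motion event: the standard deviation of every signal
    in the segment plus the segment duration in milliseconds."""
    rows = []
    for seg in segments:  # seg: dict with per-signal 1-D arrays and a duration
        stds = [float(np.std(seg["signals"][name])) for name in sorted(seg["signals"])]
        rows.append(stds + [seg["duration_ms"]])
    return np.asarray(rows)

# illustrative data: 200 fake events, 20 signals each, quality classes 1..6
rng = np.random.default_rng(1)
segments = [{"signals": {f"s{i}": rng.normal(0, 1 + (k % 6) * 0.3, 150) for i in range(20)},
             "duration_ms": 1500 + 100 * (k % 6)} for k in range(200)]
X = event_features(segments)
y = np.array([k % 6 + 1 for k in range(200)])  # stand-in quality labels

rf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cv accuracy:", cross_val_score(rf, X, y, cv=5).mean())

# stand-in for the automated classifier and hyper-parameter selection layer
search = GridSearchCV(rf, {"n_estimators": [50, 100, 200], "max_depth": [None, 10]}, cv=5)
search.fit(X, y)
print("best params:", search.best_params_)
```

Because each event is reduced to a handful of statistics regardless of its duration, training stays cheap even for thousands of event instances, which is consistent with the short training times reported above.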
the earth system grid supporting the next generation of climate modeling research david bernholdt shishir bharathi david brown kasidit chanchio meili chen ann chervenak luca cinquini bob drach ian foster peter fox jose garcia carl kesselman rob markel don middleton veronika nefedova line pouchard arie shoshani alex sim gary strand and dean williams invited paper understanding the earth s climate system and how it might be changing is a preeminent scientific challenge global climate models are used to simulate past present and future climates and experiments are executed continuously on an array of distributed supercomputers the resulting data archive spread over several sites currently contains upwards of tb of simulation data and is growing rapidly looking toward and beyond we must anticipate and prepare for distributed climate research data holdings of many petabytes the earth system grid esg is a collaborative interdisciplinary project aimed at addressing the challenge of enabling management discovery access and analysis of these critically important datasets in a distributed and heterogeneous computational environment the problem is fundamentally a grid problem building upon the globus toolkit and a variety of other technologies esg is developing an environment that addresses authentication authorization manuscript received march revised june this work was supported in part by the department of energy under the scientific discovery through advanced computation scidac program grant bernholdt chanchio chen and pouchard are with the oak ridge national laboratory oak ridge tn usa bernholdtde chanchiok chenml pouchardlc bharathi chervenak and kesselman are with the usc information sciences institute marina del rey ca usa shishir annc carl brown cinquini fox garcia markel middleton and strand are with the national center for atmospheric research boulder co usa dbrown luca pfox jgarcia don strandwg drach and williams are with the lawrence livermore national laboratory livermore ca usa drach i foster and nefedova are with the argonne national laboratory argonne il usa foster nefedova shoshani and sim are with the lawrence berkeley national laboratory berkeley ca usa shoshani asim digital object identifier for data access data transport and management services and abstractions for remote data access mechanisms for scalable data replication cataloging with rich semantic and syntactic information data discovery distributed monitoring and portals for using the system modeling data management earth system grid esg grid computing i introduction global climate research today faces a critical challenge how to deal with increasingly complex datasets that are fast becoming too massive for current storage manipulation archiving navigation and retrieval capabilities and simulations performed with more advanced components the atmosphere oceans land sea ice and biosphere produce petabytes of data to be useful this output must be made easily accessible by researchers nationwide at national laboratories universities other research laboratories and other institutions thus we need to create and deploy new tools that allow data producers to publish their data in a secure manner and that allow data consumers to access that data flexibly and reliably in this way we can increase the scientific productivity of climate researchers by turning climate datasets into community resources the goal of the earth system grid esg project is to create a virtual collaborative environment linking distributed centers users models 
and data the esg research and development program was designed to develop and deploy the technologies required to provide scientists with virtual proximity to the distributed data and resources that they need to perform their research participants in esg include ieee proceedings of the ieee vol no march the national center for atmospheric research ncar boulder co lawrence livermore national laboratory llnl livermore ca oak ridge national laboratory ornl oak ridge tn argonne national laboratory anl argonne il lawrence berkeley national laboratory lbnl berkeley ca usc information sciences institute marina del rey ca and most recently los alamos national laboratory lanl los alamos nm over the past three years esg has made considerable progress towards the goal of providing a collaborative environment for earth system scientists first we have developed a suite of metadata technologies including standard schema automated metadata extraction and a metadata catalog service second we have developed and deployed security technologies that include user registration and authentication and community authorization data transport technologies developed for esg include data transport for access robust multiple file transport integration of this transport with mass storage systems and support for dataset aggregation and subsetting finally we have developed a web portal that integrates many of these capabilities and provides interactive user access to climate data holdings we have cataloged close to tb of climate data all with rich scientific metadata which provides the beginnings of a digital scientific notebook describing the experiments that were conducted we have demonstrated a series of increasingly functional versions of this software at events such as the ncar community climate system model ccsm workshop and the supercomputing sc conference and we deployed a system in spring that will provide community access to assessment datasets produced by the intergovernmental panel on climate change ipcc esg aims to improve greatly the utility of shared community climate model datasets by enhancing the scientist s ability to discover and access datasets of interest as well as enabling increased access to the tools required to analyze and interpret the data current global climate models are run on supercomputers within the and beyond the earth simulator center in japan and produce terabytes of data that are stored on local archival storage the analyses of these data contribute to our understanding of our planet how we are influencing climate change and how policy makers react and respond to the scientific information that we as a global community produce future trends in climate modeling will only increase computational and storage requirements scientific advances will require a massive increase in computing capability and a sublinear but still extremely substantial increase in the volume and distribution of climate modeling data we will see increases in the physical resolution of the models an elevation of the number of ensemble runs enhanced quality in terms of clouds aerosols biogeochemical cycles and other parameters and a broadening of overall scope that extends into the regions if we project into the future it is clear that our global earth system modeling activities are going to produce petabytes of data that are increasingly plex and distributed due to the nature of our computational resources the climate modeling community has been creating shared or data archives for some time however while these 
archives help enable the work of specific groups they limit access to the community at large and prohibit analysis and comparison across archives with the current infrastructure analysis of these large distributed datasets that are confined to separate storage systems at geographically distributed centers is a daunting task esg s goal is to dramatically improve this situation by leveraging emerging developments in grid technology we can break down these artificial barriers creating a environment that encompasses multiple computational realms spanning organizational boundaries with the goal of delivering seamless access of diverse climate modeling data to a broad user base this task is challenging from both the technical and cultural standpoints in this paper we describe what the esg project has accomplished during the last three years and discuss ongoing and future efforts that will increase the utility of the esg to the climate community over the coming decade ii functional objectives next we present functional objectives and use cases related to climate model dataset management and access various climate model simulations have been performed and the output from these simulations has been stored on archival storage with each simulation output comprising several thousand files and several hundred gigabytes of data the resulting data are of great interest to many researchers policy makers educators and others however with current technology not only must the data manager spend considerable time managing the process of data creation but any user who wishes to access this data must engage in a difficult and tedious process that requires considerable knowledge of all the services and resources necessary metadata for search and discovery specifics of the archival system software system accounts analysis software for extracting specific subsets and so on because of this complexity data access tends to be restricted to privileged specialists the goal of the esg system is to simplify both the data management task by making it easier for data managers to make data available to others and the data access task by making climate data as easy to access as web pages via a web browser a making data available to the esg community a first requirement is for tools that allow climate model data managers to make model data available to the esg community these tools include the means to create searchable databases of the metadata provide catalogs of the data that locate a given piece of data on an archival system or online storage and make catalogs and data accessible via the web prior to the advent of esg these capabilities did not exist so potential users of the model data had to contact the data proceedings of the ieee vol no march managers personally and begin the laborious process of retrieving the data they wanted it is important that tools for data publishers and curators are robust and easy to use data publishers should not have to be familiar with all the implementation details of the system thus the tools must be intuitive reflect the user s perspective of the data and be accessible on the user s local workstation although climate simulation datasets are relatively static they can change over time as errors are corrected existing runs are extended and new postrun analyses are added thus it is not sufficient just to be able to publish a dataset once it must be easy for publishers to update metadata over time the publishing tools must allow sufficiently privileged users to search and update the 
metadata database, ensuring that the metadata and physical datasets stay synchronized.

B. Defining Virtual Datasets

In addition to the physical datasets that are generated directly by climate simulations and maintained on archival storage, data producers want to define virtual datasets: datasets that are defined by a set of transformation operations on other physical or virtual datasets. These virtual datasets can then be instantiated to yield physical datasets by the ESG system, either proactively or on demand following a user request; the physical datasets can be discarded following use or, alternatively, cached to avoid the cost of subsequent regeneration. Virtual datasets are important because they allow data producers to define abstractions over the physical files that may be more convenient or efficient for users to access. The transformations used to define virtual datasets may include concatenation of files from one or more datasets, subsetting along one or more dimensions, and computation or reduction operations. In our work in ESG we consider only concatenation and subsetting, but our architecture can be extended in the future to support arbitrary transformations, as in the Chimera system. The following is an example of a virtual dataset defined via concatenation and subsetting: a dataset that contains, from each of two source datasets, the ps field for a given time range. The net effect is to provide the data consumer with a virtual view of a dataset that hides the underlying organization and complexity of thousands of related files. (A small illustrative sketch of such a recipe is given below, after the access scenarios.)

C. Providing Simple, Convenient Access

A member of the ESG community should be able to browse, search, discover, and access distributed climate model data easily. Using the ESG web portal, she can browse the ESG data holdings hierarchically, and a simple text search capability allows her to perform searches of data catalogs. Subsequent to browsing or searching, she can select the data she wants, after possibly narrowing her choices via additional possibilities presented by the search results. She can also select individual fields and regions of interest, specifying a selection that will ultimately come from multiple physical files. The web portal thus creates an efficient means of finding and accessing published model data: the user states what she wants to access, and ESG performs the tasks required to deliver the desired result.

D. Retrieving Large or Numerous Datasets

If an ESG user wishes to access large amounts of climate model data, particularly data located on an archival system, ESG has a tool called DataMover that efficiently and robustly accomplishes this task. For example, an ESG user may wish to access many simulated years of model data and store that data on his local online storage. DataMover allows the user to move this large volume of data without requiring him to personally monitor all the steps involved; it manages the processes and takes corrective actions as needed to transmit the data from the archival storage system to a location of the user's choosing.

E. Using a Climate Analysis Application to Access ESG Holdings

An ESG user may choose to utilize a common data analysis package, such as the Climate Data Analysis Tools (CDAT) or the NCAR Command Language (NCL), to access the data published via ESG. For example, a web portal search will give her the information she needs to pull the data directly into her software package. This implies a requirement for interfaces that are compatible with these applications and distributed data access protocols that provide for remote access.
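The virtual-dataset recipe promised above can be illustrated with a small, self-contained sketch. This is not ESG's actual implementation (which builds on NcML descriptions, the metadata catalog, and the data access services); the class, its fields, and the toy data are hypothetical, and the sketch only stands in for concatenation along the time dimension followed by variable and time subsetting, with caching to avoid regeneration.

```python
import numpy as np

class VirtualDataset:
    """Hypothetical recipe: concatenate source datasets along the time
    axis, then subset selected variables and a time slice. Instantiated
    lazily and cached, so repeated requests avoid regeneration."""

    def __init__(self, sources, variables, time_slice):
        self.sources = sources        # list of dicts: variable name -> 2-D array (time x cell)
        self.variables = variables    # e.g. ["ps"]
        self.time_slice = time_slice  # e.g. slice(0, 18)
        self._cache = None

    def instantiate(self):
        if self._cache is None:       # regenerate only once, then reuse
            self._cache = {
                v: np.concatenate([s[v] for s in self.sources], axis=0)[self.time_slice]
                for v in self.variables
            }
        return self._cache

# two fake "physical" datasets with a ps field of 12 time steps each
run1 = {"ps": np.full((12, 4), 1000.0)}
run2 = {"ps": np.full((12, 4), 1010.0)}
vds = VirtualDataset([run1, run2], variables=["ps"], time_slice=slice(0, 18))
print(vds.instantiate()["ps"].shape)  # (18, 4): concatenated, then subset
```

Together with the remote data access protocols just described, such a capability allows the ESG user community far greater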
access to climate model data than was previously available as well as the ability to do extensive and complicated analyses on large sets of data efficiently since there is no need to retrieve and store all the data locally before beginning analysis iii esg architecture next we present a description of the overall esg architecture fig shows the major components involved in the esg system which are from bottom to top the following database storage and application servers these are the basic resources on which esg is constructed and include the computational resources used for creation of virtual data disk caches and mass storage systems used to hold data assets database servers used to store metadata and application servers for hosting the esg portal services infrastructure this provides remote authenticated access to shared esg resources enabling access to credentials data and metadata replica location services rlss and the submission and management of computational jobs and services these services span esg resources and provide capabilities such as site to site data movement distributed and reliable metadata access and data aggregation and filtering esg applications the most important of these is the esg portal which provides a convenient interface to esg services other applications include user level tools for data publication bernholdt et al the earth system grid supporting the next generation of climate modeling research fig esg architecture schematic showing major components as well as assorted clients for data analysis and visualization iv accomplishments bringing to the user community climate datasets during our first three years of esg development we have reached milestone achievements in the following areas speed of access and transport of data at user request via the integration of advanced gridftp data transfer technologies into the widely used project for a network data access protocol opendap software robust multifile movement between various storage systems via the development of datamover technology standardization of data formats metadata schemas and tools for discovery use analysis managing and sharing of scientific data security and access controls supporting both user registration and authentication for purposes of auditing and highly secure access control for more sensitive operations development of a portal enabling interactive searching of esg data holdings browsing of associated metadata and retrieval of datasets that may be large our first official release of esg to the greater climate community began in spring the portal is being used to deliver to the community data produced by the intergovernmental panel on climate change ipcc assessment program with this first production release esg s vision of providing the foundation for the data publication analysis applications portals and collaborative environments for climate research has been demonstrated esg web portal we have developed the esg web portal which provides the entry point to esg data holdings its purpose is to allow climate scientists from around the world to authenticate themselves to the esg services and to browse search subset and retrieve data the esg portal uses esg services to associate metadata with datasets determine data location and deliver data to the location desired by the user the portal includes an aggregated data selection option that allows the user to select a variable and a subregion and time and level ranges metadata the role of data descriptions metadata is critical in the discovery 
analysis management and sharing of scientific data we have developed standardized metadata descriptions and metadata services to support this important functionality esg has developed metadata schema oriented toward the types of metadata generated by atmospheric general circulation and coupled models these schema are defined in terms of the network common data format netcdf markup language ncml and provide an representation of metadata that appears in netcdf encoded simulation data the esg metadata schema are supported by a range of services that are used to provide access to specific metadata values the esg metadata catalog is based on the open grid services architecture data access and integration service and uses relational technology for browsing and searching esg metadata holdings we have also deployed a separate metadata service called the replica location service rls to provide information about the location of datasets allowing data to be replicated and migrated without having to modify entries in the main catalog of ncml data proceedings of the ieee vol no march esg ca generates a certificate and stores it in the myproxy service and gives the user an for myproxy the registration system also has an administrator s interface that allows the ca administrator to accept or reject a user request the most important benefits of this system are that users never have to deal with certificates and that the portal can get a user certificate from the myproxy service when needed shared data access requires the ability to specify and enforce access control for individual datasets we have implemented a prototype authorization service by integrating the community authorization service cas into the gridftp server used to implement data queries this system supports access control based on community specified policy stored in the cas server s policy database data transport and access services fig registration system architecture we provide the ability to bridge esg metadata into dataset catalogs conforming to the thematic realtime environmental data distributed services thredds specification thredds catalogs are automatically generated from the data description information contained in the database and the location information retrieved from the distributed rls databases security esg s large and diverse user community and the multiple laboratories that participate in esg make security a challenging problem we have devoted considerable effort to analyzing and documenting security requirements esg has adopted a security approach that includes user authentication user registration with the esg portal and access control the grid security infrastructure gsi is used as a common underlying basis for secure single and mutual authentication this machinery provides for secure access control to all esg resources using public key infrastructure pki with credentials from the doe grids certificate authority ca the esg is recognized by the doe ca as an accredited virtual organization and participates in the policy management authority board because of the overhead associated with obtaining public key credentials an online registration process is used to access pki credentials without the overhead normally associated with certificate generation this mechanism allows for a broad base of users to access public data while providing an auditing of access by remote users with whom esg has no existing trust relationship as required by research sponsors esg has developed a registration system that allows users to register 
easily with the esg portal the registration system architecture is shown in fig the focus of this system is ease of use we developed portal extensions cgi scripts that automate user registration requests the system solicits basic data from the user generates a certificate request for the opendap is a protocol and suite of technologies that provide flexible easy access to remote data based on the widely accepted community opendap software esg has extended opendap to create a opendap server and clients that support gsi security and can use gridftp as a data transport mechanism in addition seamless joining of data stored in different opendap servers aggregation is now part of the esg production release esg has also built a prototype data access and transport system demonstrating the use of cdat and ncl to access remote datasets using the gridftp protocol via the client continued work will be necessary before this goes into production including support of aggregation but the goal is that this client software will be the foundation for distributed climate applications to gain access to esg resources multiple file transport esg has developed tools for the efficient and reliable transfer of multiple files among storage sites the transport of thousands of files is a tedious but extremely important task in climate modeling and analysis applications the scientists who run models on powerful supercomputers often need the massive volume of data generated to be moved reliably to another site often the source and destination storage systems are specialized mass storage systems such as the high performance storage system hpss at the national energy research scientific computing center nersc berkeley ca or the mass storage systems mss at ncar during the analysis phase subsets of the data need to be accessed from various storage systems and moved reliably to a destination site where the analysis takes place the automation of the file replication task requires automatic space acquisition and reuse monitoring the progress of thousands of files being staged from the source mass storage system transferring the files over the network and archiving them at the target mass storage system we have leveraged the software developed by the storage resource manager srm project to bernholdt et al the earth system grid supporting the next generation of climate modeling research achieve robust multifile transport for esg srms are software components that are placed in front of storage systems accept multifile requests manage the incremental access of files from the storage system and then use gridftp to transport the files to their destination two important results related to multiple file transport were achieved in the first phase of the esg project the first was motivated by a practical need to the mss at ncar that is to be able to store files into and get files from the mss directly from remote sites to achieve this goal we adapted a version of an srm to ncar s mss this allowed multifile movement between various sites such as and ncar this system has been used repeatedly to move reliably thousands of file between esg sites second reliability is achieved in the srm by monitoring the staging transfer and archiving of files and by recovering from transient failures a tool was deployed to dynamically monitor the progress of the multifile replication in response to esg user requirements for the ability to move entire directories in a single command a software module called the datamover was developed the datamover interacts 
with the source and destination storage systems through srms to setup the target directory and instructs the srms to perform the robust multifile movement the datamover and an srm are used by the esg portal to copy files into the portal s disk space from the files source locations which may be remote storage systems the user is then able to access those files directly from the portal this capability makes it possible for the portal to act on behalf of a user to get files from remote sites without requiring the user to have direct access to those sites monitoring we have built monitoring infrastructure that allows us to keep track of the state of resources distributed across seven esg institutions monitoring is required to provide users with a robust and reliable environment they can depend upon as part of their daily research activities we monitor the hardware and software components that make up the esg and provide status information to esg operators and users our monitoring infrastructure builds upon the globus toolkit s grid information services ontologies for esg metadata we developed a prototype esg ontology that is available on the esg portal the ontology contains the following classes scientific investigation with subclasses simulation observation experiment analysis datasets with subclasses campaign ensembles pedigree with subclasses identity provenance project service access other relationships supported by the ontology include ispartof isgeneratedby isderivedfrom hasparameter and usesservice while climate simulation data are the current focus of esg the scientific investigation metadata defined by our ontology accommodates other types of scientific investigation for example observational data collected by oceanographers dataset metadata includes time and space coverage and other parameters pedigree or provenance metadata trace the origins of a dataset and the successive operations that were performed by recording the conditions under which a dataset of interest was produced for instance provenance describes what models what versions of a model and what software and hardware configurations were used for a run of interest service metadata associates earth science data formats with servers providing functionality such as subsetting in coordinate space visualization and expression evaluation the esg ontology clearly separates metadata potentially reusable by other scientific applications such as project information from metadata specific to an application the iterative work of detailed concept definition and the rigor needed for specifying relationships between entities required in ontology authoring substantially improved the esg schema current esg data holdings one of the key goals of esg is to serve a critical mass of data of interest to the climate community we are steadily increasing the amount of data provided to the climate community through esg in the first phase or the project esg data holdings consisted of a small set of ccsm and parallel climate model pcm data the number of early users accessing these data via our prototype esg system was deliberately kept small and ranged from one to five concurrent users by the end of pcm data totaled approximately tb at several sites including ncar the program for climate model diagnosis and intercomparison pcmdi nersc ornl and lanl about users requested accounts at pcmdi to access the pcm data holdings there representing around science projects including university researchers doe collaborators and various groups interested in climate 
simulation on a regional scale in ncar focused on publishing ccsm data while llnl published climate simulation data generated by the major climate modeling groups worldwide for the intergovernmental panel on climate change ipcc fourth assessment report far the ipcc which was jointly established by the world meteorological organization wmo and the united nations environment program carries out periodic assessments of the science of climate change which serve governments and others interested in the potential impact of future climate change fundamental to this effort is the production collection and analysis of climate model simulations carried out by major international research centers although these centers individually often produce useful analyses of the individual results the collective model output from several modeling centers often yields a better proceedings of the ieee vol no march fig schematic of esg data portals services and archives understanding of the uncertainties associated with these models toward this end the wmo s working group on coupled modeling wgcm in consultation with the ipcc leadership coordinates a set of standard simulations carried out by the major modeling centers in order for this tremendous commitment of computational and scientific effort to be of maximum benefit it is essential that a wide community of climate scientists critically analyze the model output the ipcc and wgcm have therefore asked pcmdi to collect model output data from these ipcc simulations and to distribute these to the community via esg by late the ccsm and ipcc model runs totaled approximately tb of data ccsm is one of the major climate simulation efforts worldwide so interest in this data from the scientific community and others will be significant on the order of hundreds of scientists researchers policy makers and others esg is providing access to this considerable store of data based on the same code configured differently to address esg and ipcc requirements i deployment of portals services and archives in this section we discuss future work in the esg project over the next two years the goal of the esg collaborative is to increase the amount of data made available to scientists through the esg portal and to enhance the functionality performance and robustness of esg components the critical role of the ipcc data and the commitments made by llnl s pcmdi to provide community access to that data led us to adopt the overall deployment strategy depicted in fig at the lowest level data are maintained in archives at pcmdi for ipcc data and ncar for other climate simulation data from ccsm and pcm above these services enable remote access to data archives by maintaining metadata used for data discovery implementing authentication and authorization and so on a copy of these services located primarily at ncar is available for use and another copy at pcmdi will provide access only to ipcc data currently the software has been installed at pcmdi but the site is not yet supporting users of ipcc data similarly there are two web data portals for data access one based at pcmdi for ipcc data only and an portal that will provide access to ipcc ccsm pcm lanl and eventually other data the ncar and pcmdi instantiations of the data portal and services are j external collaborations where appropriate we have developed esg tools and services in collaboration with national and international groups currently esg is in close collaboration with the british atmospheric data centre in the the national oceanic and 
atmospheric administration noaa operational model archive and distribution system nomads the earth science portal esp consortium thredds noaa s geophysical fluid dynamics laboratory and the opendap project we have also held discussions with the linked environments for atmospheric discovery lead project the geosciences cyberinfrastructure network geon project and the earth system modeling framework esmf future work increasing the utility of esg to the climate modeling community a additional deployment of data archives services and portals the esg data services and a web data portal have been installed at pcmdi to support ipcc data distribution there and they will soon be registering users who want to access the data eventually we plan to include more sites as data archives providing climate datasets to esg users including lbnl and ornl b web portal and overall system integration the esg data portal integrates the different esg services authentication and authorization metadata data processing and data transport we will revise the existing web portal as required and support new user communities ocean bernholdt et al the earth system grid supporting the next generation of climate modeling research modelers as needed either by providing areas on the same portal or by developing customized virtual portals for each community we will also be engaged in formal usability testing with select members of the research community security services the current esg security architecture deals primarily with authentication of users to the various grid resources grid services storage systems several key points remain to be addressed including authorization or access control to the resources and support of specific site requirements such as passwords otps authorization our authorization infrastructure will build upon the community authorization service cas being developed as part of the globus toolkit building on the globus toolkit grid security infrastructure gsi cas allows resource providers to specify access control policies and delegate access control policy management to the community which specifies its policies via the cas resource providers maintain ultimate authority over their resources but are spared policy administration tasks adding and deleting users modifying user privileges esg distinguishes between two types of authorization and authorization makes access control decisions based on file access permissions for groups of users the same data on the same server could be accessed by one group of users but inaccessible for another group authorization makes access control decisions based on a group s permission to access particular services for example some privileged users might be allowed to access data efficiently from hierarchical storage systems using the datamover service while less powerful users would be able to access that data only through the server for both types of authorization users will be given their access rights at their initial with the esg portal via the esg registration system any further changes to user access rights would be done by the resource providers using special tools to be developed password otp authentication there are various options for supporting the use of passwords including an online ca that generates gsi credentials automatically for a user who has authenticated with the otp system thus allowing them to access other esg sites without further authentication another possibility is for gsi credentials generated by an online ca at one site following otp 
authentication to be accepted at another site with the same requirement a third option would include changing gsi credentials to specify whether authentication to generate the user certificate used otp we will investigate these options and implement the one that best fits esg metadata schema and services esg has focused strongly on metadata remaining tasks include the following metadata schema from the start it has been our aim in esg to develop a system that has the potential for longevity and interoperation with other emerging data systems worldwide new standards like are emerging that can facilitate interoperation of multiple data systems and we plan to spend some effort evolving our metadata in this direction this work will be undertaken primarily by llnl and ncar but in concert with a number of other projects british atmospheric data center thredds etc that face the same future needs the metadata changes required by esg researchers will be incorporated into the existing ogsa metadata service metadata catalog support for virtual data work is required to extend our metadata catalogs to support virtual data definitions so that data producers can define virtual datasets and data consumers can discover and request virtual datasets we intend to work with ncml as our virtual data definition language multiple metadata catalog services we will implement a replicated metadata catalog to avoid a single point of failure the initial goal of this work will be to improve reliability but we also note that replication can improve performance by load balancing of the metadata query workload initially we plan to use the metadata catalog at ncar as the master catalog and do periodic updates to a second catalog located at lawrence livermore national laboratory we will deploy more general distributed metadata services as they are developed we have also done some exploratory work in using the open archive initiative oai protocols to accomplish this function and we will evaluate the relative effectiveness and level of effort required to do this browse search and query we will support richer metadata catalog query capabilities for esg scientists federation we will provide for the interoperation of two heterogeneous metadata services our esg metadata services and the thredds catalogs we will provide simple distributed queries across the two catalog types datamover services our work thus far with the datamover application has resulted in a unique capability that copies directories effectively providing the equivalent of a unix copy command across a heterogeneous storage environment including ncar s locally developed mss strong critical prerequisite for such required and thus the use of datamover is largely restricted to the power users users that have formal accounts at all sites involved in data transfer but this leaves out the average user who may want to use the web portal to locate and identify a large number of files that must be moved back to a local system the web portal provides a workable interface for the selection of a small number of files limited to a few gigabytes however if a user wants a large number of files then we need a different sort of capability because such transfers require automation to deal with queuing proceedings of the ieee vol no march fig steps involved in defining discovering requesting and instantiating a virtual dataset management of cache disk space network problems and recovery from transient failures we will continue to refine enhance and increase the robustness and scaling 
of the datamover tool beyond that we will evolve the current datamover into a application that will work seamlessly with the web portal allowing a user to make a selection of a large number of files and then trigger the launch of this application we plan to provide data movement capabilities for two types of users which we refer to as casual users and frequent casual users are those that do not have grid credentials and frequent users are those that are willing to go through the process of acquiring grid credentials to achieve more efficient transport of files for casual users the requested files will have to be moved to the disk cache managed by the web portal first and only then can they be pulled by the users to their systems for frequent users we plan to develop a that will pull the files directly from their source location to the user s location avoiding transport through the web portal s disk cache this more efficient method of file transport is necessary when moving a large volume of data also to conform to security policies will work from behind the firewall and adapt to the local security policy aggregation services and additional development of will enhance virtual data services for the web portal native client access performance and robustness clients esg clients include the web portal and user desktop applications such as cdat and ncl we plan to complete the netcdf client interface to these applications providing full functionality of the api and transparently handling aggregation and subsetting virtual data services to return useful data to users it will be necessary for client applications to operate with various implementations of esg security and to utilize an url to access data in some cases the client may need to generate these urls thus requiring access to the esg catalogs servers we will migrate the services to the latest release of the globus toolkit web components and the new striped gridftp server integration of esg security models and connections to catalog services from client interface are expected to require modifications to the core libraries in addition we plan to complete the integration of the data access and transport with rlss and srms for hierarchical storage systems monitoring we will provide enhanced monitoring capabilities to improve the robustness of the esg infrastructure these will include monitoring a larger number of esg services and components and providing better interfaces for querying historical information about resource availability virtual data support we will provide support for virtual data in the next phase of esg fig depicts the tasks involved in publishing discovering and accessing a virtual dataset which include the following a data provider publishes to an esg metadata catalog a definition of a new dataset indicating its name associated metadata p and either its constituent component files if a physical dataset or its definition if a virtual dataset as here f a client user or program issues a query to the metadata catalog for datasets of interest the names of any datasets matching the user query are returned in the figure this query happens to match the properties associated with the virtual dataset and so the name is returned a client requests that dataset by requesting the virtual data service shown as data service in the figure for the dataset or any subset of the virtual data service retrieves the recipe for from the metadata catalog the virtual data service instantiates or a specified subset of by fetching appropriate data from 
and and assembling those pieces to create a new dataset finally the virtual data service returns the requested data to the user step not shown the virtual data service may also publish the location of the new physical dataset into the metadata catalog so as bernholdt et al the earth system grid supporting the next generation of climate modeling research to accelerate the processing of subsequent requests for the dataset step also not shown vi related work esg has worked closely with several existing efforts these include the doe science grid project whose work on authentication infrastructure and related security issues has been particularly useful to esg esg uses the doe science grid certification authority as the basis for its authentication esg has also worked closely with unidata on several initiatives including joint development of the ncml specification that provides a standard xml encoding for the content of netcdf files ncml has been extended to support coordinate system information logic and gis interoperability ncml is still being developed and is being established as a community standard esg also makes use of the thredds specification developed within unidata the esg collaborative includes members from the globus alliance esg has made extensive use of globus toolkit components for example esg acted as an early adopter and stringent beta tester of the gridftp code identifying a number of subtle errors in the implementation esg has collaborated with globus on compelling technology demonstrations involving data movement at close to over wide area networks esg also made use of the gsi the rls remote job submission capabilities monitoring and information systems and the cas provided by the globus toolkit esg also includes members of the srm project at lbnl srms provide dynamic storage management and support for multifile requests the lbnl team adapted the srm developed for hpss to the mss at ncar as part of esg the srm team also developed datamover providing the ability to move entire directories robustly with recovery from failures between diverse mass storage systems at ncar lbnl and ornl as well as disk systems at ncar and llnl the datamover has been used repeatedly by members of the esg team to move thousands of files robustly between mass storage systems at these sites the esmf is a project that has engaged staff at ncar nasa and doe laboratories to develop a standard framework for the construction of modular climate models in a next phase the scope of esmf and esg both expand to embrace environments for earth system researchers we see strong opportunities for mutually beneficial interaction between esg and esmf as models are extended both to access remote data and to publish data a number of scientific projects face challenges similar to those being explored in esg the grid physics network griphyn project uses grid technologies to support physicists with emphasis on support for virtual data data management and workflow management the particle physics data grid also employs grid technologies to support physics research the lead project is developing scientific and grid infrastructure to support mesoscale meteorological research the geon project supports geoscientists with an emphasis on data modeling indexing semantic mediation and visualization esg technology is based on globus toolkit and srm middleware other scientific grid projects use the storage resource broker srb middleware unlike the layered approach taken by the globus toolkit srb provides tightly integrated 
functionality that includes extensive data management capabilities including support for metadata organizing data in collections and containers and maintaining consistency among replicas vii conclusion the increasingly complex datasets being produced by global climate simulations are fast becoming too massive for current storage manipulation archiving navigation and retrieval capabilities the goal of the esg is to provide data management and manipulation infrastructure in a virtual collaborative environment that overcomes these challenges by linking distributed centers users models and data an important role of the esg project is to provide a critical mass of data of interest to the climate community including ccsm pcm and ipcc simulation model output over the past three years the esg project has made considerable progress toward the goal of a community esg we have developed a suite of technologies including standard metadata schema tools for metadata extraction metadata services for data discovery security technologies that provide authentication registration and some authorization capabilities a version of for data access robust multiple file transport using srm and datamover a monitoring infrastructure and the development of a web portal for interactive user access to climate data holdings to date we have catalogued close to tb of climate data all with rich scientific metadata in the next phase of esg we will increase the utility of esg to the climate modeling community by expanding both the data holdings that we provide and the capabilities of the system over the next two years the esg will provide enhanced performance and reliability as well as richer authorization metadata data transport and aggregation capabilities and support for virtual data references earth system grid esg online available http climate system model online available http chervenak et remote access to climate simulation data a challenge problem for data grid technologies parallel vol no pp intergovernmental panel on climate change ipcc online available http i foster et chimera a virtual data system for representing querying and automating data derivation presented at the int conf scientific and statistical database management edinburgh proceedings of the ieee vol no march sim gu shoshani and natarajan datamover robust replication over networks presented at the int conf scientific and statistical database management ssdbm santorini island greece climate data analysis tools cdat online available http ncar command language ncl online available http i foster et the earth system grid ii turning climate datasets into community resources presented at the annu meeting amer meteorological orlando fl caron et the netcdf markup language ncml unidata boulder co atkinson et data access integration and management in the grid blueprint for a new computing infrastructure san mateo ca morgan kaufmann chervenak et giggle a framework for constructing scalable replica location services presented at the sc conf high performance networking and computing baltimore md chervenak palavalli bharathi kesselman and schwartzkopf performance and scalability of a replica location service presented at the high performance distributed computing honolulu hi thematic realtime environmental data distributed services thredds online available http butler et a authentication infrastructure ieee computer vol no pp i foster et a security architecture for computational grids in proc acm conf computer and communications security pp pearlman et a community 
authorization service for group collaboration in proc ieee int workshop policies for distributed systems and networks pp allcock et al gridftp protocol extension to ftp for the grid global grid forum online available http opendap online available http allcock et data management and transfer in computational grid environments parallel vol no pp storage resource management project online available http http czajkowski et grid information services for distributed resource sharing in proc ieee int symp high performance distributed computing pp pouchard et an ontology for scientific information in a grid environment the earth system grid presented at the symp cluster computing and the grid ccgrid tokyo japan pouchard et data discovery and semantic web technologies for the earth sciences int j dig libraries to be published blackmon et the community climate system model bull amer meteorol vol no pp parallel climate model pcm online available http earth system modeling framework online available http linked environments for atmospheric discovery lead online available http geon the geosciences network online available welch et security for grid services in proc ieee int symp high performance distributed computing pp british atmospheric data center badc online available http sompel and lagoze the open archives initiative protocol for metadata harvesting the open archives initiative online available http doe science grid project online available http fulker bates and jacobs unidata a virtual community sharing resources via technological infrastructure bull amer meteorol vol pp the globus toolkit online available http shoshani sim and gu storage resource managers essential components for the grid in grid resource management state of the art and future trends nabrzyski schopf and weglarz eds new york kluwer the griphyn project toward petascale virtual data grids avery and i foster online available http particle physics data grid ppdg project online available http the storage resource broker online available http baru et the sdsc storage resource broker presented at the annu ibm centers for advanced studies toronto on canada authors photographs and biographies not available at the time of publication bernholdt et al the earth system grid supporting the next generation of climate modeling research
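To make the publish/discover/access workflow for virtual datasets described above concrete (a provider publishes a dataset definition consisting of metadata plus either constituent component files or a recipe; a client discovers the dataset by a metadata query; the virtual data service retrieves the recipe, instantiates the requested data from its sources, returns it, and may register the materialized copy for later requests), the following sketch shows one possible shape of that interaction. It is a hypothetical illustration only: every class name, method name, and field below is invented for exposition and is not ESG's actual catalog or data-service interface.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class DatasetDefinition:
    # A published dataset: searchable metadata plus either its component
    # files (physical dataset) or a recipe and its source inputs (virtual).
    name: str
    metadata: Dict[str, str]
    component_files: Optional[List[str]] = None
    recipe: Optional[Callable[[List[str]], str]] = None
    sources: List[str] = field(default_factory=list)

class MetadataCatalog:
    # Toy stand-in for the metadata catalog.
    def __init__(self) -> None:
        self._entries: Dict[str, DatasetDefinition] = {}

    def publish(self, d: DatasetDefinition) -> None:
        # Provider publishes a dataset definition.
        self._entries[d.name] = d

    def query(self, **props: str) -> List[str]:
        # Client discovers datasets whose metadata matches the given properties.
        return [n for n, d in self._entries.items()
                if all(d.metadata.get(k) == v for k, v in props.items())]

    def lookup(self, name: str) -> DatasetDefinition:
        return self._entries[name]

class VirtualDataService:
    # Toy stand-in for the virtual data service.
    def __init__(self, catalog: MetadataCatalog, fetch: Callable[[str], str]) -> None:
        self.catalog = catalog
        self.fetch = fetch  # pulls one component from its source location

    def request(self, name: str) -> str:
        d = self.catalog.lookup(name)                 # retrieve definition/recipe
        if d.recipe is None:                          # physical dataset: just assemble
            return "".join(self.fetch(f) for f in d.component_files)
        data = d.recipe([self.fetch(s) for s in d.sources])   # instantiate virtual data
        # Optionally register the materialized copy to accelerate later requests.
        self.catalog.publish(DatasetDefinition(
            name=f"{name}.materialized",
            metadata={**d.metadata, "derived_from": name},
            component_files=[f"{name}.nc"]))
        return data

# Usage: publish a virtual dataset assembled from two sources, discover it by
# a metadata property, then request it.
catalog = MetadataCatalog()
catalog.publish(DatasetDefinition(
    name="D", metadata={"experiment": "control-run"},
    recipe=lambda pieces: "\n".join(pieces), sources=["F1", "F2"]))
service = VirtualDataService(catalog, fetch=lambda src: f"<contents of {src}>")
print(catalog.query(experiment="control-run"))   # -> ['D']
print(service.request("D"))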
5
high performance codes for flash memories ahmed hareedy homa esfahanizadeh and lara dolecek mar electrical eng department university of california los angeles los angeles ca usa ahareedy hesfahanizadeh and dolecek dense flash memory devices operate at very low error rates which require powerful error correcting coding ecc techniques an emerging class of ecc techniques that has broad applications is the class of spatiallycoupled sc codes where a block code is partitioned into components that are then rewired multiple times to construct an sc code here our focus is on sc codes with the underlying structure in this paper we present a approach for the design of high performance sc nbsc codes optimized for practical flash channels we aim at minimizing the number of detrimental general absorbing sets of type two gasts in the graph of the designed code in the first stage we deploy a novel partitioning mechanism called the optimal overlap partitioning which acts on the protograph of the sc code to produce optimal partitioning corresponding to the smallest number of detrimental objects in the second stage we apply a new circulant power optimizer to further reduce the number of detrimental gasts in the third stage we use the weight consistency matrix framework to manipulate edge weights to eliminate as many as possible of the gasts that remain in the code after the first two stages that operate on the unlabeled graph of the code simulation results reveal that nbsc codes designed using our approach outperform codes when used over flash channels i ntroduction because of their excellent performance codes are among the most attractive error correction techniques deployed in modern storage devices nb codes offer superior performance over binary codes and are thus well suited for modern flash memories the nature of the detrimental objects that dominate the error floor region of nonbinary codes depends on the underlying channel of the device unlike in the case of canonical channels in a recent research it was revealed that general absorbing sets of type two gasts are the objects that dominate the error floor of nb codes over practical inherently asymmetric flash channels we analyzed gasts and proposed a combinatorial framework called the weight consistency matrix wcm framework that removes gasts from the tanner graph of nb codes and results in at least order of magnitude performance gain over asymmetric flash channels a particular class of codes that has received recent attention is the class of sc codes sc codes are constructed via partitioning an underlying ldpc code into components and then coupling them together multiple times recent results on sc codes include asymptotic analysis and finite length designs sc codes designed using cutting vector cv partitioning and optimized for magnetic recording applications were introduced in the idea of partitioning the underlying block code by minimizing the overlap of its rows of circulants so called minimum overlap mo was recently introduced and applied to awgn channels in in this paper we present the first study of codes designed for practical flash channels the underlying block codes we focus on are cb codes our combinatorial approach to design codes comprises three stages the first two stages aim at optimizing the unlabeled graph of the sc code the graph of the sc code with all edge weights set to while the third stage aims at optimizing the edge weights the three consecutive stages are we operate on the binary protograph of the sc code and express the number 
of subgraphs we want to minimize in terms of the overlap parameters which characterize the partitioning of the block code then we solve this discrete optimization problem to determine the optimal overlap parameters we call this new partitioning technique the optimal overlap oo partitioning given the optimal partitioning we then apply a new heuristic program to optimize the circulant powers of the underlying block code to further reduce the number of problematic subgraphs in the unlabeled graph of the sc code we call this heuristic program the circulant power optimizer cpo having optimized the underlying topology using the first two stages in the last stage we focus on the edge weight processing in order to remove as many as possible of the remaining detrimental gasts in the code to achieve this goal we use the wcm framework we also enumerate the minimum cardinality sets of edge weight changes that are candidates for the gast removal the three stages are necessary for the code design procedure we demonstrate the advantages of our code design approach over approaches that use cv partitioning and mo partitioning in the context of column weight sc codes the rest of the paper is organized as follows in section ii we present some preliminaries in section iii we detail the theory of the oo partitioning in the context of column weight sc codes the cpo is then described in section iv next in section v we propose a further discussion about the wcm framework our code design steps and simulation results are presented in section vi finally the paper is concluded in section vii ii p reliminaries in this section we review the construction of codes as well as the cv and mo partitioning techniques furthermore we recall the definition of gasts and the key idea of the wcm framework throughout this paper each column row in a paritycheck matrix corresponds to a variable node vn check node cn in the equivalent graph of the matrix moreover each entry in a matrix corresponds to an edge in the equivalent graph of the matrix let h be the matrix of the underlying regular cb code that has column weight vn degree and row weight cn degree the binary image of h which is hb consists of circulants each circulant is of the form fi j where i i is the row group index j j is the column group index and is the p p identity matrix cyclically shifted one unit to the left a circulant permutation matrix circulant powers are fi j and ab codes are cb codes with fi j ij p and p prime in this paper the underlying block codes we use to design sc codes are cb codes with no zero circulants the code is constructed as follows first hb is partitioned into m disjoint components of the same size as hb hbm where m is defined as the memory of the sc code each component hby y m contains some of the circulants of hb and zero circulants elsewhere pm b b such that h hy in this work we focus on m hb second and are coupled together l times see and to construct the binary image of the matrix of the code hbsc which is of size a replica is any submatrix of bt t hbsc that contains hbt and zero circulants elsewhere see replicas are denoted by rr r overlap parameters for partitioning as well as circulant powers can be selected to enhance the properties of hbsc third the matrix h is generated by replacing each in hb with a value gf q we focus on q fourth the matrix of the code hsc is constructed by applying the partitioning and coupling scheme described above to the binary protograph matrix bpm of a general binary cb matrix is the matrix resulting from 
replacing each p p circulant with and each p p zero circulant with bp the bpms of hb and are hbp hbp and b respectively and they are all of size the bpm of hsc bp is hbp sc and it is of size l this hsc also has l replicas rr r l but with circulants a technique for partitioning hb to construct hbsc is the cv partitioning in this technique a vector of ascending integers is used to partition hb into and the matrix has all the circulants in hb with the indices i j j and zero circulants elsewhere and the matrix is hb another recently introduced partitioning technique is the mo partitioning in which hb is partitioned into and such that the overlap of each pair of rows of circulants in both and is minimized moreover the mo partitioning assumes balanced partitioning between and and also balanced distribution of circulants among the rows in each of them the mo partitioning significantly outperforms the cv partitioning in this paper we demonstrate that the new technique outperforms the mo technique gasts are the objects that dominate the error floor of nb codes on asymmetric channels practical flash channels we recall the definitions of gasts and unlabeled gasts definition cf consider a subgraph induced by a subset v of vns in the tanner graph of an nb code set all the vns in v to values gf q and set all other vns to the set v is said to be an a b general absorbing set of type two gast over gf q if the size of v is a the number of unsatisfied cns connected to v is b the number of and cns connected to v is and all the unsatisfied cns connected to v if any have either degree or degree and each vn in v is connected to strictly more satisfied than unsatisfied neighboring cns for some set of given vn values definition cf let v be a subset of vns in the unlabeled tanner graph of an nb code let o t and h be the set of and cns connected to this graphical configuration is an a unlabeled gast ugast if it satisfies the following two conditions a and each vn in v is connected to strictly more neighbors in t h than in o examples on gasts and ugasts are shown in fig the wcm framework removes a gast by careful processing of its edge weights the key idea of this framework is to represent the gast in terms of a set of submatrices of the gast adjacency matrix these submatrices are the wcms and they have the property that once the edge weights of the gast are processed to force the null spaces of the wcms to have a particular property the gast is completely removed from the tanner graph of the nb code see and iii oo partitioning t heoretical a nalysis in order to simultaneously reduce the number of multiple ugasts we determine a common substructure in them then minimize the number of instances of this substructure in the unlabeled tanner graph of the sc code the graph of hbsc we propose our new partitioning scheme in the context of sc codes with the scheme can be extended to higher column weights for the overwhelming majority of dominant gasts we have encountered in nb codes with simulated over flash channels the ugast occurs as a common substructure most frequently see fig thus we focus on the removal of ugasts a b fig a two dominant gasts for nb codes with over flash a and a gasts appropriate edge weights w s are assumed b a ugast a cycle of length in the graph of hbp sc the binary protograph of the sc code which is defined by the entries in hbp sc results in p cycles of length in the graph of hbsc if and only if z z x x mod p where fh is the power of the circulant indexed by h in hbsc otherwise this cycle results 
in cycle s of length in the graph of hbsc where is an integer that divides p it is clear from fig b that the ugast is a cycle of length thus and motivated by the above fact our oo partitioning aims at deriving the overlap parameters of hbp that result in the minimum number of cycles of length in the graph of hbp sc which is the binary protograph of the sc code then we run the cpo to further reduce the number of ugasts in the graph of hbsc which is the unlabeled graph of the sc code by breaking the condition in with z for cycles of length for as many cycles in the optimized graph of hbp sc as possible the goal here is to minimize the number of cycles of length in the binary protograph of the sc code via the oo partitioning of hbp which is also the oo partitioning of hb to achieve this goal we establish a discrete optimization problem by expressing the number of cycles of length in the graph of hbp sc as a function of the overlap parameters and standard code parameters then solve for the optimal overlap parameters we start off with the following lemma lemma in the tanner graph of an sc code with parameters p m and l which is the binary protograph the number of cycles of length is given by f lfs l fd where fs is the number of cycles of length that have their vns spanning only one particular replica say and fd is the number of cycles of length that have their vns spanning two particular consecutive replicas say and proof from lemma the maximum number of consecutive replicas spanned by the vns of a cycle of length in an sc code with m is thus the vns of any cycle of length span either one replica or two consecutive replicas since there exist l replicas and l distinct pairs of consecutive replicas and because of the repetitive nature of the sc code follows let the overlapping set of x rows of a binary matrix be the set of positions in which all the x rows have s simultaneously overlap now define the overlap parameters as follows ti i is the number of s in row bp bp i of hbp from the definitions of and bp ti and is the size of the overlapping set of rows and of hbp and is the size of the overlapping set of rows and of hbp from the definitions is the size of the overlapping set of bp rows and of hbp moreover let x max x we define the following functions to be used in theorem a b c and d theorem uses combinatorics to give the exact expressions for fs and fd in terms of the above overlap parameters theorem in the tanner graph of an sc code with parameters p m and l which is the binary protograph fs and fd are computed as follows fs and fd where and are a a b b c c d and d proof the term fs represents the number of cycles of length that have their vns spanning h only one ireplica the nont zero submatrix of a replica is hbpt hbpt there are four possible cases of arrangement for the cns of a cycle of length that has its vns spanning only one replica these cases are listed below all the three cns are within hbp the number of cycles of length that have all their cns inside hbp is denoted by all the three cns are within hbp the number of cycles of length that have all their cns inside hbp is denoted by bp two cns are within hbp and one cn is within the number of cycles of length in this case is denoted by bp two cns are within hbp and one cn is within the number of cycles of length in this case is denoted by these four different cases of arrangement are illustrated in the upper panel of fig next we find the number of cycles of length in each of the four cases in terms of the overlap parameters and 
standard code parameters particularly in case a cycle of length is comprised of an overlap between rows and an overlap between rows and and an overlap between rows and of hbp note that each overlap must have a distinct associated column index position to result in a valid cycle of length the overlap between rows and can be selected among possible choices among these overlaps there exist overlaps that have the same associated column indices as some overlaps between other pairs of rows thus these overlaps need to be considered separately to avoid incorrect counting the same argument applies when we choose the overlap between the other two pairs of rows as a result the number of different ways to choose these overlaps and form a cycle of length is a and a is defined in in case the number of cycles of length is computed exactly as in case but using the overlap parameters of the matrix hbp thus a in case one overlap solely belongs to hbp and the two bp other overlaps cross hbp to h see fig for the overlap we have three options to choose two rows out of three in hbp for example suppose that the overlap is chosen between rows and of hbp then the cross overlaps will be between bp row of hbp and row of and also between row bp bp bp of and row of note that since hbp and are bp the result of partitioning h there are no overlaps between bp row i of hbp and row i of i based on which option of the three is chosen the number of cycles of length bp is computed using the overlap parameters of hbp and the total number of cycles of length in this case is b and b is defined in in case the number of cycles of length is computed as in case the only difference is that in case one overlap solely belongs to hbp and the two other bp overlaps cross hbp to h see fig consequently b on the other hand the term fd represents the number of cycles of length that have their vns spanning two consecutive replicas the submatrix of two consecutive replicas is bp hbp hbp hbp there are four possible cases of arrangement for the cns and vns of a cycle of length that has its vns spanning two consecutive replicas these cases are listed below bp all the three cns are within hbp two vns belong to the first replica and one vn belongs to the second replica the number of cycles of length in this case is denoted by bp all the three cns are within hbp one vn belongs to the first replica and two vns belong to the second replica the number of cycles of length in this case is denoted by one cn is within hbp and two cns are within bp hbp besides two vns belong to the first replica and one vn belongs to the second replica the number of cycles of length in this case is denoted by bp two cns are within hbp and one cn is within bp besides one vn belongs to the first replica and two vns belong to the second replica the number of cycles of length in this case is denoted by these four different cases of arrangement are illustrated in the lower panel of fig next we find the number of cycles of length in each of the four cases in terms of the overlap parameters and standard code parameters particularly in case two overlaps belong to hbp in the first replica and one overlap belongs to hbp in the second replica see fig for the overlap in hbp we have three options to choose two rows out of three for each option the two overlaps inside hbp must have distinct associated column indices positions to result in a valid cycle of length the overlap inside hbp can not have the same column index as any of the other two overlaps thus the number of different ways to 
choose these overlaps and form a cycle of length is given by c and c is defined in in case the number of cycles of length is computed as in case the only difference is that in case one overlap belongs to hbp in the first replica and two overlaps belong to hbp in the second replica see fig thus c in case one overlap solely belongs to hbp in the second bp replica and the two other overlaps cross hbp to in bp the first replica see fig for the overlap in of the second replica we have three options to choose two rows out of three the two overlaps that belong to the first replica must have distinct corresponding column indices positions consequently the total number of cycles of length in this case is given by d and d is defined in in case the number of cycles of length is computed as in case the only difference is that in case one overlap solely belongs to hbp in the first replica and the bp two other overlaps cross hbp to in the second replica see fig thus d note that the operator is used to avoid counting options that are not valid where is the number of distinct solutions optimal vectors fig different cases for the cycle of length in red in a sc binary protograph the upper panel lower panel is for the case of the vns spanning and the main idea of theorem is that both fs and fd can be computed by decomposing each of them into four more tractable terms each term represents a distinct case for the existence of a cycle of length in the sc binary protograph and the union of these cases covers all the existence possibilities each case is characterized by the locations of the cns bp and vns comprising the cycle with respect to hbp and of the replica for fs or the replicas and for fd fig illustrates these eight cases along with the terms in fs and fd that corresponds to each case remark consider the special situation of rows and in hbp do not have a overlap here reduces to which is simply the number of ways to select one position from the overlapping set of each pair now define f to be the minimum number of cycles of length in the graph of hbp sc the binary protograph thus our discrete optimization problem is formulated as follows min lemma the total number of oo partitioning choices for an sc code with parameters p m and l which is the binary protograph given an optimal vector is given by n t the constraints of our optimization problem are the conditions under which the overlap parameters are valid thus these constraints on the seven parameters in are and the last constraint in guarantees balanced partitioning bp between hbp and and it is needed to prevent the case that a group of elements a group of s in either hbp or hbp are involved in significantly more cycles than the remaining elements the remaining s the solution of our optimization problem is not unique however since all the solutions result in the same number of oo partitioning choices and the same f we work with one of these solutions and call it an optimal vector lemma gives the total number of oo partitioning choices proof the goal is to find the number of partitioning choices that achieve a general set of overlap parameters not necessarily optimal in particular we need to find the number of different partitioning choices of an sc code with p m and l such that bp the number of s in row i i of is ti the size of the overlapping set of rows and and of hbp is the size of the overlapping set of rows and overlap of hbp is we factorize the number of partitioning choices n g into three more tractable factors choose positions in which row of hbp 
has s out of positions the number of choices is choose positions in which row of hbp has s out of positions among these positions there exist positions in which row simultaneously has s the number of choices is choose positions in which row of hbp has s out of positions among these positions there exist positions in which rows and simultaneously have s positions in which only rows simultaneously has s and positions in which only rows simultaneously has s the number of choices is g t t in conclusion the number of partitioning choices that achieve a general set of overlap parameters is the solution of the optimization problem in is not unique and there are distinct solutions optimal vectors that all achieve f because of the symmetry of these optimal vectors each of them corresponds to the same partitioning choices the factors and are obtianed by replacing each t with from an optimal vector in the equations of and respectively thus the total number of oo partitioning choices given an optimal vector is n which proves lemma remark the first seven constraints of the optimization problem which are stated in can be easily verified from in lemma by replacing each with iv c irculant p ower o ptimization bp after picking an optimal vector t to partition h and design hbp sc we run our heuristic cpo to further reduce the number of ugasts in the graph of hbsc which has the steps of the cpo are initially assign circulant powers as in ab codes to all the s in hbp results in no cycles of length in hb and hbsc bp design hbp and such that hbp using h contains only two replicas and circulant powers bp of the s in hbp are copied from the s in h bp locate all the cycles of lengths and in specify the cycles of length in hbp that have satisfied and call them active cycles let fda be the number of active cycles having their vns spanning only or only both and compute the number of ugasts in hbsc using the following formula fsc lfsa l fda hbp count the number of active cycles each in is involved in give weight to the number of active cycles having their vns spanning only or only both and map the counts from step to the s in hbp and sort these s in a list descendingly according to the counts pick a subset of s from the top of this list and change the circulant powers associated with them using these interim new powers do steps and if fsc is reduced while maintaining no cycles of length in hbsc update fsc and the circulant powers then go to step otherwise return to step iterate until the target fsc is achieved note that step is performed heuristically a b hbp hb fig a the oo partitioning of or of the sc code in example bp entries with circles squares are assigned to hbp b the b circulant power arrangement for the circulants in h example suppose we want to design an sc code with p m and l using the oo partitioning and the cpo solving the optimization problem in yields an optimal vector which gives f cycles of length in the graph of hbp sc fig a shows how the partitioning is applied on hbp or hb next applying the cpo results in only ugasts in the unlabeled graph of the sc code which is the graph of hbsc fig b shows the final circulant power arrangement for all circulants in hb the technique for designing hbsc is based on solving a set of equations then applying a heuristic program on two replicas to optimize the circulant powers moreover the oo partitioning has orders of magnitude fewer number of partitioning choices compared to the mo partitioning see lemma we can even use any choice of the oo partitioning choices 
without having to compare their performances explicitly all these reasons demonstrate that the technique is not only better in performance see section vi for details but also much faster than the mo technique wcm f ramework o n t he r emoval of gast s after applying the technique to optimize the unlabeled graph of the sc code we optimize the edge weights in particular we use the wcm framework to remove gasts from the labeled graph of the code through edge weight processing there are multiple parameters that control the difficulty of the removal of a certain gast from the tanner graph of a code the number of distinct wcms associated with the ugast and the minimum number of edge weight changes needed to remove the gast denoted by egast min are among these parameters a third parameter is the number of sets of edge weight changes that have cardinality egast min and are candidates for the gast removal process the first two parameters are studied in we discuss the third parameter in this section as the number of candidate sets of cardinality egast min increases the difficulty of the gast removal decreases in this section unless otherwise stated when we say nodes are connected we mean they are directly connected or they are neighbors the same applies conceptually when we say an edge is connected to a node or vice versa remark a gast is removed by performing egast min edge weight changes for edges connected to cns only see also whether a candidate set of edge weight changes indeed results in the gast removal or not is determined by checking the null spaces of the wcms to minimize the number of edge weight changes performed to remove a gast we need to work on the vns that are connected to the maximum number of unsatisfied cns and egast min g bvm see where g bvm is the maximum number of existing unsatisfied cns per vn in the gast define emu as the topological upper bound on egast min and vm as the maximum number of existing cns per vn in the gast thus from emu g vm egast min note that follows from vm bvm in this section we study gasts with b which means the upper bound is achieved egast min emu moreover for simplicity we assume that all the vns that are connected to vm cns each are only connected to cns of degree theorem consider an a b gast with b in an nb code defined over gf q that has column weight and no cycles of length the number of sets of edge weight changes with cardinality egast min or emu that are candidates for the gast removal process is given as follows if vm g vm smu avm q emu emu where avm is the number of vns connected to vm cns each if vm g smu avm nco q where nco is the number of cns connecting any two of these avm vns proof whether vm g or not to minimize the number of edge weight changes we need to target the vns that are connected to the maximum number of unsatisfied cns by definition and since b the number of vns of this type is avm and each is connected to vm unsatisfied cns in the case of vm g which is the general case for any vm vn of the avm pertinent vns there are different emu ways of selecting emu satisfied cns connected to this vn each of these cns has edges we can change their weights not simultaneously moreover each edge can have q different new weights excluding the and the current weight thus the number of candidate sets is vm emu q emu smu avm emu which is a rephrased version of in the case of vm g from emu the gast is removed by a single edge weight change moreover vm g substituting and emu into gives that the number of candidate sets follows the inequality q 
smu avm in the equality is achieved only if there are no shared cns between the vns that have g unsatisfied cns otherwise nco has to be subtracted from which proves avm note that the subtraction of nco is not needed if vm the reason is that if vm g or emu multiple edges connected to the same cn can not exist in the same candidate set additionally since our codes have girth at least there does not exist more than one cn connecting the same two vns in a gast a b fig a a gast b an gast appropriate edge weights are assumed example consider the gast over gf q in fig a for this gast g vm avm and from egast min emu moreover nco only one shared cn between two vns having two unsatisfied cns each thus from the number of candidate sets of cardinality is smu q q contrarily for the gast over gf q in fig b g vm avm and from egast min emu thus from the general relation the number of candidate sets of cardinality is smu q q vi c ode d esign s teps and s imulation r esults in this section we present our code design approach for flash memories and the experimental results demonstrating its effectiveness the steps of our approach are specify the code parameters p and l with m solve the optimization problem in for an optimal vector of overlap parameters using hbp and apply the circulant power optimizer to reach the powers of the circulants in hb and hbsc now the binary image hbsc is designed assign the edge weights in hb to generate next partition h using and couple the components to construct hsc using initial simulations over a practical flash channel and combinatorial techniques determine the set g of gasts to be removed from the graph of hsc use the wcm framework see algorithm to remove as many as possible of the gasts in in this section the cv and mo results proposed are the best that can be achieved by these two techniques table i n umber of ugast s in sc codes with m and l designed using different techniques design technique uncoupled with ab sc cv with ab sc mo with ab sc best with ab sc with cb number of ugasts p p p we start our experimental results with a table comparing the number of ugasts in sc codes designed using various techniques all the sc codes have m and l ab codes are used as the underlying block codes in all the sc code design techniques we are comparing the proposed technique against table i demonstrates reductions in the number of ugasts achieved by the technique over the mo technique the cv technique that ranges between and and more intriguingly the table shows that the technique provides lower number of ugasts than the best that can be achieved if ab underlying block codes are used note that this best is reached using exhaustive search and that is the reason why we could not provide its counts for p next we provide simulation results verifying the performance gains achieved by our code design approach for flash memories the flash channel we use is a practical flash channel which is the mixture nlm flash channel here we use reads and the sector size is bytes we define rber as the raw bit error rate and uber as the uncorrectable bit error rate one formulation of uber which is recommended by industry is the frame error rate fer divided by the sector size in bits simulations were done in software on a high speed cluster of machines all the codes we simulated are defined over gf and have p m and l block length bits and rate code is uncoupled ab code is designed using the cv technique code is designed using the oo technique with no cpo applied the underlying block codes of codes and are 
ab codes code is designed using the technique the edge weights of codes and are selected randomly code code is the result of applying the wcm framework to code code to optimize the edge weights code code and code has and ugasts additionally code code and code has and ugasts the ugast is the second most common substructure in the dominant gasts of nb codes with simulated over flash channels uber uncoupled w ab rand weights uncoupled w ab opt weights sc cv w ab rand weights sc oo w ab rand weights sc w cb rand weights sc w cb opt weights rber fig simulation results over the nlm flash channel for sc codes with m and l designed using different techniques fig demonstrates the performance gains achieved by each stage of our code design approach code outperforms code by about of an order of magnitude which is the gain of the first stage oo code outperforms code by about of an order of magnitude which is the gain of the second stage cpo code outperforms code by about orders of magnitude which is the gain of the third stage wcm moreover the figure shows that the code designed using our approach which is code achieves about more than rber gain compared to code code over a practical flash channel an intriguing observation we have encountered while performing these simulations is the change in the error floor properties when we go from code to code in particular while the gast was a dominant object in the case of code we have encountered very few gasts in the error profile of code vii c onclusion we proposed a combinatorial approach for the design of codes optimized for practical flash channels the oocpo technique efficiently optimizes the underlying topology of the code then the wcm framework optimizes the edge weights codes designed using our approach have reduced number of detrimental gasts thus outperforming existing codes over flash channels the proposed approach can help increase the reliability of ultra dense storage devices emerging flash devices acknowledgement the research was supported in part by a grant from astcidema and by nsf r eferences wang vakilinia chen courtade dong zhang shankar and wesel enhanced precision through multiple reads for ldpc decoding in flash memories ieee sel areas vol no pp may maeda and kaneko error control coding for multilevel cell flash memories using nonbinary codes in proc ieee dfts chicago il usa pp hareedy lanka and dolecek a general ldpc code optimization framework suitable for dense flash memory and magnetic storage ieee sel areas vol no pp parnell papandreou mittelholzer and pozidis modelling of the threshold voltage distributions of nand flash memory in proc ieee globecom austin tx usa pp hareedy lanka guo and dolecek a combinatorial methodology for optimizing codes theoretical analysis and applications in data storage jun online available http felstrom and zigangirov periodic convolutional codes with matrix ieee trans inf theory vol no pp kudekar richardson and urbanke spatially coupled ensembles universally achieve capacity under belief propagation ieee trans inf theory vol no pp pusane smarandache vontobel and costello deriving good ldpc convolutional codes from ldpc block codes ieee trans inf theory vol no pp mitchell dolecek and costello absorbing set characterization of spatially coupled ldpc codes in proc ieee isit honolulu hi jun pp iyengar papaleo siegel wolf and corazza windowed decoding of ldpc convolutional codes over erasure channels ieee trans inf theory vol no pp apr esfahanizadeh hareedy and dolecek codes optimized for magnetic recording 
applications ieee trans vol no pp esfahanizadeh hareedy and dolecek a novel combinatorial framework to construct codes minimum overlap partitioning in proc ieee isit aachen germany jun pp bazarsky presman and litsyn design of quasicyclic ldpc codes by ace optimization in proc ieee itw sevilla spain pp fossorier codes from circulant permutation matrices ieee trans inf theory vol no pp
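As a companion to the cycle counting that drives the OO partitioning and the CPO, the following sketch shows the standard check for when a length-6 cycle of the protograph lifts to length-6 cycles in the circulant-based graph: the cycle survives (and contributes p cycles of length 6, where p is the circulant size) exactly when the alternating sum of the six circulant powers around it vanishes modulo p; otherwise it splits into longer cycles. This is only a brute-force illustration under that well-known quasi-cyclic lifting condition: the function name, data layout, and enumeration below are ours, and the sketch reproduces neither the paper's replica bookkeeping, nor the extra weight given to cycles spanning two consecutive replicas, nor the iterative power-update heuristic of the CPO.

from itertools import combinations, permutations

def active_6_cycles(mask, powers, p):
    """Count protograph 6-cycles that lift to length-6 cycles (circulant size p).

    mask[i][j]   -- 1 if the circulant at row group i, column group j is nonzero
    powers[i][j] -- its circulant power f_{i,j} (ignored where mask is 0)
    Returns (active protograph 6-cycles,
             resulting length-6 cycles in the lifted graph,
             per-circulant participation counts).
    """
    rows, cols = len(mask), len(mask[0])
    counts = [[0] * cols for _ in range(rows)]
    active = 0
    for i1, i2, i3 in combinations(range(rows), 3):        # three check rows
        for a, b, c in permutations(range(cols), 3):       # three VN columns
            # Each undirected 6-cycle on rows {i1,i2,i3} is visited exactly once.
            edges = [(i1, a), (i1, b), (i2, b), (i2, c), (i3, c), (i3, a)]
            if not all(mask[i][j] for i, j in edges):
                continue
            # Quasi-cyclic lifting condition: the cycle stays of length 6
            # iff the alternating power sum around it is 0 mod p.
            s = (powers[i1][a] - powers[i1][b]
                 + powers[i2][b] - powers[i2][c]
                 + powers[i3][c] - powers[i3][a])
            if s % p == 0:
                active += 1
                for i, j in edges:
                    counts[i][j] += 1
    return active, active * p, counts

# Example: an all-ones 3 x 7 protograph with array-code-like powers
# f(i, j) = i * j and circulant size p = 7.
gamma, kappa, p = 3, 7, 7
mask = [[1] * kappa for _ in range(gamma)]
powers = [[(i * j) % p for j in range(kappa)] for i in range(gamma)]
n_active, n_lifted, per_circulant = active_6_cycles(mask, powers, p)
print(n_active, n_lifted)

In a CPO-like loop, the per-circulant participation counts returned here would be the natural quantity to sort on when deciding which circulant powers to try changing next.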
7
jan characters for finite simple groups gunter malle and alexandre zalesski abstract let g be a finite group and for a prime p let s be a sylow of a character of g is called sylp if the restriction of to s is the character of the regular representation of if in addition vanishes at all elements of order divisible by p is said to be for every finite simple group g we determine all primes p for which g admits a character except for alternating groups in characteristic moreover we determine all primes for which g has a projective f of dimension where f is an algebraically closed field of characteristic introduction let g be a finite group and for a prime p let s be a sylow of a character of g is called sylp if u for every u s and if additionally then we say that is sylp if g whenever is divisible by p then is called and if additionally then we say that is and sylp characters for chevalley groups in defining characteristic p are studied in specifically for all simple groups of lie type in characteristic p except bn q n and dn q n the characters for the prime p have been determined in our main motivation to study this kind of characters is their connection with characters of projective indecomposable modules the study of projective indecomposable modules of dimension was initiated by malle and weigel they obtained a full classification of such modules for arbitrary finite simple groups g assuming that the character of the module has the trivial character as a constituent in this restriction was removed for simple groups of lie type with defining characteristic some parts of the proofs there were valid not only for characters of projective modules but also for or even sylp characters in this paper we complete the classification of projective indecomposable modules of dimension for simple groups the first main result is a classification of characters for simple groups with the sole exception of alternating groups for the prime p theorem let g be a finite simple group p a prime dividing and let be a character of g with respect to then one of the following holds is irreducible and the triple g p is as in proposition sylow of g are cyclic and g p is as in proposition g is of lie type in characteristic p see p and g q with q or date january mathematics subject classification key words and phrases characters of projective indecomposable modules characters the first author gratefully acknowledges financial support by sfb trr gunter malle and alexandre zalesski p and g an n in fact in many instances we even classify all sylp characters examples for case when n or are presented in corollaries and we are not aware of any further examples our second main result determines reducible projective modules of simple groups of minimal possible dimension theorem let g be a finite simple group p a prime dividing and s a sylow of then g has a reducible projective fp of dimension if and only if one of the following holds g q q q g psln q n is an odd prime q q n q g ap p g or g note that irreducible projective fp of dimension are in bijection with irreducible characters of defect of that degree listed in proposition for simple groups the paper is built up as follows after some preliminaries we recall the classification of irreducible characters in section proposition in section we classify sylp characters in the case of cyclic sylow proposition in section we treat the sporadic groups theorem the alternating groups are handled in section theorem for p odd and in section some partial results for p see theorems and the 
exceptional groups of lie type are considered in section theorem the rest of our paper deals with the classical groups of lie type we start off in section by ruling out the remaining possibilities in defining characteristic from the case of large sylow psubgroups for primes p is settled in section in section we discuss the small cases when p while the proof of our main theorems is achieved in section by treating the case when p preliminaries we start off by fixing some notation let fq be the finite field of q elements and fq an algebraic closure of fq the cardinality of a set x is denoted by the greatest common divisor of integers m n is denoted by m n if p is a prime then is the of n that is n m where m p if m n m we write for a finite group g irr g is the set of its irreducible characters and g is the set of all linear characters of g that is of degree we denote by the trivial character and by g the regular character of we write s sylp g to mean that s is a sylow of a group of order coprime to p is called a further z g denote the center and the derived subgroup of g respectively if h is a subgroup of g then cg h ng h denote the centraliser and normaliser of h in g respectively if is a character of g then we write for the restriction of to the of is the maximal integer l such that l h is a proper character of if a prime p is fixed then the lp of is the of for s sylp g for groups with cyclic sylow irreducible characters of l are studied in characters for finite simple groups respectively the inner product of characters of g is denoted by sometimes by g the character of g induced from a character of h is denoted by let p g be finite groups n a normal subgroup of p and l let f be a field and m an f then m n cm n becomes an f which is called generalised g m in if is the brauer or ordinary restriction of m to l and denoted by g for the brauer or ordinary character of l afforded character of m then we also write by m n let e ep q p p q be the minimal integer i such that q i is divisible by if p and q is odd then we set q if q and q if q the next two lemmas follow from the definitions here g is a finite group and s sylp g lemma let be a sylp character of then every linear character occurs in with multiplicity in particular if s is abelian then is multiplicity free reg proof as s this follows from the corresponding properties of lemma let g be a direct product and let be irreducible characters of respectively then the of is the product of the of and lemma let n be a of g normalised by let be a faithful character of then n is abelian and cg s z g z s proof let h n then is as h is every character is the character of a projective module lemma as the module in question is indecomposable then is induced from an irreducible character say of n thm as n and it follows that let n be the derived subgroup of n then n is normal in h and n therefore that is n lies in the kernel of since and hence is faithful we have n so n is abelian as claimed note that cg s a z s where a is a take n a above so h a so now n s and n is abelian it follows that in any representation afforded by n consists of scalar matrices as is faithful we have n g as required thus if g is a simple group then cg s z s is a necessary condition for g to have a character remark a n normalised by a sylow of g is called a in the theory of finite groups thus lemma tells us that if g admits a faithful character then every is abelian and cg s z g z s lemma let g be a finite group p a subgroup with p p u a normal of p and let l let t s be sylow of l g 
respectively let be a character g of g and reg a if m s then m in other words lp lp in particular if is sylp then so is b if is a character of g then is a character of c let k op l if is a or sylp character of g then so is the character of gunter malle and alexandre zalesski proof we can assume that s p and t reg a as m s it follows that coincides with m whence the claim b we have to show that vanishespat all elements of let m be a afforded by then cm u ux x m observe that if g p has projection to l which is not a thenp gu is not a for any u u thus for any such element g it follows that g gu by assumption whence the claim c obvious lemma let g be a direct product suppose that lp k for every sylp character of then lp k for every sylp character of proof let sylp set u and p ng u so p then l where let be a sylp g be the generalised restriction of to by lemma is a character of let sylp character of l and lp lp then lp lp as is a by assumption lp k whence the result lemma p let g where and let be a character of then i p where irr are all distinct and are characters of in addition i lp is a character of and lp lp p proof write i where irr are all distinct and the s are some characters of p reducible in general let g and let x be then gx i g x as the characters are linearly independent it follows that x for every i that is the s p in addition lp i i lp is let lp so p m then lp whence p p p i i p lp as required corollary let g and be as in lemma and si a sylow of gi i let be the irreducible constituents of and suppose that lp m for every character of then lp m p proof let be as in lemma by assumption mi where mi p p reg reg so m is a subcharacter of therefore m i i i i i m is a subcharacter of now m m as is a multiple of we have lp and the result follows proposition let g be a finite group and n g a normal subgroup such that is a cyclic let be a character of then a g for some character of n b if h n is and the conjugacy classes of h in g and in n coincide then h c if is then is proof a let irr g be a linear character that generates irr as all elements p of are vanishes on g n it follows that thus if we write j aj as characters for finite simple groups a linear combination of irreducible characters irr g then aj is constant on orbits under multiplication with it clearly suffices to show the claim for a single orbit say pf f with irr g and f minimal such that f set m ker then is irreducible as so is so g now note that f f f m n thus as it follows generates irr so m for m that vanishes on m n and hence m is induced from some irr n then g m g g as claimed b for g g define the character g of n by g x gxg x n it is well know that g is a sum of pk characters g for suitable g by assumption g h h and hence h pk h whence b c if is then g and hence pk it follows that is whence the result remark let g n p be as in proposition then is not necessarily indeed let c hci be the cyclic group of order p and let be a square root of define irr c i by c then i character of let d c the regular p d be the dihedral group of order with normal subgroup then i d one observes d and hence d however is not a that d character of corollary let g n be as in proposition and let be a character of suppose that every irreducible character of n of degree at most is then g for some character of n in particular if n does not have characters then neither has proof by proposition a g for some character of n clearly n n so by assumption every irreducible constituent of is therefore so is and the claim follows from proposition c lemma let g 
be a finite group and n g a normal subgroup of index suppose that lp m for some integer m and every character of then lp m for every character of n proof suppose the contrary let be a character of n such that lp then g the induced character is and lp lp this is a contradiction the following fact is well known lemma let g be a finite group and n g a normal subgroup of index let f be an algebraically closed field of characteristic let be a projective indecomposable f then where is a projective indecomposable f n and lp lp proof it is that induction sends projective modules to projective modules furthermore by green s indecomposability theorem thm induction from normal subgroups of index preserves indecomposability so if is an indecomposable direct summand of then is projective is projective indecomposable and so the statement lp lp also follows as n n by assumption gunter malle and alexandre zalesski irreducible characters for simple groups here we complete the list of irreducible characters of simple groups g of degree for this it suffices to extract the characters of degree from the list of irreducible characters of degree obtained in thm this list already appeared in prop where the case with p g was inadvertently omitted note that an irreducible character is if and only if it is sylp proposition let g be a simple group suppose that g has an irreducible sylp character then one of the following holds g is a simple group of lie type in characteristic p and is its steinberg character g q q even and p q or g p and g q q odd q is a for p or p and q is a g psln q q n is an odd prime n q such that q n q is a g psun q n is an odd prime n q such that q n q is a g q n q r k with r an odd prime kn is a such that q n is a g n is a prime such that is a g and p g and g and g and g and g and g and or g and the problem of determining the minimal degree of irreducible characters of looks much more complicated remark let us point out the following cases not explicitly mentioned in proposition cyclic sylow in this section we determine the reducible characters for simple groups with cyclic sylow proposition let g be a finite group with a cyclic ti sylow s and assume that ng s is abelian then lp for all irr g proof let n ng s by assumption is abelian of order prime to p so it has irreducible characters of degree hence each of the corresponding pims of n has dimension since the brauer tree for any of n is a star all pims are uniserial ch vii cor but then by ch i thm any indecomposable f n where characters for finite simple groups f is a sufficiently large field of characteristic p is a quotient of a pim so has dimension strictly smaller than if it is not projective now let irr g if is of zero is a multiple of s and the claim follows else lies in a block of full defect and there exists an indecomposable f x with lift ch i thm then n y p where p is projective and hence of dimension divisible by and y is the green correspondent of x an indecomposable nonprojective f n ch vii lem thus dim y by what we said before so lp and the claim follows lemma let g be a simple group let p be a prime such that a sylow psubgroup of g is cyclic let denote the minimal degree of any irreducible character of then except in the case where g p p mod and p proof the values of g for every simple group g are either known explicitly or there is a good lower bound for the sporadic simple groups one can inspect for the alternating groups an we have an n for n and for simple groups g of lie type the values g are listed in the lemma follows by 
comparison of these data with proposition let p be a prime and let g be a simple group with a cyclic sylow let be a sylp character of then one of the following holds is irreducible of degree is irreducible and or g p p mod and where are distinct irreducible characters of degree p proof suppose that is reducible the result for g p easily follows by computation with the character table of this group suppose g p let be an irreducible constituent of by lemma k where k therefore is a constituent of by lemma k and proposition let p be a prime and let g be a simple group with a cyclic sylow then g has a reducible sylp character if and only if one of the following holds g q q even q g p p g psln q n is an odd prime q q n q g psun q n is an odd prime q q n q g ap p g or g furthermore in each case cg s s and is in addition is an irreducible character of g unless possibly when holds when may be the sum of two irreducible constituents of equal degree proof the additional statement follows from proposition if is reducible we have the case of proposition so we may assume that is irreducible and thus that the irreducible characters of g of level are determined in thm so belongs to the list in thm if we drop from that list the characters of degree other gunter malle and alexandre zalesski than the remaining cases are given in the statement of the proposition note that the list in thm includes groups so one first needs to delete the representations on the center for instance if g q then q n is odd and hence is even however every irreducible representation of q of even degree q n is faithful in other words g has no irreducible representation of even degree q n in contrast there do exist irreducible representations of g psln q and psun q for n odd of degree to prove the converse we have to show that in each case is sylp that is s let be a representations of g afforded by let s s with s hsi by cor the multiplicity of every eigenvalue of s is as det s it follows that is not an eigenvalue of s therefore s as required next we show that cg s in cases and this follows by inspection in the cases are trivial in cases one can take the preimage t say of s in sln q sun q respectively then t is irreducible on the natural module for the groups t are described by huppert it easily follows that t is in then cg s s unless g t z for some g t t by order consideration s is a sylow of g so g is not a let t t then g ti g i t for i so t t by the above this is a contradiction as s is a sylow it follows that every element of g is either a or a therefore is if and only if s lemma under the assumptions and in the notation of proposition we have a is unique unless or holds b is the character of a projective module when or holds and c is a proper character and if m is the minimal degree of a character of g then either m or holds and m or holds and m proof a let then and is irreducible unless holds we show that an irreducible character of this degree is unique unless or holds if g this follows from the character table of this group for ap this is well known for g psln q n and psun q n this is observed in table ii in case the number of characters equals the number of irreducible characters of degree p which is p if p mod otherwise p if g then there are three characters see b recall that the principal projective indecomposable module is the only pim whose character contains as a constituent all the characters in proposition contain as a constituent therefore if is the character of a projective module say then is indecomposable and 
principal so we compare the list of characters in proposition with the main result of the comparison rules out the case of proposition furthermore if g admits at least two characters then at most one of them can be the character of a projective module by a this leaves us with cases and as in each of these cases is unique it must be the character of the principal projective indecomposable module listed in c this follows by inspection in table ii characters for finite simple groups remark the group g p has several sylp characters all of them are and only one of them is projective sporadic groups theorem let g be a sporadic simple group then g does not have a reducible sylp character unless one of the following holds g p four characters with constituents of degrees and each all g p six characters none of them g or g proof for most groups and primes by there is a conjugacy class of taking strictly positive value at all irreducible characters of degree at most in a few cases like in and f at p or and at p one has to solve a little linear system of equations for integral solutions the only cases where such solutions exist are listed in the statement note that the cases occur also in proposition alternating groups in this section we consider characters of alternating groups alternating groups for p for odd primes we give a short proof using a recent result of giannelli and law which replaces our earlier more direct proof lemma let g ap p and irr g then lp in addition lp for p this fact has also been observed in proof the first part is just proposition in addition if p then g has no irreducible character of degree d for p d this implies the claim lemma let n kp where p and k let g an and let be a character then lp equivalently ap be commuting proof for k the lemma is trivial let k p let subgroups of set x then where the s are characters of and the s are distinct irreducible characters of lemma by induction lp if lp for some i then lp lp by lemma if lp and p then lp and hence either or is the unique irreducible character of degree p if p then we may have see suppose the lemma is false and p then we can rearrange the above to get where p and are characters of it follows that as well as for every irreducible constituent of contains no irreducible constituent distinct from it is well known and easily follows from the branching rule that this implies n or recall that g has a single character of degree n therefore a b where n let x be of order then x n p which implies x gunter malle and alexandre zalesski suppose p then there are two irreducible characters of of degree let us denote them by therefore assuming the lemma is false we can write let x be a then x and x where is some primitive root of unity as x x are integers so is x x this implies then p and the lemma follows unless if then and the above argument applies lemma let p be odd and let be a hook partition of n then the corresponding character of sn takes a positive value on proof it is well known that any hook character is the mth exterior power for some m n of the irreducible reflection character of sn the constituent of degree n of the natural permutation character let y with sp and be a young subgroup of sn and g g y a clearly and i for i p g for i thus min m m x x i i m g g g which clearly is positive for m n p since the binomial coefficients are strictly increasing up to the middle now observe that it suffices to prove the claim for n since the restriction of a hook character from sn to only contains hook characters but for n we are done 
since by symmetry we may assume that m p n p theorem let p be odd and g an with n max p then g has no sylp regular character if n p then every sylp character of g is irreducible unless n p proof if p n the sylow of g are cyclic and so the claim is in proposition now assume that n and let s be a sylow of sn first assume that n pk for some k and that n when p then by the main result of the restriction of any irreducible character of sn to s contains the trivial character a moments thought shows that the same is true for the restriction of any irreducible character of an to so by lemma any sylp character of an is irreducible now assume that n pk for some k and n when p then again by thm a the only irreducible characters of sn whose restriction to s does not contain the trivial character are the two characters of degree n so the only irreducible character of an whose restriction to s does not contain the trivial character is of degree n hence a sylp character of an has the form for some a and some irr an let g an be a pk then g and by the rule any irreducible character of sn takes value or on in particular if is reducible then we have that a and for some m g but then is parametrised by a hook partition of degree m but then takes positive values on by lemma a contradiction finally the cases when p and n can easily be checked individually for example all irreducible characters of of degree at most are on class and those which vanish there are positive either on class or so has no character as has the same sylow this also deals with n characters for finite simple groups corollary let g be a finite group and p suppose that g has a subgroup p containing a sylow of g and such that p an with n max p then g has no character proof this follows from lemma and theorem alternating groups for p the situation is more complicated in the case of p and we don t have complete results this is in part due to the existence of an infinite family of examples which p we now construct set where is the irreducible character of sn corresponding to the partition i for i and for i so the young diagram of is a hook with leg length n i and n n n i i pn so lemma let m n where m is even and g ch sm sn where c is an and h fixes all letters moved by let irr correspond to the hook h if i m partition k then g h if n m i h h if m i n proof one observes that the restriction of to sm is a sum of irreducible characters where are irreducible characters of sm and both are hook characters of the respective groups see lemma next we use lemma which states that p j h where g irr is the young diagram of and is j such that is a skew hook with leg length j in our case is a hook so the rim of is itself by definition a skew hook is connected so it is either a row or a column in our case and hence j or j m a column hook of length m has leg length m which is odd as m is even if j m then i m i respectively this is a proper diagram if and only if i m n i so if n i m then i m j and g h h if i m then i j m and g h h if m i n m then i m i and g h h as claimed proposition suppose that n is even then a is a character of sn b is if and only if n for some integer k proof a let g sn be of even order suppose first that g is a cycle of length by lemma g so g suppose that g is not a cycle of length then we can express g as the product ch of a cycle c of even size m say and an element h fixing all letters moved by then g sm by lemma we have g n x g n x h x h x h x h gunter malle and alexandre zalesski b if n then as by induction we have k write n l where l 
then by induction so k k k the statement follows as corollary let n be even and then is a character of an if n then this character is proof the characters remain irreducible under restriction to an and it follows that therefore g g by proposition for elements g of even order the last claim follows from proposition b suppose that n is odd set x x and observe that provided i n and e o e o e o therefore as we have proposition suppose that n is odd then a and are characters of sn b is if and only if n for some integer k proof a let g sn be of even order and g ch where c is a cycle of even size by lemma e g x x g h x h as is a proper diagram only for i and is a proper diagram only for p h i set k so the second sum can be written as e whence g similarly g x g x h x h as is a proper diagram only for i n m and is a proper diagram only for i m set k i then the second sum can be written as p h so g as well characters for finite simple groups b if n then see the proof of proposition b by the above so both and are if n is not a then by proposition b let n be odd then is irreducible for i n whereas is the sum of two irreducible constituents which we denote by and if n then set ea x and x while for n we set oa x and x corollary a let n then and are characters of k an if n then they are characters b let n then and are characters of an none of them is proof let g an be of even order a let i n then remains irreducible under restriction to an as and coincide under restriction to an it follows that and hence is a character as this is for n observe that g g and g g g it follows that g g g and thus g g by proposition therefore and are characters of an in addition suppose that n then so both and are b let i n then as above it follows that and hence is a character in addition as here we never have n is not g g g and g consider observe that e g it follows that g g g and so g g by proposition therefore and are characters of an but not steinberglike lemma let n then in addition to the character when n and the characters and when n the only characters of sn are a if n the sum of all irreducible characters gunter malle and alexandre zalesski b if n the irreducible character of degree c if n the sum of the two irreducible characters of degree proof for n this is easily checked from the know character tables for n we use a computer program to go through all possibilities for one checks that no character exists with the right restriction to and similarly for one considers the restriction to finally the cases n are treated by restricting to theorem suppose that the only character of k for p is the one constructed in proposition then an does not have characters for p for n unless n or n is a in the latter case the only characters are those listed in proposition proof let be a character for p of an with n then sn is for sn we argue by induction on n that sn does not have a steinberg like character unless n or n is a power of assume that n is not a power of and write n for distinct exponents ar by lemma we may assume n so one of the summands say is different from and then the young subgroup y of sn contains p a sylow so is then by lemma we have that p i where irr are all distinct the are characters of and i lp is a character of with thus by assumption is the character from proposition in particular is and so lp for all i so the are as well this is not possible unless n is a as well in the latter case by lemma we conclude that n the above argument shows that with j a character of yj so in particular and hence also is by 
possibly interchanging we may assume that now consider a sum of hooks by the branching rule any character of sm restricted to contains a character except when m which is excluded here thus inductively can not contain any constituent this in turn means that all constituents of are hooks and thus by induction that now observe that by the rule lemma contains if and only if j l i i thus on the one hand side and have a common constituent and so at most every second hook character occurs in on the other hand every second hook must indeed occur thus either or as defined above if n then our claim follows from proposition otherwise the degree of is larger than projective characters for p lemma let p then an has a projective character of degree if and only if sn has a projective character of degree proof this follows from lemma theorem let p and g an or sn for n then g has no reducible projective character of degree proof one can inspect the decomposition matrix modulo of g an for n to observe that g has no projective character of degree analysing the character table of g an for characters for finite simple groups n one observes that g has no characters and hence no pim of dimension one can inspect the decomposition matrix of g to observe that the minimal dimension of a pim is analysing the character table of g an for n one observes that g has no characters and hence no pim of dimension let n using the known character table of one finds that there is a unique regular character viz the character it is with constituents of degrees and recall that the principal pim is the only one that has as a constituent however by the principal pim is not of degree let n where k then g has a subgroup y such that y for some normal n and indeed let be a partition of n with all parts of size if g sn then n is the direct product of copies of a sylow of if g an then we take n an for the subgroup in question then y is a semidirect product of n with the latter permutes in the natural way one easily observes that y is odd if is a pim of degree then so is by lemma the generalised restriction of is a pim of dimension such a pim does not exist as we have just seen let n be not a and write n m where m let x sn where and sm then the index is odd so is a pim of degree therefore is a direct product where is a pim for xi for i obviously dim this is a contradiction as has no pim of degree for g an the result follows from lemma exceptional groups of lie type theorem let g be a simple group of lie type which is not classical then g does not have a sylp character in characteristic except for the group g which has two reducible characters and two irreducible characters for p proof as in the proof of thm we compare the maximal order of a sylow of g which is bounded above by the order of the normaliser of a maximal torus with the smallest irreducible character degrees given for example in tab i this shows that except for very small q there can not be any examples of sylp characters a closer inspection of the finitely many remaining cases shows that g has two reducible characters and two irreducible character for p but no further cases arise groups of lie type in their defining characteristic it was shown in that simple groups of lie type of sufficiently large rank don t have characters with respect to the defining characteristic apart from the irreducible steinberg character more precisely the characters were classified except for groups of types bn with n and dn with n here we deal with the remaining cases gunter malle and alexandre 
zalesski proposition let g q n with q pf odd then g has no reducible character with respect to proof we freely use results and methods from first assume that n according to prop it suffices to consider a group h coming from an algebraic group with connected centre such that h h q let be a reducible character of then has a linear constituent by thm multiplying by the inverse of that character we may assume that the trivial character occurs in exactly once by lemma then all constituents of belong to the principal so we may in fact replace h by h that is we may assume that h is of adjoint type let p h be a parabolic subgroup of h and u op p then by lemma c the restriction rlh is a character of l we will show that there is no possibility for compatible with restriction to all levi subgroups clearly rlh also contains the trivial character so is again reducible the reducible characters of all proper levi subgroups of h are known by lemmas and in particular we must have q and for l of type we have rlh with irr l of degree q thus lies in the lusztig series of a regular semisimple element s the dual group of l with centraliser a maximal torus of order q q thus has to contain a constituent lying in the lusztig series of it is easily seen that the centraliser of s in is either a maximal torus or of type q q correspondingly q q q q q q q q but the first and the last are bigger than q so q q now if contains any other constituent apart from in the principal series then its generalised restriction to l is contradicting rlh next the restriction to a levi subgroup l of type has the form rlh with q q and q q in particular lies in the lusztig series of a regular semisimple element t of order dividing q with centraliser a maximal torus of order q q the centraliser of t in then either is the same maximal torus or of type q q correspondingly has a constituent in the lusztig series of t of degree q q q q q q q q q or q q q the last one is larger than q so furthermore by lemma contains at least one regular constituent this is either of degree or if then one can check from the known list of character degrees of h which can be found at that the only regular character of small enough degree has degree q q q q observe that so the sum of remaining character degrees is d q q q q now note that can not have further unipotent constituents since they would lead to unipotent constituents of rlh as h has no cuspidal unipotent characters it turns out that all remaining candidates except for one of degree q q have degree divisible by q now mod q q while d mod q q it follows that would have to occur at least q q times in as q q d this is not possible this contradiction concludes the proof for the case n characters for finite simple groups the cases of q and q now follow from the previous one by application of the inductive argument in the proof of thm f proposition let g q n q p then g has no reducible character with respect to proof first consider the case n as in the previous proof by prop and lemma we may work with h of adjoint type let be a reducible character of then contains by thm and hence so does its restriction rlh to a levi subgroup of type then by lemma we have q mod and rlh with a cuspidal character labelled by a regular element s in a torus of order q of order dividing q q in but then s is also regular in h that is has a constituent of degree q q q this holds for all three conjugacy classes of levi subgroups of type comparison of degrees shows that this is not possible the case of q again follows from the previous 
one by application of the argument in the proof of thm classical groups of large rank as an application of results obtained in section we show here that classical groups of large rank have no steinberg like character for p provided p is not the defining characteristic of throughout p is an odd prime not dividing q and we set e ep q the order of q modulo we first illustrate our method on the groups gln q lemma let g gln q p and let s be a sylow of a write n me where then there exist subgroups u s n g where u is an abelian normal of n and am b if m max p or then g has no character proof a see b if then g contains a subgroup x such that x q and cg x contains a sylow of as x is a the result follows from lemma let suppose the contrary and let be a character of by lemma am must have a character however this is false by theorem for other classical groups the argument is similar but involves more technical details let d ep be the order of modulo p equivalently d if e is odd d if e mod and d e if so d if and only if e equivalently q note that e q if e is even lemma let g gun q and p then the sylow of g are isomorphic to those of h where h q if e is odd h q if and h gln q if e mod lemma let g gun q p and let s be a sylow of suppose that e mod equivalently d is odd a write n md where then there exist subgroups u s n g where u is an abelian normal of n and am b if m max p or then g has no character gunter malle and alexandre zalesski proof a suppose first that e let v be the natural then v is a direct sum vi where vi s are subspaces of dimension let x be the stabiliser of this decomposition that is x x g xvi vj for some j j x n then x sn a semidirect product where q q n factors let u be the sylow of then u is normal in x and abelian it is well known that x contains a sylow of therefore n u an satisfies the statement let e as d is odd there is an embedding gum q d gumd q see hilfssatz note that ep q d and q d q as gumd q is isomorphic to a subgroup of g the result follows b is similar to the proof of lemma b lemma let p me where e ep q is even and x gum q a if m is even resp odd then x is isomorphic to a subgroup of q resp q b x is isomorphic to a subgroup of q of q of q as well as of q in addition x contains a sylow of the respective group proof a follows from lemma as well as b for q the second case in b follows from a as the groups q q and q contain subgroups isomorphic to q and q the additional statement can be read off from the orders of the groups in question the cases with q q and q are considered in lemmas and that of q is similar lemma let p let h be one of the following groups h gun q with n md where d h q q q q with n me m where e is odd and e h q q with me where e is even and e h q with me m where e is even m e and either m or m and then either h q m is even or h q m is odd let e be even m e and h q m is odd or h q and m is even let s denote a sylow of then there exist subgroups u s p h where u is an abelian normal of p and am proof the case e mod is handled in lemma in the remaining cases the result follows from lemmas and as gl q is isomorphic to a subgroup of by lemma q so the result follows from lemma this follows from lemmas and note that h q contains subgroups isomorphic to q and q and one of them contains a sylow of similar to note that if then h contains subgroups isomorphic to me q q and one of them contains a sylow of and me in this case a subgroup x of h isomorphic to q and q respectively contains a sylow of one can easily check that so the result follows from characters for 
finite simple groups our result for alternating groups corollary implies the following proposition let p and e ep q let m max p let g be one of the following groups psln q and n em psun q and n dm where d ep q q odd or q n and n em if e is odd otherwise em q and n em if e is odd otherwise em q and n em if e is odd otherwise em then g has no character this remains true for any group h such that g is normal in h and h is abelian proof suppose first that h is as in lemma let s sylp h then there are subgroups u s p h where u is normal in p and am with m m max p so am is perfect let be the derived subgroup of set p s u then sylp and am as am is perfect a similar statement is true for the quotient of by a central subgroup then the result follows from theorem using lemma minimal characters and sylow p in this section we show that if p and g is a simple classical group not satisfying the assumptions in proposition p is not the defining characteristic of g and a sylow s of g is not cyclic then g has no sylp character and hence no character observe that s is cyclic if and only if m and abelian if and only if m p where m is as in lemma the case where s is cyclic has been dealt with in section for a group g let g g g denote the sequence of integers such that for i g has an irreducible character of degree g and no irreducible character with g g for universal covering groups of finite classical groups the values g g g were determined in in our analysis below these three values play a significant role but mainly for classical centerless groups g such as pgln q pgun q q q and q for these groups mainly for n pe with p we observe that g and sometimes g in the latter case it is immediate to conclude that g has no sylp character in the other cases we observe that there exists an element g g of order p such that g for each irreducible character of degree at most g for n pe we use a different method recall that e ep q denotes the minimal integer i such that q i is divisible by the groups gln q n set dn q n q let g sln q the minimal degrees of projective irreducible representations of psln q are given in table iv table is obtained from this by omitting the representations that are not realisable as ordinary representations of sln q lemma let p e and g glen q suppose that n p and if q suppose that either n p or p then g and g has no sylp character proof if n p then q e n q n as p is coprime to q and g q en q q so the statement is obvious in this case gunter malle and alexandre zalesski n q n n n n n n n q q q q q q q g g g q q q q q q n q dn dn dn q q q n dn dn dn table minimal degrees of irreducible characters of sln q e p ep q let n if q then p q p while g as p the exceptions in table can be ignored except for e and n q or these two cases are trivial we have p q e p q q ep q as p q for q and q e p q ep q if q and p then p and p p is less than g as p remark lemma does not extend to the case q with n p as then p p p p g so the case e leaves us with q which we deal with next lemma let e and g glep then g and g has no sylp character proof we have p p by table we have g and g or ep as p is odd and e we have ep so in the exceptional cases e p where g let be the permutation character of g associated with the action of g on the vectors of the natural then where is a character of g of degree there is a unique irreducible character of g of degree table iv and hence it coincides with let be a sylp character of as it follows that the irreducible constituents of are either or as g for every g g we get a contradiction as 
soon as we show that g for some g this is equivalent to showing that g this can be easily verified lemma suppose that p q and let sln q g gln q for n then g has no sylp character proof let g then and q n q q q as above so and has no sylp character then neither has g by lemma the groups gun q n in this section we consider the case where p and a sylow of gun q is abelian or this implies n where d is the order of modulo p equivalently d if e is odd d if e mod and d e if so d if and only if e equivalently q note that e q if e is even characters for finite simple groups lemma let p be odd let s be a sylow of g gun q then s is abelian and not cyclic if and only if n dp proof if x gln q then sylow of x are abelian if and only if n ep let s be a sylow of we use lemma if e mod then q so s is abelian if and only if n ep q p dp if e is odd then q so s is abelian if and only if ep equivalently n dp if then q so s is abelian if and only if ep q p equivalently n ep dp similarly s is cyclic if and only if n so the lemma follows lemma let p d and sudp q g gudp q then g and g has no sylp character this remains true if sun q g gun q with n dp proof note that d means that p does not divide q so q so it suffices to prove the lemma for g sudp q first assume that d is odd so e mod and d then we have p q p q p p q d p q p and g q dp q q in this case g similarly if e is odd then d p q d p q p and g q dp q so again g finally assume that then d e p q e p q p and g q ep q so g again this implies that g has no sylp character the proof of the additional statement is similar thus we are left with primes such that q we first consider the case where n lemma let q n p and sun q g gun q then g has no sylp character proof let g g det g is an element of q of then where zp is the sylow of z g as zp it follows that g zp by lemma the result for g follows if we show that has no sylp character in turn this follows from the same result for sun q as is coprime to so we deal with suppose the contrary and let be a sylp character of first let n then q for p by table v q q let q then q q q so has a single irreducible constituent and again by table v q q q q as is sylp lemma therefore q q which is false the case with q can be read off from the character table of let n and let v be the natural let bn be an orthogonal basis in v and let w i then w is a subspace of v of dimension set x h hw w and hbi hbi i for i n and u op x then u z x and every element of u acts scalarly on w let x be the derived subgroup of x and p x u then x q and q as p by the above q has no sylp character as p contains a sylow of the result follows from lemma lemma let g then g has no character proof by lemma it suffices to prove that and have no character suppose the contrary and let be a character of any of these groups gunter malle and alexandre zalesski as we have let be an irreducible constituent of then by and the characters of degree are positive at the class whereas those of degree vanish at this class it follows that but then is positive at the class this is a contradiction lemma let h sup q p q or h slp q p q and is a primitive root of unity let be an h diag h where irreducible character of h whose kernel has order prime to then h proof the element h is written in an orthogonal basis of the underlying vector space in the unitary case then h e where e h is an extraspecial group of order such that z e z h the restriction of to e is a direct sum of irreducible representations of e on z e it is well known and can be easily checked that the character of every 
such representation vanishes at so the claim follows let h gun q or gln q with n weil representations of these groups were studied by howe and other authors and have many applications mainly due to the fact that their irreducible constituents which we call irreducible weil representations essentially exhaust the irreducible representations of degree h and h more details are given below for n p p odd let m be the underlying space of the weil representation of then m z h where m m zm z m for z z h in general h is irreducible on except for the case where h glp q and h in this case is a sum of a and an irreducible subspace so the irreducible weil representations of h of dimension greater than are parameterised by their restriction to z h and each of them remains irreducible under restriction to h sun q or sln q by every irreducible representation of h of degree h and h is an irreducible weil representation moreover every irreducible representation of h of degree h and h is obtained from an irreducible weil representation by tensoring with a onedimensional representation lemma let p and h gup q q p q or glp q q let irr z h let be the character of an irreducible constituent of the weil representation of h labeled by where let h be as in lemma then h p p except for the case with g glp q and h where h p in addition h if and only if z for an element z z h of order proof we only consider the case h gup q as the case h glp q is similar let z z h irr z and be the irreducible constituent of labeled by this means that z z let x hz hi let be the character of hhi such that h i where is a fixed pth root of unity i then the multiplicity of the eigenvalue i of h equals recall that x d where d is the multiplicity of the eigenvalue of x as a matrix in gup q therefore q n and if x zhk and z p then x if z p h then x q also for z characters for finite simple groups p we compute x z h where x zh note that x is an integer so is let zp be the subgroup of order p in z and xp hzp hi then x x x x z h x z h x z h p we first show that the second sum equals if i note that x zh xp is equivalent to zp therefore d d x for x x and x for z fixed we have a partial sum p p p z h h and h h as claimed if i p and then x x x x z z z x z h z p p as z for z zp if i p and then x x z h q p p next we compute x z h observe that z for z zp so this sum p simplifies to zh h note that d zh if h and any z zp so if h then zh q if h then d zh d z for z so z and q p therefore we have x x x z q p zh h zh h x q h x q p pq x h p q p p i let i then h and the last sum equals p q p p ii let i then h p and the last sum equals pq p p q p q p q pq therefore q p p q if i p q p p p q if i p and q p p p q if i p in particular the of eigenvalue i for i p of h on the module for fixed p multiplicities i are the same as the trace of h on for with zp equals q p p p q q p p q p as p q similarly if then the trace in question equals p in other words if is the character of and zp then h p for and p otherwise lemma let h gup q q p p q or glp q q then h has no sylp character the same is true for h sup q and slp q and for all groups x with h x proof set g z h by lemma it suffices to prove the lemma for g in place of suppose the contrary and let be a sylp character of g and let be an irreducible constituent of we first observe that g and hence by g g gunter malle and alexandre zalesski indeed note that in the unitary case respectively in the p p q p q q p q linear case by table iv g h if p this value is greater than let p then g h q q q q q for q again g unless g or the 
former case is settled in lemma let g then in this case g g and g so implies the character of degree is positive at class a contradiction so g as mentioned prior lemma is either or can be seen as a character of h obtained from an irreducible weil character by tensoring with a linear character of let h h as in lemma then h h so tensoring can be ignored and we can assume that is an irreducible weil character of then by lemma h p p in the unitary case and h p p in the linear case if then h so h as is and p we have h so h for every irreducible constituent of this is false as is trivial on op z h by the definition of g and hence h by lemma this is a contradiction as irreducible weil representations of h remain irreducible upon restriction to h this argument works for intermediate groups x too lemma let p and let g be a group such that sln q g gln q with n ep or sun q g gun q with n dp and n q then g and g have no sylp character proof for the unitary case with d the result for g is stated in lemma the case with d and n p is dealt with in lemma and the remaining case d and n p is examined in lemma let h gln q the result for e q follows from lemma and that for e q is proved in lemma the result for e n p is stated in lemma the case with e n p is examined in lemma the statement on g follows from lemma lemma for p let h gln q h sln q with ep n or h gun q h sun q with dp n let g be a group such that h g then g has no sylp character unless p and h proof suppose the contrary and let be a sylp character of suppose first that e d note that g has a subgroup x say isomorphic to slep q q sudp q q and let s be a sylow of q q let y s slep q s sudp q as y by lemma is a sylp character of y slep q sudp q this contradicts lemma unless possibly if g sun and p as d this case does not occur next suppose that e d that is q or q then we refine the above argument set d glp q or gup q then y set g y then is normal in y and hence op op y as y d it follows that y y d z d and hence e is a normal subgroup of z d by lemma is a sylp character of e however by lemma e has no characters for finite simple groups sylp character unless d and p so we are left with the case h gun p and n the group h is excluded by assumption so consider first h as h h z h it suffices to deal with g suppose the contrary and let be a character of then let be an irreducible constituent of so if g g is an element from class then g unless or as g there is a constituent say of such that then pick h g from the class then h and if then h unless so must have a constituent say of degree then g if then g and hence g as g if it follows that but then h h h a contradiction so and the other constituents are of degree at most as g g we have g in particular for the other constituents of we have g by this implies and must occur with multiplicity whence a contradiction let h by lemma it suffices to deal with x set x then and the irreducible characters of x of degree less than are positive on class in addition and the irreducible characters of x of degree less than and not equal to are positive on class let be a character of x or x then the irreducible character of degree is a constituent of note that if irr x then the sum of the other constituents of is at most by they are of degree or the trivial character can not occur with multiplicity greater than so or must be a multiple of which is false let irr x note that the multiplicity of in is at most and if occurs with multiplicity then the sum of the other constituents of is at most the irreducible characters of degree at 
most have degrees and all them as well as are positive at class this is a contradiction suppose that occurs once then the sum of the other constituent values at class is it follows that these constituents may only be of degrees inspecting one observes that all them as well as are positive at class this is a contradiction so the multiplicity of must be then the sum of the other constituent values at class is therefore the degrees of the other constituents may only be let be the character of degree then if then the sum of the other constituent values at class is the trivial character is the only one whose value is at most as this can not occur twice we get a contradiction therefore as above this contradicts this completes the analysis of the case with n let n then h contains a subgroup isomorphic to which contains a sylow of h so the result for this case follows from n in addition h h z h so we are done by lemma similarly the result for n follows from that with n remark the group has an irreducible projective character of degree for p and hence h h has a projective character of degree theorem let p and g be a group such that sln q g gln q or sun q g gun q suppose that sylow of g are not cyclic then g has no character unless p and g gu gunter malle and alexandre zalesski proof if n in the linear case and n in the unitary case then the result follows from proposition for g in place of g and then for g in view of lemma if ep n in the linear case and dp n in the unitary case then the result follows from lemma if n ep in the linear case and n dp in the unitary case then the result follows from lemma if n in the linear case and n in the unitary case then sylow of g are cyclic remark proposition gives a better bound for n but this does not yield an essential advantage as the cases with n e p and d p are not covered by proposition and we have to use lemma anyway the symplectic and orthogonal groups for p lemma let g q q even n n q g q q odd n n q or g q n suppose that sylow of g are abelian then g proof let s sylp g as s is abelian we have p and q n if g q q even n n q or then g q n q n q q see table ii this is greater than q n if g q q odd n then g q q again g q n whence the result the cases with g q n are similar see thm proposition let e be odd p and let h q with n q with n q with n or q with n suppose that n ep then h has no sylp character proof let s sylp h by lemma s is conjugate to a sylow of a subgroup gln q of by lemmas and gln q for n has no sylp character unless possibly when n let n so h q e and q then if q is even then g q q for q this is greater than whence the result if q is odd then q and h q so again h proposition let e be even p and let h q n n q q q odd n or q with n suppose that ep then h has no sylp character proof write ek m with m e where k is an integer as h contains a subgroup with p where spke q or q respectively it suffices to prove the lemma for ke let ke by lemma a sylow of h is contained in a subgroup isomorphic to guk q by lemma for k p and lemma for p k with d and q in place of q the group guk q with k q has no sylp character whence the claim the exceptional case h p is considered below let k then h q and q then if q is even then g q e q e q q for q this is greater than whence the result if q is odd then q and h q e so h whence the result characters for finite simple groups a similar argument works if h q as well as for h ke q with k odd and for h q with k even except when h go and e ke and e so p then and the irreducible characters of let h degree less than are of 
degrees let be a sylp character of degree and an irreducible constituent of by whenever this is a contradiction as h q e q q let k and h p q then for q and q e if q e then p and h so the result follows the case e has been examined above suppose that h q with k even or h q with k odd then some sylow of h is contained in a subgroup isomorphic to q or q respectively note that e k for the groups the result has been proven above except for the cases where k or k e however if k then sylow of and hence of h are cyclic and this case has been examined in propositions and let k e as k we have k e as e is even then h q and h q q if p then otherwise then h unless q let q p then by the irreducible characters of degree less than are of degrees therefore only these characters can occur as irreducible constituents of a character however the values of these characters at an element g g in class are in particular positive as g this is a contradiction suppose that g and p then let be an irreducible constituent of then let g g belong to the conjugacy class in notation of by inspection of the character table of g one observes that g whenever therefore g for every irreducible constituent of this implies see however such a character takes positive values at the elements of class so this case is ruled out remark let g be the universal covering group of one observes that g has a character and has no characters if h then h has steinberglike characters for p both reducible and irreducible classical groups at p in this section we investigate sylp and characters of simple classical groups over fields of odd order q at the prime p linear and unitary groups at p we first deal with the smallest case proposition let q be odd a let g q then g has a reducible character if and only if q for some k or if q b let g q then g has a reducible character if and only if q is a proof a the of is if q mod and else the smallest character degree is q in the first case q in the second it follows that there can not be characters in the first case unless q in the second case it follows from the gunter malle and alexandre zalesski character table of g that the sum of the trivial and the steinberg character is when q is a power of and there are no cases otherwise if q then there are two reducible characters of degree by b let be a reducible character let z z g then where z and z by lemma with p g and u z g is a character for g q if is irreducible then q is a by proposition by a this is also true if is reducible so q is a then there are irreducible characters such that is of degree indeed using the character table of g one observes that there exist irreducible characters of g that vanish at of g and such that z and z it follows that is a reducible character we recall that g denotes the third smallest degree of a irreducible representation of lemma let g be a group such that g psln q psun q with n q odd and g then g proof let first g q q odd then g q q by table while if q mod respectively if q mod thus g unless q is a in the latter case g q q and our claim follows next let g q with q odd then g q q which is larger than q for q now assume that g psln q with n and n see table while q when q q odd then g q q mod and q when q mod again the claim follows let g q q odd then g q q by table v while if q mod respectively if q mod thus g unless q is a in the latter case g q q q and our claim follows now let g q q odd then g q q suppose first that q we have q q whereas g q q q then g suppose now that q then q which is less than g q q now assume that g 
psun q with n odd here n g q q while q if q mod respectively q if q mod the claim follows finally assume that g psun q n with n even here g q q while the of is bounded above as given before again we can conclude proposition let g be with g psln q psun q with n and q odd a if n then g has no character b if n then g has no character for p proof by lemma with p g it suffices to prove the result in the case where g so we assume this and then equals the order of a sylow of g let first g psln q q odd let be a character for by lemma g and hence the irreducible constituents of are of degree characters for finite simple groups q n q or q n q q see table it is known that the irreducible characters of degree q n q are induced characters where is a character of the stabiliser p of a line of the underlying space for gln q while the irreducible character of degree q n q q is the unique constituent of the permutation character p on p let n and let g sln q be a matrix with an n n corresponding to a primitive element of with determinant and a corresponding to an element of order q since g has no eigenvalue in fq no conjugate of g is contained in p so all induced characters from p to g vanish on in particular g and g note that the image g of g has even order so g if is write where is a sum of induced characters of degree q n q with suitable xi evaluating on g we see that but then q n q is divisible by some odd prime so can not equal the this proves part b for g psln q now assume that n then an easy estimate shows that when q is not a so q and q then g so we may assume that in addition either q or q is a power of for q let g be the q where a q is a of order q observe that again g is not conjugate to an element of p thus h and h as g is a and is we have g we may now argue as above to conclude when q then the candidate characters have degrees and while so clearly there can be no character now consider the case when g q the proof of lemma shows that g unless q is a in the latter case the possible constituents of can have degrees q q q q while q clearly at most one of the degrees q q q q can contribute to but then necessarily q but in that case the character table shows that there s no character this completes the proof of a when g psln q now let g psun q with q odd and let be a character of according to lemma g and hence the irreducible constituents of are of degree q n n q or q n n q q see table v the first of these are semisimple characters lying in the lusztig series of an element s of order q in the dual group pgun q with centraliser s q the second is a unipotent character say corresponding to the character of the weyl group sn parametrised by the partition n let g sun q be a regular element of even order in a maximal torus t of order q q q see lemma a then no conjugate of the dual maximal torus t contains s so the characters in e g s vanish on g see prop if is then g as is unipotent its value on g is up to sign the same as h where irr sn is labelled by n and h is a permutation of cycle shape n see prop and remark before prop the rule gives that gunter malle and alexandre zalesski h so g we may now argue as in the first part to conclude that can not be thus completing the proof of b next assume that g q with q odd if q is not a power of and q then g as q so now assume that q is a power of and hence in particular q mod then q while the three smallest character degrees are q q q q q with the trivial character occurring at most once it is easily seen that there is no integral solution for a possible 
decomposition of when q then the three smallest degrees are while if q then the three smallest degrees are while so in neither case can there be characters either finally when g q then again the proof of lemma shows that q must be a here the possible constituents of have degrees q q q q while q again an easy consideration shows that at most the case q needs special attention but there the existence of characters can be ruled out from the known character table we now treat the case q which is considerably more delicate lemma let g or then g does not have reducible characters proof for g we have and all irreducible characters of degree less than take values on class so there are no characters for g we have and all irreducible characters have degree at most that large since the smallest character degree is those of degrees and can not be constituents of a character thus we need to consider the characters of degrees clearly those of degree can not occur either as mod we see that the character of degree has to appear at least three times but then the values on elements of order give a contradiction remark and both have irreducible characters see proposition lemma let g or let be a character of then proof suppose the contrary note that is reducible as we have let be an irreducible constituent of of maximal degree for all numerical data see let g then otherwise so and then which is false let irr g and then unless and unless or let then indeed otherwise the irreducible constituents of are of degree or which implies a contradiction it follows that let irr g with the irreducible characters of g of degree at most and distinct from are at in addition and as it follows that then a contradiction characters for finite simple groups let g and irr g with the irreducible characters of degree at most are of degree at most if then then and a contradiction suppose that then but then we obtain a positive value on class the same consideration rules out suppose that it occurs once as otherwise occurs times which is false as then so a contradiction no more option exists as the irreducible characters of degree less than are positive on lemma let g or g let be a character of then proof by lemma a g where is a proper character of by inspection of the character table of see it is easily checked that the conjugacy class of any element g of order is then by lemma b g for every g of this means that is by lemma then g lemma let g and let be a character of let be the irreducible constituents of disregarding multiplicities and then proof let then and thus suppose the contrary then we can assume that for i j note that k otherwise for some a and hence is a character of g and is a multiple of by g has no character of degree at most with this property by we have note that all irreducible characters of of degree at most extend to g except of degree of degree of degree and of degree p the corresponding characters of g are of degrees and respectively let ai where aip are integers i suppose that then i computing we get a contradiction as and i ai and k p ii suppose if then i whence k and computing we get a contradiction p so if then i computing we get a contradiction unless k and that is in in this case computing gives a contradiction so for i this violates iii let then if then whence k and this conflicts with if then computing yields a contradiction p p iv let so i if then i and hence which yields a contradiction with let then for i then for i k which violates v let then and the only irreducible character of degree less than with 
negative value at has it follows that and then if i and note that and as it follows that a character of degree occurs in which implies then we get contradiction to gunter malle and alexandre zalesski vi let computing leads to a contradiction lemma let g and a character of let be the irreducible constituents of disregarding multiplicities and then proof let then and suppose the contrary then we can assume that for i j note that for every linear character of therefore is a constituent of let g g then implies g by if then or it follows that must contain at least representations of the same degree which contradicts so either equals or by for these unless this violates unless k and then a contradiction lemma let h hn where hn or let be a character of then proof by lemma the claim holds for n p so by induction we can assume that it is true for x hn by lemma i where irr are p characters of x and i is a character of by induction p by lemmas and applied to we have i so by lemma proposition let n and g or let be a character of then proof let x be the direct product of n copies of or let be a character of x by lemmas and let y x sn the semidirect product where sn acts on x by permuting the factors then y contains a sylow of let m x s where s sn so the index m is odd note that as see the proof of proposition the result follows for these groups theorem let p m and g one of glm slm pslm gum sum psum then g has no character moreover if is a character of g then proof let first g glm or gum for m mod the result is stated in proposition let m l where l and h or let be a sylow of gll or gul and set u h then u contains a sylow of therefore by lemma if is a character of u then for some character of so by proposition so the result follows for these groups for g slm or sum the result follows from the above and lemma for g pslm or psum the statement follows from the above and lemma orthogonal and symplectic groups at p let v be the natural module for h q q odd and for g g let d g be the dimension of the fixed point subspace of g on v let characters for finite simple groups denote the weil character of by howe prop g q d g let where irr h and z for z z h lemma a let h h be semisimple such that h and zh fix no vector on then h b let v where is a subspace of dimension let g h be an element such that gvi vi i and g and zg fix no vector on then g q c let q then is not constant on the elements of q proof a we have h h h and zh h h therefore by prop h zh h whence the claim b we have g and zg q by prop then q g zg g whence the claim c choose g as in b and h to be an element stabilising such that h coincides with g on and the matrix of h on is similar to then h is a element satisfying a and hence h let h and g be the images of h g in h then h and g are elements of q as z h is in the kernel of this can be viewed as a character of h as q is greater than for q c follows proposition let g q with q odd and n then g has no characters proof we have q if divides q if divides q k let k be minimal with n then as we have if q q if q on the other hand g q n q n q q by thm and this is larger than unless n and q let s set aside these cases for a moment then otherwise if is the constituents of are either weil characters or the trivial character now note that a weil character of q of degree q n has the centre in its kernel if and only if its degree is odd so the constituents of have degree q n if q mod and n is odd and q n else according to lemma the trivial character occurs at most once in as q n is never a power of for n and odd q 
consider a zsigmondy prime divisor the trivial character must occur exactly once let denote the two weil characters of g interchanged by the outer diagonal automorphism of observe that is induced by an element of q and thus fixes all involution classes of let g g be an involution and write a g g then g ma where m is the number of constituents of as necessarily m compare the degrees we see that g so is not gunter malle and alexandre zalesski we now discuss the two exceptions for g and all irreducible characters of degree at most take values on class so there is no character for g and all irreducible characters of degree at most take positive values on class except for one of degree which takes value and one of degree which takes value as at most one of those latter two characters could occur and at most once there can be no character for p proposition let g q with q odd and n then g has no characters proof according to thm we have g q q which is larger than unless either n and q or n and q for g the only character of degree less than is the semisimple character of degree see since the trivial character can occur at most once in a regular character we see that no example can arise here for g the only character of degree less than is the semisimple character of degree see again this does not lead to an example for g the only character of degree less than is the character of degree and we conclude as before proposition let g q with q odd and n then g has no characters proof the second smallest character degree of g q is given by g n q q q see thm which is larger than unless n q leaving that cases aside for a moment we see that any character of g is a multiple of the smallest character of degree q n q q q plus possibly the trivial character arguing as in the case of symplectic groups we see that such characters take value on involutions for g the constituents of a character could have degree or no integral linear combination of these three degrees with appearing at most once adds up to n the second smallest character degree of g q is g q q q see again thm which is larger than we conclude as before again we are left with the case that p q lemma let g or then g has no character proof for g we have and all irreducible characters of g of degree at most take positive value on the class see let g then all irreducible characters of g of at most that degree are positive at the elements of conjugacy class let g by g has irreducible characters of degree at most all of them take positive values on class so the result follows this implies the result lemma let g or let be a character of g and the irreducible constituents of disregarding their multiplicities set then characters for finite simple groups proof suppose first that g note that so suppose the contrary then we use notation from there are characters of of degree less than the maximal degree among them is there is only one irreducible character of of degree less that that is negative at this is of degree while all other are positive so it must be a constituent of this character extends to g so the other constituents of are of degrees at most it follows that in fact indeed if and for some i j then for l i j are of degree at most these characters are positive at as well as those of degree and this violates thus and hence furthermore computing the character table of g by a program in the computer package gap one observes that there are distinct irreducible characters of degree and only one irreducible character of this degree for it follows that these 
characters differ from each other by multiplication by a linear character as one observes that for every linear character of therefore must be a constituent of so if then there are more constituents of of this degree which contradicts the inequality thus for i by all such irreducible characters of and hence of g are positive at which contradicts let g note that so suppose the contrary then there are irreducible characters of degree less than all such characters are positive at which violates hn pgo or psp lemma a let h hn where n let be a character of then b let g gn where gn or let be a n character of then proof a if n then the result is contained in lemma p so by induction we can assume that it is true for x hnp by lemma i where irr are characters of x and is a p character of by induction by lemma applied to we have i so by lemma b this follows from a and lemma as z g is a lemma let g or then g has no character for p proof let x be the direct product of n copies of or let be a n character of x by lemma let y x sn the semidirect product where sn acts on x by permuting the factors then y contains a sylow of let m x s where s sn so the index m is odd note that as see the proof of proposition the result follows for the groups and for g the result follows from the above and lemma for g or the statement follows from the above and lemma proposition let m and g or then g has no character for p gunter malle and alexandre zalesski proof for m mod the result is stated in lemma let m where l and h or let be a sylow of or set u h then u contains a sylow of let be a character of therefore by lemma if is a character of u then for some character of by lemma so and the result follows proposition let g m then g has no character for p proof let m l where l and let h then g contains a subgroup d isomorphic to h then one concludes that d contains a sylow of let be a sylow of set u h then u contains a sylow of by lemma if is a character of u then for some character of so by lemma and the result follows proposition let g m then g has no character for p proof the case m is dealt with in lemma so we assume that m that is let m l where l and let h then g contains a subgroup d isomorphic to h then d contains a sylow of set u h where is a sylow of so u contains a sylow of by lemma if is a character of u then for some character of so by lemma and the result follows theorem let g with m with m or with m and let be the derived group of let h be a group such that h then h and h have no character for p proof for h the statement follows from lemma propositions and using lemma and for h from lemma we now collect our results to prove our main theorems from the introduction proof of theorem assume that g is a finite simple group possessing a steinberglike character with respect to a prime the cases when is irreducible have been recalled in proposition if sylow of g are cyclic then g p is as in proposition so we may now assume that sylow of g are for g alternating and p odd there are no cases by theorem except for with p the characters of sporadic groups are listed in theorem thus g is of lie type the case when p is the defining prime was handled in and propositions and respectively so now assume p is not the defining prime for groups of exceptional lie type were handled in theorem for classical groups of large rank with p odd our result is contained in proposition the cases for psln q and psun q with p are completed in theorem those for the other classical groups in propositions and finally the cases with p are covered by 
proposition for g q proposition for psln q and psun q with q theorem for psln and psun propositions and for classical groups with q and theorem for the case that q characters for finite simple groups proof of theorem the characters of projective fp of dimension are in particular so in order to prove this result we need to go through the list given in theorem when sylow of g are cyclic the possibilities are given in lemma b for g of lie type in characteristic p see thm the case of theorem is subsumed in statement and finally the alternating groups for p are discussed in theorem references conway curtis norton parker and wilson atlas of finite groups clarendon press oxford curtis and reiner methods of representation theory with applications to finite groups and orders wiley new york emmett and zalesski on regular orbits of elements of classical groups in their permutation representations comm algebra feit the representation theory of finite groups amsterdam giannelli law on permutation characters and sylow of sn shoke and zalesski on in groups and fusion systems algebra howe on the character of weil s representation trans amer math soc huppert in klassischen gruppen math z james the representation theory of the symmetric groups berlin lassueur and malle simple endotrivial modules for linear unitary and exceptional groups math z lassueur malle and schulte simple endotrivial modules for groups reine angew math character degrees and their multiplicities for some groups of lie type of rank available at http malle and th weigel finite groups with minimal manuscripta math malle and zalesski prime power degree representations of groups archiv math navarro characters and blocks of finite groups cambridge univ press cambridge hung ngoc nguyen low dimensional complex characters of the symplectic and orthogonal groups comm algebra pellegrini and zalesski on characters of chevalley groups vanishing at the elements internat algebra comput ch rudloff and zalesski on multiplicity eigenvalues of elements in irreducible representations of finite groups j group theory tiep and zalesskii minimal characters of the finite classical groups comm algebra weir sylow of the classical groups over finite fields with characteristic coprime to proc amer math soc zalesski minimal polynomials and eigenvalues of in representations of groups with a cyclic sylow london math soc zalesski low dimensional projective indecomposable modules for chevalley groups in defining characteristic algebra zalesski remarks on characters for simple groups archiv math gunter malle and alexandre zalesski fb mathematik tu kaiserslautern postfach kaiserslautern germany address malle department of physics informatics and mathematics national academy of sciences of belarus minsk belarus address
4
stability of integral delay equations and stabilization of models iasson karafyllis and miroslav krstic dept of mathematics national technical university of athens zografou campus athens greece email iasonkar dept of mechanical and aerospace university of california san diego la jolla ca email krstic abstract we present bounded dynamic but output feedback laws that achieve global stabilization of equilibrium profiles of the partial differential equation pde model of a simplified chemostat model the chemostat pde state is which means that our global stabilization is established in the positive orthant of a particular function rather situation for which we develop tools our feedback laws do not employ any of the distributed parametric knowledge of the model moreover we provide a family of highly unconventional control lyapunov functionals clfs for the chemostat pde model two kinds of feedback stabilizers are provided stabilizers with continuously adjusted input and stabilizers the results are based on the transformation of the hyperbolic partial differential equation to an ordinary differential equation and an integral delay equation novel stability results for integral delay equations are also provided the results are of independent interest and allow the explicit construction of the clf for the chemostat model keywords hyperbolic partial differential equation models chemostat integral delay equations nonlinear feedback control introduction models are described by the foerster equation see and the references therein which is a first order hyperbolic partial differential equation pde with a boundary condition models are natural extensions of standard chemostat models see optimal control problems for agestructured models have been studied see and the references therein the ergodic theorem see and for similar results on asymptotic similarity has been proved an important tool for the study of the dynamics of age structured models see also for a study of the existence of limit cycles this work initiates the study of the global stabilization problem by means of feedback control for models more specifically the design of explicit output feedback stabilizers is sought for the global stabilization of an equilibrium age profile for an chemostat model just as in other chemostat feedback control problems described by ordinary differential equations odes see the dilution rate is selected to be the control input while the output is a weighted integral of the age distribution function the assumed output functional form is chosen because it is an appropriate form for the expression of the measurement of the total concentration of the microorganism in the bioreactor or for the expression of any other measured variable light absorption that depends on the amount and its size distribution of the microorganism in the bioreactor the main idea for the solution of the feedback control problem is the transformation of the first order hyperbolic pde to an integral delay equation ide see and the application of the strong ergodic theorem this feature differentiates the present work from recent works on feedback control problems for first order hyperbolic pdes see the present work studies the global stabilization problem of an equilibrium age profile for an agestructured chemostat model by means of two kinds of feedback stabilizers i a continuously applied feedback stabilizer and ii a feedback stabilizer the entire model is assumed to be unknown and two cases are considered for the equilibrium value of the dilution 
rate the case where the equilibrium value of the dilution rate is unknown absolutely nothing is known about the model and the case where the equilibrium value of the dilution rate is a priori known in the first case a family of dynamic output feedback laws with continuously adjusted dilution rate is proposed the equilibrium value of the dilution rate is estimated by the observer in the second case a output feedback law is proposed for arbitrarily sparse sampling schedule in all cases the dilution rate control input takes values in a bounded interval and consequently input constraints are taken into account the main idea for the solution of the feedback control problem is the transformation of the pde to an ode and an ide some preliminary results for the case which are extended in the present work were given in however instead of simply designing dynamic output feedback laws which guarantee global asymptotic stability of an equilibrium age profile the present work has an additional goal the explicit construction of a family of control lyapunov functionals clfs for the chemostat model in order to achieve this goal the present work novel stability results on linear ides which are of independent interest the newly developed results provide a proof of the scalar strong ergodic theorem for special cases of the integral kernel stability results for linear ides similar to those studied in this work have been also studied in since the state of the chemostat model is the population density of a particular age at a given time the state of the chemostat pde is valued accordingly the desired equilibrium profile a function of the age variable is so the state space of this pde system is the positive orthant in a particular function space we pursue global stabilization of the positive equilibrium profile in such a state space this requires a novel approach and even a novel formulation of stability estimates in which the norm of the state at the desired equilibrium is zero but takes the infinite value not only when the population density of some age is infinite but also when it is zero we infinitely penalize the population death the washout as we should our main idea in this development is a particular logarithmic transformation of the state which penalizes both the overpopulated and underpopulated conditions with an infinite penalty on the washout condition the structure of the paper is described next in section we describe the chemostat stabilization problem in a precise way and we provide the statements of the main results of the paper theorem and theorem section provides useful existing results for the uncontrolled pde while section is devoted to the presentation of stability results on ides which allow us to construct clfs for the chemostat problem the proofs of the main results are provided in section section presents a result which is similar to theorem but uses a reduced order observer instead of a observer simulations which illustrate the application of the obtained results are given in section the concluding remarks of the paper are given in section finally the appendix provides the proofs of certain auxiliary results notation throughout this paper we adopt the following notation for a real number x denotes the integer part of x denotes the interval let u be an open subset of a metric space and m be a set by c u we denote the class of continuous mappings on u which take values in when u n by c u we denote the class of continuously differentiable functions on u which take values in when u a b or u 
a b with a b c a b or c a b denotes all functions f a b or f a b which are continuous on a b and satisfy f s f a or lim f s f a and s s lim f s f b when u a b c a b denotes all functions s f a b which are continuously differentiable on a b and satisfy lim h f a h f a f s lim f s f a and s s k is the class of all strictly increasing unbounded functions a with a see for any subset s and for any a a s denotes the class of all functions f c a s for which there exists a finite or empty set b a such that i the derivative f a exists at every a a b and is a continuous function on a b ii all meaningful right and left limits of f a when a tends to a point in b a exist and are finite let a function f c a be given where a is a constant we use the notation f t to denote the profile at certain t f t a f t a for all a a let a function x c a be given where a is a constant we use the notation xt c to denote the a history of x at certain t x t a for all a a let dmin dmax be given constants the saturation function sat x for the interval dmin dmax is defined by sat x min x for all x problem description and main results i the model consider the chemostat model t a t a a d t f t a for t a a a f t k a f t a da for t where d t dmin dmax is the dilution rate dmax dmin are constants a is a constant and a a k a are continuous functions with k a da system is a continuous model of a microbial population in a chemostat the function a is called the mortality function the function f t a denotes the density of the population of age a a at time t and the function k a is the birth modulus of the population the boundary condition is the renewal condition which determines the number of newborn individuals f finally a is the maximum reproductive age physically meaningful solutions of are only the solutions solutions satisfying f t a for all t a a the chemostat model is derived by neglecting the dependence of the growth of the microorganism on the concentration of a limiting substrate a more accurate model would involve an enlarged system that has one pde for the age distribution coupled with one ode for the substrate as proposed in in the context of studying limit cycles with constant dilution rates however the approach of neglecting the nutrient equation in the chemostat is not new see for example we assume that there exists dmin dmax such that a a k a d a s ds this assumption is necessary for the existence of an equilibrium point for the control system which is different from the identically zero function any function of the form a f a m d a s ds for a a with m being an arbitrary constant is an equilibrium point for the control system with d t d notice that there is a continuum of equilibria the measured output of the control system is given by the equation a y t p a f t a da for t a where p a is a continuous function with p a da notice that the case p a corresponds to the total concentration of the microorganism in the chemostat feedback control with continuously adjusted input let y be an arbitrary constant the set point and let f a be the equilibrium age profile given by with a m y p a d a s ds consider the dynamic feedback law given by y t t z t d t t ln y y t t t ln y z t t z t and y t d t sat z t ln y where l are constants next consider solutions of the problem with initial condition f z x where x is the set a x f pc a f k a f a da by a solution of the problem with initial condition f z x we mean a pair of mappings f c a z c where which satisfies the following properties i f c f where d f t a a a t b a and b a is the 
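A short numerical sketch can make the equilibrium condition and the equilibrium age profile above concrete. The snippet below is only an illustration: the maximum age A, mortality mu, birth modulus k, output weight p and set point y_star are made-up placeholder data, not values from the paper. What is taken from the text is the structure of the computation: the equilibrium dilution rate D* is the value at which the renewal integral of k weighted by the survival factor equals one (that integral is strictly decreasing in D, which is why a bracketing root finder suffices), and the equilibrium profile is f*(a) = M exp(-D* a - int_0^a mu(s) ds) with M fixed so that the measured output int_0^A p(a) f*(a) da equals y*.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Placeholder model data (assumptions for illustration, not taken from the paper).
A = 2.0                                   # maximum reproductive age
mu = lambda a: 0.1                        # mortality function mu(a)
k = lambda a: 4.0 * a * np.exp(-a)        # birth modulus k(a)
p = lambda a: 1.0                         # output weight; p = 1 gives the total concentration
y_star = 1.0                              # output set point y*

def survival(a, D):
    """exp(-D*a - int_0^a mu(s) ds): survival factor at age a under constant dilution D."""
    m, _ = quad(mu, 0.0, a)
    return np.exp(-D * a - m)

def renewal_integral(D):
    """int_0^A k(a) exp(-D*a - int_0^a mu) da; equals 1 exactly at the equilibrium dilution D*."""
    val, _ = quad(lambda a: k(a) * survival(a, D), 0.0, A)
    return val

# The renewal integral is strictly decreasing in D, so D* is the unique root of
# renewal_integral(D) - 1 and a bracketing method such as brentq finds it.
D_star = brentq(lambda D: renewal_integral(D) - 1.0, -10.0, 10.0)

# Equilibrium age profile f*(a) = M * exp(-D*a - int_0^a mu), with M fixed by the set point:
# int_0^A p(a) f*(a) da = y*.
norm, _ = quad(lambda a: p(a) * survival(a, D_star), 0.0, A)
M = y_star / norm
f_star = lambda a: M * survival(a, D_star)

print("equilibrium dilution rate D* =", D_star, " newborn density f*(0) =", f_star(0.0))
```

With p identically equal to one, the set point y_star is the desired total concentration of the microorganism, matching the interpretation of the measured output given in the text.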
finite possibly empty set where the derivative of f x is not defined or is not continuous ii f t x for all t where f t a f t a for a a see notation iii equations hold for all t iv equation t a t a a d t f t a holds for all t a d f and v z z z f a f a for all a a the mapping t f t z t x is called the solution of the system with and initial condition f z x defined for t define the functional c a by means of the equation a f f a k s l d dl a ak a f a da and assume that the following technical assumption holds for the function a k a k a d a s ds a that satisfies for a a k a da recall a a there exists a constant such that a k a k s ds da a where r ak a da we are now ready to state the first main result of the present work which provides stabilizers with continuously adjusted input theorem continuously adjusted input and unknown equilibrium value of the dilution rate consider the chemostat model with k a under assumption a then for every f x and z there exists a unique solution of the with and initial condition f z x furthermore there exist a constant l and a function k such that for every f x and z the unique solution of the with and initial condition f z x is defined for all t and satisfies the following estimate f t a t z t d max ln f a a f a l z d t max ln a f a for all t moreover let p be a pair of constants satisfying p then the continuous functional w c a defined by w z f f g q z f q z f where is an arbitrary constant q z f max exp a f a f f a a f a ln f ln f m f a min f min a f a is a sufficiently small constant and m g are sufficiently large constants is a lyapunov functional for the system with in the sense that every solution f t z t x of the system with satisfies the inequality lim sup h z t h f t h w z t f t l h w z t f t w z t f t for all t as remarked in the introduction theorem does not only provide formulas for dynamic output feedback stabilizers that guarantee global asymptotic stability of the selected equilibrium age profile but also provides explicit formulas for a family of clfs for system indeed the continuous functional w c a defined by is a clf for system remark i the family of feedback laws parameterized by l guarantees global asymptotic stabilization of every selected equilibrium age profile moreover the feedback law achieves a global exponential convergence rate see estimate in the sense that estimate holds for all physically meaningful initial conditions f x as indicated in the introduction the logarithmic penalty in penalizes both the overpopulated and underpopulated conditions with an infinite penalty on zero density for some age the state converges to the desired equilibrium profiles from all positive initial conditions but not from the initial condition which itself is an equilibrium population can not develop from a dead initial state ii the feedback law is a dynamic output feedback law the subsystem is an observer that primarily estimates the equilibrium value of the dilution rate the observer is a highly reduced order since it estimates only two variables the constant and the scalar functional of the state f introduced in all the remaining infinitely many states are not estimated this is the key achievement of our stabilization without the estimation of nearly the entire state and proving this result in an appropriately constructed transformed representation of that unmeasured infinitedimensional state iii the family of feedback laws does not require knowledge of the mortality function of the population the birth modulus of the population and the maximum 
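The stability estimate of the theorem, like the logarithmic transformation discussed in the introduction, measures the distance to the equilibrium profile through the quantity max over a of the absolute value of ln(f(a)/f*(a)). A small helper makes this measure concrete; the sampled profiles below are assumed illustration data only, not values from the paper.

```python
import numpy as np

def log_deviation(f_vals, f_star_vals):
    """max_a |ln(f(a)/f*(a))| on a grid of ages: the logarithmic distance to the
    equilibrium profile used in the stability estimates.  It is zero only at the
    equilibrium, grows when some age class is overpopulated, and blows up as the
    density of some age class approaches zero, so washout is penalized infinitely."""
    return float(np.max(np.abs(np.log(np.asarray(f_vals) / np.asarray(f_star_vals)))))

# Illustration with made-up profiles on a uniform age grid.
ages = np.linspace(0.0, 2.0, 201)
f_star_vals = np.exp(-ages)                 # a sample equilibrium-like profile
f_vals = 1.5 * np.exp(-1.2 * ages)          # a perturbed positive profile
print(log_deviation(f_vals, f_star_vals))   # finite; tends to +inf if f_vals -> 0 anywhere
```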
reproductive age of the population accordingly it does not require the knowledge of the equilibrium value of the dilution rate either instead is estimated by the observer state z t see estimate iv the feedback law can work with arbitrary input constraints the only condition that needs to be satisfied is that the equilibrium value of the dilution rate must satisfy the input constraints dmin dmax which is a reasonable requirement otherwise the selected equilibrium age profile is not feasible v the parameters l can be used by the control practitioner for tuning the controller the selection of the values of these parameters affects the value of the constant l that determines the exponential convergence rate since the proof of theorem is constructive useful formulas showing the dependence of the constant l on the parameters l are established in the proof of theorem vi it should be noted that for every pair of constants l it is possible to find constants p satisfying l p indeed for every l the matrix is a hurwitz matrix consequently there exists a positive definite matrix so that the matrix p p p l p l l p l p is negative definite this implies the inequalities p and vii the main idea for the construction of the feedback law is the transformation of the pde problem into a system that consists of an ode and an ide along with the y t the transformations are presented in figure and y logarithmic output transformation y t ln are exploited rigorously in the proof of theorem figure also shows that the observer is actually an observer for the system t d t d t t checking assumption a theorem assumes that the birth modulus of the population satisfies assumption a this is not an assumption that is needed for the establishment of the exponential estimate estimate could have been established without assumption a by means of the strong ergodic theorem see section the role of assumption a is crucial for the establishment of the clf given by however since assumption a demands a specific property for the function a k a k a exp d a s ds that involves the unknown equilibrium value of the dilution rate the verification of the validity of assumption a becomes an issue the following proposition provides useful sufficient conditions for assumption a its proof is provided in the appendix proposition means of checking assumption a let k c a be a function that satisfies the following assumption a k a da moreover there s t k a where t a k a has b the function k c a satisfies k a for all a a and exists a constant such that the set lebesgue measure where r ak a da a then for every r it holds that a k a k s ds da s a proposition shows that assumption a is valid for a function that satisfies assumption b on the other hand we know that assumption b holds for every function k c a a satisfying k a da and having only a finite number of zeros in the interval a since a k a k a d a s ds we can be sure that assumption a necessarily holds for all birth moduli k c a of the population with only a finite number of zeros in the interval a no matter what the equilibrium value of the dilution rate is and no matter what the mortality function a is t a t a a d t f t a t a a f t k a f t a da f a f t t ln f t y t ln y t y a y t f t a p a f t a da t d d t a t k a t a da f t a t a f a exp t y t y t a a t a k s dsda a a y t t ln g a t a da figure the transformation of the pde with boundary condition given by to an ide and an ode and the inverse transformation control on the other hand when the equilibrium value of the dilution rate is a priori 
known then we are in a position to achieve stabilization let t be the sampling period and consider the system with the feedback law y it d t sat d t ln for all t it i t and for all integers i y by a solution of the problem with initial condition f x where x is the set a x f pc a f k a f a da we mean a mapping f c a where which satisfies the following properties i f c f where d f t a a a t b a t t t and b a is the finite possibly empty set where the derivative of f x is not defined or is not continuous ii f t x for all t iii equations hold for all t iv equation t a t a a d t f t a holds for all t a d f and v f a f a for all a a the mapping t f t x is called the solution of the system with and initial condition f x defined for t we are now ready to state the second main result of the present work theorem feedback and known equilibrium value for the dilution rate consider the chemostat model with k a then for every f x there exists a unique solution of the with and initial condition f x furthermore there exist a constant l and a function k such that for every f x the unique solution of the with and initial condition f x is defined for all t and satisfies the following estimate f t a f a l t max ln max ln a f a f a a for all t the differences of theorem with theorem are i theorem applies the feedback while theorem applies a continuously adjusted feedback ii theorem assumes knowledge of the equilibrium value of the dilution rate iii theorem does not assume property a but does not provide a clf for the system this was explained above assumption a is only needed for the explicit construction of a clf finally the reader should notice that there is no constraint for the sampling period t arbitrarily large values for t are allowed arbitrarily sparse sampling in the case where the output is given by y t f t for t instead of the proof of theorem works with only minor changes the proof is omitted this is the case considered in ideas behind the proofs of the main results the basic tool for the proofs of the main results of the present work is the transformation shown in figure the main idea comes from the recent work the transformation of a hyperbolic pde to an ide however if we applied the results of in a straightforward way then we would end up with the following ide a a t v t k a s ds d s ds t a da t where v t f t and a t f t a exp s ds d s ds t a t however the ide is dependent instead we would like to describe the effect of the control input in a more convenient way this is achieved by introducing one more state t ln f t where is given by the evolution of t is described by the ode t d d t then we are in a position to obtain the transformation t a f t a f a f t for all t a a which decomposes the dynamics of to the dynamics of the ide a a a t a k s dsda a t k a t a da evolving on the subspace described by the equation and the ode t d d t after achieving this objective the next step is the stability a analysis of the zero solution of the ide t k a t a da this is exactly the point where the strong ergodic theorem or the results on linear ides are used the uncontrolled pde the present section aims to give to the reader the background mathematical knowledge which is used for the study of pdes more specifically we aim to make the reader familiar to the strong ergodic theorem for pdes and to show the relation of pdes to linear ides let a be a constant and let a k a be continuous functions with a k a da consider the initial value pde problem t a t a a z t a for t a a a z t k a z t a da for t with initial 
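As best as the sample-and-hold law can be read from the displayed equation, the dilution rate is recomputed only at the sampling instants iT from the measured output y(iT), held constant on [iT, (i+1)T), and saturated to the admissible interval [dmin, dmax]. The sketch below assumes the form D_i = sat(D* + T^{-1} ln(y(iT)/y*)); since the equation is garbled in this copy, treat this as a hedged reading of the law rather than a definitive implementation, and check the gain and sign conventions against the original display.

```python
import math

def sat(x, dmin, dmax):
    """Saturation onto the admissible dilution-rate interval [dmin, dmax]."""
    return min(dmax, max(dmin, x))

def sampled_dilution(y_sample, y_star, D_star, T, dmin, dmax):
    """One reading of the sample-and-hold output feedback: the value returned at
    t = iT is applied on the whole sampling interval [iT, (i+1)T)."""
    return sat(D_star + math.log(y_sample / y_star) / T, dmin, dmax)

# If the measured output is above the set point the dilution rate is raised (up to dmax),
# if it is below the set point the dilution rate is lowered (down to dmin).
D_i = sampled_dilution(y_sample=1.3, y_star=1.0, D_star=1.0, T=0.5, dmin=0.0, dmax=2.0)
```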
condition z a z a for all a a the following existence and uniqueness result follows directly from proposition in and theorems on pages in lemma for each absolutely continuous function z c a with a z k a z a da there exists a unique function z a with z a z a for all a a that satisfies a for each t the function z t defined by z t a z t a for a a is a absolutely continuous and satisfies z t k a z t a da for all t b the mapping t z t a is continuously differentiable and c equation holds for almost all t and a a moreover if z a for all a a then z t a for all t a a the function z a is called the solution of when additional regularity properties hold then the solution of satisfies the properties shown by the following lemma lemma to ides if k a then for every z a a satisfying z k a z a da the function z a from lemma is c on s t a a a t b a where b is the finite or empty set where the derivative of z is not defined or is not continuous satisfies on s and equation for all t also a z t a s ds t a for all t a a where v c a c is the unique solution of the integral delay equation ide a a v t k a s ds t a da for t with initial condition v s ds z a for all a a lemma is obtained by integration on the characteristic lines of the solution v c a c of the ide is obtained as the solution of the delay differential equation a t k v t k a v t a da a v t a da dk a where k a k a s ds for a a the differential equation is obtained by formal differentiation of the ide and its solution satisfies the verification requires integration by parts a a it is straightforward to show that the function h d k a da s ds is strictly decreasing with lim h d and lim h d therefore there exists a unique d such that d holds equation is the condition the following strong ergodicity result follows from the results of section in and proposition in theorem scalar strong ergodic theorem let d be the unique solution of then there exist constants k such that for every absolutely continuous function z c a a with z k a z a da the corresponding solution z a of satisfies for all t a exp s ds z t a exp d t a s ds z da k exp d t a a s ds z a da where a is the linear continuous functional defined by a z z a k s l d dl a a a ak a l d dl a results on linear integral delay equations since the previous sections have demonstrated the relation of pdes to linear ides we next focus on the study of linear ides the present section provides stability results for the system described by the following linear ide a x t a x t a da where x t a is a constant and c a the results of the present section allow the construction of lyapunov functionals for linear ides which provide formulas for lyapunov functionals of pdes since the zero dynamics of the controlled model are described by linear ides all proofs of the results of the present section are provided in the appendix the notion of the a for every c with x a x da there exists a unique function x c a that satisfies for t and x for all a a this function is called the solution of with initial condition c the solution is obtained as the solution of the neutral delay equation d dt a x t a x t a da theorem on page in guarantees the existence of a unique function x c a c that satisfies d dt a x t a x t a da for t and x for all a a therefore ide the x x c x a x a da a defines a dynamical system on with state xt x where x t a for all a a see notation a basic estimate and its consequences the first result of this section provides useful bounds for the solution of with kernel notice that the following lemma allows 
discontinuous solutions for as well as discontinuous initial conditions lemma a basic estimate for the solution of linear ides let c be a given a function with a da and consider the ide let be an arbitrary constant with there exists a unique function x a with a da then for every x a a for a that satisfies for t moreover x a satisfies for all t the following inequality h min inf a inf a inf t a a a a c l c h sup t a sup a sup a c a a a a where h min a l a da c a da a direct consequence of lemma and lemma is that if k a then for every a z a satisfying z k a z a da and z a for all a a the corresponding solution of satisfies z t a for all t a a to see this notice that if a k a s ds a a k a s ds a on the other hand if then we may apply lemma and lemma directly for the ide then we define x t exp pt v t for all t a where it follows a k a pa s ds a a a x t k a pa s ds x t a da that for and that for p sufficiently large another direct consequence of lemma and lemma is that if k a then the quantity f t a f a f t appearing in the right hand side of the transformation is only a function of t a and thus is a valid transformation indeed it is straightforward to verify that for every piecewise continuous function d dmin dmax and for every a f pc a with f k a f a da the solution of with f a f a for t a a corresponding to input d dmin dmax satisfies f t a z t a d s ds for all t a a where z a is the solution of with same initial condition t z a f a for a a using and equation f t a z t a d s ds we get f t a f a t exp d a d s ds t a for all t a a t f t a z t a d s ds using equation and definition we get a t m wk w dw f t d s ds v t a exp d a k s ds t t d t d s ds v l exp d l k s ds a t since v t for all t a a consequence of and the conclusion of the previous paragraph the above equation implies that f t for all t combining the two above equations we get a v t a exp d t a wk w dw f t a d dt v l exp d l k s ds t a k a k a d a s ds for a a f a f t where a v l exp d l k s ds t t t d dt for all t a a notice that implies for all t indeed we have for all t t v l exp d l k s ds v t exp d t k s ds v l k t l exp d l dl t t a v t exp d t k s ds v t a k a exp d t a da that a using definition k a k a d a s ds for a a and the fact that a k s ds a consequence of we get for all t d dt a a v l exp d l k s ds exp d t v t v t a k a s ds t t which combined with gives fact that d dt v l exp d l k s ds t t v a exp s ds z a s ds f a for all t therefore using the for all a a we get v l exp d l k s ds dl v l exp d l k s ds t t w a f w d w s ds k s ds w f t a is a function the quantity f a f t a consequently a f t a f a f t only of t a since we have for all t wk w dw a w a f w d w s ds k s ds w v t a exp d t a for all t a a the strong ergodic theorem in terms of ides next we state the strong ergodic theorem theorem in terms of the ide to this goal we define the operator g c c a for every v c by the relation gv a v for all a a if c a k a satisfy for certain d then it follows from lemma and theorem that there exist constants k such that for every z a a satisfying z k a z a da the unique solution of the ide with initial condition v a s ds z a for all a a satisfies for all t the following estimate a v t a exp d t a z da k exp d t v da a the above property can be rephrased without any reference to the pde for every a k pc a with k a exp d a da there exist constants k such that for every a c with v k a v da and a the unique solution of the ide a v t k a v t a da with initial condition v v for all a a satisfies for all t with a z a v a s ds 
for a a using the transformation x t d t for all t a we obtain a mapping of solutions of the ide v t k a v t a da to the solutions of the ide with a k a d a a for all a a moreover estimate implies the following estimate for all t a x t a p x da x da k t exp d a a where the functional p c is defined by means of the equation a a a p x r x s dsda is found by substituting p c r a da the functional a z a x a d a s ds for a a in the functional a defined by and therefore we are in a position to conclude that the following property holds for every pc a a with a da there exist constants k such that for every a c with x a x da and a the unique solution of the ide with initial condition x for all a a satisfies the following estimate for all a a x t a p x da k t x da using this property we obtain the following corollary which is a restatement of the strong ergodic theorem theorem in terms of ides and the norm instead of the norm recall that a x x c x a x a da corollary the strong ergodic theorem in terms of ides suppose that a a with a da then there exist constants m such that for every x with pc a the unique solution of the ide with initial condition x for all a a satisfies the following estimate for all t max x t p x m t max x a a the construction of lyapunov functionals the problem with corollary is that it does not provide a functional which can allow the derivation of the important property moreover it does not provide information about the magnitude of the constant in order to construct a functional and obtain information about the magnitude of the constant we need some technical results the first result deals with the exponential stability of the zero solution for notice that the proof of the exponential stability property is made by means of a lyapunov functional a lemma lyapunov functional for the general case suppose that a da then x is globally exponentially stable for moreover the functional v x defined by v x max a x where is a constant that satisfies a a a exp da satisfies the differential inequality lim sup h xt h v xt v xt h for all t for every solution of lemma is useful because we next construct lyapunov functionals of the form used in lemma however we are mostly interested in kernels c a with values that a satisfy a da we show next that even for this specific case it is possible to construct a lyapunov functional on an x x c x a x a da invariant subspace of the state space a we next introduce a technical assumption a the function c a satisfies a for all a a and a da moreover a there exists such that a s ds da where r a da a a the following result provides the construction of a lyapunov functional for system under assumption theorem lyapunov functional for linear ides with special kernels consider system where c a satisfies assumption let be a real number for which a a s ds da where r a da define the functional v x by means of the a equation a v x max a x p x a a where is a real number for which a a s ds exp da and p x is the a functional defined by then the following relations hold p xt p x for all t lim sup h xt h v xt v xt h for all t for every solution of remark theorem is a version of the scalar strong ergodic theorem compare with corollary for kernels that satisfy assumption corollary does not allow us to estimate the magnitude of the constant that determines the convergence rate on the other hand theorem allows us to estimate the comparison lemma on page in and differential inequality guarantee that v xt exp t v for all t and for every solution of using definition and the 
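The ergodic behaviour described by the corollary, and the conservation of the functional P along solutions stated in the theorem, can be observed numerically. The sketch below discretizes the delay interval with an explicit rectangle rule; the particular kernel phi, the initial history, and the normalization of the discrete weights (so that they sum to one, mimicking int_0^A phi(a) da = 1) are assumptions of this illustration and the discretization itself is not the paper's construction.

```python
import numpy as np

A, N = 2.0, 200
h = A / N
ages = h * np.arange(1, N + 1)              # a_j = j*h, j = 1, ..., N

# Assumed nonnegative kernel, with discrete weights normalized to sum to 1
# (the discrete counterpart of int_0^A phi(a) da = 1).
phi = ages * (A - ages)
c = phi * h
c /= c.sum()

# Assumed initial history x(s) for s in [-A, 0]; hist[j] approximates x(-j*h).
x_init = lambda s: 1.0 + 0.5 * np.sin(3.0 * s)
hist = x_init(-h * np.arange(N))

# Discrete analogue of P(x) = (1/r) int_0^A (int_a^A phi(s) ds) x(-a) da with r = int_0^A a*phi(a) da.
b = c[::-1].cumsum()[::-1]                  # b_j = sum_{i >= j} c_i
r = np.sum(np.arange(1, N + 1) * c)         # mean delay measured in grid steps
P0 = np.dot(b, hist) / r

# March the IDE x(t) = int_0^A phi(a) x(t-a) da forward; the scheme is explicit because
# every delayed argument t - a_j lies strictly in the past.
for _ in range(20 * N):                     # simulate up to t = 20*A
    x_new = np.dot(c, hist)
    hist = np.concatenate(([x_new], hist[:-1]))

P_final = np.dot(b, hist) / r
print("x(t) at the final time:", hist[0])
print("ergodic limit P(x0):   ", P0)
print("P along the solution:  ", P_final)
```

Up to the discretization, the value of x(t) at the final time agrees with P(x0), and P evaluated along the discrete solution stays constant, mirroring the two relations stated in the theorem.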
previous estimate we can guarantee that max x t a p xt max x t a p exp t a max x p for all t a a a therefore bounds for can be computed in a straightforward way using the inequality a a a a s ds exp da an allowable value for is a ln a s ds da a a moreover corollary does not provide a functional for equation however the cost of these features is the loss of generality while corollary holds for all kernels a that satisfy a for all a a and a a da theorem holds only for kernels that satisfy assumption theorem can allow us to guarantee exponential stability for the zero solution of when the state evolves in certain invariant subsets of the state space this is shown in the following result corollary lyapunov functional for linear ides on invariant sets consider system where c a satisfies assumption let be a real number for which a s ds da where r a da let p x be the functional defined by a define the functional w x by means of the equation a a w x max a x a a where is a real number for which a a s ds exp da let s x be a positively a invariant set for system and let c s where is a constant and s c is an open set with s s be a continuous functional that satisfies lim sup h x t h c x t h for every t and for every solution x t of with x t s then for every s with p and for every b k c the following hold for the solution x t of with initial condition s lim x t h b w x t h c x t b w x t c x t b w x t w x t for all t h p xt for all t remark a the differential inequality is equivalent to the assumption that the mapping t c xt is b using assumption and lemma we can guarantee that the mapping t xt is nondecreasing and that the mapping t g xt is for every solution of where g c are the continuous functionals g x min and g x max a a indeed lemma implies that for every solution of it holds that x xt g xt g for all t consequently any set s x of the form s x x min s x x max a a s x x min max a a where c are constants is a positively invariant set for moreover using the semigroup property for the solution of and we get g g for all t the above inequality shows that the mapping t xt is and that the mapping t g xt is for every solution of proofs of main results we next turn our attention to the proof of theorem throughout this section we use the notation q x min d d dmin for all x notice that q x is a function which satisfies the equation q x sat d x d for all x equation and the fact dmin dmax imply the inequality q x max dmax d d dmin for all x for all x we also notice that the inequality xq x min d max d d d min x holds indeed inequality can be derived by using definition and distinguishing three cases i d dmin x dmax d ii x dmax d and iii d dmin x for case i we get from xq x x and since case for case ii min dmax d d dmin x we get dmax d min dmax d d dmin x from we conclude that holds in this xq x dmax d x and since x x we conclude that holds in this case the proof is similar for case iii the proof of theorem is based on the transformation shown in figure and on the following lemmas their proofs can be found in the appendix lemma consider the control system a t d d t t k a t a da t t where a is a constant dmin dmax is a constant dmax dmin are constants k c a satisfies assumption a and a k a da the control system is defined on the set s where a s s c p s c k a a da and p is the linear functional p r k s dsda with r ak a da the measured a a a output of system is given by the equation a y t t ln g a t a da a where the function g c a satisfies g a for all a a and g a da consider the system with the dynamic feedback 
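The auxiliary function q introduced at the start of the proofs admits a quick numerical sanity check. The reading adopted below is q(x) = min(dmax - D*, max(dmin - D*, x)), which satisfies q(x) = sat(D* + x) - D*, and the lower bound is checked in the form x q(x) >= min(dmax - D*, D* - dmin, |x|) |x|. Since the displayed inequalities are garbled in this copy, this is one consistent reading rather than a verbatim reproduction, and the constants dmin, dmax, D* used here are placeholders.

```python
import numpy as np

dmin, dmax, D_star = 0.2, 2.5, 1.0          # placeholder values with dmin < D* < dmax

def sat(x):
    return np.minimum(dmax, np.maximum(dmin, x))

def q(x):
    return np.minimum(dmax - D_star, np.maximum(dmin - D_star, x))

xs = np.linspace(-5.0, 5.0, 10001)
# q(x) = sat(D* + x) - D*
assert np.allclose(q(xs), sat(D_star + xs) - D_star)
# x * q(x) >= min(dmax - D*, D* - dmin, |x|) * |x|
lower = np.minimum.reduce([np.full_like(xs, dmax - D_star),
                           np.full_like(xs, D_star - dmin),
                           np.abs(xs)]) * np.abs(xs)
assert np.all(xs * q(xs) >= lower - 1e-12)
```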
law given by t z t d t t y t and t t y t z t t z t d t sat t y t where l are constants let p be a pair of constants satisfying p then there exist sufficiently large constants m g and sufficiently small constants l such that for every constant and for every z s the solution z t t t s of the system with and initial condition z z for all a a is unique exists for all t and satisfies the differential inequality lim sup h t h z t h t h v t z t t l h v t z t t v t z t t for all t where v c is the continuous functional defined by v z g q z q z max a m a q z min a lemma suppose that there exists a constant l such that the continuous function satisfies lim sup h t h t l h t for all t t then the following estimate holds l t for all t we are now ready to provide the proof of theorem proof of theorem define f a f a f for all a a ln f it is straightforward to verify using definitions equation and the fact that a k a k a d a s ds a a a for all a a that p where p r k s dsda with r ak a da define g a p a f a y for all a a notice that and the fact that a m y p a d a s ds imply that the function a g c a satisfies g a for all a a and g a da next consider the solution z t t t s of the system with and initial condition z z for all a a lemma guarantees that the solution z t t t s of the system with a exists for all t the solution of the ide t k a t a da is c on since it coincides a with the solution of the delay differential equation t k t k a t a k a t a da with the same initial condition therefore by virtue of the function defined by f t a t a f a exp t for t a a is continuous and f c f where d f t a a a t b a and b a is the finite possibly empty set where the derivative of f x is not defined or is not continuous since t s it follows that f t x for all t where f t a f t a for a a using and we conclude that t ln f t for all t moreover using and we conclude that equations hold for all t and equation holds for all t a d f finally by virtue of and it follows that z z f a f a for all a a using we conclude that v t z t t w z t f t for all t therefore the differential inequality implies the differential inequality lemma in conjunction with inequality implies that the following estimate holds l v t z t t v z v z t for all t since p is a pair of constants with p it follows that the quadratic form a e p is positive definite therefore there exist constants k such that e a e p e k e for all e using the previous inequality and we obtain the following estimates for all t l t v z v z g m max a t a min min t a a l v z v z t l g max t t z t d v z v z t estimates and the fact that m g are sufficiently large constants with g and g m imply the following estimate for all t max a t a t t z t d min min t a using and the l v z v z v z t x fact that ln x for all x we get the estimate min for all a a exp max s t s f t a t a a t ln t f a min t a min min t s a which implies the following estimate for all t exp max a t f t a a t max ln a f a min min t a combining and we obtain the following estimate for all t f t a t z t d max ln f a a l exp v z v z v z t taking into account we conclude that the validity of relies on showing that there exists a function b k that satisfies the following inequality for all f x and z and for s satisfying f a z d v z max ln a f a in order to show and taking into account definitions it suffices to show there exist functions k for which the following inequalities hold for all f x and for s satisfying a f a f a amax a max ln a f a min min f a a max ln a f a in what follows we are using repeatedly the notation v 
max ln and the fact that a f a f a f a max exp v exp min a f a a f a f a inequality follows from the definition v max ln and the following implications a f a f a v for all a a ln f a using and we get f a v for all a a v ln f a f a exp exp v for all a a f a for all f x f a f a f max min a f a a f a inequalities is derived by means of definition which directly implies a f f a k s l d dl a ak a f a da f a f a f max max a f a a f a a f f a k s l d dl a ak a f a da f a f a f min min a f a a f a moreover by virtue of and we have f since a f f a k s l d dl a ak a f a da a m d a s ds k s l d dl a a a m ak a d a s ds a a s k s d s l dl a a ak a d a s ds notice that for the last equality above we have used integration by parts for the integral in the numerator combining and using we get v consequently shows that the first inequality holds with s s for all s on the other hand using we get for all f x and a a f a and exp a exp min a f a f a using and we obtain for all f x and a a exp f a exp and exp the following inequality is a direct consequence of max a a min min a exp exp consequently shows that the second inequality holds with s exp exp for all s the proof is complete the proof of theorem is based on the transformation shown in figure and on the following lemma its proof can be found in the appendix lemma consider the control system where a is a constant dmin dmax is a a constant dmax dmin are constants k c a satisfies k a da the control system is defined on the set s where a s s c p s c k a a da and p r k s dsda with r ak a da a a a is a linear functional the measured output of system is given by the equation where the function g c a satisfies g a for a all a a and g a da consider the system with the dynamic feedback law given by d t sat d t it for all t it i t and for all integers i where t is a constant let g c c a be the operator defined by the relation gv a v for all a a for every v c then there exist a constant l and a function k such that for every s with a the solution t t s of the system with and initial condition for all a a is unique exists for all t and satisfies the following estimate max t s max s a s t l t min min t s min min s s s s we are now ready to provide the proof of theorem for all t proof of theorem define s by means of it is straightforward to a verify using definitions equation and the fact that k a k a d a s ds a a a for all a a that p and a where p r k s dsda with and g c c a is the operator defined by the relation r ak a da gv a v for all a a for every v c define g by means of and notice that and the fact that a m y p a exp d a s ds imply that the function a g c a satisfies g a for all a a and g a da next consider the solution t t s of the system with and initial condition for all a a lemma guarantees that the solution t t s of the system with exists for all t moreover there exist a constants l and a function k such that for every s with a the solution t t s of the system with and initial condition for all a a satisfies estimate the solution a of the ide t k a t a da is c on since it coincides with the solution of the delay a differential equation t k t k a t a k a t a da with the same initial condition therefore by virtue of the function defined by is continuous and f c f where d f t a a a t b a t t t and b a is the finite possibly empty set where the derivative of f x is not defined or is not continuous since t s it follows that f t x for all t where f t a f t a for a a using and we conclude that holds moreover using and we conclude that equations hold for all t and 
equation holds for all t a d f finally by virtue of and it follows that f a f a for all a a using and the fact that ln x x min for all x we get the estimate for all t max t f t a a max ln t a f a min min t a combining and we obtain the following estimate for all t max s f t a s lt max ln a f a min min s s estimate for certain k is a direct consequence and inequalities for certain k the proof is complete using a reduced order observer instead of using the observer of the system t d t d t t one can think of the possibility of using a reduced order observer that estimates the equilibrium value of the dilution rate such a dynamic output feedback law will be given by the equations y t t z t l ln d t y z t and y t d t sat l z t ln y where l are constants in such a case a solution of the problem with with initial condition where f x a x f pc a f k a f a da means a pair of mappings f c a z c where which satisfies the following properties i f c f where d f t a a a t b a and b a is the finite possibly empty set where the derivative of f x is not defined or is not continuous ii f t x for all t iii equations hold for all t iv equation t a t a a d t f t a holds for all t a d f and v z z f a f a for all a a the mapping t f t z t x is called the solution of the system with and initial condition f z x defined for t for the observer case we are in a position to prove exactly in the same way of proving theorem the following result since its proof is almost identical to the proof of theorem it is omitted theorem stabilization with a reduced order observer consider the chemostat model with k a under assumption a then for every f x and z there exists a unique solution of the with and initial condition f z x furthermore there exist a constant l and a function k such that for every f x and z the unique solution of the with and initial condition f z x is defined for all t and satisfies the following estimate f t a z t l d max ln f a a f a l t max ln a f a for all t moreover the continuous functional w c a defined by w z f f g q z f q z f where is an arbitrary constant max exp a f a f f a f a m a q z f z ln f f a min f min a f a c a is given by is a sufficiently small constant and m g are sufficiently large constants is a lyapunov functional for the system with in the sense that every solution f t z t x of the system with satisfies the inequality lim sup h z t h f t h w z t f t l h w z t f t w z t f t for all t the family of dynamic bounded output feedback laws presents the same features as the family the only difference lies in the dimension of the observer simulations to demonstrate the control design from theorem three simulations were carried out in each simulation we considered the case where a a k a ag a and k a a g a and the birth modulus is given by where g is the constant for which the condition holds the model is dimensionless a dimensionless version of can be obtained by using appropriate scaling of all variables after a simple calculation it can be found that the constant g is given by e the output is given by the equation y t f t a da t in other words the output is the total concentration of the microorganism in the chemostat the chosen equilibrium profile that has to be stabilized is the profile that is given by the equation f a exp a for a the equilibrium value of the output is given by y exp d two feedback laws were tested the state feedback law given by d t di sat d t ln f it f for all t it i t and for all integers i which is the feedback law proposed in and the output feedback law given by d t di 
sat d t ln y it y for all t it i t and for all integers i which is the feedback law given by theorem for both feedback laws we chose t dmin dmax the following family of functions was used for initial conditions f a c exp a for a where c are free parameters and the constant is chosen so that the condition f k a f a da holds after some simple calculations we find that g g e cg however we notice that not all parameters c can be used because the additional condition min f a must hold as well the simulations were made with the generation of a uniform grid of function values f ih jh for j and i where h for i we had f jh f jh for j the calculation of the integrals y ih f ih a da and f k a f ih a da for every i was made numerically however since we wanted the numerical integrator to be able to evaluate exactly the integrals y ih f ih a da and f k a f ih a da for every i when f ih is an exponential function when f ih a c exp for a and for certain constants c we could not use a conventional numerical integration scheme like the trapezoid s rule or simpson s rule the reason for this demand to be able to evaluate exactly the integrals y ih f ih a da and f k a f ih a da for every i when f ih is an exponential function is explained by the fact that the equilibrium profile given by is an exponential function and we would like to avoid a error due to the error induced by the numerical integration to this end we used the following integration schemes j h f ih j h f ih jh f ih a da i j h ln f ih j h ln f ih jh i for f ih j h f ih jh jh j h f ih a da i j hf ih jh i for f ih j h f ih jh jh for j and i f ih a da i i h f f ih h f ln f ln f ih h f ih a da i ih h i for for f ih h f f ih h f j h af ih a da j j h f ih j h j f ih j h f ih jh ln f ih j h ln f ih jh i jh j h af ih a da j j i jh h f ih j h f ih jh f ih j h ln f ih jh j h f ih jh for f ih j h f ih jh for f ih j h f ih jh for j and i af ih a da j i j f ln f ln f ih h f ih h h f f f ln f ih h for f ih h f af ih a da j j i f for f ih h f j h a f ih a da k j f ih j h ln f ih j h f ih jh i jh hf ih j h hf ih jh h jh ln f ih j h f ih jh ln f ih j h f ih jh j h a f ih a da k j i jh j h ih jh for for f ih j h f ih jh f ih j h f ih jh for j and i the derivation of formulas is based on the interpolation of the function f j a c j exp j a through the points jh f ih jh and j h f ih j h for j more specifically we obtain for j j h ln f ih j h f ih jh c j f ih jh f ih j h f ih jh j based on the above interpolation the exact integration formulas are used for example for the j h integral j we get when j h ln f ih j h f ih jh af ih a da for jh j h j h j h jh jh jh af j a da c j af ih a da a exp j a da c exp j jh j exp j h j c exp j j h exp j jh and when j h ln f ih j h f ih jh j h j h af ih a da jh af j a da c j jh j h ada jh c j j combining the above formulas with the estimated values for c j j given by we obtain formula similarly we derive formulas and notice that the formulas allow the numerical evaluation of the integrals y ih f ih a da and f k a f ih a da for every i without knowledge of f since the time step has been chosen to be equal to the discretization space step h we are able to use the exact formula f i h jh f ih j h di for j and i therefore we are in a position to use the following algorithm for the simulation of the system under the effect of the output feedback law algorithm given f ih jh for j and certain i do the following j j calculate f g j i j g k i j where j i j k i j are given by calculate y ih i i j where i i j is given by j ih is an integer then 
set di max dmin min dmax t ln y ih else set di di if t calculate f i h jh for j using the above algorithm with obvious modifications was also used for the simulation of the system as well as for the simulation of the system under the effect of the output feedback law in our first simulation we used the parameter values c in our initial conditions in figure we plot the control values and the newborn individual values we show the values for the open loop feedback d t and for the state and output feedbacks from and our simulation shows the efficacy of our control design in our second simulation we changed the parameter values to c and plotted the same values as before in figure the responses for the output feedback law and the output feedback law are almost identical the second simulation was made with an initial condition which is not close to the equilibrium profile in the sense that it is an initial condition with very large initial population the difference in the performance of the feedback controllers and can not be distinguished in the final simulation we tested the robustness of the controller with respect to errors in the choice of being used in the controllers we chose the values c but instead of and we applied the following controllers the state feedback law d t di sat t ln f it f for all t it i t and for all integers i and the output feedback law given by d t di sat t ln y it y for all t it i t and for all integers i we obtained in both cases lim f t and lim t a error in gives a t t deviation from the desired value of the newborn individuals see figure notice that a constant error in is equivalent to an error in the set point since we have ln f it f t ln f it f d t di sat t ln f it f sat t sat d t for the state feedback case and ln it y t ln it y d t di sat t ln y it y sat t sat d t for the output feedback case concluding remarks chemostats present challenging control problems for hyperbolic pdes that require novel results we studied the problem of stabilizing an equilibrium age profile in an chemostat using the dilution rate as the control we built a family of dynamic bounded output feedback laws with continuously adjusted input that ensures asymptotic stability under arbitrary physically meaningful initial conditions and does not require knowledge of the model we also built a bounded output feedback stabilizer which guarantees asymptotic stability under arbitrary physically meaningful initial conditions and requires only the knowledge of one parameter the equilibrium value of the dilution rate in addition we provided a family of clfs for the chemostat model the construction of the clf was based on novel stability results on linear ides which are of independent interest the newly developed results provide a proof of the scalar strong ergodic theorem for special cases of the integral kernel since the growth of the microorganism may sometimes depend on the concentration of a limiting substrate it would be useful to solve the stabilization problem for an enlarged system that has one pde for the age distribution coupled with one ode for the substrate as proposed in in the context of studying limit cycles with constant dilution rates instead of a control this is going to be the topic of our future research acknowledgements the authors would like to thank michael malisoff for his help in the initial stages of the writing process of the paper figure simulation for the initial condition given by with c the upper part of the figure shows the response for the newborn individuals the solid 
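The exponential-interpolation quadrature and the characteristics update used in the algorithm above can be assembled into a compact simulation sketch. The cell rule, integral over [jh, (j+1)h] equal to h (f_{j+1} - f_j)/(ln f_{j+1} - ln f_j) with the limit h f_j when the two nodal values coincide, is taken from the integration formulas in the text. Everything else is simplified for illustration: the weight in the renewal integral is frozen at cell midpoints instead of using the exact weighted-cell formulas derived in the text, the newborn value is obtained after one provisional substitution, and the data passed to simulate() are made-up placeholders rather than the dimensionless values used for the figures.

```python
import numpy as np

def exp_cell_integral(fj, fj1, h):
    """Exact integral of f over one age cell of width h when f is interpolated as c*exp(sigma*a)."""
    if np.isclose(fj, fj1):
        return h * fj
    return h * (fj1 - fj) / (np.log(fj1) - np.log(fj))

def grid_integral(f_vals, h, weight=None):
    """int_0^A weight(a)*f(a) da on the uniform grid a_j = j*h.  The optional weight is
    evaluated at cell midpoints only, a simplification of the exact weighted formulas."""
    total = 0.0
    for j in range(len(f_vals) - 1):
        w = 1.0 if weight is None else weight((j + 0.5) * h)
        total += w * exp_cell_integral(f_vals[j], f_vals[j + 1], h)
    return total

def simulate(f0, mu, k, A, N, D_star, y_star, T, dmin, dmax, t_end):
    """Sample-and-hold closed-loop simulation sketch: shift the profile along characteristics,
    recover the newborn density from the renewal condition, and update the dilution rate
    only at multiples of the sampling period T (held constant in between)."""
    h = A / N
    ages = h * np.arange(N + 1)
    f = np.asarray(f0(ages), dtype=float)
    steps_per_sample = max(1, round(T / h))
    Di = D_star
    for i in range(round(t_end / h)):
        if i % steps_per_sample == 0:
            y = grid_integral(f, h)                                  # measured output y(iT)
            Di = min(dmax, max(dmin, D_star + np.log(y / y_star) / T))
        # f(t+h, a) = f(t, a-h) * exp(-(mu(a) + Di) * h) along characteristics
        f[1:] = f[:-1] * np.exp(-(mu(ages[1:]) + Di) * h)
        f[0] = f[1]                                                  # provisional newborn value
        f[0] = grid_integral(f, h, weight=k)                         # renewal: f(t, 0) = int k f
    return ages, f

# Illustrative run with made-up data; D_star here is a placeholder that is not matched to
# k and mu, so the output settles near, but not exactly at, the set point, much like the
# robustness test with a perturbed equilibrium dilution rate described above.
ages, f = simulate(f0=lambda a: 1.5 * np.exp(-a), mu=lambda a: 0.0 * a,
                   k=lambda a: 2.0 * a * np.exp(-a), A=2.0, N=200,
                   D_star=1.0, y_star=0.8, T=0.5, dmin=0.0, dmax=2.0, t_end=20.0)
```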
line with bullets is for the state feedback the dashed line is for the output feedback and the bulleted line is for the system with d t the lower part of the figure shows the applied control action d t again the solid line is for the state feedback the dashed line is for the output feedback while the bulleted line shows the equilibrium value of the dilution rate figure simulation for the initial condition given by with c the upper part of the figure shows the response for the newborn individuals the solid line is both for the state feedback and for the output feedback identical and the bulleted line is for the system with d t the lower part of the figure shows the applied control action d t again the solid line is both for the state feedback and for the output feedback while the bulleted line shows the equilibrium value of the dilution rate figure control in presence of modeling errors error in simulation for the initial condition given by with c the upper part of the figure shows the response for the newborn individuals the solid line is for the state feedback the dashed line is for the output feedback while the dotted line shows the equilibrium value f of the newborn individuals the lower part of the figure shows the applied control action d t again the solid line is for the state feedback the dashed line is for the output feedback while the bulleted line shows the equilibrium value of the dilution rate references bastin and coron on boundary feedback stabilization of linear hyperbolic systems over a bounded interval systems and control letters bernard and krstic adaptive stabilization of hyperbolic pdes automatica boucekkine hritonenko and yatsenko optimal control of populations in economy demography and the environment google ebook brauer and mathematical models in population biology and epidemiology new york charlesworth evolution in populations edition cambridge university press coron vazquez krstic and bastin local exponential stabilization of a quasilinear hyperbolic system using backstepping siam journal of control and optimization di meglio vazquez and krstic stabilization of a system of n coupled firstorder hyperbolic linear pdes with a single boundary input ieee transactions on automatic control feichtinger tragler and veliov optimality conditions for control systems journal of mathematical analysis and applications gouze and robledo robust control for an uncertain chemostat model international journal of robust and nonlinear control hale and lunel introduction to functional differential equations springerverlag new york inaba a semigroup approach to the strong ergodic theorem of the multistate stable population process mathematical population studies inaba asymptotic properties of the inhomogeneous foerster system mathematical population studies karafyllis kravaris syrou and lyberatos a vector lyapunov function characterization of stability with application to robust global stabilization of the chemostat european journal of control karafyllis kravaris and kalogerakis relaxed lyapunov criteria for robust global stabilization of nonlinear systems international journal of control karafyllis and jiang a new theorem with an application to the stabilization of the chemostat international journal of robust and nonlinear control karafyllis and krstic on the relation of delay equations to hyperbolic partial differential equations esaim control optimisation and calculus of variations karafyllis malisoff and krstic ergodic theorem for stabilization of a hyperbolic pde inspired by 
chemostat karafyllis malisoff and krstic feedback stabilization of agestructured chemostat models proceedings of the american control conference chicago il pp khalil nonlinear systems edition krstic and smyshlyaev backstepping boundary control for hyperbolic pdes and application to systems with actuator and sensor delays systems and control letters mazenc malisoff harmand stabilization and robustness analysis for a chemostat model with two species and monod growth rates via a lyapunov approach proceedings of the ieee conference on decision and control new orleans mazenc malisoff and harmand further results on stabilization of periodic trajectories for a chemostat with two species ieee transactions on automatic control exponential stability of some linear continuous time difference systems systems and control letters pazy semigroups of linear operators and applications to partial differential equations new york rao and roxin controlled growth of competing species siam journal on applied mathematics rundnicki and mackey asymptotic similarity and malthusian growth in autonomous and nonautonomous populations journal of mathematical analysis and applications smith and waltman the theory of the chemostat dynamics of microbial competition cambridge studies in mathematical biology cambridge university press cambridge sun optimal control of population dynamics for spread of universally fatal diseases ii applicable analysis an international journal toth and kot limit cycles in a chemostat model for a single species with age structure mathematical biosciences appendix proof of proposition define for r a a g k a k s ds da since t a k a we have a t g a t a k a k s ds da a aa ak a da ak a da a k s dsda t a and k s dsda a t t k s dsda a define the lebesgue measurable sets a s t k a k s ds a a s t k a k s ds a and the integrals s k a da k s ds a notice that s s t and consequently equations in conjunction with the fact that r ak a da implies that a g k a k s ds da k s ds k a a a a i k s ds k a da k s ds k a a a i r j i i a since s s t and k s ds for all a a we get a k s ds da j k s ds s j a t r a moreover since r and k s ds for all a a it follows that s s therefore we a obtain from s or equivalently s a definition and the fact that k a for all a a with k a da implies that i combining the previous inequality with and we obtain the desired inequality a a k a k s ds da s for all r the proof is complete a proof of lemma local existence and uniqueness for every initial condition is guaranteed by theorem in define for all t for which the solution of exists v t sup t a w t inf a a t a let q and t be sufficiently small so that the solution exists on t t q we get from definition and equation v t q sup t q a a sup q s q t s sup q s t s sup t s s q t sup a x t s a da s q t sup a x t s a da a x t s a da s q a t sup sup t l a da sup t l a da s q s s s s a max v t sup t l a da sup t l a da q q a using the fact that l a da and assuming that q min a we obtain from v t q t v t c cv t q a using the fact that is a constant with c a da and the fact that l a da we distinguish the following cases i v t v t c cv t q and in this case implies that v t q t c ii v t v t c cv t q and in this case implies that v t q v t therefore in any case we get v t q v t v t c similarly we get from definition equation and the fact that q min a w t q inf a inf q s q t q a t s min inf t s inf t s s q q s min t inf a x t s a da s q min t inf a x t s a da a x t s a da s a min t inf inf t l a da inf t l a da s s s s s min t l c inf t l c inf t l q q 
min t w t c cw t q a using the fact that is a constant with c a da and the fact that l a da we obtain again by distinguishing cases from w t q min t t c it follows from and that the solution of is bounded on t t q for q min a a standard contradiction argument in conjunction with theorem in implies that the solution exists for all t indeed a finite maximal existence time t max for the solution in conjunction with theorem in would imply that lim sup v t or t t max lim inf w t using induction and we are in a position to show that t max i i min w w ih v ih v c c for all integers i h max where h min a moreover using the fact that l and with t h h max for the case t max h h max or t t max h for the case t max h h max and arbitrary q t max t we get max max min w sup w t sup v t v max max c c which contradicts the assertion that lim sup v t or lim inf w t t max t t max inequality is a direct consequence of definitions the fact that l and inequalities the proof is complete a proof of corollary since holds and since a da we get a x t p x a t a p x for all t let k be the constants involved in using and we get for all t a a x t p x max a x t a p x da max a t x da a a a a it follows from that the following estimate holds for all t a max x t p x max a t a max x a a a a when t a we get from a max x t p x max x s p x max a s x da s s a a a max a x da max a max x a a a a a a moreover using definition and the fact that p x max x a a r a da which imply that for all c we get for t a x t p x t x s p x x s p x s s max x s p x max x s p x max x s p x max x a s s s a max the two above inequalities give for t a max x t p x max x t p x max x t p x max a max x a t a max a max x a a a a a a a combining and we conclude that estimate holds with m max a the proof is complete a a and proof of lemma notice that since x c a it follows that the mapping t v xt is continuous we have for all t h with h a v x t h max a x t h a a max t h s t h t h s x s max t h s x s max t h s x s t s h t h s t exp max a x t a max t s x s t s h a h exp x t max t s x s t s h using and we obtain a exp t h v x t h exp t v x t max exp s a x s a da t s t h a max exp t v x t a exp da max exp s max a x s a t s t a a a exp t v x t a exp da max s v x s t s h consequently we obtain from for all t h with h a a max s v x s exp t v x t a exp a da max s v x s t s h t s h a since a exp da we obtain from for all t h with h a max s v x s exp t v xt t s h indeed the proof of follows a exp t v xt a exp a da max s v x s t s h from distinguishing the cases i a and ii exp t v xt a exp a da max s v x s t s h a case ii leads to a contradiction since in this case in conjunction with max s v x s implies t s h a exp da which contradicts the assumption a exp t v xt a exp a da max s v x s t s h therefore we obtain from for all t h with a v xt h exp h v xt it follows from for all t and a h xt h v xt h h xt letting h and using we obtain the proof is complete proof of theorem notice that since x c a it follows that the mappings t v xt t p xt are continuous moreover definition implies that a a a t p xt r x t a s dsda r a x w s dsdw t it follows from leibniz s rule that the mapping t p xt is continuously differentiable and its derivative satisfies d p x t rx t r dt a x w t w dw r x t x t a a da t for all t a notice that for the derivation of the above equality we have used the fact that a da using we can conclude that holds next define y t x t p x for all t a a using definition and the fact that a da we obtain a y t a y t a da for all t moreover it follows from definition and 
that a a a p x t p x r x t a s dsda p x for all t aa a a since r s dsda r a da we obtain from and a a y t a r s ds a for all t combining and we get a y t a y t a a r s ds a a where is the real number for which for all t a a s ds da therefore we are in a position to a apply lemma for the solution of more specifically we get lim sup h y t h w y t w y t where w y t max a y t a a a a a and is for all t a real number for which a s ds exp da finally we notice that definitions and equality imply the following equalities v x t max a x t a p x t max a x t a p x a max a y t a w y t a a the differential inequality is a direct consequence of equation and inequality the proof is complete proof of corollary working as in the proof of theorem we show that holds since c s is a continuous functional and x c a it follows that the mappings t c xt t w xt are continuous applying theorem and taking into account the fact that p xt for all t we obtain lim sup h x t h w x t w x t h for all t let x t be a solution of the differential inequalities imply that the mappings are consequently we get t c xt t w xt c xt h b w xt h c xt b w xt h for all h which implies h xt b w xt c xt b w xt h xt w xt b w xt for all h by virtue of the mean value theorem we obtain the existence of s such that b w xt h b w xt xt h w xt xt s w xt h using the fact that the mapping t w xt is and combining the above relations we obtain h x t h b w x t h c x t b w x t c x t h x t h w x t min x t s w x t h for all sufficiently small h the differential inequality is a direct consequence of inequalities and the fact that the mapping t w xt is continuous the proof is complete proof of lemma by virtue of remark and corollary for every s the solution of a the ide t k a t a da exists for all t is unique and satisfies t s for all t more specifically using lemma we can guarantee that the solution t c a of the a ide t k a t a da satisfies inf t min s for all t t a indeed since k c with a s k a da it follows that all assumptions of lemma hold therefore we get from with l and arbitrary c that the inequality inf a inf t a sup t a sup a a a a a holds for all t inequality is a direct consequence of continuity of s and the above inequality a given the facts that g c a satisfies g a for all a a with g a da and that the a solution t c a of the ide t k a t a da satisfies we are in a position to guarantee that the mapping t v t defined by a v t ln g a t a da for all t is and is a continuous mapping it follows that for every z the solution of the system of differential equations t d sat t t v t t z t sat t t v t t t v t t t t v t exists locally and is unique moreover due to the fact that the right hand side of the differential equations satisfies a linear growth condition it follows that the solution z t t of the system of differential equations t d sat t t v t t z t sat t t v t t t v t t t t v t exists for all t due to definition and equations we are in a position to conclude that the constructed mappings coincide with a solution z t t t s of the system with with initial condition z s uniqueness of solution of the system with is a direct consequence of the above procedure of the construction of the solution define t t t e t z t d for all t and notice that equations and definition allow us to conclude that the following differential equations hold for all t d t t e t p e t l t p l t e t t dt t p l t t t e t t v t t t l v t since the inequalities l p p hold it follows that the quadratic forms a e p b e l p l are positive definite recall remark vi it follows that there 
exist constants k k k such that e a e p e k e for all e k e b e l p l k e for all e using p l and k p l e v the inequality which holds for all e and v we conclude that there exist constants c such that the following differential inequality holds for all t d t t e t p e t t t e t p e t c v t dt since k c a satisfies assumption and since the mapping t t is nona decreasing for every solution of the ide t k a t a da where is the continuous g min a functional a t c t min min t a recall remark it follows that the mapping is using remark and corollary with b s ms where m is an arbitrary constant we conclude that lim h h max a t h max a t m a a a a t h t min min a a max a t a a t min a a a k a k s ds da a since ln x x min a k a k s ds exp da a for all t where is a real number for which which is a real number for a and r ak a da for all x and using the facts that g a for all a a and a g a da we obtain from v t exp max t exp a a min min t a for all t using we obtain the following differential inequality lim sup h q e t h t h q e t t h max a t for all t a a t t t t c exp t min a where max a m a q e min a selecting m exp we obtain from and definition lim sup q e t h t h q e t t q e t t for all t h c exp where min suppose that q e t t since the mapping t q e t t is continuous it follows that q e t h t for all sufficiently small h the differential inequality implies that the mapping t q e t t is consequently by virtue of the mean value theorem we obtain q e t h t h q e t t h q e t h t h q e t t q e t t for all sufficiently small h therefore using we obtain the differential inequality lim q e t h t h q e t t q e t t h for all t with q e t t on the other hand using and we are in a position to conclude that e t t when q e t t in this case and using we conclude that a t for all h the unique solution of t k a t a da therefore in this case implies that v t h for all h finally and the facts that e t and v t h for all h imply that e t h for all h when q e t t definition allows us to conclude that q e t h t for all h when q e t t therefore the differential inequality holds for all t finally using we get for all t d t t d sat t t v t t t t v t dt we distinguish the following cases t v t t and t in this case we have t e t v t t t using the fact that the function q defined by is and the previous inequality we d t t t dt obtain from that t v t t and t in this case we have t e t v t t t using the fact that the function q defined by is and the previous inequality we obtain from that d t t t dt t v t t t t v t max dmax d d dmin inequality d t t t t max d max d d d min dt e t v t max d max d d d min combining all the above three cases we conclude that implies that t max d max d d dmin and consequently we obtain from that d t t t r e t v t dt for all t where r d d dmin combining with and using the triangle inequality we obtain for every g lim h t h g q e t h t h q e t h t h t g q e t t q e t t h t t r t v t q e t t q e t t for all t using and definition we obtain m t exp v t q e t t for all t using and we obtain for every g lim h t h g q e t h t h q e t h t h t g q e t t q e t t h r t t exp q e t t q e t t k m for all t therefore we obtain from and definitions for r exp m the differential inequality r l min exp min dmax dmin g k g m the proof is complete with proof of lemma first we notice that the differential inequality shows that is we also make the following claim claim t for all t t where t if then the claim holds by virtue of the fact that is if then the proof of the claim is made by contradiction suppose that 
there exists t t with t since is it follows that for all t consequently we obtain from lim sup h h l h l for all t l using the comparison lemma on page in and we obtain for all t since t l t we obtain a contradiction since t for all t t we obtain from lim sup h t h t l t for all t t using the comparison lemma on page in and we obtain t t t t l for all t t using the fact that t which implies the fact that t when and t when we obtain the estimate for all t t since is and satisfies t for all t t we conclude that holds for all t the proof is complete proof of lemma by virtue of remark and corollary for every s the solution of a the ide t k a t a da exists for all t is unique and satisfies t s for all t more specifically using lemma we can guarantee that the solution t c a of the a ide t k a t a da satisfies working as in the proof of theorem we can also show that p t for all t a given the facts that g c a satisfies g a for all a a with g a da and that the a solution t c a of the ide t k a t a da satisfies we are in a position to guarantee that the mapping t v t defined by is and is a continuous mapping it follows that for every the solution of the differential equation t d sat t it t it for all integers i and t it i t with initial condition exists locally and is unique moreover due to the fact that the right hand side of the differential equation is bounded it follows that the solution t of the differential equation t d sat t it t it for all integers i and t it i t with initial condition exists for all t due to definition and equations we are in a position to conclude that the constructed mappings coincide with a solution t t s of the closedloop system with with initial condition s uniqueness of solution of the system with is a direct consequence of the above procedure of the construction of the solution using corollary with a k a for all a a we conclude that there exist constants m such that for every s with a the unique solution of the ide a t k a t a da with initial condition for all a a satisfies the following estimate for all t max t m t max a a it follows from and that the following equation holds for all integers i and t it i t t it di d t it where di sat d t it t it for all integers i we next show the following claim claim the following inequality holds for all integers i i t it min it v it where min dmax t dmin t proof of claim we distinguish the following cases case dmin d t it t it dmax definition implies that di d t it t it using we get i t it which directly implies case d t it t it dmin definition implies that di dmin using we get i t it dmin d t the inequality d t it t it dmin implies that dmin d t it v it thus we get i t it d min d t v it v it it d min d t v it v it it d min d t v it v it it d d min t v it the above inequality in conjunction with the fact that min dmax t dmin t implies that holds case d t it t it dmax definition implies that di dmax using we get i t it dmax d t the inequality d t it t it dmax implies that dmax d t it v it thus we get i t it d max d t v it v it it d max d t v it v it it d max d t v it v it it d max d t v it the above inequality in conjunction with the fact that min dmax t dmin t implies that holds the proof of claim is complete claim the following inequalities hold for all integers i it d max kt it min i d dmin t min kt k i max k i proof of claim the proof of inequalities is made by induction first notice that both inequalities hold for i next assume that inequalities hold for certain integer i we distinguish the following cases case dmin d t it 
t it dmax definition implies that di d t it t it using we get i t it consequently we get i t it max it max kt k i max i dmax d t max kt k i which directly implies the second inequality with i in place of i similarly we obtain the first inequality with i in place of i case d t it t it dmin definition implies that di dmin using we get i t it dmin d t consequently we get from i t it d d min t min d d t i d d min i d d t min min i d d min t d d min t min kt min min t min k i min kt k i kt k i which is the first inequality with i in place of i furthermore the inequality d t it t it dmin implies that dmin d t it it consequently we get i t it d dmin t it max it max kt k i max i dmax d t max kt k i which is the second inequality with i in place of i case d t it t it dmax definition implies that di dmax using we get i t it dmax d t consequently we get from i t it d max d t max d d t i d d t max kt max i d d t max kt max i d max d t d max d t max kt k i max max k i max k i which is the second inequality with i in place of i furthermore the inequality d t it t it dmax implies that dmax d t it it consequently we get i t it dmax d t it min it min kt k i kt min i d dmin t min k i which is the first inequality with i in place of i the proof of claim is complete we next show the following claim claim the following inequalities hold for all t min min kt t k t t max kt k t t t t t t t t t proof of claim let arbitrary t and define i t t notice that the definition i t t implies the inclusion t it i t we distinguish the following cases case dmin d t it t it dmax definition implies that di d t it t it using we get for all s it i t s s it t it s it t it the above equality in conjunction with the facts that s it t s it t and inequality gives estimates more specifically the above inequality in conjunction with the facts that s it t s it t implies for all s it i t s s it t it s it t v it it v it and since t it i t the above inequality shows that holds in this case moreover the equation s s it t it s it t it gives for all s it i t s s it t it s it t it which in conjunction with the facts that s it t s it t implies for all s it i t s it it on the other hand inequality gives it max i dmax d t max kt max kt k i k i combining the two above inequalities we get for all s it i t s max kt k i and since t it i t the above inequality shows that the right inequality holds in this case the left inequality is proved in the same way case d t it t it dmin definition implies that di dmin using we get s it dmin d s it for all s it i t therefore we get for all s it i t s it which combined with gives for all s it i t s min i d dmin t min kt k i min min kt k i min min kt k i since t it i t the above inequality shows that the left inequality holds in this case furthermore the inequality d t it t it dmin implies that dmin d t it it since s it dmin d s it for all s it i t we get for all s it i t s it dmin d s it s it t it s it t it dmin d t s it t it s it t s it t it s it t it it the above equality in conjunction with and the facts that s it t s it t shows that the right inequality holds exactly as in case finally we notice that the following inequalities hold for all s it i t it s s it t it s it t it which combined with the facts that s it t s it t implies for all s it i t it v it s it v it since t it i t the above inequality shows that inequality holds in this case case d t it t it dmax definition implies that di dmax using we get s it dmax d s it for all s it i t therefore we get for all s it i t s it which combined with gives for all 
s it i t s max i dmax d t max kt k i max kt k i max kt k i since t it i t the above inequality shows that the right inequality holds in this case furthermore the inequality d t it t it dmax implies that dmax d t it it since s it dmax d s it for all s it i t we get for all s it i t s it dmax d s it s it t it s it t v it s it t it s it t it dmax d t the above equality in conjunction with the facts that s it t s it t gives for all s it i t s s it t min it s it t min it min it min it on the other hand inequality gives min it min i d dmin t min kt min min kt k i k i combining the two above inequalities we get for all s it i t s min min kt k i and since t it i t the above inequality shows that the left inequality holds in this case finally we notice that the following inequalities hold for all s it i t it s s it t min it s it t min it which combined with the facts that s it t s it t implies for all s it i t it v it s it v it since t it i t the above inequality shows that inequality holds in this case the proof of claim is complete using and the fact that ln x x min for all x we get the estimate for all t a a v t ln g a t a da m exp t max s g a t a da a min g a t a da max t s s min min t s s s min min s s let j be an integer with v kt for all k j we next show that the following inequality holds for all i j i t exp it v it indeed when it we get from that i t v it which directly implies on the other hand when it we get from that i t it v it the previous inequality in conjunction with the fact that v it for all i j gives exp i t it v it v it it v it v it v it v it it v it v it it exp v it v it it v it it consequently holds for all i j using and induction we are in a position to prove the following inequality for all i j it i j jt i i l v lt j more specifically inequality follows from the definition of the sequence i it and the fact that inequality gives i exp i v it for all i j using i induction we can prove the formula i i j j i l v lt for all j i j which directly implies for all i j using the fact that x exp x x exp x for all x we obtain from and the following inequality for all i j it i j jt jt i v lt i l v lt exp i j jt jt exp j exp max s max s i a s exp i l exp lt min min s min min s l j s s s where min t since t it follows that i l lt i for all l j i and thus we obtain from the following inequality for all i j it exp i j jt jt exp max s max s a s exp i i j min min s min min s s s s using which implies t max v kt for all k t t obtain t max s s min min s s t in conjunction with we for all t notice that holds for i j as well and consequently holds for all i j since j is an integer with v kt for all k j it follows from that j may be selected as the smallest integer conjunction with max s s the fact that min t that satisfies j ln min min s s and the fact that i j exp i i exp i for in all integers i j we get the following inequality for all i j max s s it exp i j min min s s where j s s exp s for all s using and the fact that j is the smallest integer that satisfies max s s ln min min s s we can guarantee that holds for all i using and the fact that t t t t we obtain with s s s and l t the proof is complete
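The sampled-data dilution-rate law used in the simulations above (sample the newborn output at t = iT, apply the logarithmic correction, clamp the result between Dmin and Dmax, and hold it over [iT, (i+1)T)) can be sketched compactly. The following is a minimal sketch, not a reproduction of the paper's simulations: the age-structured PDE model is not reimplemented here, the callbacks measure_newborns and apply_dilution are hypothetical hooks into such a simulator, and the numerical values of D_MIN, D_MAX, D_STAR, Y_STAR, K_GAIN, and T_SAMPLE are placeholder assumptions rather than the parameters used in the figures.

import math

# Illustrative controller parameters -- placeholder values, not the ones used
# in the paper's simulations.
D_MIN, D_MAX = 0.1, 2.0   # admissible dilution-rate range [Dmin, Dmax]
D_STAR = 1.0              # assumed equilibrium dilution rate D*
Y_STAR = 1.0              # assumed equilibrium newborn-output set point y*
K_GAIN = 0.5              # logarithmic feedback gain (placeholder)
T_SAMPLE = 0.1            # sampling period T

def dilution_rate(y_sample: float) -> float:
    """Sampled-data output feedback d_i = sat(D* + k * ln(y(iT)/y*)),
    clamped to [D_MIN, D_MAX] as in the 'max(Dmin, min(Dmax, ...))' update."""
    y = max(y_sample, 1e-12)                      # guard against a nonpositive reading
    u = D_STAR + K_GAIN * math.log(y / Y_STAR)
    return max(D_MIN, min(D_MAX, u))

def run_closed_loop(measure_newborns, apply_dilution, n_steps: int):
    """Zero-order-hold loop: sample y at t = iT, hold d_i on [iT, (i+1)T).
    measure_newborns(t) and apply_dilution(d, t0, t1) are hypothetical
    callbacks into an age-structured chemostat simulator (not reproduced here)."""
    log = []
    for i in range(n_steps):
        t_i = i * T_SAMPLE
        y_i = measure_newborns(t_i)               # newborn measurement y(iT)
        d_i = dilution_rate(y_i)                  # constant input over the interval
        apply_dilution(d_i, t_i, t_i + T_SAMPLE)
        log.append((t_i, y_i, d_i))
    return log

Note that, as observed in the robustness discussion above, a constant bias in D_STAR enters the saturation exactly like a shift of the set point Y_STAR, which is why the simulation with an erroneous equilibrium dilution rate shows only a bounded steady-state offset in the newborn value rather than a loss of stability.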
3
pursuit of a single evader with uncertain information saad aleem a jan a cameron nowzari a george pappas a department of electrical systems engineering university of pennsylvania philadelphia pa usa abstract this paper studies a problem involving a single pursuer and a single evader where we are interested in developing a pursuit strategy that doesn t require continuous or even periodic information about the position of the evader we propose a control strategy that allows the pursuer to sample the evader s position autonomously while satisfying desired performance metric of evader capture the work in this paper builds on the previously proposed pursuit strategy which guarantees capture of the evader in finite time with a finite number of evader samples however this algorithm relied on the unrealistic assumption that the evader s exact position was available to the pursuer instead we extend our previous framework to develop an algorithm which allows for uncertainties in sampling the information about the evader and derive tolerable on the error such that the pursuer can guarantee capture of the evader in addition we outline the advantages of retaining the evader s history in improving the current estimate of the true location of the evader that can be used to capture the evader with even less samples our approach is in sharp contrast to the existing works in literature and our results ensure capture without sacrificing any performance in terms of guaranteed as compared to classic algorithms that assume continuous availability of information key words control control analysis introduction in this paper we study a problem involving a single pursuer and a single evader where the objective of the pursuer is to catch the evader traditionally treatment of this problem assumes continuous or periodic availability of on the part of the agents which entails numerous unwanted drawbacks like increased energy expenditure in terms of sensing requirement network congestion inefficient bandwidth utilization increased risk of exposure to adversarial detection etc in contrast we are interested in the scenario where we can relax this sensing requirement for the pursuer and replace it with triggered decision making where the pursuer autonomously decides when it needs to sense the evader and update its trajectory to guarantee capture of the evader the material in this paper was partially presented at the ieee conference on decision and control december osaka japan corresponding author saad aleem tel email addresses aleems saad aleem cnowzari cameron nowzari pappasg george pappas preprint submitted to automatica literature review there are two main areas related to the contents of this paper the first is the popular problem of which has garnered a lot of interest in the past from an engineering perspective problems have been studied extensively in context of differential games isaacs and olsder in sgall sufficient conditions are derived for a pursuer to capture an evader where the agents have equal maximum speeds and are constrained to move within the nonnegative quadrant of in alonso et upper and lower bounds on the have been discussed where the agents are constrained in a circular environment these pursuit strategies have been generalized and extended by kopparty and ravishankar to guarantee capture using multiple pursuers in an unbounded environment rn as long as the evader is initially located inside the convex hull of the pursuers in context of robotic systems the visibilitybased has received a lot of 
interest in the past lavalle and hinrichsen sachs et isler et in these problems the pursuer is visually searching for an unpredictable evader that can move arbitrarily fast in a simply connected polygonal environment similar problems have been studied in suzuki and yamashita gerkey et march isler et where visibility limitations are introduced for the pursuers but the agents actively sense and communicate at all times a related problem has been discussed in bopardikar et where the agents can move in but each agent has limited range of spatial sensing a detailed review of recent applications in context of search and rescue missions and motion planning involving adversarial elements can be found in chung et the vast literature on problems highlights their multifaceted applications in a variety of contexts however the previous works usually assume continuous or periodic availability of sensing information especially on the part of the pursuer towards this end we want to apply the new ideas of triggered control to the problem of which has not been studied so far in contrast to conventional approaches strategies based on triggered control schemes study how information could be sampled for control purposes where the agents act in an opportunistic fashion to meet their desired objective and bernhardsson velasco et triggered control allows us to analyze the cost to make up for less communication effort on the part of the agents while achieving a desired task with a guaranteed level of performance of the system see heemels et for an overview of more recent studies of particular relevance to this paper are works that study subramanian and fekri nowzari and or eqtami et mazo and tabuada implementations of local agent strategies in control the focus is on detecting events both intrinsic and exogenous during the execution that trigger agent actions in control the emphasis is instead on developing autonomous tests that rely only on current information available to individual agents to schedule or future actions in the context of problems we will make use of the approach which equips the pursuer with autonomous decision making in order to decrease its required sensing effort in tracking the evader in principle our paper shares with the above works the aim of trading increased decision making at the agent pursuer level for less sensing effort while still guaranteeing capture of the evader the key result in aleem et relied on receiving perfect information about the evader whenever the pursuer decided to sample in reality exact position information about the evader may never be available to the pursuer the main contribution in this paper is to design a robust policy where we allow for noisy sensor measurements on the part of the pursuer while still guaranteeing capture with only sporadic evader observations our triggered control framework provides fresh insights into dealing with information uncertainties and scenarios in the problem our framework readily incorporates the uncertainty in sensing the evader and allows us to derive tolerable error bounds in estimating the evader s position that preserve our capture guarantees the theme of our pursuit policy is quite similar to existing works on triggered control based on the latest current estimate of the evader the pursuer computes a certificate of sleep duration for which it can follow its current trajectory without having to sense the evader in addition we discuss the relative advantages of retaining previous estimates where we leverage the past information 
about the evader to arrive at a better estimate of the evader s true location we show that incorporating additional knowledge about evader s past improves the update duration for the pursuer and mitigates uncertainty in detecting the evader organization the problem formulation and its mathematical model are presented in section in section we present the design of update duration for the pursuer as derived in aleem et it is followed by section where we allow for uncertainty in sampling the evader s position and outline the maximum tolerable error that can be accommodated on the part of the pursuer without compromising its strategy section discusses the relative merits of retaining previous estimates of the evader s position in the hope of increasing the sleep durations for the pursuer the readers are encouraged to go over the detailed analysis of our problem in the appendix notation we let r and to be the sets of positive real nonnegative real and nonnegative integer numbers respectively rn denotes the euclidean space and k k is the euclidean distance contribution this paper builds on our earlier work in aleem et where we applied the framework of triggered control to design a pursuit policy for the pursuer which guarantees capture of the evader with a finite number of observations our work was different from the existing methods in the literature as our analysis did not assume the availability of continuous or periodic information about the evader instead the framework guaranteed capture of the evader without sacrificing any performance in terms of guaranteed as compared to classic algorithms that assume continuous information is available at all times problem statement we consider a system with a single pursuer p and a single evader at any given time t the position of the evader is given by re t and its velocity is given by ue t with kue t k ve where ve is the maximum speed of the evader similarly the position and velocity of the pursuer are given by rp t and up t with kup t k vp where vp ve is the maximum speed on a plane where both agents are modelled as single integrators note that it is not necessary to assume that the agents particularly the evader are moving with constant speeds at all times for all practical purposes we can upper bound the evader speed by vemax such that vemax vp and the analysis will remain unchanged we denote the positions of the agents by rp xp yp and re xe ye additionally the pursuer is moving along and the relative angle between the agents headings is denoted by see fig without loss of generality we normalize the speed of the pursuer to vp and the evader moves with a speed where at all times the dynamics of the pursuer and the evader are given by of the pursuer the system evolves as up ue in our problem the goal of the pursuer is to capture the evader we define capture of the evader as the instance when the pursuer is within some capture radius of the evader assuming that the pursuer has exact information about the evader s state at all times it is well known that the strategy for the pursuer is to move with maximum speed in the direction of the evader isaacs such a strategy known as classical pursuit is given by the control law up t vp re t rp t kre t rp t k cos cos sin sin the issue with the control law is that it requires continuous access to the evader s state at all times and instantaneous updates of the control input instead we want to guarantee capture of the evader without tracking it at all times and only updating the controller sporadically we do this 
by having the pursuer decide in an opportunistic fashion when to sample the evader s position and update its control input under this framework the pursuer only knows the position of the evader at the time of its last observation let tk be a sequence of times at which the pursuer receives information about the evader s position in between updates the pursuer implements a hold of the control signal computed at the last time of observation using which is given by up t vp re tk rp tk kre tk rp tk k xe ye e y xp yp o x fig figure shows the pursuer p at rp xp yp and the evader e at re xe ye in the pursuer is moving along and the relative angle between agents headings is denoted by the arrows indicate the velocity vectors of the agents update policy for pursuer suppose at time tk the pursuer at rp tk observes the evader at re tk such that the distance between the agents is dk krp tk re tk for notational brevity we will denote the position of the agents at the instance of observation by rek re tk and rpk rp tk we are interested in the duration for which the pursuer can maintain its course of trajectory without observing the evader more specifically we are interested in the first instance at which the separation between the agents can possibly increase thus prompting the pursuer to sample the evader s state and update its trajectory let r t denote the separation between the pursuer and the evader at time then we consider the objective function where dk kre tk rp tk k is the separation between the agents at time tk our goal is to design the triggering function such that the pursuer is guaranteed to capture the evader while also being aware of the number of samples of the evader required p for t tk in this paper our purpose is to identify a function for the update duration for the pursuer that determines the next time at which the updated information is required in other words each time the pursuer receives updated information about the evader at some time tk we want to find the duration dk ve vp until the next update such that tk dk ve vp xe xp ye yp note that the time at which becomes nonnegative is same as the time at which becomes nonnegative using the derivative of r see appendix for details is given by xe cos sin xe design of update law we study the pursuit and evasion problem consisting of a single pursuer and a single evader for and is a function of evader parameters xe ye for the reachable set of the evader to see that the separation is strictly decreasing note that k dmax dk dk dk h dk k where dmax is the maximum possible separation between the agents after the duration see appendix for details and h is given by sfrag replacements h fig figure shows a plot of normalized update time against evader speed dk is given by dk where h for thus dk is given by the ball be rek additionally for fixed we write in explicitly as a function of evader parameters and denote it by xe ye let g sup xe ye subject to the reachable capture time number of samples using the update policy the pursuer is guaranteed to capture the evader in finite time with finite number updates more specifically we can find the maximum number of samples in terms of the capture radius and evader speed and use it to guarantee finite this is summarized in the following theorem xe ye set of the evader re be rek for the dynamics in where vp and ve we can denote update duration in by dk vp ve and it is defined as inf r g theorem capture with finite samples let the pursuer and evader dynamics be given by where the agents are initially 
separated by given some positive capture radius the selftriggered update policy in ensures capture with finite observations in finite time is the smallest duration after which there exists an evader state that may increase the separation between the agents for the agents modelled by our update duration is obtained by solving inf r g see appendix for derivation and is given by dk dk k proof according to proposition the separation between the agents is strictly decreasing between successive updates in fact the new separation between the agents satisfies the inequality dk h where h is given by this implies that after n observations of the evader the separation between the agents satisfies the inequality dn hn where is the initial separation between the agents using this result the maximum number of samples can be calculated by setting hn the graph of against evader speed is shown in fig from the plot we observe that increasing the evader speed decreases the update duration for the pursuer so if the evader moves faster our law prescribes more frequent updates of it to guarantee capture the underlying objective in the design of the selftriggered policy is that at each instance of fresh observation the separation between the agents must have decreased the following proposition characterizes this result log nmax log h the expression in shows that for any positive capture radius the pursuer is guaranteed to capture the evader with finite number of samples this completes the first part of the proof for selftriggered pursuit policy in the sequence of times at which the pursuer samples the evader position denoted by tk follows the criteria tk this means that after n updates the total duration of pn pursuit denoted by tn is given by tn without loss of generality we can assume since the pursuer is guaranteed to capture the evader with proposition decreasing separation let the pursuer and evader dynamics be given by where the agents are separated by dk at time tk if the pursuer updates its trajectory using the update policy in then the distance between the agents at time has strictly decreased dk for tk proof given the separation dk at time tk the new separation between the agents after a duration of is tcap nx max f where f denotes nx max dk dk and satisfies the relationship psfrag we replacements f using the inequality dk hk get nx max tcap f hk as h for we have hk thus tcap x hk h use of the relationship f this shows that for any evader speed tcap is finite remark performance note that in proving finite in theorem we showed that is strictly less than however given a capture radius it can be shown that the satisfies the inequality this is because the evader is captured as soon as the actual separation is within the capture radius at any time not just at the instance of updates the relationship in is the same upper bound for in classical pursuit strategy in classical pursuit tcap is bounded by tcap vp ve allowable error in evader s estimate in this section we study the scenario in which the pursuer acquires the information about the position of the evader with some uncertainty we are interested in analyzing the effect of imperfect observations on our selftriggered framework our objective is to investigate the maximum allowable error in estimating the evader s position which still allows us to catch the evader using a pursuit policy more specifically we want to find the maximum tolerable uncertainty in estimating evader s position at each instance of observation as a function of the evader s speed 
suppose at tk the pursuer estimates the evader at rbe tk this observation is imperfect and is corrupted by an associated noise where so at the instance of observation the true position of the evader re tk be b re tk for notational brevity we will denote rbe tk by rbek our objective is to find the allowable range for the error as a function of the evader speed such that the pursuit policy can still guarantee capture the reachable set of the evader is given by be rbek t tk for t tk this is illustrated in fig let t tk applying the previous framework of analysis we want to maximize over the evader parameters subject to the constraint re be b rek let sup xe ye where in the last step we have made tcap fig figure shows the variation of the maximum number of samples against evader speed as given by the expression in where is chosen as nx max maximum number of samples finite number of maximum samples nmax the denoted by tcap is bounded by xe ye subject to the reachable set of the evader re be b rek finding is very similar to the procedure of finding g as outlined in the appendix the only difference is in the reachable set of the evader which is now increased by whereas remains the b k the same for the error and estimated separation d update duration is defined as where is the initial separation is the capture radius and vp ve isaacs the worstcase occurs in the scenario where evader is actively moving away at all times this shows that our pursuit policy guarantees capture with the same performance as the classical case but with only finite number of evader samples the expression in guarantees capture with finite samples of evader s state fig shows a graph of the maximum number of samples required to guarantee capture against the evader speed for capture radius the number of samples increase quite sharply as approaches this makes intuitive sense as the maximum number of evader observations should increase as evader approaches the maximum speed of the pursuer b k inf r d k solving inf r yields b k d d be t tk rpk p e rpk p be e t tk k dmax fig at time tk the pursuer p measures the evader e at b indicates rbek after the duration d k k be b re outlines the boundary of the reachable set k of the evader after dmax denotes the maximum possible separation between the agents at tk for to simplify the analysis we can select the error as a scaled version of the current estimate of b k where such the separation d a parametrization will allow us to study the effect of changing on the update duration and will also tell us the maximum tolerable error relative psfrag replacements b k setting to the current estimate of the separation d b k we get d bk d dk fig figure shows the pursuer at rpk estimating the observer b k at time tk the pursuer detects at rbek separated by d the evader with an uncertainty be rbek t tk indicates the reachable set of the evader for t tk where with a slight abuse of notation we have used b k inf r instead of d note that can not be chosen arbitrarily to find the feasible domain of the of we invoke the of criteria r for this yields however imposing the positivity of duration is not sufficient to come up with the desired domain the design of the update duration rests on the underlying performance objective of evader capture this requires strict decrease in true separation in between updates at the instance of update the new k b dmax estimate of the separation satisfies d where k dmax is given by fig figure shows the variation of in against for different values of evader speeds where 
satisfies the condition in for any evader speed increasing the value of decreases the previous inequality we have allowed for the worstcase scenarios in estimating the evader s position is sufficient to guarantee fore setting strict decrease in actual separation in between updates this results in a more conservative set of allowable values for as shown in k b k dmax this is illustrated in fig thus thus for evader speed the maximum allowable error for observation denoted by satisfies the inequality k b k b dmax d b k let this recall that is a function of d bk d b b in deriving means that dk note that the maximum allowable error dynamically changes decreases as the pursuer closes in on the evader this also means that for the duration of entire pursuit the maximum allowable error for all bk rek d sup xe ye re be b xe ye b k d in the case of uncertainty we have access to the estimates b k instead of true separation dk of current separation d tions denoted by satisfies the relationship samples in the presence of uncertainty given an initial b between agents and a preestimated separation of d b the maximum defined positive capture radius d number of evader updates can be calculated by setting b n which results in d where is the capture radius we incur no zeno behavior using the policy in the pursuer does not require infinitely many samples to capture the evader the following theorem characterizes this important result and p it is easy to verify that q is a b d b k d where for and satisfies the condition in using similar analysis from previous section we can estimate the maximum number of maximum number of samples analysis with memory in the previous section we studied the effect of uncertainty in sensing the evader s position and provided bounds on the maximum tolerable error as a function of evader s speed which allowed us to capture the evader with sporadic updates in the absence of uncertainty as outlined in section the pursuer employs a memoryless pursuit policy to catch the evader the pursuer only needs the current sample of the evader s true position rek in order to calculate the update duration in this is because for the current reachable set of the evader be rek is always a subset of reachable set of the evader be when we introduce uncertainty in estimating evader s position the above statement is no longer true the idea behind retaining evader s estimated history is that we can potentially reduce the actual reachable set of the evader when we combine the current reachable set of the evader with the previous reachable sets thus improving our current estimate of the true position of the evader this allows us to improve increase our update duration as compared to an important consequence of theorem is that in the presence of uncertainty our framework guarantees capture with only finite estimates of the evader this means that we can find the maximum number of samples that will guarantee capture by design satisfies the condition in and guarantees strict decrease in measured separation between successive updates in general we can write this as fig figure shows the plot of maximum number of samples in against for different values of evader speeds where satisfies the condition in the capture radius is taken d as positive and monotonically decreasing function for thus for any evader speed we have tk so duration is by a positive constant for all observations which suffices to show that our duration does not incur any zeno behavior d for evader speed the maximum number of samples for guaranteed 
capture increases as the parameter is increased see fig this shows that for any we need to sample the evader s state more frequently in order to allow for more uncertainty in estimating evader s position at each instance of update proof suppose that pursuer observes the evader at the instance tk in order to show no zeno behavior it suffices to prove that the duration is lowerbounded by a positive constant for all observations tk c according the h for the selfdition psfrag replacements triggered update policy in note that the higher values of result in a smaller duration see fig then given a capture radius the following relationship is satisfied for all observations log theorem no zeno behavior if the pursuer updates its trajectory using the update policy in where satisfies the condition in then pursuer is guaranteed to incur no zeno behavior for the duration of the pursuit where q log the memoryless case with uncertainty derived in section more specifically we can leverage our knowledge about the previous estimates of the evader in improving the current update duration for the pursuer while mitigating the effect of uncertainty in estimating evader s position consider the problem in which the pursuer receives uncertain information about the evader while keeping track of its previous estimates for the purpose of illustration we analyze the case where the pursuer retains only the previous estimate of the evader s position extending the framework to the case for more than one previous estimates will be similar and straightforward additionally we assume that the pursuer samples the evader with an associated error suppose that the pursuer sampled the evader at time computed the update duration and observed the evader again at the instance tk for t tk the current reachable set of the evader is given by of be rbek t tk previous reachable set b the evader is given by be rbe t tk based on this information the actual reachable set of the evader is given by the intersection of the two one particular scenario is illustrated in fig where the actual reachable set is not the same as the current reachable set be t tk the update duration is then defined as inf r b equivalently for fixed and b is the optimal value of the following optimization problem xe ye subject to re be sup xe ye as explained earlier incorporating evader s previous estimate can potentially reduce the actual reachable set of the evader in the case of noisy measurements by keeping track of the previous estimate s we are equivalently adding more constraints to the feasible reachable set in our optimization problem we want to formalize the benefit of retaining evader s history in terms of improvement increase in the update duration as compared to the memoryless update duration given by recall that the update duration in was derived using only the current reachable set of the evader be b rek based on its latest estimate let denote the optimal value of the problem which is used to obtain the memoryless update duration in sup xe ye xe ye rpk p observe that is a relaxation of as it is obtained by removing the constraint corresponding to the previous reachable set of the evader as a result for any we notice that both and are monotonically increasing in the parameter because increasing increases the feasible set in the optimization problem thus yielding a potentially greater maximum value using the monotonicity of b g and g in along with the fact that b for any allows us to infer that the first instance at which b g approaches will be greater than 
or equal to the first instance at which g approaches this means that be t tk re be b rek re be b rek e fig the figure shows the reachable sets of the evader based on its current b rek and previous b estimates for t tk the current reachable set is denoted by be rbek t tk and the previous reachable set is noted by be t tk the actual reachable set is the intersection of the two inf r inf r and as a consequence we have using t tk for t tk let b sup xe ye subject to the actual reachable set leveraging memory against uncertainty the relationship in shows that by using previous estimate s of the evader s position we can potentially increase the update durations for the pursuer case i in fig demonstrates one particular instance of leveraging evader s history which yields the greatest improvement xe ye of the evader re be rbek be be be rek e be ous and the current reachable set intersect at a point b suppose the measured separation at is d units using a memoryless pursuit policy in results in whereas obtaining update duration from solving the problem results in observe that which shows that in certain cases knowing one previous estimate of the evader can almost nullify the uncertainty in sensing evader s position and consequently allow for greater update duration e be a case i b case ii fig figure shows the two possible cases at the instance of fresh observation of the evader case i shows the extreme case where incorporating the evader s history precisely determines its true position case ii shows the scenario in which the current reachable set is a subset of the previous reachable set thus the previous estimate provides no additional information towards multiple estimates extending the above framework to the case of multiple previous estimates is relatively straightforward suppose for the observation of the evader we have the information about the m previous updates where m such that m k then the current sleep duration can be computed by inf r where is an optimal value of the optimization problem for the value m can be treated as the length of the sliding window for retaining fixed number of previous estimates to compute the current update duration for the pursuer in the current update duration in the extreme case at the instance of observation the current and the previous reachable sets of the evader intersect such that the actual reachable set is reduced to a point equivalently this means that we know precisely where the evader is this will result in a longer update duration as compared to memoryless case where we would have incorporated uncertainty in our observation to yield a more conservative update duration let denote the improvement in update duration comparing eq and eq we see that the greatest improvement denoted by is given by xe ye sup xe ye subject to re be b rek m bej re where bej is the current reachable set of the previous estimate of the evader s position and is given by for while sometimes adding information about the previous estimate can be advantageous it is important to realize that incorporating the previous reachable set of the evader does not always results in an improvement increase in the update duration we can increase the selftriggered update duration only when the evader s past provides more information about its current true position in the scenario where the current reachable set of the evader is a subset of the previous reachable set we get no additional information about the evader s true position and hence no improvement in our update duration as compared to the 
memoryless case in this is illustrated in case ii in fig where we can drop forget the previous estimate of the evader s position as it is a subset of the previous reachable set and will yield no improvement in increasing the current update duration bej be j x for j m and m k remark forgetting previous estimates note that while we might have the capability to store m previous estimates of the evader it is not necessary to use all of them in computing the current sleep duration for the pursuer as illustrated earlier retaining the history improves our update duration when it reduces the current reachable set of the evader this allows us to forget all those estimates whose reachable sets either completely contain the current reachable set or a reachable set of another previous estimate to formalize this notion at the instance of observation of the evader we can construct a set remark numerical example to illustrate the improvement in update duration numerically let and for these values from based on some initial measurement the pursuer finds the first update duration by using and samples the evader at to find that the i i be rbek bei i bel bei l i p for i where bei denotes be m and m k i denotes the collection of indices among the m previous samples of all those estimates which we can forget to reduce the computation complexity of the problem thus our improved update duration can be computed from solving the problem the problem is guaranteed to have the same optimal value as that of because all the estimates belonging to the set i have no effective contribution to the actual reachable set of the evader and thus removing them will have no change in the optimal value xe ye sup xe ye subject to re be b rek m bej re in this section we provide numerical results for the case when the pursuer retains past samples of the evader as outlined in section we study the potential benefit of retaining only the previous estimate of the evader s observation as the pursuer tries to capture the evader thus in our simulations m in and the selftriggered update duration will be obtained from solving the optimization problem in we model our agents as single integrators where we normalize the speeds of the agents such that vp and the maximum speed of the evader is the evader is restricted to move in any of directions right left up and down and chooses the best direction to actively move away with its maximum possible speed from the pursuer at all times we initialize the agents with an actual separation units for every observation the pursuer samples the evader s current position with an associated error of such that initially the true position of the evader rek be rbek note that we can not arbitrarily set the capture radius as outlined in section for an evader speed and error the where ture radius must satisfy the relationship p dk bk that guarantees capture with finite updates we observe that in the beginning of the pursuit adding history results in relatively better gains as compared to towards the end when the agents are nearby the values indicate that as the separation decreases so does the potential benefit of adding computational overhead by retaining the previous observation simulations dk table comparison between memoryless and update times j k conclusions the robust framework in this paper extends our previous results to address the practical issues related with uncertainty in information about the evader we elaborate the case when the sampling is not perfect and design update duration along with tolerable 
error bounds in estimating the evader s state we show that our analysis preserve the selftriggered controller updates for the pursuer such that it incurs no zeno behavior in catching the evader without losing any of the previous performance guarantees our methodology offers a fresh perspective on dealing with uncertain information in problems besides being in contrast to a majority of previous works that assume continuous or at least periodic information about the evader is available at all times additionally we study the merits of retaining evader s history and show that we can allow for potentially longer update durations by incorporating past observations of the evader in the pursuer s autonomous decision making in the future we are interested in extending our methods to scenarios involving multiple agents and deriving conditions for cooperative strategies p for and setting satisfies the aforementioned relationship table shows the variation between the memoryless update duration and the improved update duration which takes into account the previous estimate of the evader s observation since is different from they result in different measures of separation at the instance of observation for a fair comparison we need to compare the normalized update durations and where and dk denote the true k tions at the instance of evader observation for memoryaware and memoryless pursuit strategies respectively the results in table indicate that in the presence of uncertainty in sampling the evader incorporating only the previous estimate allows for greater sleep durations references aleem nowzari and pappas pursuit of a single evader in proceedings of the ieee conference on decision and control pages osaka japan alonso goldstein and reingold lion and man upper and lower bounds orsa journal on computing and bernhardsson comparison of periodic and event based sampling for stochastic systems in proceedings of the ifac world congress volume pages be rek t tk and olsder dynamic noncooperative game theory volume siam bopardikar bullo and hespanha cooperative pursuit with sensing limitations in proceedings of the ieee american control conference pages new york ny chung hollinger and isler search and pursuitevasion in mobile robotics a survey autonomous robots eqtami dimarogonas and kyriakopoulos eventtriggered control for systems in proceedings of the ieee american control conference pages baltimore md gerkey thrun and gordon pursuitevasion with limited field of view the international journal of robotics research heemels johansson and tabuada an introduction to and control in proceedings of the ieee conference on decision and control pages maui hi isaacs differential games a mathematical theory with applications to warfare and pursuit control and optimization courier corporation isler kannan and khanna randomized in a polygonal environment ieee transactions on robotics isler kannan and khanna randomized with local visibility siam journal on discrete mathematics kopparty and ravishankar a framework for pursuit evasion games in rn information processing letters lavalle and hinrichsen pursuitevasion the case of curved environments ieee transactions on robotics and automation mazo and tabuada decentralized control over wireless networks ieee transactions on automatic control nowzari and coordination of robotic networks for optimal deployment automatica sachs lavalle and rajko pursuitevasion in an unknown planar environment the international journal of robotics research sgall solution of david gale s lion and 
man problem theoretical computer science subramanian and fekri sleep scheduling and lifetime maximization in sensor networks fundamental limits and optimal solutions in symposium on information processing in sensor networks pages new york ny acm suzuki and yamashita searching for a mobile intruder in a polygonal region siam journal on computing velasco fuertes and marti the self triggered task model for control systems in proceedings of the ieee systems symposium volume pages a rek rpk p dk y e t tk x fig figure shows the pursuer and the evader at rpk and rek dk respectively separated by dk at time tk be rek t tk is the ball centered at rek with radius t tk and indicates the reachable set of the evader for t tk thus our pursuit trajectory is parallel to the as tk the pursuer does not observe the evader till the next update instance thus t for tk the modified dynamics are given by cos sin from rp t t tk for t tk thus xe ye where r and r is the separation xe t tk cos sin t tk xe using t tk xe cos sin xe the agent updates when can possibly become nonnegative for re be rek xe ye explicitly denotes in in terms of evader parameters for fixed the problem is formulated as sup xe ye subject to xe ye xe ye be rek for g denotes the optimal value of problem and the update duration is defined as inf r g in be rek xe dk the constraint of the problem is independent of cos setting xe sin e e e ye yields arctan xe note that for xe to see this suppose xe setting in we get xe for which is a contradiction as for any due to symmetry of the problem we can assume ye since xe we have substituting in we get derivation of update duration the agents are modelled by the dynamics in at time tk the pursuer observes the evader at a distance dk krek rpk without loss of generality we make the relative vector between pursuer and the evader parallel to the such that yp tk and ye tk additionally as a matter of convenience we assume that xp tk xe tk dk this is elaborated in fig xe ye p xe xe where which simplifies the problem in to p xe ye xe xe sup xe ye subject to xe ye be rek for this shows that satisfies the constraints in the problem at this means at the instance of update the maximizer of the relaxed problem is a feasible solution of the original problem and hence it is optimal solution for thus g and as a result note that is continuous in the parameter k as dk from the constraint xe ye be rek we get xe dk which means that must lie at the boundary of be rek thus p xe dk substituting in xe ye we get xe p xe xe dk xe this reduces the problem in to xe sup xe subject to b maximum separation if the pursuer updates its trajectory using the selftriggered update policy described in then the maximum distance between the agents between successive updates is given by xe dk dk k dmax dk note that we can relax the problem in by omitting the constraint xe dk dk the relaxation of results in an unconstrained optimization problem ignoring the constraints of problem let sup xe and inf r to see this after the duration the pursuer moves a distance of units and the evader evader can be anywhere inside a ball of radius centered at rek this is shown in fig the maximum separation between k the pursuer and the evader is denoted by dmax and is k given by dk xe as a result of this relaxation we have let argmax xe for the unconstrained problem to perform unconstrained maximization of xe the dk tive is given by e xe xe dk as setting yields k dk argmax xe and is given by dk k dk solving dk rpk p be rek rek e dk for inf r yields dk dk k k dmax fig 
indicates the new separation between the agents at be rek outlines the boundary of the k reachable set of the evader after dmax denotes the maximum possible separation between the pursuer and the evader at t tk recall that where was obtained from relaxing the constraint in problem our claim is that at the instance of update the maximizer of the relaxed problem is a feasible solution of the problem dk dk for to see this at the maximizer dk and dk are given by dk dk dk dk dk
3
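The pursuit-evasion entry above argues that retaining one previous estimate of the evader shrinks its current reachable set to the intersection of two balls, which can lengthen the self-triggered update duration (case i being the extreme where the intersection collapses to a point). The sketch below is only a minimal numerical illustration of that intersection argument, not the paper's optimization problem: it estimates the worst-case pursuer-evader separation over the intersection of the two reachable disks by Monte-Carlo sampling and converts it into a conservative sleep duration. All numbers (measured separation, sensing error, speeds, capture radius) and the closing-speed rule tau = (d_wc - r_c)/(v_p + v_e) are hypothetical stand-ins, since the original numeric values were stripped from the text.

```python
import numpy as np

def worst_case_separation(pursuer, b_curr, r_curr, b_prev, r_prev, n=200_000, seed=1):
    """Smallest possible pursuer-evader separation given that the evader lies in the
    intersection of the current and the (grown) previous reachable disks."""
    rng = np.random.default_rng(seed)
    # Sample uniformly in the current reachable disk ...
    ang = rng.uniform(0.0, 2.0 * np.pi, n)
    rad = r_curr * np.sqrt(rng.uniform(0.0, 1.0, n))
    pts = b_curr + np.column_stack((rad * np.cos(ang), rad * np.sin(ang)))
    # ... and keep only points that are also consistent with the previous estimate.
    pts = pts[np.linalg.norm(pts - b_prev, axis=1) <= r_prev]
    return np.linalg.norm(pts - pursuer, axis=1).min()

# Hypothetical numbers: measured separation d with sensing error eps, evader speed v_e,
# pursuer speed v_p (normalised), time dt since the previous estimate, capture radius r_c.
d, eps, v_e, v_p, dt, r_c = 10.0, 1.0, 0.5, 1.0, 2.0, 0.1
pursuer = np.array([0.0, 0.0])
b_curr = np.array([d, 0.0])
b_prev = np.array([d + 0.9 * (eps + v_e * dt), 0.0])   # near case (i): disks overlap in a thin lens

d_memoryless = d - eps                                  # worst case when the history is ignored
d_memory = worst_case_separation(pursuer, b_curr, eps, b_prev, eps + v_e * dt)

# Conservative sleep duration: time until the worst-case separation could reach r_c
# if both agents head straight at each other at full speed (illustrative rule only).
tau = lambda d_wc: max(d_wc - r_c, 0.0) / (v_p + v_e)
print(f"memoryless tau = {tau(d_memoryless):.2f}, memory-aware tau = {tau(d_memory):.2f}")
```

With these hypothetical values the memory-aware worst-case separation is noticeably larger than the memoryless one, so the resulting sleep duration is longer, mirroring the qualitative claim in the entry; when the previous disk fully contains the current one (case ii), the filter removes nothing and the two durations coincide.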
diffeomorphic random sampling using optimal information transport apr martin sarang and klas department of mathematics florida state university bauer department of bioengineering scientific computing and imaging institute university of utah sjoshi department of mathematical sciences chalmers university of technology and the university of gothenburg abstract in this article we explore an algorithm for diffeomorphic random sampling of nonuniform probability distributions on riemannian manifolds the algorithm is based on optimal information transport oit analogue of optimal mass transport omt our framework uses the deep geometric connections between the metric on the space of probability densities and the information metric on the group of diffeomorphisms the resulting sampling algorithm is a promising alternative to omt in particular as our formulation is free of the nonlinear equation compared to markov chain monte carlo methods we expect our algorithm to stand up well when a large number of samples from a low dimensional nonuniform distribution is needed keywords density matching information geometry metric optimal transport image registration diffeomorphism groups random sampling introduction we construct algorithms for random sampling addressing the following problem problem let be a probability distribution on a manifold m generate n random samples from the classic approach to sample from a probability distribution on a higher dimensional space is to use markov chain monte carlo mcmc methods for example the algorithm an alternative idea is to use diffeomorphic density matching between the density and a standard density from which samples can be drawn easily standard samples are then transformed by the diffeomorphism to generate samples in bayesian inference for example the distribution would be the posterior distribution and would be the prior distribution in case the prior itself is hard to sample from the uniform distribution can be used for m being a subset of the real line the standard approach is to use the cumulative distribution function to define the diffeomorphic transformation if however the dimension of m is greater then one there is no obvious change of variables to transform the samples to the distribution of the prior we are thus led to the following matching problem problem given a probability distribution on m find a diffeomorpism such that here denotes a standard distribution on m from which samples can be drawn and is the the of acting on densities where is the jacobian determinant a benefit of methods over traditional mcmc methods is cheap computation of additional samples it amounts to drawing uniform samples and then evaluating the transformation on the other hand methods scale poorly with increasing dimensionality of m contrary to mcmc the action of the diffeomorphism group on the space of smooth probability densities is transitive moser s lemma so existence of a solution to problem is guaranteed however if the dimension of m is greater then one there is an space of solutions thus one needs to select a specific diffeomorphism within the set of all solutions moselhy and marzouk and reich proposed to use optimal mass transport omt to construct the desired diffeomorphism thereby enforcing for some convex function the omt approach implies solving in one form or another the heavily nonlinear equation for a survey of the omt approach to random sampling is given by marzouk et al in this article we pursue an alternative approach for diffeomorphic based random sampling 
replacing omt by optimal information transport oit which is diffeomorphic transport based on the geometry building on deep geometric connections between the metric on the space of probability densities and the information metric on the group of diffeomorphisms we developed in an efficient numerical method for density matching the efficiency stems from a solution formula for that is explicit up to inversion of the laplace operator thus avoiding the solution of nonlinear pde such as in this paper we explore this method for random sampling the initial motivation in is medical imaging although other applications including random sampling are also suggested the resulting algorithm is implemented in a short matlab code available under mit license at https density transport problems let m be an orientable compact manifold equipped with a riemannian metric g the volume density induced by g is denoted and without loss ofrgenerality we assume that the total volume of m with respect to is one m furthermore the space of smooth probability densities on m is given by z prob m d m m d where m denotes the space of smooth the group of smooth diffeomorphisms diff m acts on the space of probability densities via diff m prob m prob m by a result of moser this action is transitive we introduce the subgroup of volume preserving diffeomorphisms sdiff m diff m note that sdiff m is the isotropy group of with respect to the action of diff m the spaces prob m diff m and sdiff m all have the structure of smooth infinite dimensional manifold furthermore diff m and sdiff m are infinite dimensional lie groups a careful treatment of these topologies can be found in the work by hamilton in the following we will focus our attention on the diffeomorphic density matching problem problem a common approach to overcome the nonuniqueness in the solution is to add a regularization term to the problem that is to search for a minimum energy solution that has the required matching property for some energy functional e on the diffeomorphism group following ideas from mathematical shape analysis it is a natural approach to define this energy functional using the geodesic distance function dist of a riemannian metric on the diffeomorphism group then the regularized diffeomorphic matching problem can be written as follows problem given a probability density prob m we want to find the diffeomorphism diff m that minimizes the energy functional e id over all diffeomorphisms with the free variable in the above matching problem is the choice of riemannian distance the group of diffeomorphisms although not formulated as here moselhy and marzouk proposed to use the metric on diff m z u v hu v m for u v diff m this corresponds to optimal mass transport omt which induces the wasserstein distance on prob m see for example in this article we use the h metric u v z m k z x z hu m hv m where is the rham operator lifted to vector fields and is an orthonormal basis of the harmonic on m because of the hodge decomposition theorem gi is independent of the choice of orthonormal basis for the harmonic vector fields this construction is related to the metric on the space of probability density which is predominant in the field of information geometry we call gi the information metric see for more information on the underlying geometry the connection between the information metric and the metric allows us to construct almost explicit solutions formulas for problem using the explicit formulas for the geodesics of the metric theorem let prob m be a smooth 
probability density the diffeomorphism diff m minimizing distgi id under the constraint is given by where t is obtained as the solution to the problem t t t v t f t t d t v t t dt id where t is the unique geodesic connecting and r z r sin t sin cos t sin sin m the algorithm for diffeomorphic random sampling described in the following section is directly based on solving the equations numerical algorithm in this section we explain the algorithm for random sampling using optimal information transport it is a direct adaptation of algorithm algorithm oit based random sampling assume we have a numerical way to represent functions vector fields and diffeomorphisms on m and numerical methods for composing functions and vector fields with diffeomorphisms computing the gradient of functions computing solutions to poisson s equation on m sampling from the standard distribution on m and evaluating diffeomorphisms an oit based algorithm for problem is then given as follows choose a step size for some positive integer k and calculate the k geodesic t and its derivative t at all time points tk k using equation initialize id set k k compute sk t tk and solve the poisson equation sk compute the gradient vector field vk construct approximations to exp for example id update the set k k and continue from step unless k draw n random samples xn from the uniform distribution set yn xn n n the algorithm generates n random samples yn from the distribution one can save and repeat whenever additional samples are needed the computationally most intensive part of the algorithm is the solution of poisson s equation at each time step notice however that we do not need to solve nonlinear equations such as as is necessary in omt example in this example we consider m with distribution defined in cartesian coordinates x y by exp y exp x y normalized so that the ratio between the maximum and mimimum of is the resulting density is depicted in fig left we draw samples from this distribution using a matlab implementation of our algorithm available under mit license at if needed one may also compute the inverse by https the implementation can be summarized as follows to solve the lifting equations we discretize the torus by a mesh and use the fast fourier transform fft to invert the laplacian we use time steps the resulting diffeomorphism is shown as a mesh warp in fig we then draw uniform samples on and apply the diffeomorphism on each sample applying the diffeomorphism corresponds to interpolation on the warped mesh the resulting random samples are depicted in fig right to draw new samples is very efficient for example another samples can be drawn in less than a second on a standard laptop fig left the probability density of the maximal density ratio is right samples from calculated using our oit based random sampling algorithm conclusions in this paper we explore random sampling based on the optimal information transport algorithm developed in given the nature of the algorithm we expect it to be an efficient competitor to existing methods especially for drawing a large number of samples from a low dimensional manifold however a detailed comparison with other methods including mcmc methods is outside the scope of this paper and left for future work we provide an example of a complicated distribution on the flat the method is straighforward to extended to more elaborate manifolds by using finite element methods for the efficient solution of poisson s equation on manifolds for manifolds most importantly rn one might use 
standard techniques such as to first transform the required distribution to a compact domain fig the computed diffeomorphism shown as a warp of the uniform mesh every is shown notice that the warp is periodic it satisfies and solves problem by minimizing the information metric the ratio between the largest and smallest warped volumes is bibliography amari nagaoka methods of information geometry amer math providence ri bauer bruveris michor uniqueness of the metric on the space of smooth densities bull lond math soc bauer joshi modin diffeomorphic density matching by optimal information transport siam j imaging sci friedrich die und symplektische strukturen math nachr hamilton the inverse function theorem of nash and moser bull amer math soc hastings monte carlo sampling methods using markov chains and their applications biometrika khesin lenells misiolek preston geometry of diffeomorphism groups complete integrability and geometric statistics geom funct anal khesin wendt the geometry of groups a series of modern surveys in mathematics vol berlin marzouk moselhy parno spantini sampling via measure transport an introduction in ghanem higdon owhadi h eds handbook of uncertainty quantification springer international publishing cham miller younes on the metrics and equations of computational anatomy annu rev biomed eng modin generalized equations optimal information transport and factorization of diffeomorphisms geom anal moselhy marzouk bayesian inference with optimal maps journal of computational physics moser on the volume elements on a manifold trans amer math soc otto the geometry of dissipative evolution equations the porous medium equation comm partial differential equations reich a nonparametric ensemble transform method for bayesian inference siam journal on scientific computing villani optimal transport old and new grundlehren der mathematischen wissenschaften vol berlin
10
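The sampling entry above points out that on a subset of the real line the diffeomorphic transport map is simply the (inverse) cumulative distribution function, and that the practical payoff of any transport-based sampler is that additional samples only require pushing new uniform draws through a precomputed map. The sketch below illustrates exactly that one-dimensional special case, not the OIT algorithm itself; the grid resolution and the bimodal target density are hypothetical.

```python
import numpy as np

def make_sampler(pdf, a, b, n_grid=4096):
    """Tabulate the CDF of an (unnormalised) density on [a, b] and return a sampler
    that pushes uniform draws through the numerically inverted CDF."""
    x = np.linspace(a, b, n_grid)
    cdf = np.cumsum(pdf(x))
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
    def sample(n, rng=np.random.default_rng(0)):
        return np.interp(rng.uniform(size=n), cdf, x)   # inverse-CDF transform
    return sample

# Hypothetical bimodal target; once the map is built, drawing extra samples is essentially free.
bimodal = lambda x: 0.3 * np.exp(-0.5 * ((x + 2.0) / 0.5) ** 2) \
                  + 0.7 * np.exp(-0.5 * ((x - 2.0) / 1.0) ** 2)
sample = make_sampler(bimodal, -8.0, 8.0)
ys = sample(100_000)
```

In higher dimensions there is no such canonical map, which is precisely the gap that the OMT and OIT constructions described in the entry are meant to fill.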
may adaptive algebraic multiscale solver for compressible flow in heterogeneous porous media matei yixuan wang hadi hajibeygi a department of geoscience and engineering faculty of civil engineering and geosciences delft university of technology box ga delft the netherlands b department of energy resources engineering stanford university panama rm stanford ca usa abstract this paper presents the development of an adaptive algebraic multiscale solver for compressible flow in heterogeneous porous media similar to the recently developed ams for incompressible linear flows wang et jcp operates by defining primal and blocks on top of the grid these coarse grids facilitate the construction of a conservative finite volume coarsescale system and the computation of local basis functions respectively however unlike the incompressible elliptic case the choice of equations to solve for basis functions in compressible problems is not trivial therefore several basis function formulations incompressible and compressible with and without accumulation are considered in order to construct an efficient multiscale prolongation operator as for the restriction operator allows for both multiscale finite volume msfv and finite element msfe methods finally in order to resolve highfrequency errors and smoother stages are employed in order to reduce computational expense the operators prolongation restriction and smoothers are updated adaptively in addition to this the linear system in the loop is infrequently updated systematic numerical experiments are performed to determine the effect of the various options outlined above on the convergence behaviour an efficient strategy for heterogeneous compressible problems is developed based on overall cpu times finally is compared against an algebraic multigrid amg solver results of this comparison illustrate that the is quite efficient as a nonlinear solver even when iterated to machine accuracy key words multiscale methods compressible flows heterogeneous porous media scalable linear solvers multiscale finite volume method multiscale finite element method iterative multiscale methods algebraic multiscale methods preprint submitted to elsevier science july introduction accurate and efficient simulation of multiphase flow in heterogeneous natural formations is crucial for a wide range of applications including hydrocarbon production optimization risk management of carbon capture and storage water resource utilizations and geothermal power extractions unfortunately considering the size of the domain along with the high resolution heterogeneity of the geological properties such numerical simulation is often beyond the computational capacity of traditional reservoir simulators therefore multiscale finite element msfe and finite volume msfv methods and their extensions have been developed to resolve this challenge a comparison of different multiscale methods based on their original descriptions has been studied in the literature msfv and msfe methods map a discrete system to a much coarser space in multigrid mg terminology this map is considered as a special prolongation operator represented by and adaptively updated basis functions the restriction operator is then defined based on either a finite element msfe finite volume msfv or a combination of both msfv has been applied to a wide range of applications see thus recommending multiscale as a very promising framework for the reservoir simulators however most of these developments including the algebraic multiscale formulation 
ams have focused on the incompressible linear flow equations when compressibility effects are considered the pressure equation becomes nonlinear and its solution requires an iterative procedure involving a parabolictype linear system of equations therefore the development of an efficient and general algebraic formulation for compressible nonlinear flows is crucial in order to advance the applicability of multiscale methods towards more realistic problems the present study introduces the first algebraic multiscale iterative solver for compressible flows in heterogeneous porous media along with a thorough study of its computational efficiency cpu time and convergence behaviour number of iterations in contrast to cases with incompressible flows the construction of basis functions for compressible flow problems is not straightforward in the past corresponding author yixuanw pressible elliptic compressible elliptic and parabolic basis functions have been considered however the literature lacks a systematic study to reveal the benefit of using one option over the other especially when combined with a smoother stage moreover no study of the overall efficiency of the multiscale methods based on the cpu time measurements has been done so far for compressible problems in order to develop an efficient prolongation operator in this work several formulations for basis functions are considered these basis functions differ from each other in the amount of compressibility involved in their formulation ranging from incompressible elliptic to compressible parabolic types in terms of the restriction operator both msfe and msfv are considered along with the possibility of mixing iterations of the former with those of the latter allowing to benefit from the symmetric positive definite spd property of msfe and the conservative physically correct solutions of msfv the errors are resolved in the global multiscale stage of cams while errors are tackled using a smoother at in this paper we consider two options for the smoothing stage the widely used local correction functions with different types of compressibility involved more general than the specific operator as well as ilu the best procedure is determined among these various strategies on the basis of the cpu time for heterogeneous problems it is important to note that the setup and linear system population are measured alongside the solve time a study which has so far not appeared in the previously published compressible multiscale works though is a conservative method only a few iterations are enough in order to obtain a approximation of the solution in the benchmark studies of this work it is iterated until machine accuracy is reached and thus its performance as an exact solver is compared against an algebraic multigrid amg method samg this comparative study for compressible problems is the first of its kind and is made possible through the presented algebraic formulation which allows for easy integration of in existing advanced simulation platforms numerical results presented for a wide range of heterogeneous domains illustrate that the is quite efficient for simulation of nonlinear compressible flow problems the paper is structured as follows first the compressible algebraic multiscale solver is presented where several options for the prolongation restriction operators as well as the solver are considered then the adaptive updating of the operators are studied along with the possibility of infrequent linear system updates in the loop numerical results are 
subsequently presented for a wide range of heterogeneous test cases aimed at determining the optimum strategy finally the is compared with an algebraic multigrid solver samg both in terms of the number of iterations and overall cpu time compressible flow in heterogeneous porous media single phase compressible flow in porous media using darcy s law without gravity and capillary effects can be stated as where and q are the porosity density and the source terms respectively moreover is the fluid mobility with permeability tensor k while is the fluid viscosity the form of this nonlinear flow equation using implicit eulerbackward time integration reads q which is linearized as where n c and n q the superscripts and denote the old and new iteration levels respectively as eq converges to the nonlinear eq and p therefore the coefficient c which is a of the linearization lemma plays a role only during iterations this fact opens up the possibility to alter c by computing it based on either resulting in or pn corresponding to cn pressure at the previous timestep each choice can potentially lead to a different convergence behaviour and thus computational efficiency algebraically eq can be written for the unknown pressure vector p as c f c where c is a diagonal matrix having dvi at cell i in its diagonal entry where dvi is the volume of cell i also is the convective compressible flow matrix having transmissibilities computed on the basis of a finitevolume scheme as entries moreover the vector contains the integrated source terms in the volumes qi dvi the total rhs terms are denoted by the vector f compressible algebraic multiscale solver the relies on the and grids which are superimposed on the grid see fig there are np and nd coarse and grid cells in a domain with nf cells xk cell coarse cell fig multiscale grids imposed on the given fine grid a and a block are highlighted on the right and left sides respectively the transfer operators between and are defined as the multiscale restriction r and prolongation p the former is defined based on either finite element msfe rf e p t or finite volume msfv for which rf v corresponds to the integral over blocks rf v i j if j is contained in block i otherwise the columns of p are the basis functions which are computed on cells see fig subject to simplified boundary conditions the localization assumption in contrast to the incompressible ams can be formulated based on different choices of the basis functions depending on the level of compressibility involved the first two types read k h k h and k h both being pressure dependent through c and but different in the sense of the consideration of the accumulation term alternatively one can also formulate basis functions using cn h h h or which are both pressure independent since c is now based on the pressure from the previous time step all of these equations are subject to boundary conditions along cell boundaries one can also obtain the equations for the corresponding four types of local correction functions by substituting the corresponding rhs term in eqs as mentioned before in this work systematic studies on the basis of the cpu time as well as the number of iterations are performed in order to find the optimum formulation for basis function prolongation operator the basis functions are assembled over cells nd s d n and if used the correction functions are also assembled snd k h as fig illustrates that the basis functions do not form a partition of unity when compressibility effects are included which is the 
intrinsic nature of the parabolic compressible equation the choices formulated above affect computational efficiency of constructing and updating the multiscale operators more precisely while basis functions of eqs and depend on pressure hence updated adaptively when pressure changes eqs and are pressure independent thus they only need to be computed once for problems for flows they need to be adaptively updated when local transmissibility changes beyond a prescribed threshold value while the basis and correction functions from eq were previously used the other options are as of yet have not been studied a b fig two choices of multiscale basis functions in a reference block left column summation of the basis functions over the block right column partition of unity check the approximates the solution by using the prolongation operator p which is a matrix of size nf nc having basis function in its k th column the map between the coarse and solution reads p the system is obtained using the restriction operator r as p rf and its solution is prolonged to the using eq p p rf p p rr in residual form it reads here while r f is the residual note that all the different options for basis functions can be considered in construction of the prolongation operator the employs eq as the global solver for resolving errors in addition to the solver an efficient convergent multiscale solver needs to include a smoother at fine scale the smoother accounts for the errors arising from simplified localization conditions the nonlinearity of the operator and the complex rhs term among the choices for this smoother or solvers the correction functions cf and ilu are considered in this work the procedure is finally summarized in table do until convergence e achieved see eq initialize update linear system components and f based on update residual r f adaptively compute basis functions use either of eqs stage only if cf is used apply cf on r and update residual multiscale stage solve for stage smooth for ns times using a iterative solver here ilu is used obtaining update solution update error compute and assign table iteration procedure converging to with tolerance in the next section numerical results for heterogeneous test cases are presented in order to provide a thorough assessment of the applicability of to problems numerical results the numerical experiments presented in this section are divided into finding a proper iterative procedure and multiscale components for efficiently capturing the nonlinearity within the flow equation and systematic performance study by comparing against a commercial algebraic multigrid solver samg note that the second aspect is mainly to provide the computational physics community with an accurate assessment of the convergence properties of the compressible multiscale solver as an advantage over many advanced linear solvers allows for construction of locally conservative velocity after any msfv stage therefore for multiphase flow scenarios only a few iterations are necessary to obtain accurate solutions for the studied numerical experiments of this paper sets of distributed permeability fields with spherical variograms are generated by using sequential gaussian simulations the variance and mean of natural logarithm of the permeability ln k for all test cases are and respectively unless otherwise is mentioned furthermore the grid size and dimensionless correlation lengths in the principle directions and are provided in table each set has realizations the sets with orientation angle of 
are referred to as the layered fields also the grid aspect ratio is m unless otherwise is specified permeability set grid angle between and y direction patchy variance of ln k mean of ln k table permeability sets each with realizations used for numerical experiments of this paper layered fields refer to the sets in which the orientation angle between and y direction is phase properties and simulation time are described as numbers the pressure and density are introduced as p peast pwest peast and respectively where the coefficient is set to for all subsequent test cases in this paper the pwest and peast values of and pa relative to the standard atmospheric condition are considered these correspond to pressure values of and which are set as dirichlet conditions at the west and east boundaries respectively for all the cases unless otherwise is mentioned also all the other surfaces are subject to neumann conditions the time is introduced as where pwest peast here is the average permeability and l is a length scale of the domain with the values of pa pressure difference viscosity of m and value of for homogeneous cases the will be s for problem size of l m in si units the implementation used to obtain the results presented in this paper consists of a code and the cpu times were measured on an intel xeon system with ram determining the most effective iterative procedure and multistage multiscale components the efficient capturing of the nonlinearity within the iterations is important in designing an efficient multiscale strategy for the purposes of a conclusive result in this section a set of patchy fields permeability set from table is considered one of the realizations and its corresponding solution at are shown in fig fig of the permeability left and pressure solution after right corresponding to one of the realization of permeability set from table nonlinear and linear level updates in formulating a convergence criterion for the one can express the error of the approximate solution at step on the basis of either the linear or nonlinear expressions according to eq the nonlinear error in each grid cell reads q and is assembled in the vector which allows the computation of the error norm on the other hand the error is based on the linearized equation which leads to the computation of the residual norm kr in order to determine a suitable sequence of the linear and nonlinear stages the same patchy domain of grid cells is considered fig for which the pressure equation is solved using the following solution strategy do until is reached update parameters linear system matrix and rhs vector based on solve linear system using the richardson iterative scheme preconditioned with one multigrid until table solution strategy used to determine a suitable stopping criterion the error and residual norms were recorded after each iteration of the richardson loop and are presented in fig note that the reduction of the residual norm beyond the first few iterations does not contribute to the reduction in the nonlinear error norm therefore one could ideally speed up the solution scheme by monitoring the error norm and updating the linear system after its decrease starts to stagnate however the computational cost of evaluating the nonlinear equation is roughly the same as that of a linear system update and thus much more expensive than the evaluation of the residual norm fig a also reveals that the stagnation of the error norm happens roughly after the residual norm has been approximately reduced by of its initial 
value immediately after the linear system update fig b shows the convergence behaviour after implementing this heuristic strategy which is deemed quite efficient since the two norms are in agreement hence in the following experiments the same strategy is employed for linear level kr i after iteration i of the inner linear loop and for nonlinear kr level after iteration of the outer nonlinear loop are set see table error norm residual norm error norm residual norm iteration a iteration b fig error and residual norm histories for one of the realizations of permeability set from table over a single time step of shown on the left is the strategy where at each nonlinear stage the fully converged linear solution is obtained shown on the right is the strategy where in each outer nonlinear loop the residual is reduced only by one order of magnitude adaptive updating of multiscale operators the previous study described the first adaptive aspect considered in this work namely updating the linear system only after the residual norm drops by an order of magnitude the procedure can be further optimized by employing adaptive updates of its multiscale components the basis and if considered the correction functions to this end one has to monitor the changes in the entries of the transmissibility matrix a and rhs f between the iteration steps fig c shows that the adaptive update of the basis functions leads to a significant in terms of cpu time furthermore the two adaptivity methods for linear system and local function updates are combined and shown in fig d hence will perform its iterations such that it exploits all adaptivity within the multiscale components and the nonlinearity within the flow equation note that for this case the compressible variant from eq was used for both basis and correction functions however if the incompressible eqs and are used then the basis functions do not require updates during iterations finally for this and all the following results unless otherwise stated the coarsening ratio was taken as because it was found efficient see subsection global stage choice of basis functions the aim of this study is to determine an optimum choice for the type of basis functions for the algorithm the correction function is computed based on eq in all cases and hence updated adaptively with pressure iterations of ilu are used for smoothing and all possibilities for the basis functions eqs are considered finally there is a single sec cpu time sec cpu time sec sec iteration a sec cpu time sec cpu time sec b sec iteration iteration c multiscale solution smoother solution iteration d lin sys construction basis functions correction function fig effect of different types of adaptivity on the performance for the permeability set from table after a time step of a no adaptivity b linear system update adaptivity only c multiscale operator update adaptivity only d fully adaptive in terms of both linear system and multiscale operator updates time step in the simulation which takes the initial solution at time everywhere to the solution at time the total cpu time spent in each stage of the solver as well as the number of iterations given on top of each bar in fig are measured also the success rate of convergence is given inside parentheses beside the average number of iterations the results show that including compressibility in the basis functions does not translate into faster convergence and thus the additional cpu time required to adaptively update them is not justified in fact it is more efficient to 
use the incompressible pressure independent basis functions from eqs and also the inclusion of the accumulation term and the type of restriction msfe or msfv does not play an important role for this patchy test case note that none of the choices results in successful convergence even though ilu smoothing iterations have been employed at each iteration this can be attributed to the use of correction functions as investigated in the next paragraph cpu time sec average cpu time of cf ms ilu with different types of prolongation and restriction multiscale solution smoother solution lin sys construction basis functions correction function m fe p m co in em f m vm em f f m cu ac m fv p m co in m co cu ac m fe p m co in in om c m vm f m cu m ac fv p om c om c cu ac om c fig effect of the choice of basis function on the performance for the problem after a time step of results are averaged over realizations the number of iterations is shown on top of each bar the success percentage is also shown in parentheses note that all simulations employ correction functions smoothing stage choice of correction function note that none of the results from the previous test case fig has a success rate as described in the cf can be seen as an independent stage the inclusion of which should be seen as an option and not a necessity for convergence fig presents the results of rerunning the previous experiment this time varying the type of correction function the plot confirms that eliminating the cf altogether leads to an overall and in addition a convergence success rate of as described in this can be explained by the sensitivity of cf to the heterogeneity of the permeability field which leads to solver instability therefore the cf should not be considered as candidate for the stage in an efficient procedure instead ilu is performed as in order to resolve errors smoothing stage number of smoother iterations another variable in the framework is the number of smoothing steps here ilu that should be applied in order to obtain the best between convergence rate and cpu time the results of several experiments with the optimum choices incompressible basis functions and no incorporation of cf and various numbers of ilu applications are illustrated in fig it is clear that with this setup an optimum scenario would cpu time sec average cpu time of cf ms ilu with different types of correction and restriction multiscale solution smoother solution lin sys construction basis functions correction function n n o rre co rre io ct io ct m fe n em m fv f m vm em f f m n m cu m fe p m co co o in fv p m co in ac cu m ac m co in m co m cu m fe p om fv p om vm f m cu ac om ac om in c c c c fig effect of the choice of correction function on the cpu time of the multiscale solution on the permeability set from table after a time step of the number of iterations is shown on top of each bar only the last bars on the right correspond to runs in which no correction function was used ms ilu be found with ilu iterations per call note that all runs without correction functions converged successfully average cpu time of fv with different number of smoothing steps cpu time sec multiscale solution smoother solution lin sys construction basis functions correction function smoothing steps fig effect of the number of ilu smoothing steps on the fv performance for the permeability set from table grid aspect ratio is after a time step of the number of iterations is shown on top of each bar with convergence success rate inside parentheses note that excluding 
cf leads to success rate for all scenarios sensitivity to coarsening ratio between size of coarse system and local problem cost the coarsening factors used in this paper were found to be optimal after a careful study of the sensitivity with the coarsening ratio as for a thorough study of the new solver it is important to illustrate also its sensitivity with change of system size and thus the coarsening ratio this important fact is studied and shown in figs for patchy fields not that for the cases studied in this paper the optimum overall cpu times were obtained with cells with the size of approximately the of the domain length in each direction fv on reservoirs with different coarsening ratios initialization lin sys construction solution cpu time sec fig patchy fields averaged cpu time over realizations of fv for different coarsening ratios for the permeability set from table results support the use of coarsening ratio of a similar behaviour was observed with the fe restriction operator fv on reservoirs with different coarsening ratios initialization lin sys construction solution cpu time sec fig patchy fields averaged cpu time over realizations comparison of fv for different coarsening ratios for the permeability set from table results support the use of coarsening ratio of a similar behaviour was observed with the fe restriction operator fv on reservoirs with different coarsening ratios initialization lin sys construction solution cpu time sec fig patchy fields averaged cpu time over realizations comparison of fv for different coarsening ratios for the permeability set from table a coarsening ratio of offers the best balance between initialization basis function computation and solution time while results in a more expensive initialization but faster convergence in subsequent a similar behaviour was observed with the fe restriction operator benchmark versus samg on the basis of the previously presented studies the optimal strategy includes a global multiscale stage using incompressible basis functions eq accompanied by iterations of ilu for in this subsection is compared against samg for three sets of different test cases the heterogeneous domains of different sizes from table permeability set from table with stretched grids and terms and permeability set from table with different ln k variances permeability contrasts in all the presentd experiments samg is called to perform a single repeatedly in a richardson loop its adaptivity is controlled manually at the beginning of each outer iteration samg is allowed to update its galerkin operators on the other hand during linear iterations samg is instructed to reuse its previous grids and operators for the test cases considered here this approach was found more efficient by a factor in excess of than the automatic solver control described in in all other aspects samg has been used as a commercial solver test case heterogeneous domains of different sizes from table in this subsection is compared against the samg algebraic multigrid solver for both patchy and layered permeability fields of table over consecutive time steps the pressure solution for one patchy and one layered sample are shown in fig illustrating the propagation of the signal from the western face through the entire domain figs and show the number of iterations and cpu time at consecutive times for different problem sets and from table note that with restriction operator did not converge in some of the test cases while the variant achieved success rate due to its spd property 
therefore an ideal solution strategy would use msfe to converge to the desired level of accuracy and then employ a single msfv sweep in order to ensure mass conservation in addition figs illustrate cpu time vertical axis and the total number of iterations on top of each column for permeability sets and from table with and coarsening ratios note that except for the first when all the basis functions are fully computed has a slight edge over samg mainly due to its adaptivity and relatively inexpensive iterations the initialization cost of is particularly high in the case due to the large number of linear systems solved with a direct solver needed for the basis functions it is clear from fig that with larger blocks requires less setup time but more iterations to converge note that all performance studies presented in this paper are for computations since reservoir simulators are typically run for many the high initialization time of is outweighed by the efficiency gained in subsequent steps moreover given the local support of the basis functions this initialization can be greatly improved through parallel processing furthermore only a few multiscale iteration may prove necessary to obtain an accurate approximation of the pressure solution in each time step for flow problems fig pressure solution on one of the realizations of permeability sets left and right from table at and from top to bottom respectively patchy reservoir cpu time sec initialization lin sys construction solution layered reservoir initialization lin sys construction solution fe s m v f c ms c g m sa fe s m v f c ms c g m sa fe s m v f c s m g m c sa fe s m v f c ms c g m sa fe s m v f c ms c g m sa fe s m v f c s m c g m sa fig averaged cpu time over realizations comparison between the and samg solvers on permeability sets a and b from table over successive the coarsening ratio of is moreover employs ilu smoothing steps per iteration the number of iterations is given on top of each bar layered reservoir patchy reservoir initialization lin sys construction solution cpu time sec initialization lin sys construction solution fe s m v f c ms c g m sa fe s m v f c ms c g m sa fe s m v f c s m g m c sa fe s m v f c ms c g m sa fe s m v f c ms c g m sa fe s m v f c s m g m c sa fig averaged cpu time over realizations comparison between the and samg solvers on permeability sets a and b from table over successive time steps employs the coarsening ratio of along with ilu smoothing steps per iteration the number of iterations is given on top of each bar the symbol signifies convergence success rate of when restriction operator was employed patchy coarsening ratio layered coarsening ratio initialization lin sys construction solution cpu time sec cpu time sec initialization lin sys construction solution layered coarsening ratio initialization lin sys construction solution cpu time sec fe s m v f c ms c g m sa fe s m v f c s m c g m sa fe s m v f c ms c g m sa fe s m v f c ms c g m sa fe s m v f c s m c g m sa fe s m v f c ms c g m sa patchy coarsening ratio cpu time sec initialization lin sys construction solution fe s m v f c s m c g m sa fe s m v f c s m c g m sa fe s m v f c s m c g m sa fe s m v f c s m c g m sa fe s m v f c s m c g m sa fe s m v f c s m c g m sa fig averaged cpu time over realizations comparison between the and samg solvers on permeability sets left column and right column from table over successive different coarsening ratios of top row and bottom row are considered for moreover employs ilu smoothing steps per 
iteration the number of iterations is given on top of each bar test case stretched grids with terms to study the effect of anisotropic permeability fields along with radial injection flow pattern the permeability set from table is considered the settings are all the same as previous test cases except the following items dirichlet boundary conditions are set at the centers of two vertical sets of grid cells one from to and the other from to with the values of and respectively in addition grid aspect ratios of and are considered note that the nondimensional time is calculated using as the characteristic length figure illustrates the pressure solutions for one of the permeability realizations after the first time step fig converged pressure solution for one of the realizations of permeability sets with grid aspect ratio and respectively from left to right after one time step dirichlet boundary conditions are set at the centers of two vertical sets of grid cells one from to and the other from to with the values of and respectively the performance of fe and samg are presented in fig in contrast to fe the fv not shown did not lead to convergence success however for those fv successful runs similar cpu times as in fe were observed results shown in fig are obtained with the coarsening ratios of and for the cases of and respectively note that as shown in fig the anisotropic transmissibility caused by stretched grid effect would further motivate the use of enhanced geometries for such a strategy is well developed in algebraic multigrid community and is the subject of our future studies fe patchy cpu time sec cpu time sec initialization lin sys construction solution initialization lin sys construction solution samg layered samg patchy cpu time sec initialization lin sys construction solution cpu time sec fe layered initialization lin sys construction solution fig performance of top and samg bottom for permeability set from table for different grid aspect rations in for three successive time steps pressure solutions for the first time step is shown for one of the realizations in fig test case effect of permeability contrast to study the effect of permeability contrast permeability set from table is considered with different ln k variances of and note that the studied cases were for variance as described in table the settings are all the same as the default test cases dirichlet conditions are set at the east and west faces with condition everywhere else figure illustrates the performances of fe and samg for this test case note that the requires more iterations when the permeability contrast is increased to improve its performance one can consider enriched multiscale strategies which are based on local spectral analysis and modified permeability field with less contrast for calculation of basis functions note that the success rates of fv not shown were patchy patchy and layered for the successful runs the cpu times of fv were comparable with fe fe patchy fe layered initialization lin sys construction solution cpu time sec cpu time sec initialization lin sys construction solution samg layered initialization lin sys construction solution initialization lin sys construction solution cpu time sec cpu time sec samg patchy fig averaged cpu time comparison between top and samg bottom for permeability set from table for different ln k variances of and conclusions algebraic multiscale solver for compressible flows in heterogeneous porous media was introduced its algebraic formulation benefits from adaptivity 
both in terms of the infrequent updating of the linearized system and from the selective update of the basis functions used to construct the prolongation operator extensive numerical experiments on heterogeneous patchy and layered reservoirs revealed that the most efficient strategy is to use basis functions with incompressible advection terms paired with iterations of ilu for postsmoothing finally several benchmark studies were presented where the developed cams research similator was compared with an multigrid solver samg the results show that is a competitive solver especially in experiments that involve the simulation of a large number of time steps the only drawback is the relatively high initialization time which can be reduced by choosing an appropriate coarsening strategy or by running the basis function updates in parallel moreover due to its conservative property requires only a few iterations per time step to obtain a good quality approximation of the pressure solution for practical purposes systematic error estimate analyses for multiphase simulations are a subject of ongoing research and in addition the performance can be further extended by enrichment of the multiscale operators and enriched coarse grid geometries on the basis of the underlying transmissibility both are subjects of our future studies acknowledgements we would like to acknowledge the financial support from the intersect alliance technology and schlumberger petroleum services cv during matei s scientific visit at tu delft between november february since march matei is a phd research assistant at tu delft sponsored by the authors also thank hamdi tchelepi of stanford university for the many helpful discussions i as references hou wu a multiscale finite element method for elliptic problems in composite materials and porous media comput hou wu cai convergence of a multiscale finite element method for elliptic problems with rapidly oscillating coefficients math efendiev hou multiscale finite element methods theory and applications springer efendiev ginting hou ewing convergence of a nonconforming multiscale finite element method siam numer aarnes hou multiscale domain decomposition methods for elliptic problems with high aspect ratios acta math jenny lee tchelepi method for elliptic problems in subsurface flow simulation comput jenny lee tchelepi adaptive fully implicit finitevolume method for flow and transport in heterogeneous porous media comput kippe aarnes lie a comparison of multiscale methods for elliptic problems in porous media flow comput trottenberg oosterlee schueller multigrid elsevier academic press zhou tchelepi operator based multiscale method for compressible flow spe journal zhou tchelepi algebraic multiscale linear solver for highly heterogeneous reservoir models spe spe lunati jenny multiscale method for flow in porous media comput lee zhou techelpi adaptive multiscale method for nonlinear multiphase transport in heterogeneous formations comput hajibeygi bonfigli hesse jenny iterative multiscale finitevolume method comput hajibeygi karvounis jenny a hierarchical fracture model for the iterative multiscale finite volume method comput hajibeygi lee lunati accurate and efficient simulation of multiphase flow in a heterogeneous reservoir by using error estimate and control in the multiscale framework spe journal wolfsteiner lee tchelepi well modeling in the multiscale finite volume method for subsurface flow simulation siam multiscale model hajibeygi tchelepi compositional multiscale formulation spe 
journal wang hajibeygi tchelepi algebraic multiscale linear solver for heterogeneous elliptic problems journal of computational physics cortinovis patrick jenny iterative multiscale finitevolume method comp moyner lie a multiscale method comput aziz settari petroleum reservoir simulation blitzprint cagary alberta lunati jenny multiscale method for compressible multiphase flow in porous media comput lee wolfsteiner tchelepi multiscale formulation for multiphase flow in porous media black oil formulation of compressible flow with gravity comput hajibeygi jenny multiscale method for parabolic problems arising from compressible multiphase flow in porous media comput saad iterative methods for sparse linear systems siam philadelphia usa stuben samg user s manual fraunhofer institute scai remy boucher wu applied geostatistics with sgems a user s guide cambridge university press new york efendiev galvis wu multiscale finite element methods for highcontrast problems using local spectral basis functions comput bonfigli jenny an efficient poisson solver for the incompressible equations with immersed boundaries comput manea sewall and tchelepi parallel multiscale linear solver for highly detailed reservoir models proceedings of spe rss doi dolean jolivet nataf spillane xiang domain decomposition methods for highly heterogeneous darcy equations connections with multiscale methods oil gas science and technology revue d ifp energies nouvelles doi
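The conclusions above pair a coarse multiscale correction (basis functions built with incompressible advection terms) with a few ILU post-smoothing iterations per time step. The snippet below is only a minimal sketch of that two-stage structure, not the CAMS solver itself: a piecewise-constant aggregation prolongation stands in for the multiscale basis functions, a 2D Poisson matrix stands in for the reservoir pressure system, and scipy's incomplete LU factorisation is used as the smoother; grid sizes, block size, and the tolerance are all assumed for illustration.

# Two-stage iteration sketch: Galerkin coarse correction + ILU smoothing.
# Illustrative stand-in for a multiscale (CAMS-style) preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

m = 16                                                    # fine grid is m-by-m (assumed)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
A = (sp.kron(sp.identity(m), T) + sp.kron(T, sp.identity(m))).tocsc()
n = m * m
b = np.ones(n)

# Piecewise-constant aggregation over 4x4 cell blocks: crude stand-in for
# multiscale basis functions / the prolongation operator.
nc = (m // 4) ** 2
P = sp.lil_matrix((n, nc))
for ix in range(m):
    for iy in range(m):
        P[ix * m + iy, (ix // 4) * (m // 4) + iy // 4] = 1.0
P = P.tocsc()
Ac = (P.T @ A @ P).tocsc()                                # Galerkin coarse operator
ilu = spla.spilu(A)                                       # incomplete LU used as smoother

x = np.zeros(n)
for it in range(100):
    x += P @ spla.spsolve(Ac, P.T @ (b - A @ x))          # coarse-grid correction
    x += ilu.solve(b - A @ x)                             # one ILU smoothing step
    if np.linalg.norm(b - A @ x) <= 1e-8 * np.linalg.norm(b):
        break
print("iterations to converge:", it + 1)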
5
decoupling schemes for predicting compressible fluid flows petr vabishchevicha b a nuclear safety institute russian academy of sciences tulskaya moscow russia friendship university of russia rudn university moscow russia jan b peoples abstract numerical simulation of compressible fluid flows is performed using the euler equations they include the scalar advection equation for the density the vector advection equation for the velocity and a given pressure dependence on the density an approximate solution of an value problem is calculated using the finite element approximation in space the fully implicit scheme is used for discretization in time numerical implementation is based on newton s method the main attention is paid to fulfilling conservation laws for the mass and total mechanical energy for the discrete formulation schemes of splitting by physical processes are employed for numerical solving problems of barotropic fluid flows for a transition from one time level to the next one an iterative process is used where at each iteration the linearized scheme is implemented via solving individual problems for the density and velocity possibilities of the proposed schemes are illustrated by numerical results for a model problem with density perturbations keywords compressible fluids the euler system barotropic fluid finite element method conservation laws schemes decoupling scheme introduction applied models of continuum mechanics are based on conservation laws for the mass momentum and energy the transport of scalar and vector quantities due to advection determines a mathematical form of conservation laws in addition some parameters of a flow have the positivity property monotonicity such important properties of the differential problem of continuum mechanics must be inherited in a discrete problem flows of ideal fluids are governed by the euler equations whereas the equations are applied to describing viscous flows mathematical problems of validation of such models are considered for example in the corresponding author email address vabishchevich petr vabishchevich preprint submitted to january books when discussing the existence of solutions in various sobolev spaces the principal problems of the positivity of the fluid density are also should be highlighted such a consideration is also carried out see for instance at the discrete level for various approximations in time and space in computational fluid dynamics the most important problems are associated with two contradictory requirements namely it is necessary to construct monotone approximations for advective terms and to fulfil conservation laws the construction of monotone approximations is discussed in many papers see in standard linear approximations are considered for the basic problems of continuum mechanics problems for discretization in space conservative approximations are constructed on the basis of using the conservative divergent formulation of continuum mechanics equations this approach is most naturally implemented using interpolation method balance method for regular and irregular grids and in the control method volume nowadays the main numerical technique to solve applied problems is the finite element method it is widely used in computational fluid dynamics too discretizations in time for computational fluid dynamics are often constructed using explicit schemes that have strong restrictions on a time step in sense of stability moreover explicit schemes have similar restrictions on the monotonicity of an approximate 
solution so it is more natural to focus on implicit schemes to solve boundary value problems for partial differential equations schemes are widely used schemes with weights for linear problems a study of discretizations in time can be based on the general theory of stability for schemes in particular it is possible to apply unimprovable coinciding necessary and sufficient stability conditions which are formulated as operator inequalities in hilbert spaces in the present work an value problem is considered for the euler equations describing barotropic fluid flows section which are conservation laws for the mass momentum and total mechanical energy discretization in space is performed section using standard lagrange elements for the density and cartesian velocity components to evaluate an approximate solution at a new time level the fully implicit scheme is employed for the approximate solution the mass conservation law holds and an estimate for the dissipation of the total mechanical energy is fulfilled the fully implicit scheme is not convenient for numerical implementation the solution at the new time level is determined from a system of coupled nonlinear equations for the density and velocity a decoupling scheme is proposed in section which refers to the class of linearized schemes of splitting by physical processes linearization is carried out over the field of advective transport in such a way that at each time level we solve individual problems for the density and velocity possibilities of the proposed schemes are illustrated by the results of numerical solving a model problem with a perturbation of the fluid density being initially at rest section to solve numerically the nonlinear discrete problem at the new time level the newton method is used in the above calculations a small number of iterations two or three is sufficient for the process convergence the influence of the grid size in space and time is investigated it was observed that decreasing of the time step results in the monotonization of the numerical solution the main result of the paper is the proof of the robustness of the linearized decoupling scheme the scheme involves separate solving standard advection problems for the density and velocity and demonstrates high iteration convergence such an approach can be used for other problems of continuum mechanics for numerical solving value problems for the equations mathematical models an value problem is considered for describing barotropic fluid flows the system of equations includes the scalar advection equation for the density and the vector advection equation for the velocity with a given pressure dependence on the density the conservation laws for the mass momentum and total mechanical energy are discussed barotropic fluid the continuity equation in a bounded domain has the form div u x t t where x t is the density and u x t is the velocity the momentum equation is written in the conservative form u div u u grad p x t t where p x t is the pressure the considered fluid is assumed to be barotropic we have a known dependence of the pressure on the density p p dp d assume that the domain boundaries are rigid and so the impermeability condition is imposed u n x initial conditions for the density and velocity are also specified x x u x x x the value problem describes transient flows of an ideal barotropic fluid the direct integration of the continuity equation over the domain taking into account the boundary condition results in the mass conservation law z m t m m t x t dx in 
the hilbert space we define the scalar product and norm in the standard way z w u kwk w w w x u x dx in a similar way the space of vector functions is defined if the density is the conservation law for the mass can be written as k k k this relation can be treated as an a priori estimate for in the equation directly expresses the conservation law for the momentum integrating this equation over we obtain z z u dx p ndx thus we have z i t i z p ndx i t udx multiplying by u and taking into account equation rewrite equation as div u div p u p div u integration over the domain in view of leads to z z d dx p div u dx dt the second term in is expressed from the renormalized equation of continuity define the pressure potential from the equation p d in particular for an ideal fluid we have p a a a const from the continuity equation we have div p div u t integration of the renormalized equation of continuity results in the expression z z d dx p div u dx dt adding this equality to we get z d dx dt we arrive to the conservation law for the total mechanical energy z e t e e t dx the equations and are the basic conservation laws for of the problem formulation for the convenience of consideration we introduce operators of advective convective transport for the system of euler equations the advection operator a a u in the divergent form is written as follows a u div u assuming that the boundary condition is satisfied for the velocity u we obtain a u the continuity equation can be written in the form of an differential equation d a u dt t t where the notation t x t is used similarly equation is written in the form d u a u u grad p t dt for the system of equations with a prescribed dependence p we consider the cauchy problem where the initial conditions see have the form u for the considered problem the key point is the property for the advection operator written in the divergent form implicit scheme to solve numerically the value problem for the euler equations we use the fully implicit backward euler scheme for with finite element discretizations in space the problems of fulfilment of the conservation laws at the discrete level are discussed discretization in space to solve numerically the problem or we employ finite element discretizations in space see for we define the bilinear form z a div u dx define the subspace of finite elements v h h and the discrete operator a a u as a u a w u v h similarly to we have a u for and vector quantities the representation is employed u ud t d for a simple specification of the boundary conditions assume that separate parts of the boundary of the computational domain are parallel to the coordinate axes a finite element approximation is used for the individual components of the vector ui v h i after constructing discretizations in space we arrive at the cauchy problem for the system of operator equations in the corresponding space namely we have the cauchy problem for the system of ordinary differential equations for instance for we put into the correspondence the problem d a u dt d u a u u grad p dt u t h here p p u with p denoting onto v the solution of the problem satisfies the same system of conservation laws as the solution of the problem see discretization in time let be a step of a uniform for simplicity grid in time such that tn tn n n n t to construct and study schemes the main attention is given to the fulfillment of the corresponding conservation laws a priori estimates such an important problem as the positivity of the density at each time level requires a more 
study and so it is not considered in the present work to solve numerically the problem the fully implicit scheme is applied in this case the approximate solution at the new time level is determined from n a n un p n n using the prescribed see value the basic properties of the approximate solution are related to the fulfillment of the conservation laws for the mass and total energy to simplify our investigation assume that the density is positive assumption at each time level n n n in view of integration of equation over the domain leads to n n n the equality is a discrete analog the mass conservation law for the momentum conservation law we put into the correspondence the equality z n un p ndx n n which has been obtained by integrating equation an estimate for the total mechanical energy can be established following the work multiplying equation by and integrating it over we arrive at n un a grad p for the first term we have n un n n n un n n un n n n in view of this from we obtain n n a grad p from and the definition of the operator a we have n a a a this makes possible to rewrite the inequality as n p div to estimate the second term on the side of the inequality we apply the discrete analogue of the renormalized equation of continuity multiply the continuity equation by d n a d d the following equality takes place n n e n d d where min n max n assumption assume that d under these natural assumptions we get n n d for the second term in taking into account we have a div grad d d div p div in view of this integration of equation results in n p div combining and we obtain the inequality n n comparing with we can conclude that at the discrete level instead of fulfillment of the conservation law for the total energy we observe a decrease of the energy it should be noted that this property has been established under the additional assumption the result of our consideration can be expressed in the following statement proposition the fully implicit scheme produces the approximate solution of the problem that satisfies the mass conservation law in the form ref and the momentum conservation law moreover if the assumptions and hold the estimate for the total mechanical energy is also fulfilled decoupling schemes a linearized scheme is used where the solution at a new time level is evaluated by advective transport taken from the previous time level using such a linearization an iterative process is constructed for the numerical implementation of the fully implicit scheme the approximate solution at the new time level is determined by sequential solving first the linear problem of advection for the density and secondly the linear problem for the velocity linearized scheme we focus on the use of such techniques that demonstrate the following properties the transition to a new time level is implemented by solving linear problems splitting with respect to physical processes is employed namely the problems for the density and velocity are solved separately with individual problems for the velocity components an example of the simplest decoupling scheme for the euler equations system is the linearized scheme where the advective transport involves the velocity from the previous time level instead of we employ the scheme n a un n un a un grad p n n first from the linear transport equation we evaluate the density at the new time level next from the linear decoupled system for the velocity components we calculate the velocity remark the system of equations with a given density is in the general case coupled 
for individual cartesian velocity components in the case where all parts of the boundary of the computational domain are parallel to the axes of the cartesian coordinate system the system of equations is decoupled and so we can evaluate independently the individual components of the velocity for the linearized scheme the discrete analogs of the mass conservation law see and the momentum conservation law see hold iterative decoupling scheme on the basis of the linearized scheme it is possible to construct an iterative algorithm for the numerical implementation of the fully implicit scheme the approximate solution for at the iteration is denoted by with the initial approximation from the previous time level n un assume that the new approximation at the new time level is calculated when the previous k iterations have been done similarly to we use the system of equations n a n un p k where k uk n n thus at each iteration we firstly solve the linear problem for the density and only then calculate the linear problem for the velocity numerical results the possibilities of the fully implicit scheme and decoupling schemes are illustrated by numerical results for a model problem with a density perturbation of an initially resting fluid test problem here we present the results of numerical solving a model problem obtained using different techniques the problem is considered in the square x x assume that the dependence of the density on the pressure has the form with a we simulate the motion of the initially resting fluid x in with the initial density see specified in the form x exp where and fully implicit scheme the problem is solved using the standard uniform triangulation on m segments in each direction the finite elements are employed for discretization in space to implement the fully implicit scheme for the nonlinear discrete problem at the new time level the newton method with a direct solver is applied for the corresponding system of linear algebraic equations of the compressible fluid flow is shown in fig which presents the density at various time moments in this calculation we used the spatial grid with m the time step was of the density in the center of the computational domain maximum max and minimal min values of the density over the entire domain are given in fig newton s iterative method for solving the discrete problem at each new time moment converges very quickly two or three iterations are enough table demonstrates convergence of the iterative process for the first step in time here we present the relative error for the first three iterations for the model problem obtained with m and various time steps table convergence of newton s method iteration the accuracy of the approximate solution of the test problem will be illustrated by the data on the density in the section the solution calculated on the grid with m using various grids in time is shown in fig similar data for m and m are shown in fig respectively it is easy to see a good accuracy in the reconstruction of the leading edge of the wave when the spatial grid is refined also we observe the effect of smoothing namely elimination of with increasing of time step decoupling scheme in using the decoupling scheme the greatest interest is related to the convergence rate of the iterative process the time step in the calculations was equal to we present numerical results for the model problem figure the density at various time moments figure of the density central maximal and minimal values figure the solution of the problem at 
various time moments calculated on the grid m dotted line dashed solid figure the solution of the problem at various time moments calculated on the grid m dotted line dashed solid figure the solution of the problem at various time moments calculated on the grid m dotted line dashed solid figure the solution of the problem at various time moments calculated on the grid m dashed line k dotted k solid k under consideration obtained on different grids in space the dependence of the solution on the number of iterations for the grid with m is shown in fig figure and presents similar data for grids with m and m respectively it is easy to see see that on the finest grid see fig the linearized scheme k in yields a substantially solution which is monotonized on subsequent iterations the main conclusion of our study is the demonstration of high computational efficiency of the iterative decoupling scheme namely for the problems under consideration it is sufficient to do only two iterations using for the above methods the key point is a violation of the conservation law for the total energy for the fully implicit scheme instead of conservation of the energy see the estimate decreasing of the total energy is observed the dynamics of the total mechanical energy using a linearized scheme k and iterative decoupling schemes for k on various grids is shown in fig here according to tn is calculated at each time moment z e tn n n dx n for k the solution obtained using the decoupling scheme practically coincides with the solution derived from the fully implicit scheme the above data indicate that the conservation law for the total energy is satisfied with a good accuracy decreasing of the time step results in increasing figure the solution of the problem at various time moments calculated on the grid m dashed line k dotted k solid k figure the solution of the problem at various time moments calculated on the gridm dashed line k dotted k solid k figure of the total mechanical energy for various time steps obtained on the grid with m dashed line k solid k of the accuracy of the conservation law fulfillment acknowledgements the publication was financially supported by the ministry of education and science of the russian federation the agreement figure of the total mechanical energy for various time steps obtained on the grid with m dashed line k solid k figure of the total mechanical energy for various time steps obtained on the grid with m dashed line k solid k references batchelor an introduction to fluid dynamics cambridge university press landau lifshitz fluid mechanics godunov romenskii elements of continuum mechanics and conservation laws springer leveque finite volume methods for hyperbolic problems cambridge university press anderson computational fluid dynamics the basics with applications wesseling principles of computational fluid dynamics springer lions mathematical topics in fluid mechanics incompressible models oxford university press lions mathematical topics in fluid mechanics compressible models oxford university press feireisl karper mathematical theory of compressible viscous fluids analysis and numerics springer kulikovskii pogorelov semenov mathematical aspects of numerical solution of hyperbolic systems taylor francis hundsdorfer verwer numerical solution of equations springer verlag kuzmin a guide to numerical methods for transport equations university morton kellogg numerical solution of problems chapman hall london a samarskii vabishchevich numerical methods for solving problems urss 
moscow in russian a samarskii the theory of difference schemes marcel dekker new york versteeg malalasekra an introduction to computational fluid dynamics the finite volume method prentice hall ern guermond theory and practice of finite elements springer larson bengzon the finite element method theory implementation and applications springer donea huerta finite element methods for flow problems wiley zienkiewicz taylor the finite element method for fluid dynamics ascher numerical methods for evolutionary differential equations society for industrial and applied mathematics leveque finite difference methods for ordinary and partial differential equations and problems society for industrial and applied mathematics a samarskii matus vabishchevich difference schemes with operator factors kluwer academic dordrecht marchuk splitting and alternating direction methods in ciarlet lions eds handbook of numerical analysis vol i northholland pp vabishchevich additive schemes splitting schemes de gruyter berlin galerkin finite element methods for parabolic problems springer verlag berlin brenner scott the mathematical theory of finite element methods springer new york
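The linearized decoupling scheme described above transports the density with the velocity taken from the previous time level and then solves a linear problem for the velocity. The following is a minimal 1D sketch of that splitting, not the paper's method: periodic finite differences stand in for the Lagrange finite element discretisation, the velocity equation is taken in a simplified non-conservative form, and the barotropic closure p(rho) = a^2 rho with a = 1, the grid, the time step, and the initial density perturbation are all assumed. The printed mass error illustrates the discrete mass conservation law, which holds exactly here because the divergence-form transport matrix has zero column sums.

# 1D sketch of one linearized decoupling step per time level:
#   (1) solve a linear transport problem for rho^{n+1} using u^n,
#   (2) solve a linear problem for u^{n+1} using rho^{n+1} and u^n.
import numpy as np

m, tau, a = 200, 1e-3, 1.0
x = np.linspace(0.0, 1.0, m, endpoint=False)
h = 1.0 / m

# Centred periodic difference matrix D ~ d/dx (columns sum to zero).
D = (np.diag(np.ones(m - 1), 1) - np.diag(np.ones(m - 1), -1)) / (2 * h)
D[0, -1], D[-1, 0] = -1.0 / (2 * h), 1.0 / (2 * h)

rho = 1.0 + 0.1 * np.exp(-100.0 * (x - 0.5) ** 2)   # density perturbation, fluid at rest
u = np.zeros(m)
mass0 = rho.sum() * h

for n in range(100):
    # Density step (divergence form): (I + tau * D diag(u^n)) rho^{n+1} = rho^n.
    rho = np.linalg.solve(np.eye(m) + tau * D @ np.diag(u), rho)
    # Velocity step: (I + tau * diag(u^n) D) u^{n+1} = u^n - tau * (1/rho^{n+1}) D p(rho^{n+1}).
    p = a ** 2 * rho
    u = np.linalg.solve(np.eye(m) + tau * np.diag(u) @ D, u - tau * (D @ p) / rho)

print("relative mass error:", abs(rho.sum() * h - mass0) / mass0)  # ~ machine precision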
5
relational concept analysis alexandre jessie marianne and giacomo bourgogne dijon france lirmm cnrs and de montpellier montpellier france limos clermont auvergne france contact jcarbonnel huchard mar abstract formal concept analysis and its associated conceptual structures have been used to support exploratory search through conceptual navigation relational concept analysis rca is an extension of formal concept analysis to process relational datasets rca and its multiple interconnected structures represent good candidates to support exploratory search in relational datasets as they are enabling navigation within a structure as well as between the connected structures however building the entire structures does not present an efficient solution to explore a small localised area of the dataset for instance to retrieve the closest alternatives to a given query in these cases generating only a concept and its neighbour concepts at each navigation step appears as a less costly alternative in this paper we propose an algorithm to compute a concept and its neighbourhood in extended concept lattices the concepts are generated directly from the relational context family and possess both formal and relational attributes the algorithm takes into account two rca scaling operators we illustrate it on an example keywords relational concept analysis formal concept analysis ondemand generation introduction many datasets in thematic areas like environment or product lines comprise databases complying with a relational data model typical applications in which we are currently involved concern issues relative to watercourse fresqueau project the inventory and use of pesticidal antibacterial and antifungal knomana project and the analysis and representation of product lines in these applications there is a wide range of question forms such as classical querying establishing correlations between descriptions of objects from several categories or case based reasoning these questions can be addressed by complementary approaches including conceptual classification building knowledge pattern and rule extraction or exploratory search in the knomana project http http bazin carbonnel huchard kahn for example one main purpose will be after the ongoing inventory to support farmers their advisors local entrepreneurs or researchers in selecting plants of immediate interest for agricultural crop protection and animal health as such users will face large amounts of data and mainly will formulate general potentially imprecise and potentially inaccurate queries without prior knowledge of the data exploratory search will be a suitable approach in this context previous work has shown that formal concept analysis may be a relevant support for data exploration and we expect relational concept analysis rca to be beneficial as well considering rca for relational dataset exploration brings issues relative to the use of the scaling logical operators the iterative process and the presence of several concept lattices connected via relational attributes despite this additional complexity rca helps the user to concentrate on the classification of objects of several categories where the object groups concepts are described by intrinsic attributes and by their relations to object groups of other categories besides the relational attributes offer a support to navigate between the object groups of the different categories while the concept lattices offer a navigation between object groups of the same category there are several complementary 
strategies to explore datasets using rca one may consist in exhaustively computing concept lattices and related artefacts like implication rules at several steps using several logical operators and considering only some of the object categories and some of the relationships another strategy which is followed here consists in an computation of a concept and its neighbourhood comprising its upper lower and relational covers the next section presents the main principles of relational concept analysis section the computation of a concept and its neighbourhood is presented in section section illustrates the algorithm with the example introduced in section related work is exposed in section we conclude the paper with a few perspectives in section relational concept analysis formal concept analysis fca allows to structure a set of objects described by attributes in a canonical structure called a concept lattice it is based on a formal context k o a i where o is the set of objects a the set of attributes and i an incidence relation stating which objects possess which attributes from this context the application of fca extracts a finite set ck of formal concepts x y such that x o o y o a i is the concept s extent and y a a x o a i is the concept s intent the concept lattice is obtained by ordering the concepts of ck by the order on their extents we call an resp attributeconcept the lowest resp the greatest concept in the lattice possessing an object resp an attribute relational concept analysis relational concept analysis rca is an adaptation of fca to process relational datasets a relational dataset is composed of several sorts of objects described by both their own attributes and their relationships with other objects as input rca takes a relational context family rcf gathering a set of formal contexts and a set of relational contexts defining links between the objects of different formal contexts definition relational context family a relational context family is a pair k r such that k ki oi ai ii is a set of formal contexts relations r rk rk oi oj is a set of relational contexts relations with oi and oj being sets of objects respectively of ki and kj ki is called the source context and kj the target context the three contexts of table present an example of rcf ks rs taken from the software product line domain table top displays two formal contexts the one on the side presents data modelling tools against attributes representing their compatible operating systems os and the data models dm the tools may manage the table on the side describes database management systems dbms according to the data types dt they may handle table bottom presents a relational context stating which data modelling tools support which database management systems astah erwin dm magic draw mysql workbench rs x x x x x x x x x x x dt enum dt set dt geometry dt spatial dt audio dt image dt video dt xml dt json dt period ks os windows os mac os os linux dm conceptual dm physical dm logical dm etl table top two formal contexts side data modelling tools and side database management systems dbms bottom relational context stating which support which dbms dbms x mysql x x x x x x oracle x x x x x x x x x postgresql x x x x x x x teradata x x x x x x support mysql oracle postgresql teradata astah x x erwin dm x x x x x x x magic draw x x x mysql workbench x applying rca on the contexts of k builds in a first time one concept lattice per context of objects without taking links into account the two concept lattices associated with 
table top are presented in fig in a second time rca introduces links between objects of different lattices depending bazin carbonnel huchard kahn os windows os mac os linux mysql workbench dm physical dm conceptual dt enum dt geometry dm logical dt set dt json astah erwin dm mysql postgresql dm etl dt period teradata dt xml magic draw dt spatial dt audio dt image dt video oracle fig left concept lattice of right concept lattice of dbms on the relations expressed in these links take the form of relational attributes they introduce the abstractions concepts from the target context into the source context through a specific relation and a specific scaling operator in our example we may introduce the relational attribute support to characterise the that support at least one dbms offering json and xml more generally given two formal contexts ki kj k and a relational context r oi oj the application of rca extends the set of attributes ai with a set of relational attributes representing links to the concepts of kj the extended attribute set is denoted i then the incidence relation ii is extended to take into account these new attributes denoted by associating them to each object of oi depending on the relation r the concept denoted c involved in the relational attribute and a scaling operator a relational attribute is thus of the form c in this paper we focus on two scaling operators the existential operator denoted associating an object o to the relational attribute c if o is linked to at least one object of the extent of c by r the universal strict operator denoted associating an object o to c if all the objects linked to o by r are included in the extent of c and r o the concept lattice associated with a formal context k o i then structures the objects from o both by their attributes and their relations to other sets of objects through the relational attributes fig presents the extended concept lattice corresponding to the extended formal context according to the relation support and the existential scaling operator relational concept analysis os windows exists support exists support exists support os mac os linux dm physical mysql workbench astah magic draw dm conceptual exists support exists support dm logical exists support exists support erwin dm dm etl exists support fig concept lattice of the extended context in this way for complex data models including more than one relation rca produces a succession of concept lattices extended at each step by the new abstractions obtained at the previous step at step the concept lattices in the set are the ones built from the initial formal contexts from at step n the formal contexts in the set kn are extended depending on the concepts of the concept lattices in and the relations expressed in the exploration algorithm in this section we present an algorithm for taking a step in an exploration it considers an rcf potentially extended at previous steps a starting concept c from a context ki of the rcf and an exploration strategy which consists in choosing a set of relations of the rcf with ki as a source provided with scaling operators the objective of one step is to complete the intent corresponding to the extent of c as well as compute its upper lower and relational covers meanwhile the rcf is updated with the relational attributes for a next step redefining derivation operators the explicit knowledge of all the relational attributes of a context requires the computation of all the concepts of the target contexts however we can not afford what 
amounts to the exhaustive computation of the relational concepts of multiple contexts we would prefer to manipulate only a minimal number of relational attributes allowing us to derive the other relational attributes bazin carbonnel huchard kahn any object described by an attribute x y instead of x y by abuse of notation is also necessarily described by all the attributes of the form where y as such intents can be represented without loss of information by their relational attributes constructed from attributeswise maximal concepts however a problem arises with such a representation the set intersection can not be used to compute the intent of a set of objects anymore similarly if only maximal relational attributes are explicitly present in the context the extent of a set of attributes can not be computed through a simple test of set inclusion to remedy this we provide three algorithms to use on sets of attributes both intrinsic and relational with only the maximal relational attributes given explicitly intersect takes as input two sets of attributes a and b represented by their maximal relational attributes it outputs the set of maximal relational attributes of their intersection a relational attribute x y is in the intersection of a and b if and only if there exists two attributes a and b such that x and x the same holds for the scaling operator as such intersecting the intents of the concepts in the attributes of a and b and keeping the maximal ones results in the maximal relational attributes it uses ex algorithm algorithm intersect ki a b input ki oi ai ii a formal context a ai an attribute set b ai the intent of an object o output the relational intersection of the attribute set a and the intent of o a b f foreach b with r oi oj and kj oj aj ij do foreach a do f f ex kj intersect kj intersect kj f f foreach b with r oi oj and kj oj aj ij do foreach a do f f ex kj intersect kj intersect kj f return in uses intersect to compute the intent of a set of objects described by their maximal relational attributes it starts with the set of all explicitly known attributes and intersects it with the description of each object in the context ki ex computes the extent of a set of maximal relational attributes a for each object o and attribute x y a it checks whether r o and x intersect in the correct way depending on the scaling operator relational concept analysis algorithm in ki o input ki oi ai ii a formal context o oi a set of objects output computes the intent of a set of objects o a ai foreach o o do a a intent o return a algorithm ex ki a input ki oi ai ii a formal context a ai a set of attributes output computes the extent of a set of attributes a o oi foreach a a do if a x y then foreach o o do if r o x then o else if a x y then foreach o o do if r o x then o else foreach o o do if o a ii then o return o computing the closed neighbourhood now that we have redefined the derivation operators on implicitly known relational contexts we are able to compute the upper lower and relational covers of a concept the easiest are the relational covers a concept x y is a relational cover of a concept u v if and only if x y is a maximal relational attribute in v upper covers are easy too candidates can be generated by adding an object the set of which we have perfect knowledge of to the current extent and computing the corresponding concept the covers are the candidates that have the smallest extent computing the lower covers is more challenging they could be computed by adding attributes to the intent 
but the full set of relational attributes is only known implicitly we chose to instead remove objects the lower covers of x y being the concepts with the maximal extents that are contained in x and do not contain any of the minimal generators of x a simple bazin carbonnel huchard kahn way to compute them would be to remove minimal transversals of the minimal generators algorithm computes the closed neighbourhood of a concept it takes as input a set of formal contexts k kw of a rcf a strategy s r lj l j w and a starting concept c from a context ki the goal is to compute or complete the intent corresponding to the extent of c as well as its upper lower and relational covers in the extended context for each r ij s such that r ki kj the first loop lines to computes ocj the of kj then each x y ocj relation r and scaling operator give rise to a new relational attribute x y that is added to the context ki with growcontext in line the intent of concept c is extended with the relational attributes added during the previous loop the next loop lines to computes the relational covers r of concept for each relational attribute in the intent of c the corresponding concept in the target context is added to the cover in lines to the lower covers l of c are computed by removing from the extent of c a minimal transversal of the set of minimal generators of c s extent finally the upper covers u of c are computed in lines to candidates are created by adding an object o to the extent of only the minimal resulting concepts are kept algorithm rca k s c ki input k kw s r lj l j w a strategy c o a a concept of ki oi ai ii output c u r l the completed concept c and its closed relational neighbourhood foreach r ij s do ocj kj foreach o oi do growcontext ki r o ocj a ki o foreach a a do r r foreach t mint rans mingen o do l l o t in ki o t u foreach o oi o do u u ex ki in ki o o in ki o o u u return c u r l relational concept analysis algorithm growcontext ki r q o ocj input ki oi ai ii a formal context r oi oj a relational context a scaling operator o oi an object ocj the set of of kj oj aj ij output extends the context ki and adds the crosses if then foreach x y ocj such that r o obj x do ai ai x y ii ii o x y if then x kj r o ai ai ex ki x x ii ii o ex ki x x example in this section we illustrate the defined algorithms we consider the rcf ks rs with ks dm dbm s and rs support as presented in section we decide to apply the strategy support let us imagine that a user wants to select a data modelling tool that runs on windows os windows and that handles logical and conceptual data models dm logical and dm conceptual traditional fca may compute the formal concept associated with these attributes side of fig and inform the user that the corresponding tools are erwin dm magic draw and and that all these tools also handle dm physical let us apply our algorithms on this concept to retrieve the supported dbms relational cover and find the closest alternatives to the query lower and upper covers rca ks support dm lines to extend the context of with the relational attributes representing the of dbms support s target context in our case we have only one relation support visited at line in line ocj takes the of dbms concepts and from the righthand side of fig then the loop on lines and considers the objects of on which growcontext is called each object oi of is associated to the relational attributes representing the concepts of ocj having in their extents at least one object linked with oj as support astah m ysql oracle mysql 
and oracle are added to and associated to astah at the end of line we obtain the extended context presented in table line updates the intent of the input concept to take into account the relational attributes os windows dm conceptual dm physical dm logical the bazin carbonnel huchard kahn sup sup x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x sup os windows os mac os os linux dm conceptual dm physical dm logical dm etl astah erwin dm magic draw mysql workbench sup table formal context extended according to the relation support cepts of dbms corresponding to the relational attributes of c to form the relational cover of the input concept lines to then lines to we compute the minimal generators of the extent of c which are erwin dm magic draw and magic draw their minimal transversals are magic draw and erwin dm the two concepts having magic draw and erwin dm for extent represent the lower cover respectively and in fig finally in lines to we consider the objects of that are not in c s extent mysql workbench and astah for each one of them we compute the concept corresponding to their union with c s extent and we obtain the two concepts and of fig they represent the upper cover of related work lattice structures are among the first structures used as a support for exploratory search and this task has later attracted a lot of attention in formal concept analysis theory many works focus on conceptual neighbourhood to present both information related to a query and its closest variants in this paper we consider rca to retrieve the conceptual neighbourhood in interconnected lattices structuring both intrinsic and relational attributes the exponential growth of concept lattices is as a consequence the main limitation of exploratory search lies in the complexity and computation of the structures many solutions have been proposed to reduce the complexity of conceptual navigation some authors propose to prune the concept lattice to restrict the explorable dataspace by computing iceberg concept lattices or by applying constraints to bound the final structure to ease the navigation the authors of seek to extract more simplified browsable structures they first extract a tree from the concept lattice and then reduce the obtained tree using clustering and methods the tool searchsleuth enables exploratory search for web queries a field where the domain can not be entirely processed using fca and concept lattices to tackle this issue they generate a new formal context specific to a query at relational concept analysis each navigation step in a previous work we proposed to compute the conceptual neighbourhood of a query in a of the concept lattice restricted to the and poset a condensed alternative to concept lattices at each step only the conceptual neighbourhood is computed in the present work we also generate the conceptual neighbourhood but this time in interconnected concept lattices mimouni et al use rca to structure query and browse a collection of legal documents first they build interconnected lattices representing different types of legal documents referring to each other then their approach allows for the retrieval of the concept corresponding to a user query and to explore variations of this query by navigation in the neighbour concepts in their approach they compute all the lattices during the first step and hermann propose faceted search and an implementation in the tool sewelis that allows to browse relational datasets in the form of rdf files also et al propose rlca a 
relational extension of logical formal analysis an adaptation of fca to describe objects by formulas of logics instead of binary attributes while rca computes connected yet separate concept lattices one per sort of objects rlca gathers the objects their descriptions and their relations to other objects in one structure conclusion in this paper we proposed algorithms to compute the conceptual neighbourhood of a query in connected concept lattices generated with rca first we redefined the traditional fca derivation operators to take into account relational attributes then we presented a way to compute the relational upper and lower covers of a given concept in extended lattices without computing all the structures two rca scaling operators existential and universal strict may be used we illustrated how the algorithms work on a running example from the domain of software product line engineering in the future we plan to study the properties of the algorithm and to implement it to perform exploratory search in relational datasets a scalability study on real datasets from the projects fresqueau and knomana and from available product descriptions is then envisioned to this end we will generate random queries and exploration paths we also are collecting concrete questions from the knomana project partners for having real exploration tasks in their domain and qualitatively evaluate the benefits of the approach references alam le napoli latviz a new practical tool for performing interactive exploration over concept lattices in proc of the int conf on concept lattices and their applications cla pp bazin carbonnel kahn generation of reducing the complexity of conceptual navigation in proc of the int symp on foundations of intelligent systems ismis pp bazin carbonnel huchard kahn ben nasr acher bosco sannier baudry davril automated extraction of product comparison matrices from informal product descriptions of systems and software carbonnel huchard nebut analyzing variability in product families through canonical feature diagrams in proc of the int conf on software engineering knowledge engineering seke pp carpineto romano exploiting the potential of concept lattices for information retrieval with credo of universal comp sci codocedo napoli formal concept analysis and information retrieval a survey in proc of the int conf on formal concept analysis icfca pp ducrou eklund searchsleuth the conceptual neighbourhood of an web query in proc of the int conf on concept lattices and their applications cla pp dunaiski greene fischer exploratory search of academic publication and citation data using interactive tag cloud visualizations scientometrics hermann reconciling faceted search and query languages for the semantic web int of metadata semantics and ontologies ridoux sigonneau arbitrary relations in formal concept analysis and logical information systems in proc of the int conf on conceptual structures iccs pp springer reconciling expressivity and usability in information access from filesystems to the semantic web habilitation thesis matisse univ rennes habilitation diriger des recherches hdr defended on november ganter wille formal concept analysis mathematical foundations springer godin gecsei pichet design of a browsing interface for information retrieval in proc of the int conf on research and development in information retrieval sigir pp godin saunders gecsei lattice model of browsable data spaces inf sci hacene huchard napoli valtchev a proposal for combining formal concept analysis and description 
logics for mining relational data in proc of the int conf on formal concept analysis icfca pp huchard hacene roume valtchev relational concept discovery in structured datasets ann math artif intell marchionini exploratory search from finding to understanding comm acm melo grand aufaure browsing large concept lattices through tree extraction and reduction methods int of intelligent information technologies mimouni nazarenko salotti a conceptual approach for relational ir application to legal collections in proc of the int conf on formal concept analysis icfca pp palagi gandon giboin troncy a survey of definitions and models of exploratory search in acm workshop esida iui pp stumme taouil bastide pasquier lakhal computing iceberg concept lattices with titanic data knowledge engineering
2
probabilistic integration a role in statistical computation chris mark michael and dino department of statistics university of warwick of mathematics imperial college london school of mathematics statistics and physics newcastle university the alan turing institute for data science department of engineering science university of oxford department of statistics university of oxford oct department october abstract a research frontier has emerged in scientific computation wherein discretisation error is regarded as a source of epistemic uncertainty that can be modelled this raises several statistical challenges including the design of statistical methods that enable the coherent propagation of probabilities through a possibly deterministic computational in order to assess the impact of discretisation error on the computer output this paper examines the case for probabilistic numerical methods in routine statistical computation our focus is on numerical integration where a probabilistic integrator is equipped with a full distribution over its output that reflects the fact that the integrand has been discretised our main technical contribution is to establish for the first time rates of posterior contraction for one such method several substantial applications are provided for illustration and critical evaluation including examples from statistical modelling computer graphics and a computer model for an oil reservoir introduction this paper presents a statistical perspective on the theoretical and methodological issues pertinent to probabilistic numerical methods our aim is to stimulate what we feel is an important discussion about these methods for use in contemporary and emerging scientific and statistical applications background numerical methods for tasks such as approximate solution of a linear system integration global optimisation and discretisation schemes to approximate the solution of differential equations are core building blocks in modern scientific and statistical computation these are typically considered as computational that return a point estimate for a deterministic quantity of interest whose numerical error is then neglected numerical methods are thus one part of statistical analysis for which uncertainty is not routinely accounted although analysis of errors and bounds on these are often available and highly developed in many situations numerical error will be negligible and no further action is required however if numerical errors are propagated through a computational pipeline and allowed to accumulate then failure to properly account for such errors could potentially have drastic consequences on subsequent statistical inferences mosbach and turner oates et the study of numerical algorithms from a statistical point of view where uncertainty is formally due to discretisation is known as probabilistic numerics the philosophical foundations of probabilistic numerics were to the best of our knowledge first clearly exposed in the work of larkin kadane diaconis and o hagan theoretical support comes from the field of complexity traub et where continuous mathematical operations are approximated by discrete and finite operations to achieve a prescribed accuracy level proponents claim that this approach provides three important benefits firstly it provides a principled approach to quantify and propagate numerical uncertainty through computation allowing for the possibility of errors with complex statistical structure secondly it enables the user to uncover key contributors to 
numerical error using established statistical techniques such as analysis of variance in order to better target computational resources thirdly this dual perspective on numerical analysis as an inference task enables new insights as well as the potential to critique and refine existing numerical methods on this final point recent interest has led to several new and effective numerical algorithms in many areas including differential equations linear algebra and optimisation for an extensive bibliography the reader is referred to the recent expositions of hennig et al and cockayne et al contributions our aim is to stimulate a discussion on the suitability of probabilistic numerical methods in statistical computation a decision was made to focus on numerical integration due to its central role in computational statistics including frequentist approaches such as bootstrap estimators efron and tibshirani and bayesian approaches such as computing marginal distributions robert and casella in particular we focus on numerical integrals where the cost of evaluating the integrand forms a computational bottleneck to this end let be a distribution on a state space x the task is to compute or rather to estimate integrals of the form z f f where the integrand f x r is a function of interest our motivation comes from settings where f does not possess a convenient closed form so that until the function is actually evaluated at an input x there is epistemic uncertainty over the actual value attained by f at x the use of a probabilistic model for this epistemic uncertainty has been advocated as far back as larkin the probabilistic integration method that we focus on is known as bayesian cubature bc the method operates by evaluating the integrand at a set of states xi x socalled discretisation and returns a distribution over r that expresses belief about the true value of f the computational cost associated with bq is in general o as the name suggests this distribution will be based on a prior that captures certain properties of f and that is updated via bayes rule on the basis of evaluations of the integrand the maximum a posteriori map value acts as a point estimate of the integral while the rest of the distribution captures uncertainty due to the fact that we can only evaluate the integrand at a finite number of inputs however a theoretical investigation of this is to the best of our knowledge in contrast to the map estimator which has been our first contribution is therefore to investigate the claim that the bc posterior provides a coherent and honest assessment of the uncertainty due to discretisation of the integrand this claim is shown to be substantiated by rigorous mathematical analysis of bc building on analogous results from reproducing kernel hilbert spaces if the prior is in particular rates of posterior contraction to a point mass centred on the true value f are established however to check that a prior is for a given integration problem can be our second contribution is to explore the potential for the use of probabilistic integrators in the contemporary statistical context in doing so we have developed strategies for i model evidence evaluation via thermodynamic integration where a large number of candidate models are to be compared ii inverse problems arising in partial differential equation models for oil reservoirs iii logistic regression models involving latent random effects and iv spherical integration as used in the rendering of virtual objects in prescribed visual environments in each 
case results are presented as they are and the relative advantages and disadvantages of the probabilistic approach to integration are presented for critical assessment outline the paper is structured as follows sec provides background on bc and outlines an analytic framework in which the method can be studied sec describes our novel theoretical results sec is devoted to a discussion of practical issues including the important issue of prior elicitation sec presents several novel applications of probabilistic integration for critical sec concludes with an appraisal of the suitability of probabilistic numerical methods in the applied statistical context background first we provide the reader with the relevant background sec provides a formal description of bc secs and explain how the analysis of bc is dual to minimax analysis in nonparametric regression and sec relates these ideas to established sampling methods let x b be a measurable space where x will either be a subspace of rd or a more general manifold the sphere sd in each case equipped with the borel b b x let be a distribution on x rb our integrand is assumed to be an integrable function f x r whose integral f f is the object of interest r notation for functional arguments write hf f g kf hf f and for vector arguments denote for functions v x rm we write v for the m vector whose ith element is vi the notation u max u will be used the relation al bl is taken to mean that there exist such that al bl al computer code to reproduce experiments reported in this paper can be downloaded from http a cubature rule describes any functional of the form f n x wi f xi xi for some states x and weights wi the term quadrature rule is sometimes preferred when the domain of integration is d the notation f is motivated by the fact that this pexpression can be as the integral of f with respect to an empirical measure wi where is an atomic measure for all a b a p if xi a a if xi a the weights wi can be negative and need not satisfy wi bayesian cubature probabilistic integration begins by defining a probability space f p and an associated stochastic process g x r such that for each g belongs to a linear topological space for bc larkin considered a gaussian process gp this is a stochastic process such that the random variables lg are gaussian for all l where is the topological dual of l bogachev in this paper to avoid technical obfuscation it is assumed that l contains only continuous functions let denote expectation taken over a gp can be characterised by its mean function m x g x and its covariance function c x g x m x g m and we write g n m c in this paper we assume without loss of generality that m note that other priors could also be used a process affords heavier tails for values assumed by the integrand the next step is to consider the restriction of p to the set g xi f xi i n to induce a posterior measure pn over the fact that l contains only continuous functions ensures that g xi is moreover the restriction to a set is also then for bc pn can be shown to be a gp denoted n mn cn see chap of rasmussen and williams the final step is to produce a distribution on r by projecting the posterior pn defined on l through the integration operator a sketch of the procedure is presented in figure and the relevant formulae are now provided denote by en vn the expectation and variance taken with respect to pn write f rn for the vector of fi f xi values x xi and c x x c x x for the n vector whose ith entry is c x xi and c for the matrix with entries ci j c xi xj 
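The proposition stated next gives the Bayesian cubature posterior mean and variance in closed form. The following is a minimal numerical sketch of those two expressions, using the Brownian covariance c(x, y) = min(x, y) and the uniform measure on [0, 1], for which the kernel mean Pi c(., x) = x - x^2/2 and the double integral Pi Pi c = 1/3 are available in closed form; the integrand and the state locations are illustrative choices, not ones prescribed by the text.

# Bayesian cubature posterior: mean = Pi c(.,X)^T C^{-1} f, var = PiPi c - Pi c(.,X)^T C^{-1} Pi c(X,.).
import numpy as np

f = lambda x: np.exp(x)                      # illustrative integrand; true integral is e - 1
n = 10
X = np.arange(1, n + 1) / (n + 1)            # states in (0, 1)

C = np.minimum.outer(X, X)                   # covariance matrix with entries min(x_i, x_j)
z = X - X ** 2 / 2                           # kernel mean evaluated at the states
fX = f(X)

w = np.linalg.solve(C, z)                    # Bayesian cubature weights w = C^{-1} z
mean = w @ fX                                # posterior mean of the integral
var = 1.0 / 3.0 - w @ z                      # posterior variance of the integral

print("estimate:", mean, "+/-", np.sqrt(var))
print("truth:   ", np.e - 1)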
Proposition. The induced distribution of $\Pi[g]$ is Gaussian, with mean and variance
$$\mathbb{E}_n[\Pi[g]] = \Pi[c(\cdot,X)]^\top C^{-1} f, \qquad \mathbb{V}_n[\Pi[g]] = \Pi\Pi[c] - \Pi[c(\cdot,X)]^\top C^{-1} \Pi[c(\cdot,X)],$$
where $\Pi\Pi[c]$ denotes the integral of $c$ with respect to each argument. All proofs in this paper are reserved for Supplement A. It can be seen that the computational cost of obtaining this full posterior is much higher than that of obtaining a point estimate for the integral, at $O(n^3)$; however, certain combinations of point sets and covariance functions can reduce this cost by several orders of magnitude (see Karvonen and Särkkä). (Footnote: this would not have been the case had $\mathcal{L}$ been chosen differently; the canonical space of continuous processes is a Polish space, and all Polish spaces are Borel spaces and thus admit regular conditional laws; see Kallenberg.)

Figure (sketch of Bayesian cubature): the top row shows the approximation of the integrand $f$ (red) by the posterior mean $m_n$ (blue) as the number $n$ of function evaluations is increased; the dashed lines represent posterior credible intervals. The bottom row shows the Gaussian distribution with mean $\mathbb{E}_n[\Pi[g]]$ and variance $\mathbb{V}_n[\Pi[g]]$; the dashed black line gives the true value of the integral $\Pi[f]$. As the number of states $n$ is increased, this posterior distribution contracts onto the true value of the integral $\Pi[f]$.

BC formally associates the stochastic process $g$ with a prior model for the integrand $f$; this in turn provides a probabilistic model for epistemic uncertainty over the value of the integral $\Pi[f]$. Without loss of generality, we assume $m = 0$ for the remainder of the paper. The posterior mean above then takes the form of a cubature rule,
$$\mathbb{E}_n[\Pi[g]] = \hat{\Pi}_{\mathrm{BC}}[f] = \sum_{i=1}^n w_i^{\mathrm{BC}} f(x_i), \qquad w^{\mathrm{BC}} = C^{-1}\Pi[c(\cdot,X)].$$
Furthermore, the posterior variance (and likewise the weights $w^{\mathrm{BC}}$) does not depend on the function values $f_i$, but only on the location of the states $x_i$ and the choice of covariance function. This is useful, as it allows state locations and weights to be pre-computed; however, it also means that the variance is endogenous, being driven by the choice of prior. A valid quantification of uncertainty thus relies on a well-specified prior; we consider this issue further in the section on implementation below.

The BC mean coincides with classical cubature rules for specific choices of covariance function. For example, in one dimension, a Brownian covariance function $c(x,x') = \min(x,x')$ leads to a posterior mean $m_n$ that is a piecewise linear interpolant of $f$ between the states $x_i$, recovering the trapezium rule (Suldin). Similarly, Särkkä et al. constructed a covariance function $c$ for which a classical cubature rule is recovered, and Karvonen and Särkkä showed how other cubature rules can be recovered. Clearly the point estimator $\hat{\Pi}_{\mathrm{BC}}[f]$ is a natural object; it has also received attention in both the kernel quadrature literature (Sommariva and Vianello) and the empirical interpolation literature (Kristoffersen). Recent work with a computational focus includes Kennedy; Minka; Rasmussen and Ghahramani; Huszar and Duvenaud; Gunter et al.; Briol et al.; Karvonen and Särkkä; and Oettershagen. The present paper focuses on the full posterior, as opposed to just the point estimator that these papers studied.
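As a concrete illustration of the formulae above, here is a small sketch, again in Python and under simplifying assumptions of our own: the Brownian covariance $c(x,x') = \min(x,x')$ with $\Pi$ uniform on $[0,1]$ is used only because its kernel mean $\Pi[c(\cdot,x)] = x - x^2/2$ and initial error $\Pi\Pi[c] = 1/3$ are available in closed form, and the stand-in integrand is arbitrary. The helper names are ours, not the paper's.

```python
import numpy as np

def bc_posterior(f_vals, X, kern, kern_mean, init_err, jitter=1e-10):
    """Bayesian cubature posterior over the integral:
    mean = Pi[c(.,X)]^T C^{-1} f,  var = PiPi[c] - Pi[c(.,X)]^T C^{-1} Pi[c(.,X)]."""
    C = kern(X[:, None], X[None, :]) + jitter * np.eye(len(X))
    z = kern_mean(X)                       # vector with entries Pi[c(., x_i)]
    w_bc = np.linalg.solve(C, z)           # BC weights w = C^{-1} Pi[c(., X)]
    return w_bc @ f_vals, init_err - z @ w_bc

# Brownian covariance with Pi = U(0, 1): kernel mean x - x^2/2, PiPi[c] = 1/3.
brownian = lambda a, b: np.minimum(a, b)
X = np.linspace(0.05, 0.95, 10)            # states strictly inside (0, 1)
f = lambda x: np.exp(np.sin(3.0 * x))      # cheap stand-in integrand
mean, var = bc_posterior(f(X), X, brownian, lambda x: x - 0.5 * x**2, 1.0 / 3.0)
```

With this covariance the point estimate behaves like the piecewise-linear (trapezium-type) rule mentioned above, which provides a convenient sanity check, and the reported variance shrinks as further states are added.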
Cubature rules in Hilbert spaces. Next we review how analysis of the approximation properties of the cubature rule $\hat{\Pi}[f]$ can be carried out in terms of reproducing kernel Hilbert spaces (RKHS; Berlinet and Thomas-Agnan). Consider a Hilbert space $\mathcal{H}$ with inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}}$ and associated norm $\|\cdot\|_{\mathcal{H}}$. $\mathcal{H}$ is said to be an RKHS if there exists a symmetric positive-definite function $k : \mathcal{X}\times\mathcal{X}\to\mathbb{R}$, called a kernel, that satisfies two properties: (i) $k(\cdot,x)\in\mathcal{H}$ for all $x\in\mathcal{X}$, and (ii) $f(x) = \langle f, k(\cdot,x)\rangle_{\mathcal{H}}$ for all $x\in\mathcal{X}$ and $f\in\mathcal{H}$ (the reproducing property). It can be shown that every kernel defines an RKHS and every RKHS admits a unique reproducing kernel (Berlinet and Thomas-Agnan). In this paper, all kernels $k$ are assumed to satisfy $\int k(x,x)\,\mathrm{d}\Pi(x) < \infty$; in particular, this guarantees that $\Pi[f]$ exists for all $f \in \mathcal{H}$. Define the kernel mean $\mu : \mathcal{X} \to \mathbb{R}$ as $\mu(x) = \Pi[k(\cdot,x)]$; this exists in $\mathcal{H}$ as a consequence of the integrability assumption just stated (Smola et al.). The name is justified by the fact that
$$\Pi[f] = \int_{\mathcal{X}} f\,\mathrm{d}\Pi = \int_{\mathcal{X}} \langle f, k(\cdot,x)\rangle_{\mathcal{H}}\,\mathrm{d}\Pi(x) = \Big\langle f, \int_{\mathcal{X}} k(\cdot,x)\,\mathrm{d}\Pi(x)\Big\rangle_{\mathcal{H}} = \langle f, \mu\rangle_{\mathcal{H}},$$
where the integral and inner product commute due to the existence of $\mu$ as a Bochner integral (Steinwart and Christmann). The reproducing property permits an elegant theoretical analysis, with many quantities of interest tractable in closed form. In the language of kernel means, cubature rules of the form above can be written as $\hat{\Pi}[f] = \langle f, \hat{\mu}\rangle_{\mathcal{H}}$, where $\hat{\mu}$ is the approximation to the kernel mean given by $\hat{\mu}(x) = \sum_{i=1}^n w_i k(x, x_i)$. For fixed $f \in \mathcal{H}$, the integration error associated with $\hat{\Pi}$ can then be expressed as
$$\hat{\Pi}[f] - \Pi[f] = \langle f, \hat{\mu}\rangle_{\mathcal{H}} - \langle f, \mu\rangle_{\mathcal{H}} = \langle f, \hat{\mu} - \mu\rangle_{\mathcal{H}}.$$
A tight upper bound for the error is obtained from the Cauchy-Schwarz inequality, $|\hat{\Pi}[f] - \Pi[f]| \le \|f\|_{\mathcal{H}}\,\|\hat{\mu} - \mu\|_{\mathcal{H}}$, sometimes called the generalised Koksma-Hlawka inequality (Hickernell). This expression decouples the magnitude (in $\mathcal{H}$) of the integrand $f$ from the kernel-mean approximation error; the following sections discuss how cubature rules can be tailored to target the second term in this upper bound.

Optimality of cubature weights. Denote the dual space of $\mathcal{H}$ as $\mathcal{H}^*$ and denote its corresponding norm $\|\cdot\|_{\mathcal{H}^*}$. The performance of a cubature rule can be quantified by its worst-case error (WCE) in the RKHS, $e(\hat{\Pi}) = \sup_{\|f\|_{\mathcal{H}} \le 1} |\hat{\Pi}[f] - \Pi[f]|$. The WCE is characterised as the error in estimating the kernel mean:

Fact. $e(\hat{\Pi}) = \|\hat{\mu} - \mu\|_{\mathcal{H}}$.

Minimisation of the WCE is natural, and corresponds to solving an optimisation problem in the feature space induced by the kernel. Let $w \in \mathbb{R}^n$ denote the vector of weights $w_i$, let $z \in \mathbb{R}^n$ be the vector with $z_i = \Pi[k(\cdot,x_i)]$, and let $K$ be the matrix with entries $K_{ij} = k(x_i,x_j)$. Then we obtain the following:

Fact. $e(\hat{\Pi})^2 = w^\top K w - 2 w^\top z + \Pi\Pi[k]$.

Several optimality properties for integration in RKHS were collated by Novak and Woźniakowski; relevant to this work is that an optimal estimate can, without loss of generality, take the form of a cubature rule of the form above. To be more precise, any adaptive estimator can be matched, in terms of asymptotic WCE, by a cubature rule as we have defined it.

To relate these ideas to BC, consider the challenge of deriving an optimal cubature rule, conditional on fixed states $\{x_i\}_{i=1}^n$, that minimises the WCE in the RKHS $\mathcal{H}_k$ over weights $w \in \mathbb{R}^n$. From the quadratic expression above, the solution to this convex problem is $w = K^{-1} z$. This shows that if the reproducing kernel $k$ is equal to the covariance function $c$ of the GP, then the MAP estimate from BC is identical to the optimal cubature rule in the RKHS (Kadane and Wasilkowski). Furthermore, with $k = c$, the expression for the WCE shows that $\mathbb{V}_n[\Pi[g]] = e(\hat{\Pi}_{\mathrm{BC}})^2 \le e(\hat{\Pi})^2$, where $\hat{\Pi}$ is any other cubature rule based on the same states $\{x_i\}_{i=1}^n$. Regarding optimality, the problem is thus reduced to the selection of states $\{x_i\}_{i=1}^n$.

Selection of states. In earlier work, O'Hagan considered states $x_i$ that are employed in Gaussian cubature methods; Rasmussen and Ghahramani generated states using Monte Carlo (MC), calling the approach Bayesian MC (BMC); recent work by Gunter et al. and Briol et al. selected states using experimental design, to target the variance $\mathbb{V}_n[\Pi[g]]$. These approaches are now briefly recalled.

Monte Carlo methods. An MC method is a cubature rule based on uniform weights $w_i^{\mathrm{MC}} = 1/n$ and random states $\{x_i\}_{i=1}^n$. The simplest of these methods consists of sampling states $x_i^{\mathrm{MC}}$ independently from $\Pi$; for intractable densities, Markov chain Monte Carlo (MCMC) methods proceed similarly, but induce a dependence structure among the $x_i^{\mathrm{MCMC}}$. We denote these random estimators by $\hat{\Pi}_{\mathrm{MC}}$ when $x_i = x_i^{\mathrm{MC}}$ and $\hat{\Pi}_{\mathrm{MCMC}}$ when $x_i = x_i^{\mathrm{MCMC}}$. Uniformly weighted estimators are well suited to many challenging integration problems: they provide a convergence rate for the WCE of $O_P(n^{-1/2})$, they are widely applicable, and they are straightforward to analyse. For instance, the central limit theorem (CLT) gives that $\sqrt{n}\,(\hat{\Pi}_{\mathrm{MC}}[f] - \Pi[f]) \to \mathcal{N}(0, \sigma^2)$, where $\sigma^2 = \Pi[f^2] - \Pi[f]^2$ and
the convergence is in distribution however the clt may not be as a measure of epistemic uncertainty as an explicit model for numerical error since i it is only valid asymptotically and ii is unknown depending on the integral f being estimated quasi monte carlo qmc methods exploit knowledge of the rkhs h to spread the states in an efficient deterministic way over the domain x hickernell qmc also approximates of course adaptive cubature may provide superior performance for a single fixed function f and the minimax result is not true in general outside the rkhs framework integrals using a cubature rule f that has uniform weights wiqmc the in some cases optimal convergence rates as well as sound statistical properties of qmc have recently led to interest within statistics gerber and chopin buchholz and chopin a related method with weights was explored in stein b experimental design methods an optimal bc obc rule selects states xi to globally minimise the variance vn f obc corresponds to classical cubature rules for specific choices of kernels karvonen and however obc can not in general be implemented the problem of optimising states is in general and smola sec a more pragmatic approach to select states is to use experimental design methods such as the greedy algorithm that sequentially minimises vn g this method called sequential bc sbc is straightforward to implement using numerical optimisation and is a probabilistic integration method that is often used osborne et gunter et more sophisticated optimisation algorithms have also been used for example in the empirical interpolation literature eftang and stamm proposed adaptive procedures to iteratively divide the domain of integration into in the bc literature briol et al used conditional gradient algorithms for this task a similar approach was recently considered in oettershagen at present experimental design schemes do not possess the computational efficiency that we have come to expect from mcmc and qmc moreover they do not scale well to highdimensional settings due to the need to repeatedly solve optimisation problems and have few established theoretical guarantees for these reasons we will focus next on mc mcmc and qmc methods this section presents novel theoretical results on probabilistic integration methods in which the states xi are generated with mcmc and qmc sec provides formal definitions while sec establishes theoretical results probabilistic integration the sampling methods of mcmc and to a lesser extent qmc are widely used in statistical computation here we pursue the idea of using mcmc and qmc to generate states for bc with the aim to exploit bc to account for the possible impact of numerical integration error on inferences made in statistical applications in mcmc it is possible that two states xi xj are identical to prevent the kernel matrix kpfrom becoming singular duplicate states be pn should n bc mcmc bc then we define f wi f xi and f wi f xqmc this i procedure requires no modification to existing mcmc or qmc sampling methods each estimator is associated with a full posterior distribution described in sec a moment is taken to emphasise that the apparently simple act of mcmc samples can have a dramatic improvement on convergence rates for integration of a sufficiently smooth integrand whilst our main interest is in the suitability of bc as a statistical model this is justified since the information contained in function evaluations fi fj is not lost this does not introduce additional bias into bc methods in contrast to mc 
methods for discretisation of an integral we highlight the efficient point estimation which comes out as a to date we are not aware of any previous use of bmcmc presumably due to analytic intractability of the kernel mean when is bqmc has been described by hickernell et al marques et al et al to the best of our knowledge there has been no theoretical analysis of the posterior distributions associated with either method the goal of the next section is to establish these fundamental results theoretical properties in this section we present novel theoretical results for bmc bmcmc and bqmc the setting we consider assumes that the true integrand f belongs to a rkhs h and that the gp prior is based on a covariance function c which is identical to the kernel k of that the gp is not supported on h but rather on a hilbert scaler of h is viewed as a technical detail indeed a gp can be constructed on h via c x k x y k y y and a theoretical analysis similar to ours could be carried out lemma of cialenco et bayesian markov chain monte carlo as a baseline we begin by noting a general result for mc estimation this requires a slight strengthening of the assumption on the kernel kmax k x x this implies that all f h are bounded on x for mc estimators lemma of song show that when kmax the wce converges in probability at the classical rate op turning now to bmcmc and bmc as a special case we consider the compact manifold x d below the distribution will be assumed to admit a density with respect to lebesgue measure denoted byp define the sobolev space to consist of all measurable functions such that kf kh f here is the order of and is a rkhs derivative counting can hence be a principled approach for practitioners to choose a suitable rkhs all results below apply to rkhs h that are to permitting flexibility in the choice of kernel specific examples of kernels are provided in sec our analysis below is based on the scattered data approximation literature wendland a minor technical assumption that enables us to simplify the presentation of results below is that the set x xi may be augmented with a finite set y yi m where m does not increase with clearly this has no bearing on asymptotics for measurable a we write pn a en where is the indicator function of the event theorem bmcmc in suppose is bounded away from zero on x d let h be to where suppose states are generated by a reversible uniformly ergodic markov chain that targets then op and moreover if f h and pn f g f op exp n d where depends on and can be arbitrarily small two norms k k k on a vector space h are equivalent when there exists constants such that for all h h we have khk khk this result shows the posterior distribution is the posterior distribution of g concentrates in any open neighbourhood of the true integral f this result does not address the frequentist coverage of the posterior which is assessed empirically in sec although we do not focus on point estimation a brief comment is warranted a lower bound on the wce that can be attained by randomised algorithms in this setting is op novak and thus our result shows that the point estimate is at most one mc rate away from being bach obtained a similar result for fixed n and a specific importance sampling distribution his analysis does not directly imply our asymptotic results and vice versa after completion of this work similar results on point estimation appeared in oettershagen bauer et al thm can be generalised in several directions firstly we can consider more general domains x specifically the 
scattered data approximation bounds that are used in our proof apply to any compact domain x rd that satisfies an interior cone condition wendland technical results in this direction were established in oates et al second we can consider other spaces for example a slight extension of thm shows that certain infinitely differentiable kernels lead to exponential rates for the wce and rates for posterior contraction for brevity details are omitted bayesian quasi monte carlo the previous section focused on bmcmc in the sobolev space to avoid repetition here we consider more interesting spaces of functions whose mixed partial derivatives exist for which even faster convergence rates can be obtained using bqmc to formulate bqmc we must posit an rkhs a priori and consider collections of states xqmc that constitute a qmc point i set tailored to the rkhs consider x d with uniform on x define thepsobolev space of dominating mixed smoothness to consist of functions for which kf ij f here is the order of the space and k ks is a rkhs to build intuition note that is normequivalent to the rkhs generated by a tensor product of kernels sickel and ullrich or indeed a tensor product of any other univariate sobolev space kernel for a specific space such as we seek an appropriate qmc point set the digital t m d construction is an example of a qmc point set for for details we refer the reader to dick and pillichshammer for details theorem bqmc in let h be to where suppose states are chosen according to a digital t m d net over zb for some prime b where n bm then o and if f and pn f g f o exp where depends on and can be arbitrarily small this result shows that the posterior is again indeed the rate of contraction is much faster in compared to in terms of point estimation this is the optimal rate for any deterministic algorithm for integration of functions in novak and these results should be understood to hold on the n bm as qmc methods do the control variate trick of bakhvalov can be used to achieve the optimal randomised wce but this steps outside of the bayesian framework not in general give guarantees for all n it is not clear how far this result can be generalised in terms of and x compared to the result for bmcmc since this would require the use of different qmc point sets summary in this section we established rates of posterior contraction for bmc bmcmc and bqmc in a general sobolev space context these results are essential since they establish the sound properties of the posterior which is shown to contract to the truth as more evaluations are made of the integrand of course the higher computational cost of up to o may restrict the applicability of the method in regimes however we emphasise that the motivation is to quantify the uncertainty induced from numerical integration an important task which often justifies the higher computational cost implementation so far we have established sound theoretical properties for bmcmc and bqmc under the assumption that the prior is unfortunately prior specification complicates the situation in practice since given a test function f there are an infinitude of rkhs to which f belongs and the specific choice of this space will impact upon the performance of the method in particular the scale of the posterior is driven by the scale of the prior so that the uncertainty quantification being provided is endogenous and if the prior is not this could mitigate the advantages of the probabilistic numerical framework this important point is now discussed it is important to 
highlight a distinction between b mc mc and bqmc for the former the choice of states does not depend on the rkhs for b mc mc this allows for the possibility of specification of the kernel after evaluations of the integrand have been obtained whereas for alternative methods the kernel must be stated our discussion below therefore centres on prior specification in relation to b mc mc where several statistical techniques can be applied prior specification the above theoretical results do not address the important issue of whether the scale of the posterior uncertainty provides an accurate reflection of the actual numerical error this is closely related to the problem of prior specification in the kriging literature stein xu and stein consider a parametric kernel k x with a distinction drawn here between scale parameters and smoothness parameters the former are defined as parametrising the norm on h whereas the latter affect the set h itself selection of based on data can only be successful in the absence of acute sensitivity to these parameters for scale parameters a wide body of evidence demonstrates that this is usually not a concern stein however selection of smoothness parameters is an active area of theoretical research et in some cases it is possible to elicit a smoothness parameter from physical or mathematical considerations such as a known number of derivatives of the integrand our attention below is instead restricted to scale parameters where several approaches are discussed in relation to their suitability for bc marginalisation a natural approach from a bayesian perspective is to set a prior p on parameters and then to marginalise over to obtain a posterior over f recent results for a certain infinitely differentiable kernel establish minimax optimal rates for this approach including in the practically relevant setting where is supported on a of the ambient space x yang and dunson however the act of marginalisation itself involves an intractable integral while the computational cost of evaluating this integral will often be dwarfed by that of the integral f of interest marginalisation nevertheless introduces an additional undesirable computational challenge that might require several approximations osborne it is however possible to analytically marginalise certain types of scale parameters such as amplitude parameters proposition suppose our covariance function takes the form c x y x y where x x r is itself a reproducing kernel and is an amplitude parameter consider the improper prior p then the posterior marginal for g is a distribution with mean and variance x f f f x x n and n degrees of freedom here i j xi xj x i xi x x another approach to kernel choice is however this can perform poorly when the number n of data is small since the data needs to be further reduced into training and test sets the performance estimates are also known to have large variance in those cases chap of rasmussen and williams since the small n scenario is one of our primary settings of interest for bc we felt that was unsuitable for use in applications below empirical bayes an alternative to the above approaches is empirical bayes eb selection of scale parameters choosing to maximise the likelihood of the data f xi i n sec of rasmussen and williams eb has the advantage of providing an objective function that is easier to optimise relative to however we also note that eb can lead to when n is very small since the full irregularity of the integrand has yet to be uncovered et in addition it can be shown 
that eb estimates need not converge as n when the gp is supported on infinitely differentiable functions xu and stein for the remainder we chose to focus on a combination of the marginalisation approach for amplitude parameters and the eb approach for remaining scale parameters empirical results support the use of this approach though we do not claim that this strategy is optimal tractable and intractable kernel means bc requires that the kernel mean x k x is available in this is the case for several pairs k and a subset of these pairs are recorded in table x d d d rd sd arbitrary arbitrary arbitrary arbitrary unif x unif x unif x mixt of gaussians unif x unif x mixt of gauss unif x known moments known log x k wendland tp weighted tp exponentiated quadratic exponentiated quadratic gegenbauer trigonometric splines polynomial tp kernel reference oates et al sec use of error function kennedy sec integration by parts wahba briol et al oates et al table a list of distribution and kernel k pairs that provide a expression for both the kernel mean x k x and the initial error here tp refers to the tensor product of kernels in the event that the pair k of interest does not lead to a kernel mean it is sometimes possible to determine another pair k for which k x is available and such that i is absolutely continuous with respect to so that the derivative exists and ii f h k then one can construct an importance sampling estimator z z f f f f and proceed as above o hagan one side contribution of this research was a novel and generic approach to accommodate intractability of the kernel mean in bc this is described in detail in supplement b and used in case studies and presented in sec results the aims of the following section are i to validate the preceding theoretical analysis and ii to explore the use of probabilistic integrators in a range of problems arising in contemporary statistical applications assessment of uncertainty quantification our focus below is on the uncertainty quantification provided by bc and in particular the performance of the hybrid approach to kernel parameters to be clear we are not concerned with accurate point estimation at low computational cost this is a wellstudied problem that reaches far beyond the methods of this paper rather we are aiming to assess the suitability of the probabilistic description for integration error that is provided by bc our motivation is expensive integrands but to perform assessment in a controlled environment we considered inexpensive test functions of varying degrees of irregularity whose integrals can be accurately approximated these included a test function fj x exp sin cj with an easy setting and a hard setting the hard test function is more variable and will hence be more difficult to approximate see fig one realisation of states xi generated independently and uniformly over x d initially estimated integrals n estimated integrals n n n figure evaluation of uncertainty quantification provided by bc here we used empirical bayes eb for with marginalised left the test functions top bottom in d dimension right solutions provided by monte carlo mc black and bayesian mc bmc red for one typical realisation credible regions are shown for bmc and the green horizontal line gives the true value of the integral the blue curve gives the corresponding lengthscale parameter selected by eb d was used to estimate the fi we work in an rkhs characterised by tensor products of kernels d y x x i i i i x where is the modified bessel function of the second kind 
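A minimal sketch of the empirical-Bayes step used here may be helpful. The Python code below, written for this exposition only, selects per-dimension lengthscales of a tensor-product Matérn kernel by maximising the GP log marginal likelihood. Two assumptions are ours: the Matérn smoothness is fixed at 3/2 purely for illustration (it is not the value used in the experiments), and the amplitude is profiled out via its closed-form maximiser rather than being marginalised analytically as described above. All function names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def matern32_product(X, Y, lengthscales):
    """Tensor product of one-dimensional Matern-3/2 kernels,
    one lengthscale per input dimension."""
    K = np.ones((X.shape[0], Y.shape[0]))
    for j, ell in enumerate(lengthscales):
        r = np.abs(X[:, j, None] - Y[None, :, j]) / ell
        K *= (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)
    return K

def profiled_nll(log_ell, X, f_vals, jitter=1e-8):
    """Negative GP log marginal likelihood with the amplitude profiled out."""
    n = len(f_vals)
    K = matern32_product(X, X, np.exp(log_ell)) + jitter * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f_vals))
    sigma2 = f_vals @ alpha / n           # closed-form amplitude estimate
    return 0.5 * n * np.log(sigma2) + np.log(np.diag(L)).sum()

def eb_lengthscales(X, f_vals):
    """Empirical Bayes: maximise the marginal likelihood over lengthscales
    (optimisation in log space keeps the lengthscales positive)."""
    res = minimize(profiled_nll, x0=np.zeros(X.shape[1]), args=(X, f_vals),
                   method="Nelder-Mead")
    return np.exp(res.x)
```

The selected lengthscales would then be substituted into the cubature formulae; maximising over, rather than marginalising, the amplitude is a simplification relative to the treatment in the text, which instead yields a posterior with n degrees of freedom rather than a Gaussian.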
kernel means exist in this case for p whenever p in this eb was used to select the lengthscale parameters d of the kernel while the amplitude parameter was marginalised as in prop the smoothness parameter was fixed at note that all test functions will be in the space for any and there is a degree of arbitrariness in this choice of prior results are shown in fig are used to denote the posterior credible regions for the value of the integral and we also display the values of the length scale selected by the appear to converge rapidly as n this is encouraging but we emphasise that we the term credible is used loosely since the are estimated rather than marginalised figure evaluation of uncertainty quantification provided by bc here we used empirical bayes for with marginalised in dimensions d top and d bottom coverage frequencies computed from top or bottom realisations were compared against notional bayesian credible regions for varying level and number of observations the quadrant represents conservative credible intervals whilst the quadrant represents intervals left easy test function right hard test function do not provide theoretical guarantees for eb in this work on the negative side is possible at small values of indeed the bc posterior is liable to be under eb since in the absence of evidence to the contrary eb selects large values for that correspond to more regular functions this is most evident in the hard case next we computed coverage frequencies for credible regions for each sample size n the process was repeated over many realisations of the states xi shown in fig it may be seen that for n large enough the uncertainty quantification provided by eb is for the easier function whilst being for the more complicated functions such as as expected we observed that the coverage was for small values of performance was subsequently investigated with selected by eb in general this performed worse than when was marginalised results are contained in supplement finally to understand whether theoretical results on asymptotic behaviour are realised in practice we note in the absence of eb that the variance vn g is independent of the integrand and may be plotted as a function of results in supplement c demonstrate that theoretical rates are observed in practice for d for bqmc however at large values of d more data are required to achieve accurate estimation and increased numerical instability was observed the results on test functions provided in this section illustrate the extent to which uncertainty quantification in possible using bc in particular for our examples we observed reasonable frequentist coverage if the number n of samples was not too small for the remainder we explore possible roles for bmcmc and bqmc in statistical applications four case studies carefully chosen to highlight both the strengths and the weaknesses of bc are presented brief critiques of each study are contained below the full details of which can be found in supplement case study model selection via thermodynamic integration consider the problem of selecting a single best model among a set mm based on data y assumed to arise from a true model in this set the bayesian solution assuming a uniform prior over models is to select the map model we focus on the case with uniform prior on models p mi and this problem hence reduces to finding the largest marginal likelihood pi p the pi are usually intractable integrals over the parameters associated with model mi one approach to model selection is to estimate each pi in 
turn say by then to take the maximum of the over i m in particular thermodynamic integration is one approach to approximation of marginal likelihoods pi for individual models gelman and meng friel and pettitt in many contemporary applications the map model is not for example in variable selection where there are very many candidate models then the map becomes sensitive to numerical error in the since an incorrect model mi i k can be assigned an overly large value of due to numerical error in which case it could be selected in place of the true map model below we explore the potential to exploit probabilistic integration to surmount this problem thermodynamic integration to simplify notation below we consider computation of a single pi and suppress dependence on the index i corresponding to model mi denote the parameter space by for t an inverse temperature define the power posterior a distribution over with density p t p the thermodynamic identity is formulated as a double integral z z log p y dt log p r the thermodynamic integral can be as log p y g t dt g t f where f log p standard practice is to discretise the outer integral and estimate the inner integral using mcmc letting tm denote a fixed temperature schedule we thus have using the trapezium rule log p y m x ti n log p j n where j are mcmc samples from several improvements have been proposed including the use of numerical quadrature for the outer integral friel et hug et and the use of control variates for the inner integral oates et to date probabilistic integration has not been explored in this context probabilistic thermodynamic integration our proposal is to apply bc to both the inner and outer integrals this is instructive since nested integrals are prone to propagation and accumulation of numerical error several features of the method are highlighted transfer learning in the probabilistic approach the two integrands f and g are each assigned prior probability models for the inner integral we assign a prior f n kf our data here are the nm vector f where f f j for estimating gi with bc we have m times as much data as for the mc estimator in eqn which makes use of only n function evaluations here information transfer across temperatures is made possible by the explicit model for f underpinning bc in the posterior g g g tt is a gaussian random vector with n where the mean and covariance are defined in the obvious notation by kf x f b kf kf x kf x where x j and kf is an nm nm kernel matrix defined by kf inclusion of prior information for the outer integral it is known that discretisation error can be substantial friel et al proposed a correction to the trapezium rule to mitigate this bias while hug et al pursued the use of simpson s rule attacking this problem from the probabilistic perspective we do not want to place a stationary prior on g t since it is known from extensive empirical work that g t will vary more at smaller values of indeed the ti is commonly used calderhead and girolami we would like to encode this information into r our prior to do this we proceed with an importance sampling step log p y g t dt h t t dt the implies an importance distribution t for some small which renders the function h approximately stationary made precise in supplement a stationary gp prior h n kh on the transformed integrand h provides the encoding of this prior knowledge that was used propagation of uncertainty under this construction in the posterior log p y is gaussian with post prob std thermo int prob thermo int std thermo int candidate 
models post prob prob thermo int candidate models candidate models candidate models figure probabilistic thermodynamic integration illustration on variable selection for logistic regression the true model was standard and probabilistic thermodynamic integration were used to approximate marginal likelihoods and hence the posterior over models each row represents an independent realisation of mcmc while the data y were fixed left standard monte carlo where point estimates for marginal likelihood were assumed to have no associated numerical error right probabilistic integration where a model for numerical error on each integral was propagated through into the posterior over models the probabilistic approach produces a probability distribution over a probability distribution where the numerical uncertainty is modelled on top of the usual uncertainty associated with model selection mean and covariance defined as en log p y kh t vn log p y kh kh t kh t z kh t kh t z where t ti m and kh is an m m kernel matrix defined by kh the term arises from bc on the outer integral while the term arises from propagating numerical uncertainty from the inner integral through to the outer integral simulation study an experiment was conducted to elicit the map model from a collection of candidate logistic regression models in a variable selection setting this could be achieved in many ways our aim was not to compare accuracy of point estimates but rather to explore the probability model that unlike in standard methods is provided by bc full details are in supplement results are shown in fig here we compared approximations to the model posterior obtained using the standard method versus the probabilistic method over two realisations of the mcmc the data y were fixed we make some observations i the probabilistic approach models numerical uncertainty on top of the usual statistical uncertainty ii the computation associated with bc required less time in total than the time taken afforded to mcmc iii the same model was not always selected as the map when numerical error was ignored and depended on the mcmc random seed in contrast under the probabilistic approach either or could feasibly be the map under any of the mcmc realisations up to numerical uncertainty iv the top row of fig shows a large posterior uncertainty over the marginal likelihood for this could be used as an indicator that more computational effort should be expended on this particular integral v the posterior variance was dominated by uncertainty due to discretisation error in the outer integral rather than the inner integral this suggests that numerical uncertainty could be reduced by allocating more computational resources to the outer integral rather than the inner integral case study uncertainty quantification for computer experiments here we consider an industrial scale computer model for the teal south oil field new orleans hajizadeh et conditional on field data posterior inference was facilitated using mcmc lan et oil reservoir models are generally challenging for mcmc first simulating from those models can be making the cost of individual mcmc samples a few minutes to several hours second the posterior distribution will often exhibit strongly concentration of measure here we computed statistics of interest using bmcmc where the uncertainty quantification afforded by bc aims to enable valid inferences in the presence of relatively few mcmc samples full details are provided in supplement quantification of the uncertainty associated with 
predictions is a major topic of ongoing research in this field mohamed et hajizadeh et park et due to the economic consequences associated with inaccurate predictions of quantities such as future oil production rate a probabilistic model for numerical error in integrals associated with prediction could provide a more complete uncertainty assessment the particular integrals that we considered are posterior means for each model parameter and we compared against an empirical benchmark obtained with brute force mcmc bmcmc was employed with a kernel whose was selected using eb estimates for posterior means were obtained using both standard mcmc and bmcmc shown in fig for this example the posterior distribution provides sensible uncertainty quantification for integrals but was for integrals the point accuracy of the bmcmc estimator matched that of the standard mcmc estimator the lack of faster convergence for bmcmc appears to be due to inaccurate estimation of the kernel mean and we conjecture that alternative exact approaches such as oates et al may provide improved performance in this context however standard confidence intervals obtained from the clt for mcmc with a estimate for the asymptotic variance were for parameters parameter parameter parameter estimated integrals estimated integrals estimated integrals n parameter n n parameter parameter estimated integrals estimated integrals estimated integrals n n parameter parameter n parameter n estimated integrals estimated integrals estimated integrals n n figure numerical estimation of parameter posterior means for the teal south oil field model centered around the true values the green line gives the exact value of the integral the mcmc black line and bmcmc point estimates red line provided similar performance the mcmc confidence intervals based on estimated asymptotic variance black dotted lines are poorly calibrated whereas with the bmcmc credible intervals red dotted lines provide a more honest uncertainty assessment case study random effects our aim here was to explore whether more flexible representations afforded by weighted combinations of hilbert spaces enable probabilistic integration when x is the focus was bqmc but the methodology could be applied to other probabilistic integrators weighted spaces the formulation of high and infinite qmc can be achieved with a construction known as a weighted hilbert space these spaces defined below are motivated by the tion that many integrands encountered in applications seem to vary more in lower dimensional projections compared to higher dimensional projections our presentation below follows sec and of dick and pillichshammer but the idea goes back at least to wahba chap as usual with qmc we work in x d and uniform over x let i d for each subset u i define a weight and denote the collection of all weights p by consider the space of functions of the form f x fu xu where fu belongs to an rkhs hu with kernel ku and xu denotes the components of x that are indexed by u i this is not restrictive since any function can be written in this form by considering p only u i we turn into a hilbert space by defining an inner product hf hfu p gu iu where u i constructed in this way is an rkhs with kernel x ku x intuitively the weights can be taken to be small whenever the function f does not depend heavily on the interaction of the states xu thus most of the will be small for a function f thatpis effectively a measure of the effective dimension of the function is given by in an extreme case d could even be 
infinite provided that this sum remains bounded dick et the canonical weighted sobolev space of dominating mixed smoothness is defined by taking each of the component spaces to be in finite dimensions d bqmc rules based on a digital net attain optimal wce rates o for this rkhs see supplement for full details random effects regression for illustration we considered generalised linear models and focus on a poisson random effects regression model studied by kuo et al example the context is inference for the parameters of the following model yj po log j j j ud j uj n independent here j j and z z where are knots we took d equally spaced knots in min inference for requires multiple evaluations of the observed data likelihood p rd p u p u du and therefore is a candidate for probabilistic integration methods in order to model the cumulative uncertainty of estimating multiple numerical integrals in order to transform this integration problem to the unit cube the change of r we perform variables xj uj so that we wish to evaluate p d p x dx here x denotes the standard gaussian inverse cdf applied to each component of probabilistic integration here proceeds under the hypothesis that the integrand f x p x belongs to or at least can be well approximated by functions in for some smoothness parameter and some weights intuitively the integrand f x is such that an increase in the value of xj at the knot can be compensated for by a decrease in the value of at a neighbouring knot but not by changing values of x at more remote knots therefore we expect f x to exhibit strong individual and pairwise dependence on the xj but expect dependency to be weaker this motivates the weighted space assumption sinescu et al provides theoretical analysis for the choice of weights here weights of order two were used qmc bqmc bqmc true integral estimate m figure application to random effects regression in d dimensions based on n samples from a digital net error bars show credible regions to improve visibility results are shown on the error bars are symmetric on the linear scale a qmc estimate was used to approximate the true value of the integral p where was the value of the parameter for dmax dmax otherwise which corresponds to an assumption of interaction terms though f can still depend on all d of its arguments full details are provided in supplement results in fig showed that the posterior credible regions cover the truth for this problem suggesting that the uncertainty estimates are appropriate on the negative side the bqmc method does not encode of the integrand and consequently some posterior mass is placed on negative values for the integral which is not meaningful to understand the effect of the weighted space construction here we compared against the bqmc point estimate with interactions u i an interesting observation was that these point estimates closely followed those produced by qmc case study spherical integration for computer graphics probabilistic integration methods can be defined on arbitrary manifolds with formulations on spaces suggested as far back as diaconis and recently exploited in the context of computer graphics brouillat et marques et this forms the setting for our final case study global illumination integrals below we analyse bqmc on the sd x in order to estimate integrals of the form f sd f where is the spherical measure uniform r d over s with sd probabilistic integration is applied to compute global illumination integrals used in the rendering of surfaces pharr and humphreys and we therefore 
focus on the case where d uncertainty quantification is motivated by inverse global illumination yu et integral estimate integral estimate red channel blue channel green channel number of states n number of states n number of states n figure probabilistic integration over the sphere was employed to estimate the rgb colour intensities for the california lake environment error bars for bmc blue and bqmc green represent credible intervals mc estimates black and qmc estimates red are shown for reference where the task is to make inferences from noisy observation of an object via computerbased image synthesis a measure of numerical uncertainty could naturally be propagated in this context below to limit scope we restrict attention to uncertainty quantification in the forward problem the models involved in global illumination are based on three main factors a geometric model for the objects present in the scene a model for the reflectivity of the surface of each object and a description of the light sources provided by an environment map the light emitted from the environment will interact with objects in the scene through reflection this can be formulated as an illumination integral z li n lo le here lo is the outgoing radiance the outgoing light in the direction le represents the amount of light emitted by the object itself which we will assume to be known and li is the light hitting the object from direction the term is the bidirectional reflectance distribution function brdf which models the fraction of light arriving at the surface point from direction and being reflected towards direction here n is a unit vector normal to the surface of the object our investigation is motivated by strong empirical results for bqmc in this context obtained by marques et al to assess the performance of bqmc we consider a typical illumination integration problem based on a california lake environment the goal here is to compute intensities for each of the three rgb colour channels corresponding to observing a virtual object from a fixed direction we consider the case of an object directly facing the camera wo n for the brdf we took exp the integrand f li was modelled in a sobolev space of low smoothness the specific function space that we consider is the sobolev space sd for formally defined in supplement results both bmc and bqmc were tested on this example to ensure fair comparison identical kernels were taken as the basis for both methods bqmc was employed using a spherical tdesign bondarenko et it can be shown that for bqmc o when this point set is used see supplement fig shows performance in for this particular test function the bqmc point estimate was almost identical to the qmc estimate at all values of overall both bmc and bqmc provided sensible quantification of uncertainty for the value of the integral at all values of n that were considered conclusion the increasing sophistication of computational models of which numerical integration is one component demands an improved understanding of how numerical error accumulates and propagates through computation in now common settings where integrands are computationally intensive or very many numerical integrals are required effective methods are required that make full use of information available about the problem at hand this is evidenced by the recent success of qmc which leverages the smoothness properties of integrands probabilistic numerics puts the statistician in centre stage and aims to model the integrand this approach was eloquently summarised 
by kadane who proposed the following vision for the future of computation statistics can be thought of as a set of tools used in making decisions and inferences in the face of uncertainty algorithms typically operate in such an environment perhaps then statisticians might join the teams of scholars addressing algorithmic this paper explored probabilistic integration from the perspective of the statistician our results highlight both the advantages and disadvantages of such an approach on the positive side the general methodology described a unified framework in which existing mcmc and qmc methods can be associated with a probability distribution that models discretisation error posterior contraction rates were for the first time established on the negative side there remain many substantial open questions in terms of philosophical foundations theoretical analysis and practical application these are discussed below philosophy there are several issues concerning interpretation first whose epistemic uncertainty is being modelled in hennig et al it was argued that the uncertainty being modelled is that of a hypothetical agent that we get to design that is the statistician selects priors and loss functions for the agent so that it best achieves the statistician s own goals these goals typically involve a combination of relatively behaviour to perform well on a diverse range of problems and a low computational overhead interpretation of the posterior is then more subtle than for subjective inference and many of the points of contention for objective inference also appear in this framework methodology there are options as to which part of the numerical method should be modelled in this paper the integrand f was considered to be uncertain while the distribution was considered to be known however one could alternatively suppose that both f and are unknown pursued in oates et al regardless the endogenous nature of the uncertainty quantification means that in practice one is reliant on effective methods for estimation of kernel parameters the interaction of standard methods such as empirical bayes with the task of numerical uncertainty quantification demands further theoretical research xu and stein theory for probabilistic integration further theoretical work is required our results did not address coverage at finite sample size nor the interaction of coverage with methods for kernel parameter estimation a particularly important question recently addressed in kanagawa et al is the behaviour of bc when the integrand does not belong to the posited rkhs prior specification a broad discussion is required on what prior information should be included and what information should be ignored indeed practical considerations essentially always demand that some aspects of prior information are ignored competing computational statistical and philosophical considerations are all in play and must be balanced for example the rkhs framework that we studied in this paper has the advantage of providing a flexible way to encode prior knowledge about the integrand allowing to specify properties such as smoothness periodicity and effective on the other hand several important properties including boundedness are less easily encoded for bc the possibility for importance sampling eqn has an element of arbitrariness that appears to preclude the pursuit of a default prior even within the rkhs framework there is the issue that integrands f will usually belong to an infinitude of rkhs selecting an appropriate kernel is arguably 
the central open challenge for qmc research at present from a practical perspective elicitation of priors over infinitedimensional spaces in a hard problem an adequate choice of prior can be very informative for the numerical scheme and can significantly improve the convergence rates of the method methods for choosing the kernel automatically could be useful here duvenaud but would need to be considered against their suitability for providing uncertainty quantification for the integral the list above is not meant to be exhaustive but highlights the many areas of research that are yet to be explored acknowledgements the authors are grateful for the expert feedback received from the associate editor and reviewers as well as from barp cockayne dick duvenaud gelman hennig kanagawa kronander meng owen robert schwab simpson skilling sullivan tan teckentrup and zhu the authors thank lan and marques for providing code used in case studies and fxb was supported by the epsrc grant cjo was supported by the arc centre of excellence for mathematical and statistical frontiers acems mg was supported by the epsrc grant an epsrc established career fellowship the eu grant and a royal society wolfson research merit award this work was also supported by the alan turing institute under the epsrc grant and the programme on engineering finally this material was also based upon work partially supported by the national science foundation nsf under grant to the statistical and applied mathematical sciences institute any opinions findings and conclusions or recommendations expressed in this material are those of the author s and do not necessarily reflect the views of the nsf references bach on the equivalence between quadrature rules and random features mach learn bakhvalov on the approximate calculation of multiple integrals in russian vestnik mgu ser math mech astron phys bauer devroye kohler krzyzak and walk nonparametric estimation of a function from noiseless observations at random points multivariate berlinet and reproducing kernel hilbert spaces in probability and statistics springer science business media new york bogachev i gaussian measures mathematical surveys and monographs american mathematical society bondarenko radchenko and viazovska optimal asymptotic bounds for spherical designs ann briol oates girolami and osborne a bayesian quadrature probabilistic integration with theoretical guarantees in proc adv neur in nips brouillat bouville loos hansen and bouatouch a bayesian monte carlo approach to global illumination comp graph forum buchholz and chopin improving approximate bayesian computation via quasi monte carlo calderhead and girolami estimating bayes factors via thermodynamic integration and population mcmc comput statist data cialenco fasshauer ye q approximation of stochastic partial differential equations by a collocation method int comput cockayne oates sullivan and girolami bayesian probabilistic numerical methods diaconis bayesian numerical analysis statist decis theory rel top iv dick and pillichshammer digital nets and sequences discrepancy theory and carlo integration cambridge university press dick kuo and sloan integration the carlo way acta duvenaud automatic model construction with gaussian processes phd thesis university of cambridge efron and tibshirani j an introduction to the bootstrap crc press eftang and stamm b parameter hp empirical interpolation int numer methods friel and pettitt a marginal likelihood estimation via power posteriors stat soc ser stat friel hurn and wyse j 
improving power posterior estimation of statistical evidence stat gelman and meng simulating normalizing constants from importance sampling to bridge sampling to path sampling statist gerber and chopin sequential carlo stat soc ser stat girolami and calderhead b riemann manifold langevin and hamiltonian monte carlo methods stat soc ser stat gunter garnett osborne hennig and roberts sampling for inference in probabilistic models with fast bayesian quadrature in proc adv neur hajizadeh christie and demyanov ant colony optimization for history matching and uncertainty quantification of reservoir models j petrol sci hennig osborne and girolami probabilistic numerics and uncertainty in computations roy soc a hickernell j a generalized discrepancy and quadrature error bound math hickernell lemieux and owen a b control variates for carlo statist hug schwarzfischer hasenauer marr and theis j an adaptive scheduling scheme for calculating bayes factors with thermodynamic integration using simpson s rule stat huszar and duvenaud herding is bayesian quadrature in proc uncertainty in artificial intelligence uai kadane j b parallel and sequential computation a statistician s view j complexity kadane j and wasilkowski average case in computer science a bayesian view in bayesian kallenberg o foundations of modern probability second edition probability and its applications springer kanagawa sriperumbudur and fukumizu convergence guarantees for quadrature rules in misspecified settings in proc adv neur in nips kanagawa sriperumbudur and fukumizu convergence analysis of deterministic quadrature rules in misspecified settings karvonen and fully symmetric kernel quadrature karvonen and classical quadrature rules via gaussian processes ieee international workshop on machine learning for signal processing to appear kennedy bayesian quadrature with approximating functions stat kristoffersen the empirical interpolation method master s thesis department of mathematical sciences norwegian university of science and technology kuo dunsmuir sloan wand and womersley carlo for highly structured generalised response models methodol comput appl lan christie and girolami emulation of tensors in manifold monte carlo methods for bayesian inverse problems comput larkin gaussian measure in hilbert space and applications in numerical analysis rocky mountain j marques bouville ribardiere santos and bouatouch a spherical gaussian framework for bayesian monte carlo rendering of glossy surfaces in ieee trans vis and comp marques bouville santos and bouatouch efficient quadrature rules for illumination integrals from quasi monte carlo to bayesian monte carlo synth lect comput graph animation minka deriving quadrature rules from gaussian processes technical report statistics department carnegie mellon university mohamed christie and demyanov comparison of stochastic sampling algorithms for uncertainty quantification spe journal mosbach and turner a quantitative probabilistic investigation into the accumulation of rounding errors in numerical ode solution comput math novak and tractability of multivariate problems volume i linear information ems publishing house ems tracts in mathematics novak and tractability of multivariate problems volume ii standard information for functionals ems publishing house ems tracts in mathematics o hagan a quadrature statist plann inference o hagan a some bayesian numerical analysis bayesian oates cockayne briol and girolami convergence rates for a class of estimators based on stein s identity oates 
papamarkou and girolami the controlled thermodynamic integral for bayesian model comparison amer statist oates girolami and chopin control functionals for monte carlo integration stat soc ser stat oates niederer lee briol and girolami probabilistic models for integration error in assessment of functional cardiac models in proc adv neur in nips to appear oates cockayne and aykroyd bayesian probabilistic numerical methods for industrial process monitoring oettershagen j construction of optimal cubature algorithms with applications to econometrics and uncertainty quantification phd thesis university of bonn osborne a bayesian gaussian processes for sequential prediction optimisation and quadrature phd thesis university of oxford osborne duvenaud garnett rasmussen roberts and ghahramani z active learning of model evidence using bayesian quadrature in proc adv neur in nips park scheidt fenwick boucher and caers j history matching and uncertainty quantification of facies models with multiple geological interpretations comput pharr and humphreys physically based rendering from theory to implementation morgan kaufmann publishers rasmussen and williams gaussian processes for machine learning mit press rasmussen and ghahramani z bayesian monte carlo in proc adv neur inf nips ritter analysis of numerical problems berlin heidelberg robert and casella monte carlo statistical methods springer science business media hartikainen svensson and sandblom on the relation between gaussian process quadratures and methods adv inf fusion and smola a learning with kernels support vector machines regularization optimization and beyond mit press sickel and ullrich tensor products of spaces and applications to approximation from the hyperbolic cross approx theory sinescu kuo and sloan on the choice of weights in a function space for carlo methods for a class of generalised response models in statistics in proc monte carlo and carlo methods smola gretton song and b a hilbert space embedding for distributions in proc conf algorithmic learn theory sommariva and vianello numerical cubature on scattered data by radial basis functions computing song learning via hilbert space embedding of distributions phd thesis school of information technologies university of sydney stein interpolation of spatial data some theory for kriging springer science business media stein predicting integrals of random fields using observations on a lattice ann stein locally lattice sampling designs for isotropic random fields ann steinwart and christmann a support vector machines springer science business media suldin a b wiener measure and its applications to approximation methods izvestiya vysshikh uchebnykh zavedenii matematika van der vaart and van zanten j frequentist coverage of adaptive nonparametric bayesian credible sets ann traub and wasilkowski complexity academic press wahba spline models for observational data regional conference series in applied mathematics wendland scattered data approximation cambridge university press zu and stein maximum likelihood estimation for a smooth gaussian random field model siam j uncertainty quantification yang and dunson b bayesian manifold regression ann yu debevec malik and hawkins inverse global illumination recovering reflectance models of real scenes from photographs in proc ann conf comput graph int supplement this supplement provides complete proofs for theoretical results extended numerics and full details to reproduce the experiments presented in the paper a proof of theoretical results proof 
of fact for a prior n m c and data xi fi standard conjugacy results for gps lead to the posterior pn n mn cn over l with mean mn x m x c x x c f m and covariance cn x c x c x x c c x see chap of rasmussen and williams then repeated application of fubini s theorem produces z en g en g mn z g vn g zzz zz z mn dpn g g x mn x g mn dpn g x cn x x the proof is completed by substituting the expressions for mn and cn into these two equations the result in the main text additionally sets m proof of fact from eqn in the main text kh for the converse inequality consider the specific integrand f then from the supremum definition of the dual norm f f kh now we use the reproducing property f f kf kh ih kf kh kh kh this completes the proof proof of fact combining fact with direct calculation gives that z zz n n x x wi wj k xi xj wi k x xi x k x x i w kw k x k as required the following lemma shows that probabilistic integrators provide a point estimate that is at least as good as their counterparts p lemma bayesian let f consider the cubature rule f wi f xi p and the corresponding bc rule f wibc f xi then proof this is immediate from fact which shows that the bc weights wibc are an optimal choice for the space the convergence of is controlled by quality of the approximation mn lemma regression bound let f h and fix states xi x then we have f f kf mn r of jensen s inequality f f f mn rproof is an application f mn kf mn as required note that this regression bound is not sharp in general ritter prop and as a consequence thm below is not quite optimal lemmas and refer to the point estimators provided by bc however we aim to quantify the change in probability mass as the number of samples increases lemma bc contraction assume f suppose that where as n define f f to be an interval of radius centred on the true value of the integral then pn g vanishes at the rate o exp proof assume without loss of generality that the posterior distribution over g is gaussian with mean mn and variance vn since vn we have vn now the r c posterior probability mass on is given by i c vn dr where vn is the of the n mn vn distribution from the definition of we get the upper bound z f z pn g vn dr vn dr f f m f m n n vn vn vn vn z z from the definition of the wce we have that the terms are bounded by kf kh so that asymptotically as we have pn g vn vn erfc the result follows from the fact that erfc x exp for x sufficiently small this result demonstrates that the posterior distribution is probability mass concentrates in a neighbourhood of f hence if our prior is well calibrated see sec the posterior provides uncertainty quantification over the solution of the integral as a result of performing a finite number n of integrand evaluations define the fill distance of the set x xi as hx sup min kx xi n as n the scaling of the fill distance is described by the following special case of lemma oates et al lemma let v be continuous monotone increasing and satisfy v and v x exp suppose further x d is bounded away from zero on x and x xi are samples from an uniformly ergodic markov chain targeting then we have ex v hx o v where can be arbitrarily small proof of thm initially consider fixed states x xi fixing the random seed and h from a standard result in functional approximation due to wu and schaback see also wendland thm there exists c and such that for all x x and hx x mn x kf kh for other kernels alternative bounds are wendland table we augment x with a finite number of states y yi m to ensure that always holds then from the regression bound 
lemma f f kf mn f x mn x x kf kh x kf kh it follows that now taking an expectation ex over the sample path x xi of the markov chain we have that ex cex cex from lemma above we have a scaling relationship such that for we have ex o for arbitrarily small from markov s inequality convergence in mean implies convergence in probability and thus using eqn we have op this completes the proof for h more generally if h is to then the result follows from the fact that for some proof of thm from theorem of dick and pillichshammer which assumes n the qmc rule based on a digital t m d net over zb for some prime b satisfies cd log n o for the sobolev space of dominating mixed smoothness order where cd is a constant that depends only on d and but not on n the result follows immediately from norm equivalence and lemma the contraction rate follows from lemma proof of prop denote by pn the posterior distribution on the integral conditional on a value of following prop this is a gaussian distribution with mean and variance given by en g x f vn g x x furthermore the posterior on the amplitude parameter satisfies p p f p f f exp n which corresponds to an distribution with parameters and f f we therefore have that g is distributed as and the marginal distribution for g is a distribution as claimed b kernel means in this section we propose approximate bayesian cubature a where the weights a wbc k a k x are an approximation to the optimal bc weights based on an approximation a k x of the kernel mean see also prop in sommariva and vianello the following lemma demonstrates that we can bound the contribution of this error and inflate our posterior pn a pn to reflect the additional uncertainty due to the approximation so that uncertainty quantification is still provided lemma approximate kernel mean consider an approximation a to of the form a pm w a j xj then bc can be performed analytically with respect to a denote this estimator by a moreover ka nka proof define z k x and a z a k x let a z z write a and consider pn bc a wi ka a n z z n x x bc bc k x x k x x a wi k xi a wi k xi h a wbc k a wbc wbc z k a z k k a z k a z z z k z z k z k use to denote the tensor product of rkhs now since a zi zi a xi xi a k xi ih we have k x k i a k xi h a k h i d e x a a k i k xi k i a x k i k xi k i from fact we have a kh ka so it remains to show that the second term is equal to indeed x k i k xi k i x h k i k l k xi k k xl k h i l x k i k l k il k tr kk kk i l this completes the proof under this method the posterior variance a vn g ka can not be computed in but computable can be obtained and these can then be used to propagate numerical uncertainty through the remainder of our statistical task the idea here is to make use of the triangle inequality ka ka a ka the first term on the rhs is now available analytically from fact its square is a k k x for the second term explicit upper bounds exist in the case where a k x k a states a xi are independent random samples from for instance from song thm we have for a radial kernel k uniform a wj and independent a xi r p log ka sup k x x m with probability at least for dependent a xj the m in eqn can be replaced with an estimate for the effective sample size write cn for a credible interval for f defined by the conservative upper bound described in eqns and then we conclude that cn is credible interval with probability at least note that even though the credible region has been inflated it still contracts to the truth since the first term on the rhs in lemma can be bounded by the sum of ka 
and ka both of which vanish as n m the resulting conservative posterior a pn can be viewed as a updating of beliefs based on an approximation to the likelihood function the statistical foundations of such an approach are made clear in the recent work of bissiri et al c additional numerics this section presents additional numerical results concerning the calibration of uncertainty for multiple parameters and in higher dimensions calibration in d in fig top row we study the quantification of uncertainty provided by eb in the same setup as in the main text but optimising over both parameter and magnitude parameter for both easy and hard test functions we notice that eb led to inferences in the low n regime but attains approximately correct frequentist coverage for larger calibration in d the experiments of sec based on bmc were repeated in dimension d results are shown in fig bottom row clearly more integrand evaluations are required for eb to attain a good frequentist coverage of the credible intervals due to the curse of dimension however the frequentist coverage was reasonable for large n in this task empirical convergence assessment the convergence of bqmc was studied based on digital nets the theoretical rates provided in sec for this method are o for any figure gives the results obtained for d left and d right in the one dimensional case the o theoretical convergence rate is attained by the method in all cases p considered however in the d case the rates are not observed for the number n of evaluations considered this helps us demonstrate the important point that in addition to numerical conditioning the rates we provide are asymptotic and may require large values of n before being observed figure evaluation of uncertainty quantification provided by eb for both and results are shown for d top and d bottom coverage frequencies cn computed from top or bottom realisations were compared against notional bayesian credible regions for varying level left easy test function right hard test function posterior standard deviation posterior standard deviation n n figure empirical investigation of bqmc in d left and d right dimensions and a sobolev space of mixed dominating smoothness the results are obtained using tensor product kernels of smoothness red green and blue dotted lines represent the theoretical convergence rates established for each kernel the black line represents standard qmc kernel parameters were fixed to left and right d supplemental information for case studies case study mcmc in this paper we used the manifold langevin algorithm girolami and calderhead in combination with population mcmc population mcmc shares information across temperatures during sampling yet previous work has not leveraged evaluation of the f from one ti to inform estimates derived from other i in contrast this occurs naturally in the probabilistic integration framework as described in the main text here mcmc was used to generate a small number n of samples on a basis in order to simulate a scenario where numerical error in computation of marginal likelihood will be a temperature ladder with m rungs was employed for the same reason according to the recommendation of calderhead and girolami no convergence issues were experienced the same mcmc has previously been successfully used in oates et al prior elicitation here we motivate a prior for the unknown function g based on the work of calderhead and girolami who advocated the use of a schedule ti i m based on an extensive empirical comparison of possible 
schedules a good temperature schedule approximately satisfies the criterion ti ti on the basis that this allocates equal area to the portions of the curve g that lie between ti and controlling bias for the trapezium rule substituting ti into this optimality criterion produces ti i i m now letting i we obtain o formally treating as continuous and taking the m limit produces and so t from this we conclude that the transformed function h t g t is approximately stationary and can reasonably be assigned a stationary gp prior however in an importance sampling transformation we require that t has support over for this reason we took t in our experiment variance computation the covariance matrix can not be obtained in due to intractability of the kernel mean kf we therefore explored an approximation a such that plugging in a in place of provides an approximation to the posterior variance vn log p y for the likelihood this took the form a j a a kf a kf x a kf x where an empirical distribution a was employed based on the first m samples while the remaining samples x xi were reserved for the kernel computation this heuristic approach becomes exact as m in the sense that a j j but underestimates covariance at finite kernel choice in experiments below both kf and kh were taken to be gaussian covariance functions for example kf x x exp kx x parametrised by and this choice was made to capture smoothness of both integrands f and h involved for this application we found that while the parameters were possible to learn from data using eb the parameters required a large number of data to pin down therefore for these experiments we fixed mean fi j and mean hi in both cases the remaining kernel parameters were selected using eb data generation as a that captures the salient properties of model selection discussed in the main text we considered variable selection for logistic regression p n y pi yi pi logit pi xi d where the model mk specifies the active variables via the binary vector a model prior p was employed given a model mk the active parameters were endowed with independent priors n where here a single dataset of size n were generated from model with parameter as such the problem is there are in principle different models and the true model is not the selected model is thus sensitive to numerical error in the computation of pmarginal likelihood in practice we limited the model space to consider only models with this speeds up the computation and in this particular case only rules out models that have much lower posterior probability than the actual map model there were thus models being compared case study background on the model the teal south model is a pde computer model for an oil reservoir the model studied is on an grid with layers it has parameters representing physical quantities of interest these include horizontal permeabilities for each of the layers the vertical to horizontal permeability ratio aquifer strength rock compressibility and porosity for our experiments we used an emulator of the likelihood model documented in lan et al in order to speed up mcmc however this might be undesirable in general due to the additional uncertainty associated with the approximation in the results obtained kernel choice the numerical in sec results were obtained using a kernel given by k r exp where r kx which corresponds to the sobolev space we note that f is satisfied we used eb over the parameter but fixed the amplitude parameter to variance computation due to intractability of the posterior distribution 
the kernel mean is unavailable in closed form to overcome this the methodology in supplement b was employed to obtain an empirical estimate of the kernel mean half of the mcmc samples were used with bc weights to approximate the integral and the other half with mc weights to approximate the kernel mean eqn was used to upper bound the intractable bc posterior variance for the upper bound to hold states a xj must be independent samples from whereas here they were obtained using mcmc and were therefore not independent in order to ensure that mcmc samples were as independent as possible we employed sophisticated mcmc methodology developed by lan et al nevertheless we emphasise that there is a gap between theory and practice here that we hope to fill in future research for the results in this paper we fixed in eqn so that cn cn is essentially a credible interval a formal investigation into the theoretical properties of the uncertainty quantification studied by these methods is not provided in this paper case study kernel choice the canonical weighted sobolev space is defined by taking each of the component spaces hu to be sobolev spaces of dominating mixed smoothness the space hu is to a tensor product of sobolev spaces each with smoothness parameter constructed in this way is an rkhs with kernel x x y x bk xi bk xi k where the bk are bernoulli polynomials theoretical results in finite dimensions d we can construct a digital net that attains optimal qmc rates for weighted sobolev spaces theorem let h be an rkhs that is to then bqmc based on a digital t m d over zb attains the optimal rate o for any where n bm proof this follows by combining thm of dick and pillichshammer with lemma the qmc rules in theorem do not explicitly take into account the values of the weights an algorithm that tailors qmc states to specific weights is known as the component by component cbc algorithm further details can be found in kuo in principle the cbc algorithm can lead to improved rate constants in high dimensions because effort is not wasted in directions where f varies little but the computational overheads are also greater we did not consider cbc algorithms for bqmc in this paper note that the weighted hilbert space framework allows us to bound the wce independently p of dimension providing that sloan and this justifies the use of in this context further details are provided in sec of dick et al case study kernel choice the function spaces that we consider are sobolev spaces sd for p d d obtained using the reproducing kernel k x pl x x x x s where d l and pl are normalised gegenbauer polynomials brauchart et a particularly simple expression for the kernel in d and sobolev space can be obtained by taking along with l l where a l a a k x is the pochhammer symbol specifically these choices produce k x r s this kernel is associated with a tractable kernel mean x k x x x r and hence the initial error is also available x mmd mc bmc qmc bqmc number of states n figure application to global illumination integrals in computer graphics left a spherical over right the wce or for monte carlo mc bayesian mc bmc quasi mc qmc and bayesian qmc bqmc theoretical results the states xi could be generated with mc in that case analogous results to those obtained in sec can be obtained specifically from thm of brauchart et al and bayesian lemma classical mc leads to slow convergence op the regression bound argument lemma together with a functional approximation result in le gia et al thm gives a faster rate for bmc of op in dimension 
d rather than focus on mc methods we present results based on spherical qmc point sets we briefly introduce the conceptr of a spherical bondarenko et which is define pn n d as a set xi s satisfying sd f n f xi for all polynomials f sd r of degree at most f is the restriction to sd of a polynomial in the usual euclidean sense r theorem for all d there exists cd such that for all n cd td there exists a spherical on sd with n states moreover for and d the use of a spherical leads to a rate o proof this property of spherical follows from combining hesse and sloan bondarenko et al and lemma the rate in thm is for a deterministic method in brauchart et although explicit spherical are not currently known in approximately optimal point sets have been numerically to high accuracy additional theoretical results on point estimates can be found in fuselier et al in particular they consider the conditioning of the associated linear systems that must be solved to obtain bc weights numerical results in fig the value of the wce is for each of the four methods considered mc qmc bmc bqmc as the number of states increases both bmc and bqmc appear to attain the same rate for although bqmc provides a constant our experiments were based on such point sets provided by womersley on his website http accessed the environment map used in this example is freely available at http html accessed may factor improvement over bmc note that o was shown by brauchart et al to be for a deterministic method in the space references bissiri holmes and walker a general framework for updating belief distributions stat soc ser stat brauchart saff sloan and womersley qmc designs optimal order quasi monte carlo integration schemes on the sphere math fuselier hangelbroek narcowich ward and wright b kernel based quadrature on spheres and other homogeneous spaces numer hesse and sloan i a errors in a sobolev space setting for cubature over the sphere bull aust math kuo y constructions achieve the optimal rate of convergence for multivariate integration in weighted korobov and sobolev spaces j complexity le gia sloan and wendland multiscale approximation for functions in arbitrary sobolev spaces by scaled radial basis functions on the unit sphere appl comput harmon sloan and when are carlo algorithms efficient for high dimensional integrals j complexity wu and schaback local error estimates for radial basis function interpolation of scattered data ima numer
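To make the Bayesian cubature construction in this supplement concrete, below is a minimal numpy sketch of the point estimate and posterior variance. The uniform measure on [0,1], a fixed amplitude of 1, and the kernel k(x,y) = 1 + min(x,y) are illustrative choices made here for tractability and are not taken from the paper; for this kernel the kernel mean is z(x) = 1 + x - x^2/2 and the squared initial error is 4/3.

import numpy as np

def bayesian_cubature(f, x, jitter=1e-10):
    # Gram matrix and closed-form kernel mean for k(x, y) = 1 + min(x, y) under U[0, 1]
    K = 1.0 + np.minimum.outer(x, x)
    z = 1.0 + x - 0.5 * x ** 2
    init_err_sq = 4.0 / 3.0                              # double integral of k over [0, 1]^2
    w = np.linalg.solve(K + jitter * np.eye(len(x)), z)  # BC weights w = K^{-1} z
    mean = w @ f(x)                                      # posterior mean of the integral
    var = init_err_sq - z @ w                            # posterior variance (amplitude fixed to 1)
    return mean, var

x = np.linspace(0.05, 0.95, 10)
mean, var = bayesian_cubature(np.exp, x)   # true value of the integral is e - 1
print(mean, var)

The posterior standard deviation sqrt(var) shrinks as further states are added, which is the quantity plotted against n in the empirical convergence figures above.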
10
cointegration of the daily electric power system load and the weather stefanov eso ead veslets sofia bulgaria szstefanov this paper makes a thermal predictive analysis of the electric power system security for a day ahead this predictive analysis is set as a thermal computation of the expected computation is obtained by cointegrating the daily electric power system load and the weather by finding the daily electric power system thermodynamics and by introducing tests for this thermodynamics the predictive analysis made shows the electricity consumers wisdom keywords predictive analysis security thermodynamics cointegration wisdom introduction the electric power system is affected by weather changes and by the exchanges with other eps the eps load is unpredictable the load of a eps is dynamically unpredictable under this model the load and the weather are cointegrated according to fezzi and have made a cointegration of the daily load and the wholesale price of electricity therefore there is cointegration of the daily eps load and the weather the load of a eps is thermodynamically unpredictable modelling the eps as a field is possible under the internal model principle from system theory under this model the eps is viewed as an open system under this model there is evolution of the eps behaviour the thermodynamic unpredictability diminishes when predicting rare events cooperative and competitive phenomena in the eps it is such type of events and phenomena that are predicted by the eps dispatchers an intelligent system can be viewed as a dissipative model of the brain dynamics from this intelligent system has a field computation and a field realization in the sense of that is why it is able to make a predictive analysis of the eps wehenkel and abed and have made a predictive analysis of the security of a networkmodelled eps from the data about the latter these analyses are incomplete they do not predict the change in the eps security caused by the evolution in the eps behaviour the aim of this paper is a thermal predictive analysis of the eps security for a day ahead this analysis is sought by means of cointegration of the daily eps load and the weather cointegration the daily eps load is modelled by one descriptive and two rescriptive models these models are constructed for a time whose moments are the calendar days the descriptive load model is a regression with indicators for the two load peaks a distributed lag which represents the load variability and a flow integrator in the load the regression with indicators is pat in denotes a impulse at the hour of the day here is the load which in the morning is equal to the afternoon load and in the afternoon to the yesterday morning load the distributed lag is for the last two regressors in the flow integrator leads to substitution in of the regressors and by the rescriptive load models are regressions with cointegration links a distributed lag which represents the load variability and a flow integrator in the load these regressions are pbt pct the distributed lag is for the last two regressors in respectively in the flow integrator leads to substitution in and of the regressors and by here the regressor has the same meaning as in model is descriptive and models and rescriptive in the sense of the regression models the normal load behaviour and the regressions and the evolution load behaviour predicting the daily load by these three models is predicting by an ensemble of models according to under each of these three models the data are treated 
sequentially and not in parallel the data are treated in this way because the sequential models of daily load forecast a better forecast than the parallel ones the flow integration reduces the data that is why the regressions and give a more accurate load forecast the flow in the data presents the eps exchanges in the load models the daily eps load is econometrically modelled by the regressions and this is modelling of the changes in the load and in the environment these regressions are models of the dynamic unpredictability of the load daily eps thermodynamics the cointegration distance between the regressions pat and pbt respectively pct and pbt is by the angle respectively the angle arctan pat pbt pat pat pbt pbt arctan pct pbt pct pct pbt pbt here denotes the mean for the time and the time series pat pbt pct are obtained from the time series pat pbt pct by subtracting the mean the eps entropy s and the environment entropy s ln ln ln ln arcos exp arcos exp the eps recoherence is s s the recoherence is a positive quantity because the entropy s is monotonically increasing the environment decoherence is the decoherence is a negative quantity because the entropy monotonically decreasing the decoherence and recoherence are related by the inverse temperature viewing the decoherence and recoherence as a forward and a reverse process gives let the quantities be set as follows min pat pbt pct max pat pbt pct min pat pbt pct max pat pbt pct the eps work for a day is ln ln ln ln this work is obtained from the transient fluctuation here is the inverse temperature from testing the daily thermodynamics the time test of the daily eps thermodynamics are the maximum likelihood seasonal cointegration tests for daily let be the following times in i is an integer such that k is an integer such that m is an integer such that and n is an integer such that in and are from and and are from the times and are obtained for the eps evolution they are set by the parity violation under here the eps evolution follows a spiral equivalent to the spiral obtained by imel baev and under coarsening of a system with loops the time is a maximum likelihood statistic for cointegration its critical value is the critical value of at acceptance region at the first level of seasonal cointegration at sample size and under a basic regression model with a constant seasonal dummies and no trend then the critical value of the time is that value from among the values and compared to which the time is smaller the time is a maximum likelihood statistic for cointegration its critical value is the critical value of at acceptance region at the second level of seasonal cointegration at sample size and under a basic regression model with a constant seasonal dummies and no trend then the critical value of the time is that value from among the values compared to which the time is smaller the time is a maximum likelihood statistic for cointegration its critical value is the critical value of at acceptance region at the first level of seasonal cointegration at sample size and under a basic regression model with a constant seasonal dummies and no trend then the critical value of the time is that value from among the values compared to which the time is smaller the time is a maximum likelihood statistic for cointegration its critical value is the critical value of at acceptance region at the third level of seasonal cointegration at sample size and under a basic regression model with a constant seasonal dummies and no trend then the critical value of 
the time is that value from among the values compared to which the time is smaller the time is a maximum likelihood statistic for cointegration its critical value is the critical value of at acceptance region at the second level of seasonal cointegration at sample size and under a basic regression model with a constant seasonal dummies and no trend then the critical value of the time is that value from among the values compared to which the time is smaller the time test of the daily eps thermodynamics consists in a critical value check of each of the times the energy test of the daily eps thermodynamics is a test for an energy reserve r exp exp this test follows from the hypergeometric function from the presentation of energy as a hypergeometric and from the relations in the energy test of the daily eps thermodynamics consists in checking the positiveness of the reserve thermal computation the evolution behaviour can be presented as a statistical submanifold evolution surface using the reversible entropic the mean and the standard deviation of the eps evolution behaviour by cafaro and are cosh sinh cosh cosh here are angles from and is work from the diffusion of the eps evolution behaviour gives the following expected daily prices of electricity in is the inverse temperature from in is the reserve from in if and if here is from these daily prices of electricity have been found as prices on a market in uncertainty by pennock and the multiplication by ten in is photographic enlargement made by of the price on the market in uncertainty to a price on the electricity market for a day ahead these three prices set the following prices cm min cs max the expected eps reliability with respect to a rare event and a competitive phenomenon is pr ca cs if ca cs pr cs ca if ca cs this reliability is found as a jordan curve descriptor introduced by zuliani and the expected eps reliability with respect to a cooperative phenomenon is pv pw pw cm ca if ca cs pw ca cm if ca cs this reliability is found as a realization descriptor introduced by daneev and the expected eps droop is kc here kc is set by the inverse temperature from for the eps scheme viewed as a euclidean by bannai and the expected daily price of electricity with respect to the eps reliability is ch this daily price of electricity minimizes the eps lifetime variance in accordance with the computation by thermalisation reduces the daily mean error relative to the daily peak load of the load forecast by here is in is from is from is from and the normalization of is from the euclidean by bannai and the quantity is determined by the computational potential introduced by anders and the multiplication by ten in is photographic enlargement made by of the computational potential to an eps potential daily artificial dispatcher the daily artificial dispatcher dad is a field intelligent system which makes the thermal predictive analysis set out above this thermal predictive analysis is a predictive analysis of the eps security because it gives the expected eps load from and the expected electricity price from and the expected reserve from the expected droop by and the expected eps reliability from predicting the times is predicting the synchronization in the eps predicting the energy reserve is predicting the stability in the eps dad predicting the synchronization in the eps shows that dad perceives the cointegration dad predicting the stability in the eps shows that dad interprets the cointegration dad is then an intelligent system in the sense of this 
intelligence is wisdom because it consists in evasion and prediction indeed falling out of synchronization is evasion and stability is prediction the thermalisation in finding the eps security is a field computation according to and therefore dad is indeed a field intelligent system dad resource is heat that is why dad logic is the logic of resources girard linear logic this conclusion is natural because of the between and linear logic dad has the wisdom of electricity consumers dad presents the average belief of these consumers about the evolution in the eps behaviour based on an expected warming of the weather the consumers average belief is that the eps reliability with respect to a cooperative phenomenon is pv from the consumers average belief is that the eps reliability with respect to a rare event and a competitive phenomenon is pr from dad stakes the following part of its resources on the eps reliability pr with respect to a rare event and a competitive phenomenon sr pv prpv if pvpr prpv sr if pvpr prpv dad stakes the following part of its resources on the eps reliability pv with respect to a cooperative phenomenon sv if pvpr prpv sv pr pvpr if pvpr prpv thus dad as a rational forecast gambler in pr and pv are set by the sufficient conditions given by wolfers and under which prediction market prices coincide with the average beliefs among traders the quantity pr is obtained from the following equation pv in is the standard deviation from this quantity is set by the greatest positive root pro of the equation pr pro if pro pr pro if pro the quantity pv is obtained from the following equation pr in is the standard deviation from this quantity is set by the greatest positive root pvo of the equation pv if pvo pv if pvo dad checks the expected eps synchronization by the time test of the daily eps thermodynamics dad checks the expected eps stability by the energy test of the daily eps thermodynamics thus dad verifies its forecasts the results of dad dad operates to help the dispatchers of the bulgarian eps the regressions and are estimated from the hourly sampled values of the daily load and the dry bulb temperature for the preceding nine days as well as from an hourly sampled daily forecast of the dry bulb estimate these regressions use is made of the load data supplied by the bulgarian electric power system operator eso ead and accuweather s weather forecast used by eso ead an estimate of the three regressions is obtained by the exact maximum likelihood method for dynamic regression estimation given by pesaran and this estimate is correct because the sample is and of small size the mean error relative to the daily peak load of the daily load forecast of the bulgarian eps obtained by dynamic regression is dad reduces this error by down to table gives the monthly average daily mean error relative to the daily peak load of the bulgarian eps load forecast made by dad this error is given for two months of the year choosing those months where the error is maximal table the monthly average daily mean error relative to the daily peak load of the bulgarian eps load forecast made by dad in percent year month mmre dad reduces the mean error relative to the daily peak load of the bulgarian eps daily load forecast by this reduction corresponds to a hypothetic increase in the average daily temperature by here it is assumed that a increase in the average daily temperature leads to a change in accordance with the results of crowley and a hypothetic warming of the weather by for a day ahead results in a 
correct prediction of the expected eps security by dad for example the expected zero reliability for a cooperative phenomenon gives a true prediction for an eps decoupling conclusion the aim of this paper is a thermal predictive analysis of the eps security for a day ahead this aim has been achieved as follows one descriptive and two rescriptive dynamic models for prediction of the daily load have been constructed these models have been obtained by cointegration of the daily eps load and the weather the daily eps thermodynamics has been found through the eps inverse temperature and through the eps work a time test and an energy test of the daily eps thermodynamics have been obtained thermal computation of the expected eps security for a day ahead has been made the predictive analysis of the eps security for a day ahead has been presented as a field intelligent system that shows to the eps dispatchers what the electricity consumers wisdom is it has been shown that the proposed predictive analysis enhances the eps security by more accurate prediction of the eps load and by the prediction of critical phenomena references hendry unpredictability and the foundations of economic forecasting econometric society australasian meetings fezzi and bunn structural analysis of high frequency electricity demand and supply interactions london business school working paper freeman and vitiello brain dynamics dissipation and spontaneous breakdown of symmetry arxiv maclennan field computation in natural and artificial intelligence technical report wehenkel and pavella preventive emergency control of power systems proc ieee pcse conf abed namachchivaya overbye pai sauer and sussman power system operations proc int conf comput part iii gurfil gauge theory for dynamical systems arxiv feinberg and genethliou load forecasting in applied mathematics for restructured electric power systems optimization control and computational intelligence eds chow wu and momoh springer new york fay ringwood condon and kelly electrical load data a sequential or partitioned time series neurocomputing urbanowicz and j noise reduction for flows using nonlinear constraints arxiv vincente pereira leite and caticha visualizing long term ecomonic relationships with cointegration maps arxiv and kim de sitter group as a symmetry for optical decoherence phys a math jarzynski rare events and the convergence of exponentially averaged work values arxiv kurchan work relations arxiv maximum likelihood seasonal cointegration tests for daily data economics bulletin asadov and kechkin parity violation and arrow of time in generalized quantum dynamics arxiv sh imel baev and chernysh coarsening of systems with loops in organizational control and artificial intelligence proc isa ran eds arlazarov and emil ianov editorial urss moskva in russian anderson vamanamurthy and vuorinen generalized convexity and inequalities arxiv belokolos eilbeck enolskii and salerno exact energy bands and fermi surfaces of separables abelian potentials phys a math vainstein and rubi gaussian noise and symmetry in langevin models arxiv cafaro ali and guffin an application of reversible entropic dynamics on curved statistical manifolds arxiv pennock lawrence giles and nielsen the power of play efficiency and forecast accuracy in web market games nec research institute technical report grenander pattern synthesis lectures in pattern theory springer new york zuliani kenney bhagavathy and manjunath drums and curve descriptors british machine vision conf uk daneev rusanov and yu 
sharpinski nonstationary realization in terms of the operator kibernetika i sistemny analiz in rurssian bannai and bannai on euclidian tight j math soc japan and momot reliability design of complex systems by minimizing the lifetime variance int appl math comput sci anders markham vedral and how much of computation is just thermodynamics arxiv stefanov general theory of intelligent systems cybernetics and systems blass propositional connectives and the set theory of the continuum cwi quarterly piotrowski and luczka the relativistic velocity addition law optimizes a forecast gambler s profit arxiv wolfers and zitzewitz prediction markets in theory and practice iza discussion paper pesaran and slater dynamic regression theory and algorithms ellis horwood chichester crowley and joutz weather effects on electricity loads modeling and forecasting final report for us epa on weather effects on electricity loads
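As a concrete illustration of the first step above, checking that the daily EPS load and the weather are cointegrated, below is a minimal Python sketch using an Engle-Granger two-step test from statsmodels. This is a simpler substitute for, not a reimplementation of, the maximum likelihood seasonal cointegration tests used in this paper, and the series names are placeholders.

import numpy as np
from statsmodels.tsa.stattools import coint

def load_weather_cointegrated(load, temperature, alpha=0.05):
    # H0: the two series are not cointegrated; reject when the p-value is small
    t_stat, p_value, _ = coint(np.asarray(load), np.asarray(temperature))
    return p_value < alpha, t_stat, p_value

# usage with hourly-sampled series of equal length:
# ok, t_stat, p = load_weather_cointegrated(daily_load_series, dry_bulb_temperature_series)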
5
published as a conference paper at iclr t raining wide residual networks for deployment using a single bit for each weight feb mark mcdonnell computational learning systems laboratory school of information technology and mathematical sciences university of south australia mawson lakes sa australia a bstract for fast and deployment of trained deep neural networks on embedded hardware each learned weight parameter should ideally be represented and stored using a single bit usually increase when this requirement is imposed here we report large improvements in error rates on multiple datasets for deep convolutional neural networks deployed with using wide residual networks as our main baseline our approach simplifies existing methods that binarize weights by applying the sign function in training we apply scaling factors for each layer with constant unlearned values equal to the standard deviations used for initialization for and imagenet and models with requiring less than mb of parameter memory we achieve error rates of and respectively we also considered mnist svhn and achieving test results of and respectively for cifar our error rates halve previously reported values and are within about of our errorrates for the same network with weights for networks that overfit we also show significant improvements in error rate by not learning batch normalization scale and offset parameters this applies to both full precision and networks using a schedule we found that training for is just as fast as networks with better accuracy than standard schedules and achieved about of peak performance in just training epochs for for full training code and trained models in matlab keras and pytorch see https i ntroduction fast parallel computing resources namely gpus have been integral to the resurgence of deep neural networks and their ascendancy to becoming methodologies for many computer vision tasks however gpus are both expensive and wasteful in terms of their energy requirements they typically compute using floating point bits which has now been recognized as providing far more precision than needed for deep neural networks moreover training and deployment can require the availability of large amounts of memory both for storage of trained models and for operational ram if methods are to become embedded in resourceconstrained sensors devices and intelligent systems ranging from robotics to the to cars reliance on computing resources will need to be reduced to this end there has been increasing interest in finding methods that drive down the resource burden of modern deep neural networks existing methods typically exhibit good performance but for the this work was conducted in part during a hosted visit at the institute for neural computation university of california san diego and in part during a sabbatical period at consilium technology adelaide australia published as a conference paper at iclr ideal case of parameters processing still fall of error rates on important benchmarks in this paper we report a significant reduction in the gap see figure and results between convolutional neural networks cnns deployed using weights stored and applied using standard precision floating point and networks deployed using weights represented by a each in the process of developing our methods we also obtained significant improvements in obtained by versions of the cnns we used in addition to having application in custom hardware deploying deep networks networks deployed using have previously been shown pedersoli et to 
enable significant speedups on regular gpus although doing so is not yet possible using standard popular libraries aspects of this work was first communicated as a subset of the material in a workshop abstract and talk mcdonnell et svhn mnist imagenet single crop imagenet bwn on imagenet test error rate gap test error rate figure our gaps between using and all points except black crosses are data from some of our best results reported in this paper for each dataset black points are results on the full imagenet dataset in comparison with results of rastegari et al black crosses the notation and corresponds to network width see section r elated w ork r es n ets in a new form of cnn called a deep residual network or resnet he et was developed and used to set many new accuracy records on benchmarks in comparison with older cnns such as alexnet krizhevsky et and vggnet simonyan zisserman resnets achieve higher accuracy with far fewer learned parameters and flops operations per image processed the key to reducing the number of parameters in resnets was to replace layers in nets with layers that have no learned parameters lin et springenberg et while simultaneously training a much deeper network than previously the key new idea that enabled a deeper network to be trained effectively was the introduction of he et many variations of resnets have since been proposed resnets offer the virtue of simplicity and given the motivation for deployment in custom hardware we have chosen them as our primary focus despite the increased efficiency in parameter usage similar to other cnns the accuracy of resnets still tends to increase with the total number of parameters unlike other cnns increased accuracy can result either from deeper he et or wider networks zagoruyko komodakis published as a conference paper at iclr in this paper we use wide residual networks zagoruyko komodakis as they have been demonstrated to produce better accuracy in less training time than deeper networks r educing the memory burden of trained neural networks achieving the best accuracy and speed possible when deploying resnets or similar networks on mobile devices will require minimising the total number of bits transferred between memory and processors for a given number of parameters motivated by such considerations a lot of recent attention has been directed towards compressing the learned parameters model compression and reducing the precision of computations carried out by neural hubara et al for a more detailed literature review recently published strategies for model compression include reducing the precision number of bits used for numerical representation of weights in deployed networks by doing the same during training courbariaux et hubara et merolla et rastegari et reducing the number of weights in trained neural networks by pruning han et iandola et quantizing or compressing weights following training han et zhou et reducing the precision of computations performed in during inference courbariaux et hubara et merolla et rastegari et and modifying neural network architectures howard et a theoretical analysis of various methods proved results on the convergence of a variety of methods li et from this range of strategies we are focused on an approach that simultaneously contributes two desirable attributes simplicity in the sense that deployment of trained models immediately follows training without extra processing implementation of convolution operations can be achieved without multipliers as demonstrated by rastegari et al 
overall approach and summary of contributions our strategy for improving methods that enable inference with was threefold baseline we sought to begin with a baseline deep cnn variant with close to error rates at the time of commencement in the on and was held by wide residual networks zagoruyko komodakis so this was our starting point while subsequent approaches have exceeded their accuracy resnets offer superior simplicity which conforms with our third strategy in this list make minimal changes when training for we aimed to ensure that training for could be achieved with minimal changes to baseline training simplicity is desirable in custom hardware with custom hardware implementations in mind we sought to simplify the design of the baseline network and hence the version with weights as much as possible without sacrificing accuracy c ontributions to f ull recision w ide r es n ets although this paper is chiefly about we exceeded our objectives for the fullprecision baseline network and surpassed reported error rates for and using wide resnets zagoruyko komodakis this was achieved using just convolutional layers most prior work has demonstrated best wide resnet performance using layers our innovation that achieves a significant drop for and in wide resnets is to simply not learn the scale and offset factors in the layers while retaining the remaining attributes of these layers it is important that this is done in conjunction with exchanging the ordering of the final weight layer and the global average pooling layer see figure we observed this effect to be most pronounced for gaining around in rate but the method is advantageous only for networks that overfit when overfitting is not an issue such as for imagenet removing learning of parameters is only detrimental published as a conference paper at iclr c ontributions to deep cnn s with single bit weights for inference ours is the first study we are aware of to consider how the gap in for compared to weights changes with accuracy across a diverse range of image classification datasets figure our approach surpasses by a large margin all previously reported error rates for error rates halved for networks constrained to run with at inference time one reason we have achieved lower error rates for the case than previously is to start with a superior baseline network than in previous studies namely wide resnets however our approach also results in smaller error rate increases relative to error rates than previously while training requires the same number of epochs as for the case of weights our main innovation is to introduce a simple fixed scaling method for each convolutional layer that permits activations and gradients to flow through the network with minimum change in standard deviation in accordance with the principle underlying popular initialization methods he et we combine this with the use of a method loshchilov hutter that enables us to report results for the case in far fewer epochs of training than reported previously l earning a model with convolution weights t he sign of weights propagate but full precision weights are updated we follow the approach of courbariaux et al rastegari et al merolla et al in that we find good results when using at inference time if during training we apply the sign function to weights for the purpose of forward and backward propagation but update weights using sgd with gradients calculated using however previously reported methods for training using the sign of weights either need to train for many 
hundreds of epochs courbariaux et merolla et or use computationallycostly normalization scaling for each channel in each layer that changes for each minibatch during training the bwn method of rastegari et al we obtained our results using a simple alternative approach as we now describe w e scale the output of conv layers using a constant for each layer we begin by noting that the standard deviation of the sign of the weights in a convolutional layer with kernels of size f f will be close to assuming a mean of zero in contrast the standard deviation of layer i in networks is initialized in the method of he et al to p f where is the number of input channels to convolutional layer i and i l where l is the number of convolutional layers and for rgb inputs when applying the sign function alone there is a mismatch with the principled approach to controlling gradient and activation scaling through a deep network he et although the use of can still enable learning convergence is empirically slow and less effective to address this problem for training using the sign of weights we use the initialization method of he et al for the weights that are updated but also introduce a scaling applied to the sign of the p weights this scaling has a constant unlearned value equal to the initial standard deviation of f from the method of he et al this enables the standard deviation of information to be equal to the value it would have initially in networks in implementation during training we multiply the sign of the weights in each layer by this value for inference we do this multiplication using a scaling layer following the weight layer so that all weights in the network are stored using and and deployed using see https hence custom hardware implementations would be able to perform the model s convolutions without multipliers rastegari et and significant gpu speedups are also possible pedersoli et published as a conference paper at iclr the fact that we scale the weights explicitly during training is important although for forward and backward propagation it is equivalent to scale the input or output feature maps of a convolutional layer doing so also scales the calculated gradients with respect to weights since these are calculated by convolving input and output feature maps as a consequence learning is dramatically slower unless learning rates are introduced to cancel out the scaling our approach to this is similar to the bwn method of rastegari et al but our constant scaling method is faster and less complex in summary the only differences we make in comparison with training are as follows let wi be the tensor for the convolutional weights in the convolutional layer these weights are processed in the following way only for forward propagation and backward propagation not for weight updates s sign wi i l where fi is the spatial size of the convolutional kernel in layer i see figure full precision weights bn relu conv bn relu bit conv scale figure difference between our and networks the bit conv and scale layers are equivalent to the operations shown in eqn m ethods common to baseline and single bit networks n etwork a rchitecture our resnets use the and identity mapping approach of he et al for residual connections for imagenet we use an design as in he et al for all other datasets we mainly use a network but also report some results for layers each residual block includes two convolutional layers each preceded by batch normalization bn and rectified linear unit relu layers rather than train very deep 
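Below is a minimal PyTorch-style sketch of this constant-scaled sign-of-weights convolution, written as an illustration for this note rather than taken from the authors' released code. Full-precision weights are retained and updated by SGD, while forward and backward propagation see sign(W) multiplied by the fixed, unlearned He-initialization standard deviation sqrt(2 / (f^2 * n_in)).

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignScaledConv2d(nn.Conv2d):
    # k is an integer kernel size f; the per-layer scale is constant and never learned
    def __init__(self, in_ch, out_ch, k, **kwargs):
        super().__init__(in_ch, out_ch, k, bias=False, **kwargs)
        self.scale = math.sqrt(2.0 / (k * k * in_ch))

    def forward(self, x):
        w = self.weight                          # full-precision weights, updated by SGD
        w_bin = self.scale * torch.sign(w)       # values seen by forward/backward propagation
        w_ste = w + (w_bin - w).detach()         # straight-through: gradients flow back to w
        return F.conv2d(x, w_ste, None, self.stride, self.padding)

At deployment only the sign bits and the single per-layer constant need to be stored, so the convolution itself reduces to additions and subtractions, with one scalar scaling per layer applied afterwards.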
resnets we use wide residual networks wide resnets zagoruyko komodakis although zagoruyko komodakis and others reported that networks result in better test accuracy than networks we found for that just layers typically produces best results which is possibly due to our approach of not learning the scale and offset parameters our baseline resnet design used in most of our experiments see figures and has several differences in comparison to those of he et al zagoruyko komodakis these details are articulated in appendix a and are mostly for simplicity with little impact on accuracy the exception is our approach of not learning parameters t raining we trained our models following for most aspects the standard stochastic gradient descent methods used by zagoruyko komodakis for wide resnets specifically we use loss minibatches of size and momentum of both for learning weights and in situations where we learn scales and offsets for svhn and mnist where overfitting is evident in wide resnets we use a larger weight decay of for and full imagenet we use a weight decay of apart from one set of experiments where we added a simple extra approach called cutout we use standard light data augmentation including randomly flipping each image horizontally with probability for and for the two extra layers are counted when downsampling residual paths learn convolutional projections published as a conference paper at iclr residual block repeat times input bn relu conv bn relu conv bn relu conv bn relu conv bn gap sm figure wide resnet architecture the design is mostly a standard resnet he et zagoruyko komodakis the first convolutional layer conv and first or residual blocks have imagenet or other datasets output channels the next or blocks have or output channels and so on where k is the widening parameter the final convolutional layer is a convolutional layer that gives n output channels where n is the number of classes importantly this final convolutional layer is followed by bn prior to gap and softmax sm the blocks where the number of channels double are downsampling blocks details are depicted in figure that reduce each spatial dimension in the feature map by a factor of two the relu layer closest to the input is optional but when included it is best to learn the bn scale and offset in the subsequent layer downsampling residual blocks avg pool double channels stride using zero padding bn relu stride conv bn relu conv double channels figure downsampling blocks in wide resnet architecture as in a standard resnet he et zagoruyko komodakis downsampling convolution is used in the convolutional layers where the number of output channels increases the corresponding downsampling for skip connections is done in the same residual block unlike standard resnets we use an average pooling layer avg pool in the residual path when downsampling same datasets plus svhn we pad by pixels on all sides using random values between and and crop a patch from a random location in the resulting image for full imagenet we scale crop and flip as in he et al we did not use any we did not subtract the mean as the initial bn layer in our network performs this role and we did not use whitening or augment using color or brightness we use the initialization method of he et al we now describe important differences in our training approach compared to those usually reported batch norm scales offsets are not learned for svhn when training on and svhn in all layers except the first one at the input when a relu is used there we do not learn the 
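For concreteness, below is a minimal numpy sketch of the light augmentation just described. The pad width of 4 pixels and padding values drawn uniformly from 0-255 are the usual CIFAR settings and are assumptions here, since the exact numbers are not legible in this text; images are H x W x 3 uint8 arrays.

import numpy as np

def augment(img, pad=4, rng=np.random):
    if rng.rand() < 0.5:                           # random horizontal flip
        img = img[:, ::-1, :]
    h, w, c = img.shape
    canvas = rng.randint(0, 256, size=(h + 2 * pad, w + 2 * pad, c), dtype=np.uint8)
    canvas[pad:pad + h, pad:pad + w, :] = img      # pad with uniform random integers, not zeros
    top, left = rng.randint(0, 2 * pad + 1), rng.randint(0, 2 * pad + 1)
    return canvas[top:top + h, left:left + w, :]   # random crop back to the original size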
scale and offset factor instead initializating these to and in all channels and keeping those values through training note that we also do not learn any biases for convolutional layers the usual approach to setting the moments for use in layers in inference mode is to keep a running average through training when not learning parameters we found a small benefit in calculating the moments used in inference only after training had finished we simply form as many minibatches as possible from the full each with the same data augmentation as used during training applied and pass these through the trained network averaging the returned moments for each batch published as a conference paper at iclr for best results when using this method using matconvnet we found it necessary to ensure the parameter that is used to avoid divisions by zero is set to this is different to the way it is used in keras and other libraries o ur network s final weight layer is a convolutional layer a significant difference to the resnets of he et al zagoruyko komodakis is that we exchange the ordering of the global average pooling layer and the final weight layer so that our final weight layer becomes a convolutional layer with as many channels as there are classes in the training set this design is not new but it does seem to be new to resnets it corresponds to the architecture of lin et al which originated the global average pooling concept and also to that used by springenberg et al as with all other convolutional layers we follow this final layer with a layer the benefits of this in conjunction with not learning the scale and offset are described in the discussion section w e use a warm restart learning rate schedule we use a warm restarts learning rate schedule that has reported wide resnet results loshchilov hutter whilst also speeding up convergence the method constantly reduces the learning rate from to according to a cosine decay across a certain number of epochs and then repeats across twice as many epochs we restricted our attention to a maximum of epochs often just epochs and no more than for using this method which is the total number of epochs after reducing the learning rate from maximum to minimum through epochs then and epochs for we typically found that we could achieve test error rates after epochs within of the error rates achievable after or epochs e xperiments with cutout for in the literature most experiments with and use simple standard data augmentation consisting of randomly flipping each training image with probability and padding each image on all sides by pixels and then cropping a version of the image from a random location we use this augmentation although with the minor modification that we pad with uniform random integers between and rather than additionally we experimented with cutout devries taylor this involves randomly selecting a patch of each raw training image to remove the method was shown to combine with other methods to set the latest results on see table we found better results using larger cutout patches for than those reported by devries taylor hence for both and we choose patches of size following the method of devries taylor we ensured that all pixels are chosen for being included in a patch equally frequently throughout training by ensuring that if the chosen patch location is near the image border the patch impacts on the image only for the part of the patch inside the image differently to devries taylor as for our padding we use uniform random integers to replace the 
image pixel values in the location of the patches we did not apply cutout to other datasets r esults we conducted experiments on six databases four databases of rgb svhn and the full ilsvrc imagenet database russakovsky et as well as mnist lecun et details of the first three datasets can be found in many papers zagoruyko komodakis is a downsampled version of imagenet where the training and validation images are cropped using their annotated bounding boxes and then downsampled to chrabaszcz et see published as a conference paper at iclr http all experiments were carried out on a single using matlab with gpu acceleration from matconvnet and cudnn we report results for wide resnets which except when applied to imagenet are and wider than baseline resnets to use the terminology of zagoruyko komodakis where the baseline has channels in the layers at the first spatial scale we use notation of the form to denote wide resnets with convolutional layers and channels in the first spatial scale and hence width for the full imagenet dataset we use wide resnets with channels in the first spatial scale but given the standard resnet baseline is width this corresponds to width on this dataset zagoruyko komodakis we denote these networks as table lists our error rates for indicates indicates the superscript indicates standard crop and flip augmentation and indicates the use of cutout table lists error rates for svhn and full imagenet we did not use cutout on these datasets both and results are tabulated for and imagenet for imagenet we provide results for testing and also for testing in the latter the decision for each test image is obtained by averaging the softmax output after passing through the network times corresponding to crops obtained by rescaling to scales as described by he et al and from random positions at each scale our imagenet error rates are slightly lower than expected for a wide resnet according to the results of zagoruyko komodakis probably due to the fact we did not use color augmentation table for our approach applied to and weights resnet epochs params table for our approach applied to svhn and imagenet weights resnet epochs params svhn imagenet single crop single crop table shows comparison results from the original work on wide resnets and subsequent papers that have reduced error rates on the and datasets we also show the only results to our knowledge for the current for svhn without augmentation is huang et and with cutout augmentation is devries taylor our result for svhn is only a little short of these even though we used only a resnet with less than million parameters and only training epochs table shows comparison results for previous work that trains models by using the sign of weights during training additional results appear in hubara et al where activations are quantized and so the error rates are much larger published as a conference paper at iclr table test error rates for networks with less than parameters sorted by method wrn zagoruyko komodakis weights wrn this paper wrn chrabaszcz et full precision wrn this paper weights wrn cutout this paper wrn cutout devries taylor wrn dropout zagoruyko komodakis xie et al full precision wrn cutout this paper densenets huang et regularization gastaldi cutout devries taylor params table test error rates using at test time and propagation during training method bc courbariaux et weight binarization merolla et bwn googlenet rastegari et cai et bc with resnet adam li et bw with vgg cai et this paper single center crop svhn this paper 
scales random crops imagenet full imagenet full imagenet full imagenet full imagenet full imagenet inspection of tables to reveals that our baseline networks when trained with cutout surpass the performance of deeper wide resnets trained with dropout even without the use of cutout our network surpasses by over the error rate reported for essentially the same network by zagoruyko komodakis and is also better than previous wide resnet results on and as elaborated on in section this improved accuracy is due to our approach of not learning the scale and offset parameters for our networks we observe that there is always an accuracy gap compared to full precision networks this is discussed in section using cutout for reduces error rates as expected in comparison with training very wide resnets on as shown in table it is more effective to use cutout augmentation in the network to reduce the error rate while using only a quarter of the weights figure illustrates convergence and overfitting trends for for wide resnets and a comparison of the use of cutout in wide resnets clearly even for resnets the gap in error rate between full precision weights and is small also noticeable is that the method enables convergence to very good solutions after just epochs training longer to epochs reduces test error rates further by between and it can also be observed that the network is powerful enough to model the training sets to well over accuracy but the modelling power is reduced in the version particularly for the reduced modelling capacity for weights is consistent with the gap in rate performance between the and cases when using cutout training for longer gives improved error rates but when not using cutout epochs suffices for peak performance finally for mnist and a wide resnet without any data augmentation our method achieved after just epoch of training and after epochs whereas our method achieved and after and epochs in comparison was reported for the case by courbariaux et al and by hubara et al published as a conference paper at iclr error rate error rate test test test test train train train train training epoch training epoch figure convergence through training left each marker shows the error rates on the test set and the training set at the end of each cycle of the training method for resnets less than million parameters right each marker shows the test error rate for resnets with and without cutout indicates and indicates a blation studies for both our wide resnets and our versions benefit from our method of not learning scale and offset parameters accuracy in the case also benefits from our use of a training schedule loshchilov hutter to demonstrate the influence of these two aspects in figure we show for how the test error rate changes through training when either or both of these methods are not used we did not use cutout for the purpose of this figure the comparison drops the learning rate from to to after and epochs it is clear that our methods lower the final error rate by around absolute by not learning the parameters the method enables faster convergence for the case but is not significant in reducing the error rate however for the case it is clear that for best results it is best to both use and to not learn parameters warm restart no learn bn our paper warm restart learn bn no warm restart no learn bn no warm restart learn bn test error rate test error rate warm restart no learn bn our paper warm restart learn bn no warm restart no learn bn no warm restart learn bn training epoch 
training epoch figure influence of and not learning bn gain and offset left case right case published as a conference paper at iclr d iscussion t he accuracy gap for bit vs bit weights it is to be expected that a smaller error rate gap will result between the same network using fullprecision and when the test error rate on the full precision network gets smaller indeed tables and quantify how the gap in error rate between the and cases tends to grow as the in the network grows to further illustrate this trend for our approach we have plotted in figure the gap in error rates vs the error rate for the full precision case for some of the best performing networks for the six datasets we used strong conclusions from this data can only be made relative to alternative methods but ours is the first study to our knowledge to consider more than two datasets nevertheless we have also plotted in figure the error rate and the gap reported by rastegari et al for two different networks using their bwn weight method the reasons for the larger gaps for those points is unclear but what is clear is that better accuracy results in a smaller gap in all cases a challenge for further work is to derive theoretical bounds that predict the gap how the magnitude of the gap changes with full precision error rate is dependent on many factors including the method used to generate models with for rate cases the loss function throughout training is much higher for the case than the case and hence the network is not able to fit the training set as well as the one whether this is because of the loss of precision in weights or due to the mismatch in gradients inherent is propagating with weights and updating weights during training is an open question if it is the latter case then it is possible that principled refinements to the weight update method we used will further reduce the gap however it is also interesting that for our networks applied to that the gap is much smaller despite no benefits in the full precision case from extra depth and this also warrants further investigation c omparison with the bwn method our approach differs from the bwn method of rastegari et al for two reasons first we do not need to calculate mean absolute weight values of the underlying full precision weights for each output channel in each layer following each minibatch and this enables faster training second we do not need to adjust for a gradient term corresponding to the appearance of each weight in the mean absolute value we found overall that the two methods work equally effectively but ours has two advantages faster training and fewer overall parameters as a note we found that the method of rastegari et al also works equally effectively on a basis rather than we also note that the focus of rastegari et al was much more on the case that combines activations and than solely it remains to be tested how our scaling method compares in that case it is also interesting to understand whether the use of renders scaling of the sign of weights robust to different scalings and whether networks that do not use might be more sensitive to the precise method used t he influence of not learning batch normalization parameters the unusual design choice of not learning the batch normalization parameters was made for svhn and mnist because for wide resnets overfitting is very evident on these datasets see figure by the end of training typically the loss function becomes very close to zero corresponding to severe overfitting inspired by regularization 
szegedy et that aims to reduce overconfidence following the softmax layer we hypothesized that imposing more control over the standard deviation of inputs to the softmax might have a similar regularizing effect this is why we removed the final layer of our resnets and replaced it with a convolutional layer followed by a layer prior to the global average pooling layer in turn not learning a scale and offset for this layer ensures that batches flowing into the softmax layer have standard deviations that do not grow throughout training which tends to increase the entropy of predictions following the softmax which is equivalent to lower confidence guo et published as a conference paper at iclr after observing success with these methods in wide resnets we then observed that learning the parameters in other layers also led to increased overfitting and increased test error rates and so removed that learning in all layers except the first one applied to the input when relu is used there as shown in figure there are significant benefits from this approach for both and networks it is why in table our results surpass those of zagoruyko komodakis on effectively the same wide resnet our network is essentially the same as the comparison network where the extra layers appear due to the use of learned convolutional projections in downsampling residual paths whereas we use average pooling instead as expected from the motivation we found our method is not appropriate for datasets such as where overfitting is not as evident in which case learning the batch normalization parameters significantly reduces test error rates c omparison with s queeze n et here we compare our approach with squeezenet iandola et which reported significant memory savings for a trained model relative to an alexnet the squeezenet approach uses two strategies to achieve this replacing many kernels with kernels deep compression han et regarding squeezenet strategy we note that squeezenet is an network that closely resembles the resnets used here we experimented briefly with our approach in many plain springenberg et squeezenet iandola et mobilenet howard et resnext xie et and found its effectiveness relative to baselines to be comparable for all variants we also observed in many experiments that the total number of learned parameters correlates very well with classification accuracy when we applied a squeezenet variant to we found that to obtain the same accuracy as our resnets for about the same depth we had to increase the width until the squeezenet had approximately the same number of learned parameters as the resnet we conclude that our method therefore reduces the model size of the baseline squeezenet architecture when no deep compression is used by a factor of albeit with an accuracy gap regarding squeezenet strategy the squeezenet paper reports deep compression han et was able to reduce the model size by approximately a factor of with no accuracy loss our method reduces the same model size by a factor of but with a small accuracy loss that gets larger as the accuracy gets smaller it would be interesting to explore whether deep compression might be applied to our models but our own focus is on methods that minimally alter training and we leave investigation of more complex methods for future work regarding squeezenet performance the best accuracy reported in the squeezenet paper for imagenet is error requiring for the model s weights our models achieve better than error and require mb for the model s weights l imitations and f 
urther w ork in this paper we focus only on reducing the precision of weights to a with benefits for model compression and enabling of inference with very few multiplications it is also interesting and desirable to reduce the computational load of inference using a trained model by carrying out layer computations in very few numbers of bits hubara et rastegari et cai et facilitating this requires modifying activations in a network from relus to quantized relus or in the extreme case binary step functions here we use only calculations it can be expected that combining our methods with reduced precision processing will inevitably increase error rates we have addressed this extension in a forthcoming submission acknowledgments this work was supported by a discovery project funded by the australian research council project number discussions and visit hosting by gert cauwenberghs and hesham mostafa of ucsd and van schaik and runchun wang of western sydney university are gratefully published as a conference paper at iclr acknowledged as are discussions with dr victor stamatescu and dr muhammad abul hasan of university of south australia r eferences cai x he j sun and vasconcelos deep learning with low precision by gaussian quantization corr url http chrabaszcz loshchilov and hutter a downsampled variant of imagenet as an alternative to the cifar datasets corr url http courbariaux bengio and david binaryconnect training deep neural networks with binary weights during propagations corr url http devries and taylor improved regularization of convolutional neural networks with cutout corr url http gastaldi regularization corr url http guo pleiss y sun and weinberger on calibration of modern neural networks corr url http han mao and dally deep compression compressing deep neural networks with pruning trained quantization and huffman coding corr url http he zhang ren and j sun delving deep into rectifiers surpassing performance on imagenet classification in proc ieee international conference on computer vision iccv see he zhang ren and j sun deep residual learning for image recognition technical report microsoft research he zhang ren and j sun identity mappings in deep residual networks technical report microsoft research howard zhu chen kalenichenko wang weyand andreetto and adam mobilenets efficient convolutional neural networks for mobile vision applications corr url http huang liu weinberger and van der maaten densely connected convolutional networks corr url http hubara courbariaux soudry and bengio quantized neural networks training neural networks with low precision weights and activations corr url http iandola moskewicz ashraf han dally and keutzer squeezenet accuracy with fewer parameters and model size corr url http krizhevsky sutskever and hinton imagenet classification with deep convolutional neural networks in nips neural information processing systems lake tahoe nevada lecun bottou bengio and haffner learning applied to document recognition proceedings of the ieee li de xu studer samet and goldstein training quantized nets a deeper understanding corr url http published as a conference paper at iclr lin chen and yan network in network corr url http loshchilov and hutter sgdr stochastic gradient descent with restarts corr url http mcdonnell wang and van schaik training and deployment of deep residual networks by stochastic binary quantization abstract and talk at the neuro inspired computational elements workshop nice held at ibm almaden march https merolla appuswamy arthur esser and modha 
deep neural networks are robust to weight binarization and other distortions corr url http pedersoli tzanetakis and tagliasacchi espresso efficient forward propagation for bcnns corr url http rastegari ordonez redmon and farhadi imagenet classification using binary convolutional neural networks corr url http russakovsky deng su krause satheesh ma huang karpathy khosla bernstein berg and imagenet large scale visual recognition challenge international journal of computer vision ijcv doi simonyan and zisserman very deep convolutional networks for image recognition corr url http springenberg dosovitskiy brox and riedmiller striving for simplicity the all convolutional net corr url http szegedy vanhoucke ioffe shlens and wojna rethinking the inception architecture for computer vision corr url http xie girshick tu and he aggregated residual transformations for deep neural networks corr url http zagoruyko and komodakis wide residual networks zhou yao guo xu and chen incremental network quantization towards lossless cnns with weights corr url http a d ifferences from s tandard r es n ets in our baseline o ur network downsamples the residual path using average pooling for skip connections between feature maps of different sizes we use for increasing the number of channels as per option of he et al however for the residual pathway we use average pooling using a kernel of size with stride for downsampling instead of typical lossy downsampling that discards all pixel values in between samples published as a conference paper at iclr w e use batch normalization applied to the input layer the literature has reported various options for the optimal ordering usage and placement of bn and relu layers in residual networks following he et al zagoruyko komodakis we precede convolutional layers with the combination of bn followed by relu however different to he et al zagoruyko komodakis we also insert bn and optional relu immediately after the input layer and before the first convolutional layer when the optional relu is used unlike all other layers we enable learning of the scale and offset factors this first bn layer enables us to avoid doing any on the inputs to the network since the bn layer provides necessary normalization when the optional relu is included we found that the learned offset ensures the input to the first relu is never negative in accordance with our strategy of simplicity all weight layers can be thought of as a block of three operations in the same sequence as indicated in figure conceptually followed by relu can be thought of as a single layer consisting of a relu that adaptively changes its centre point and positive slope for each channel and relative to each we also precede the global average pooling layer by a bn layer but do not use a relu at this point since nonlinear activation is provided by the softmax layer we found including the relu leads to differences early in training but not by the completion of training o ur first conv layer has as many channels as the first residual block the wide resnet of zagoruyko komodakis is specified as having a first convolutional layer that always has a constant number of output channels even when the number of output channels for other layers increases we found there is no need to impose this constraint and instead always allow the first layer to share the same number of output channels as all blocks at the first spatial scale the increase in the total number of parameters from doing this is small relative to the total number of parameters since 
the number of input channels to the first layer is just the benefit of this change is increased simplicity in the network definition by ensuring one fewer change in the dimensionality of the residual pathway b c omparison of r esidual n etworks and p lain cnn s we were interested in understanding whether the good results we achieved for weights were a consequence of the skip connections in residual networks we therefore applied our method to plain networks identical to our residual networks except with the skip connections removed initially training indicated a much slower convergence we found that altering the initial weights standard deviations to be proportional to instead of helped so this was the only other change made the change was also applied in equation as summarized in figure we found that convergence remained slower than our resnets but there was only a small accuracy penalty in comparison with resnets after epochs of training this is consistent with the findings of he et al where only resnets deeper than about layers showed a significant advantage over plain networks this experiment and others we have done support our view that our method is not particular to resnets published as a conference paper at iclr classification error rate epochs figure residual networks compared with networks the data in this figure is for networks with width with about million learned parameters
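Referring back to the remark in Appendix A that a BN layer followed by a ReLU can be viewed as a single ReLU that adaptively changes its centre point and positive slope for each channel, a short derivation makes this precise. Assume BN in inference mode for channel c, with running mean \mu_c, running variance \sigma_c^2, the usual small constant \epsilon used to avoid division by zero, and (as in our networks) no learned scale or offset:

\mathrm{ReLU}\big(\mathrm{BN}(x_c)\big) = \max\!\left(0,\ \frac{x_c - \mu_c}{\sqrt{\sigma_c^2 + \epsilon}}\right) = \frac{1}{\sqrt{\sigma_c^2 + \epsilon}}\,\max\!\left(0,\ x_c - \mu_c\right),

so the pair behaves like a ReLU whose threshold (centre point) is \mu_c and whose positive slope is 1/\sqrt{\sigma_c^2 + \epsilon}, both set per channel by the data statistics rather than by learned parameters.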
sandboxing for javascript technical report matthias keil and peter thiemann university of freiburg freiburg germany keilr thiemann jan abstract today s javascript applications are composed of scripts from different origins that are loaded at run time as not all of these origins are equally trusted the execution of these scripts should be isolated from one another however some scripts must access the application state and some may be allowed to change it while preserving the confidentiality and integrity constraints of the application this paper presents design and implementation of decentjs a sandbox for full javascript it enables scripts to run in a configurable degree of isolation with access control it provides a transactional scope in which effects are logged for review by the access control policy after inspection of the log effects can be committed to the application state or rolled back the implementation relies on javascript proxies to guarantee full interposition for the full language and for all code including dynamically loaded scripts and code injected via eval its only restriction is that scripts must be compliant with javascript s strict mode acm subject classification security and protection keywords and phrases javascript sandbox proxy introduction javascript is used by of all the websites most of them rely on libraries for connecting to social networks feature extensions or advertisement some of these libraries are packaged with the application but others are loaded at run time from origins of different trustworthiness sometimes depending on user input to compensate for different levels of trust the execution of dynamically loaded code should be isolated from the application state today s state of the art in securing javascript applications that include code from different origins is an choice browsers apply protection mechanisms such as the policy or the signed script policy so that scripts either run in isolation or gain full access while script isolation guarantees noninterference with the working of the application as well as preservation of data integrity and confidentiality there are scripts that must have access to part of the application state to function meaningfully as all included scripts run with the same authority the application script can not exert control over the use of data by an included script thus managing untrusted javascript code has become one of the key challenges of present research on javascript existing approaches are either based on restricting javascript code to a statically verifiable language subset according to http status as of march matthias keil and peter thiemann licensed under creative commons license leibniz international proceedings in informatics schloss dagstuhl informatik dagstuhl publishing germany sandboxing for javascript facebook s fbjs or yahoo s adsafe on enforcing an execution model that only forwards selected resources into an otherwise isolated compartment by filtering and rewriting like google s caja project or on implementing monitoring facilities inside the javascript engine however these approaches have known deficiencies the first two need to restrict usage of javascript s dynamic features they do not apply to code generated at run time and they require extra maintenance efforts because their analysis and transformation needs to be kept in sync with the evolution of the language implementing monitoring in the javascript engine is fragile and incomplete while efficient such a solution only works for one engine and it 
is hard to maintain due to the high activity in engine development and optimization contributions we present the design and implementation of decentjs a sandbox for javascript which enforces noninterference integrity and confidentiality by monitoring its design is inspired by revocable references and spidermonkey s compartment concept compartments create a separate memory heap for each website a technique initially introduced to optimize garbage collection all objects created by a website are only allowed to touch objects in the same compartment proxies are the only objects that can cross the compartment boundaries they are used as cross compartment wrappers to make objects accessible in other compartments decentjs adapts spidermonkey s compartment concept each sandbox implements a fresh scope to run code in isolation to the application state proxies implement a membrane to guarantee full interposition and to make objects accessible inside of a sandbox outline of this paper the paper is organized as follows section introduces decentjs s facilities from a programmer s point of view section recalls proxies and membranes from related work and explains the principles underlying the implementation section discusses decentjs s limitations and section reports our experiences from applying sandboxing to a set of benchmark programs finally section concludes appendix a presents an example demonstrating the sandbox hosting a library appendix b shows some example scenarios that already use the implemented system appendix c shows the operational semantics of a core calculus with sandboxing appendix d states some technical results appendix e discusses related work and appendix reports our experiences from applying sandboxing to a set of benchmark programs sandboxing a primer this section introduces sandboxing and shows a series of examples that explains how sandboxing works transactional sandboxing is inspired by the idea of transaction processing in database systems and transactional memory each sandbox implements a transactional scope the content of which can be examined committed or rolled back central to our sandbox is the implementation of a membrane on values that cross the sandbox boundary the membrane supplies effect monitoring and guarantees noninterference moreover it features identity preservation and handles shadow objects shadow objects allow modifications of objects without effecting there origins the modified version keil and thiemann function node value left right value left right function return function heightof node return heightof heightof node right function setvalue node if node node setvalue setvalue figure implementation of node each node object consists of a value field a left node and a right node its prototype provides a tostring method that returns a string representation function heightof computes the height of a node and function setvalue replaces the value field of a node by its height recursively is only visible inside of the sandbox and different sandbox environments may manipulate the same object sandboxing provides transactions a unit of effects that represent the set of modifications write effects on its membrane effects enable to check for conflicts and differences to rollback particular modifications or to commit a modification to its origin the implementation of the system is available on the access we consider operations on binary trees as defined by node in figure along with some auxiliary functions as an example we perform operations on a tree consisting of one 
node and two leaves all value fields are initially var root new node new node new node next we create a new empty sandbox by calling the constructor sandbox its first parameter acts as the global object of the sandbox environment it is wrapped in a proxy to mediate all accesses and it is placed on top of the scope chain for code executing inside the sandbox the second parameter is a configuration object a sandbox is a first class value that can be used for several executions var sbx new sandbox this some parameters https sandboxing for javascript one use of a sandbox is to wrap invocations of function objects to this end the sandbox api provides methods call apply and bind analogous to methods from for example we may call setvalue on root inside of sbx setvalue this root the first argument of call is a function object that is decompiled and redefined inside the sandbox this step erases the function s free variable bindings and builds a new closure relative to the sandbox s global object the second argument the receiver object of the call as well as the actual arguments of the call are wrapped in proxies to make these objects accessible inside of the sandbox the wrapper proxies mediate access to their target objects outside the sandbox reads are forwarded to the target unless there are local modifications the return values are wrapped in proxies again writes produce a shadow value cf section that represents the modification of an object initially this modification is only visible to reads inside the sandbox native objects like the math object in line are also wrapped in a proxy but their methods can not be decompiled because there exists no string representation thus native methods must either be trusted or forbidden fortunately most native methods do not have side effects so we chose to trust them given all the wrapping and sandboxing the call in line did not modify the root object returns but calling tostring inside the sandbox shows the effect root return effect monitoring during execution each sandbox records the effects on objects that cross the sandbox membrane the resulting lists of effect objects are accessible through and which contain all effects read effects and write effects respectively all three lists offer query methods to select the effects of a particular object heightof this root var rects this print effects of this function i e print e the code snippet above prints a list of all effects performed on this the global object by executing the heightof function on root the output shows the resulting accesses to heightof and math effects of this has get has get the first column shows a timestamp the second shows the name of the effect and the last column shows the name of the requested parameter the list does not contain write accesses to this but there are write effects to value from the previous invocation of setvalue keil and thiemann var wectso root print write effects of root function i e print e write effects of root set inspecting a sandbox the state inside and outside of a sandbox may diverge for different reasons we distinguish changes differences and conflicts a change indicates if the value has been changed with respect to the outside value a difference indicates if the outside value has been modified after the sandbox has concluded for example a difference to the previous execution of setvalue arises if we replace the left leaf element by a new subtree of height outside of the sandbox new node new node new node changes and differences can be examined using an api 
that is very similar to the effect api there are flags to check whether a sandbox has changes or differences as well as iterators over them a conflict arises in the comparison between different sandboxes two sandbox environments are in conflict if at least one sandbox modifies a value that is accessed by the other sandbox later on we consider only and conflicts to demonstrate conflicts we define a function appendright which adds a new subtree on the right function appendright node node a node b node c to recapitulate the global root is still unmodified and prints whereas the root in sbx prints now let s execute appendright in a new sandbox var new sandbox this some parameters appendright this root calling tostring in prints b a c however the sandboxes are not in conflict as the following command show returns false while both sandboxes manipulate root they manipulate different fields sbx recalculates the field value whereas replaces the field right neither reads data that has previously been written by the other sandbox however this situation changes if we call setvalue again which also modifies right setvalue this root var cofts returns a list of conflicts function i e print e it documents a conflict confict get set right sandboxing for javascript transaction processing the commit operation applies select effects from a sandbox to its target effects may be committed one at a time by calling commit on each effect object or all at once by calling commit on the sandbox object returns the rollback operation undoes an existing manipulation and returns to its previous configuration before the effect again rollbacks can be done on a basis or for the sandbox as a whole however a rollback did not remove the shadow object thus after rolling back the values are still shadow values in sbx returns tostring this root returns the revert operation resets the shadow object of a wrapped value the following code snippet reverts the root object in sbx root now root s shadow object is removed and the origin is visible again in the sandbox calling tostring inside of sbx returns sandbox encapsulation the implementation of decentjs builds on two foundations memory safety and reachability in a memory safe programming language a program can not access uninitialized memory or memory outside the range allocated to a datastructure an object reference serves as the right to access the resources managed by the object along with the memory allocated to it in javascript all resources are accessible via property read and write operations on objects thus controlling reads and writes is sufficient to control the resources decentjs ensures isolation of the actual program code by intercepting each operation that attempts to modify data visible outside the sandbox to achieve this behavior all functions and objects crossing the sandbox boundary are wrapped in a membrane to ensure that the sandboxed code can not modify them in any way this membrane is implemented using javascript proxies more precisely our implementation of sandboxing is inspired by revocable membranes and access control based on object capabilities identity preserving membranes keep the sandbox apart from the normal program execution we encapsulate objects passed through the membrane and redirect write operations to shadow objects section we encapsulate code section and we withhold external bindings from a function section no unprotected value is passed inside the sandbox proxies and membranes a proxy is an object intended to be used in place of a target object 
the proxy s behavior is controlled by a handler object that typically mediates access to the target object both target and handler may be proxy objects themselves the handler object contains trap functions that are called when a trapped operation is performed on the proxy operations like property read property write and function keil and thiemann handler target x proxy target x proxy proxy target target x target x figure proxy operations the operation invokes the trap target x proxy property get and the property set operation invokes target x proxy proxya x proxyb y targeta x targetb z proxyc y z targetc figure property access through an identity preserving membrane dashed line around target objects the property access through the wrapper returns a wrapper for the property access returns the same wrapper as application are forwarded to their corresponding trap the trap function may implement the operation arbitrarily for example by forwarding the operation to the target object the latter is the default behavior if the trap is not specified figure illustrates this situation with a handler that forwards all operations to the target a membrane is a regulated communication channel between an object and the rest of the program a membrane is implemented by a proxy that guards all operations on its target if the result of an operation is another object then it is recursively wrapped in a membrane before it is returned this way all objects accessed through an object behind the membrane are also behind the membrane common use cases of membranes are revoking all references to an object network at once or enforcing write protection on the objects behind the membrane figure shows a membrane for targeta implemented by wrapper proxya each property access through a wrapper returns a wrapped object after installing the membrane no new direct references to target objects behind the membrane become available an identity preserving membrane guarantees that no target object has more than one proxy thus proxy identity outside the membrane reflects target object identity inside for example if and refer to the same object sandboxing for javascript handler target x proxy target y proxy target y proxy proxy target shadow target x shadow y shadow y figure operations on a sandbox the property get operation invokes the trap handler get target x proxy which forwards the operation to the proxy s target the property set operation invokes the trap target y proxy which forwards the operation to a local shadow object the final property get operation is than also forwarded to the shadow object then and refer to the same wrapper object shadow objects our sandbox redefines the semantics of proxies to implement expanders an idea that allows a client side extension of properties without modifying the proxy s target a sandbox handler manages two objects a target object and a local shadow object the target object acts as a parent object for its proxy whereas the shadow object gathers local modifications write operations always take place on the shadow object a read operation first attempts to obtain the property from the shadow object if that fails the read gets forwarded to the target object figure illustrates this behavior which is very similar to javascript s prototype chain the sandboxed version of an object inherits everything from its outside cousin whereas modifications only appear inside the as sandbox encapsulation extends the functionality of a membrane each object visible inside the sandbox is either an object that was 
created inside or it is a wrapper for some outside object a special proxy wraps sandbox internal values whenever committing a value to the outside as shown in the last example this step mediates uses of a sandbox internal value in the outside this is form example required to wrap arguments values passed to committed sandbox function the wrapping guarantees that the sandbox never gets access to unprotected references to the outside sandbox scope apart from access restrictions protecting the global state from modification through the membrane is fundamental to guarantee noninterference to execute program code decentjs relies on an eval which is nested in a statement with sbxglobal body the with getter and setter functions require special treatment like other functions they are decompiled and then applied to the shadow object see section keil and thiemann var node new node some var sbx new sandbox this some parameters with sbxglobal function use strict function setvalue if node node setvalue setvalue figure scope chain installed by the sandbox when loading setvalue the dark box represents the global scope the dashed line indicates the sandbox boundary and the inner box shows the program code nested inside statement places the sandbox global on top of the current environment s scope chain while executing body this setup exploits that eval dynamically rebinds the free variables of its argument to whatever is in scope at its call site in this construction which is related to dynamic binding any property defined in sbxglobal shadows a variable deeper down in the scope chain we employ a proxy object in place of sbxglobal to control all variable accesses in the sandboxed code by trapping the sandbox global object s hasownproperty method when javascript traverses the scope chain to resolve a variable access it calls the method hasownproperty on the objects of the scope chain starting from the top inside the with statement the first object that is checked on this traversal is the proxied sandbox global if its hasownproperty method always returns true then the traversal stops here and the javascript engine sends all read and write operations for free variables to the sandbox global this way we obtain full interposition and the handler of the proxied sandbox global has complete control over the free variables in body figure visualizes the nested scopes created during the execution of setvalue as in the example from section the sandbox global sbxglobal is a wrapper for the actual global object which is used to access heightof and the library code is nested in an empty closure which provides a fresh scope for local functions and variables this step is required because javascript did not have standalone block scopes such as blocks in c or sandboxing for javascript global scope jscode jscode jscode figure nested sandboxes in an application the outer box represents the global application state containing javascript s global scope each sandbox has its own global object and the nested javascript code is defined to the sandbox global java variables and named functions created by the sandboxed code end up in this fresh scope this extra scope guarantees noninterference for dynamically loaded scripts that define global variables and functions the use strict declaration in front of the closure puts javascript in strict mode which ensures that the code can not obtain unprotected references to the global object figure shows the situation when instantiating different sandboxes during program execution every sandbox 
installs its own scope with a sandbox global on top of the scope chain scripts nested inside are defined with respect to the sandbox global the sandbox global mediates the access to javascript s global object its default implementation is empty to guarantee isolation however decentjs can grant access by making resources available in the sandbox global function recompilation in javascript functions have access to the variables and functions in the lexical scope in which the function was defined the mozilla says it remembers the environment in which it was calls to wrapped functions may still cause side effects through their free variables by modifying a variable or by calling another function thus sandboxing either has to erase external bindings of functions or it has to verify that a function is free of side effects the former alternative is the default in decentjs to remove bindings from functions passed through the membrane our protection mechanism decompiles the function and recompiles it inside the sandbox environment decompilation relies on the standard implementation of the tostring method of a javascript function that returns a string containing the source code of the function each use of an external function in a sandbox first decompiles it by calling its tostring method to bypass potential tampering we use a private copy of for this call next we apply eval to the resulting string to create a fresh variant of the function as explained in section this application of eval is nested in a with statement that supplies the desired environment decompilation also places a use strict statement in front to avoid a frequent decompilation and call of eval with respect to the same code our implementation caches the compiled function where applicable function created with function name body strict mode requires that a use of this inside a function is only valid if either the function was called as a method or a receiver object was specified explicitly using apply or call https keil and thiemann instead of recompiling a function we may use the string representation of a function to verify that a function is free of side effects for example by checking if the function s body is however it turns out that recompiling a function has a lower impact on the execution time than analyzing the function body functions without a string representation native functions like object or array can not be verified or sanitized before passing them through the membrane we can either trust these functions or rule them out to this end decentjs may be provided with a white list of trusted function objects in any case functions remain wrapped in a sandbox proxy to mediate property access in addition to normal function and method calls the access to a property that is bound to a getter or setter function needs to decompile or verify the getter or setter before its execution dom updates the document object model dom is an api for manipulating html and xml documents that underlie the rendering of a web page dom provides a representation of the document s content and it offers methods for changing its structure style content etc in javascript this api is implemented using special objects reachable from the document object unfortunately the document tree itself is not an object in the programming language thus it can not be wrapped for use inside of a sandbox the only possibility is to wrap the interfaces in particular the document object we grant access to the dom by binding the dom interfaces to the sandbox global when 
instantiating a new sandbox as all interfaces are wrapped in a sandbox proxy to mediate access there are a number of limitations by default dom nodes are accessed by calling query methods like getelementbyid on the document object effect logging recognizes these accesses as method calls rather than as operations on the dom all query functions are special native functions that do not have a string representation decompilation is not possible so that using a query function must be permitted explicitly through the white list a query function must be called as a method of an actual dom object implementing the corresponding interface thus dom objects can not be wrapped like other objects but they require a special wrapping that calls the method on the correct receiver object while read operations can be managed in this way write operations must either be forbidden or they affect the original dom thus guest code can modify the original dom unless the dom interface is restricted to operations with unrestricted operations it would be possible to insert new script elements in the document which loads scripts from the internet and executes them in the normal application state without further sandboxing however prohibiting write operation means that the majority of guest codes can not be executed in the sandbox to overcome this limitation decentjs provides guest code access to an emulated dom instead of the real one we rely on a javascript library emulating a full browser dom to implement a dom interface for scripts running in the sandbox this emulated dom is merged into the global sandbox object when executing scripts in ses a function can only cause side effects on values passed as an argument https sandboxing for javascript as this pseudo dom is constructed inside the sandbox it can be accessed and modified at will no special treatment is required however the pseudo dom is wrapped in a special membrane mediating all operations and performing effect logging on all dom elements as each sandbox owns a direct reference to the sandbox internal dom it provides the following features to the user the sandbox provides an interface to the sandbox internal dom and enables the host program to access all aspects of the dom this interface can control the data visible to the guest program a host can load a page template before evaluating guest code this template can be an arbitrary html document like the host s page or a blank web page as most libraries operate on page documents by reading or writing to a particular element this template can be used to create an environment guest code runs without restrictions for example guest code can introduce new script elements to load library code from the internet these libraries are loaded and executed inside the sandbox as well all operations on the interface objects are recorded for example the access to window when loading a document effects can examined using a suitable api cf section the host program can perform a inspection of the document tree it can search for changes and differences the host recognizes newly created dom elements and it can transfer content from the sandbox dom to the dom of the host program policies a policy is a guideline that prescribes whether an operation is allowed most existing sandbox systems come with a facility to define policies for example a policy may grant access to a certain resource it may grant the right to perform an operation or to cause a side effect our system does not provide access control policies in the manner known from 
other systems decentjs only provides the mechanism to implement an empty scope and to pass selected resources to this scope when a reference to a certain resource is made available inside the sandbox then it should be wrapped in another proxy membrane that enforces a suitable policy for example one may use this work s transactional membranes to shadow write operations access permission contracts to restrict the access on objects or revocable references to revoke access to the outside world discussion strict mode decentjs runs guest code in javascript s strict mode to rule out uncontrolled accesses to the global object this restriction may lead to dysfunctional guest code because semantics is subtly different from mode javascript however assuming strict mode is less restrictive than the restrictions imposed by other techniques that restrict javascript s dynamic features alternatively one could also provide a program transformation that guards uses of this that may access the global object keil and thiemann scopes decentjs places every load in its own scope hence variables and functions declared in one script are not visible to the execution of another script in the same sandbox indeed we deliberately keep scopes apart to avoid interference to enable communication decentjs offers a facility to load mutually dependant scripts into the same scope otherwise scripts may exchange data by writing to fields in the sandbox global object function decompilation if a top level closure is wrapped in a sandbox then its free valriables have to be declared to the sandbox or their bindings are removed decompilation may change the meaning of a function because it rebinds its free variables only pure functions can be decompiled without changing their meaning however decompiling preserves the semantics of a function if its free variables are imported in the sandbox the new closure formed within the sandbox may be closed over variables defined in that sandbox this task is rightfully manual as the availability of global bindings is part of a policy in conclusion decompilation is unavoidable to guarantee noninterference of a function defined in another scope as every property read operation may be the call of a getter function native functions decompilation requires a string that contains the source code of that function but calling the standard tostring method from does not work for all functions a native function does not have a string representation trust in a native function is regulated with a white list of trusted functions the method creates a new function with the same body but the first couple of arguments bound to the arguments of bind javascript does not provide a string representation for the newly created function object array and function initializer in javascript some objects can be initialized using a literal notation initializer notation examples are object literals using array objects using and function objects using the named or unnamed function expression function using the literal notation circumvents all restrictions and wrappings that we may have imposed on the object array and function constructors as we are not able to intercept the construction using the literal notation enables unprotected read access to the prototype objects and the newly created object always inherits from the corresponding prototype however we will never get access to the prototype object itself and we are not able to modify the prototype writes to the created objects always effect the object itself and are never 
forwarded to the prototype object a pure function is a function that only maps its input into an output without causing any observable side effect sandboxing for javascript even though all the elements contained in the native prototype objects are uncritical by default a global not sandboxed script could add sensitive data or a side effecting function to one of the prototype objects and thus bypass access to unprotected data function constructor the function constructor function creates a new function object based on the definition given as arguments in contrast to function statements and function expressions the function constructor ignores the surrounding scope the new function is always created in the global scope and calling it enables access to all global variables to prevent this leakage the sandbox never grants unwrapped access to javascipt s global function constructor even if the constructor is as a safe native function a special wrapping intercepts the operations and uses a safe way to construct a new function with respect to the sandbox noninterference the execution of sandboxed code should not interfere with the execution of application code that is the application should run as if no sandboxes were present this property is called noninterference ni by the security community the intuition is that sandboxed code runs at a lower level of security than application code and that the sandbox code must not be able to observe the results of the computation in the global scope decentjs guarantees integrity and confidentiality the default empty sandbox guarantees to run code in full isolation from the rest of the application whereas the sandbox global can provide protected references to the sandbox in summary the sandboxed code may try to write to an object that is visible to the application it may throw an exception or it may not terminate our membrane redirects all write operations in sandboxed code to local replicas and it captures all exceptions a timeout could be used to transform executions into an exception alas such a timeout can not be implemented in evaluation to evaluate our implementation we applied it to javascript benchmark programs from the google octane benchmark these benchmarks measure a javascript engine s performance by running a selection of complex and demanding programs benchmark programs run between and times google claims that octane measure s the performance of javascript code found in large web applications running on modern mobile and desktop browsers each benchmark is complex and demanding as expected the run time increases when executing a benchmark in a sandbox while some programs like earleyboyer navierstrokes mandreel and are heavily affected others are only slightly affected richards crypto regexp and code loading for instance the observed run time impact entirely depends on the number of values that cross the membrane the javascript timeout function only schedules a function to run when the currently running javascript some event it can not interrupt a running function https keil and thiemann from the running times we find that the sandbox itself causes an average slowdown of over all benchmarks this is more than acceptable compared to other systems the numbers also show that sandboxing with effect logging enabled causes an average slowdown of an additional factor of on top of pure sandboxing because the execution of program code inside of a sandbox is nothing else than a normal program execution inside of a with statement and with one additional call 
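To make the write-redirection mechanism behind this noninterference guarantee concrete, the following is a minimal sketch of a membrane handler in the style described above: reads fall through to the wrapped target, writes are redirected to a local shadow object, and the has trap answers positively so that scope-chain lookups stop at the sandbox global inside a with block. The names makeSandboxProxy and effects are placeholders chosen for exposition, not DecentJS's actual API; identity-preserving proxy caching and function recompilation are omitted.

function makeSandboxProxy(target, effects) {
  var shadow = Object.create(null);                  // per-proxy shadow object for local writes
  return new Proxy(target, {
    get: function (t, key) {
      effects.push({op: "get", key: String(key)});   // log the read effect
      if (key in shadow) return shadow[key];         // a local modification shadows the target
      var value = Reflect.get(t, key);
      if (typeof value === "object" && value !== null)
        return makeSandboxProxy(value, effects);     // objects crossing the membrane are wrapped
      return value;                                  // primitives (and, simplified here, functions) pass through
    },
    set: function (t, key, value) {
      effects.push({op: "set", key: String(key)});   // log the write effect
      shadow[key] = value;                           // never mutates the real target
      return true;
    },
    has: function () {
      return true;                                   // stop scope-chain traversal at the sandbox global
    }
  });
}

For example, after wrapping an outside object o, a sandboxed assignment wrapped.x = 1 makes wrapped.x read back 1 while o.x is unchanged, and the effects list records the set and the subsequent get for later inspection, commit, or rollback.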
to eval when instantiating the execution the impact is influenced by i the number of wrap operations of values that cross the membrane ii the number of decompile operations on functions and iii the number of effects on wrapped objects readouts from internal counters indicate that the heavily affected benchmarks raytrace mandreel and perform a very large number of effects the raytrace benchmark for example performs million effects overall an average slowdown of is more than acceptable compared to other languageembedded systems as octane is intended to measure the engine s performance benchmark programs run between and times we claim that it is the heaviest kind of benchmark every library jquery is less demanding and runs without an measurable runtime impact appendix f also contains the score values obtained from running the benchmark suite and lists the readouts of some internal counters conclusion decentjs runs javascript code in a configurable degree of isolation with access control rather than disallowing all access to the application state it provides full browser compatibility all browsers work without modifications as long as the proxy api is supported and it has a better performance than other systems additionally decentjs comes with the following features sandbox decentjs is a javascript library and all aspects are accessible through a sandbox api the library can be deployed as a language extension and requires no changes in the javascript system full interposition decentjs is implemented using javascript proxies the proxybased implementation guarantees full interposition for the full javascript language including all dynamic features with eval decentjs works for all code regardless of its origin including dynamically loaded code and code injected via eval no source code transformation or avoidance of javascript s dynamic features is required sandboxing a decentjs sandbox provides a transactional scope that logs all effects wrapper proxies make external objects accessible inside of the sandbox and enable sandbox internal modifications of the object hence sandboxed code runs as usual without noticing the sandbox effects reveal conflicts differences and changes with respect to another sandbox or the global state after inspection of the log effects can be committed to the application state or rolled back acknowledgments this work benefited from discussions with participants of the dagstuhl seminar scripting languages and frameworks analysis and verification in in particular tom van cutsem provided helpful advice on the internals of javascript proxies sandboxing for javascript references adsafe making javascript safe for advertising http agten acker brondsema phung desmet and piessens jsand complete sandboxing of javascript without browser modifications in zakon editor annual computer security applications conference acsac orlando fl usa december pages acm arnaud denker ducasse pollet bergel and suen execution for dynamic languages in vitek editor objects models components patterns international conference tools spain june july proceedings volume of lecture notes in computer science pages spain june springer semantics in felleisen and gardner editors esop volume of lecture notes in computer science pages rome italy mar springer dewald holz and freiling adsandbox sandboxing javascript to fight malicious websites in shin ossowski schumacher palakal and hung editors proceedings of the acm symposium on applied computing sac sierre switzerland march pages sierre switzerland acm dhawan and 
ganapathy analyzing information flow in browser extensions in annual computer security applications conference acsac honolulu hawaii december pages ieee computer society dhawan shan and ganapathy enhancing javascript with transactions in noble editor ecoop programming european conference beijing china june proceedings volume of lecture notes in computer science pages springer facebook sdk for javascript https a felt hooimeijer evans and weimer talking to strangers without taking their candy isolating proxied content in stein and mislove editors proceedings of the workshop on social network systems sns glasgow scotland uk april pages acm j goguen and meseguer security policies and security models in ieee symposium on security and privacy pages a translator for securing web content http as of guarnieri and livshits gatekeeper mostly static enforcement of security and reliability policies for javascript code in monrose editor usenix security symposium montreal canada august proceedings pages usenix association guha saftoiu and krishnamurthi the essence of javascript in d hondt editor ecoop volume of lecture notes in computer science pages maribor slovenia june springer hanson and proebsting dynamic variables in proceedings of the conference on programming language design and implementation pages snowbird ut usa june acm press new york usa hedin et al jsflow tracking information flow in javascript and its apis in acm symposium on applied computing sac gyeongju korea mar heidegger bieniusa and thiemann access permission contracts for scripting languages in j field and hicks editors proceedings annual acm symposium on keil and thiemann principles of programming languages pages philadelphia usa acm press heidegger and thiemann a heuristic approach for computing effects in bishop and vallecillo editors proceedings of the international conference on objects models components and patterns volume of lecture notes in computer science pages zurich switzerland june springer keil and thiemann efficient dynamic access analysis using javascript proxies in proceedings of the symposium on dynamic languages dls pages new york ny usa acm keil and thiemann efficient dynamic access analysis using javascript proxies in proceedings of the symposium on dynamic languages dls pages indianapolis indiana usa acm keil and thiemann treatjs contracts for javascript in boyland editor european conference on programming ecoop july prague czech republic volume of lipics pages prague czech repulic july schloss dagstuhl fuer informatik keil and thiemann treatjs contracts for javascript technical report institute for computer science university of freiburg keil and thiemann treatjs online http maffeis and taly isolation of untrusted javascript in proceedings of the ieee computer security foundations symposium csf port jefferson new york usa july pages ieee computer society magazinius phung and sands safe wrappers and sane policies for self protecting javascript in aura editor the nordic conference in secure it systems lecture notes in computer science springer verlag meyerovich and livshits conscript specifying and enforcing security policies for javascript in the browser in ieee symposium on security and privacy pages california usa may ieee computer society miller robust composition towards a unified approach to access control and concurrency control phd thesis johns hopkins university baltimore md usa miller samuel laurie awad and stay caja safe active content in sanitized javascript http google white paper miller and shapiro 
paradigm regained abstraction mechanisms for access control in saraswat editor advances in computing science asian programming languages and distributed computation asian computing science conference mumbai india december proceedings volume of lecture notes in computer science pages springer patil dong li liang and jiang towards access control in javascript contexts in international conference on distributed computing systems icdcs minneapolis minnesota usa june pages ieee computer society phung sands and chudnov lightweight javascript in li susilo tupakula and varadharajan editors asiaccs pages sydney australia mar acm politz eliopoulos guha and krishnamurthi adsafety verification of javascript sandboxing in usenix security symposium san francisco ca usa august proceedings usenix association sandboxing for javascript richards hammer nardelli jagannathan and vitek flexible access control for javascript in hosking eugster and lopes editors proceedings of the acm sigplan international conference on object oriented programming systems languages applications oopsla part of splash indianapolis in usa october pages acm policy https secureecmascript ses https shavit and touitou software transactional memory in proceedings of the acm sigplan symposium on principles of distributed computing pages ottowa ontario canada acm press new york ny usa signed scripts in mozilla http strickland findler and flatt chaperones and impersonators support for reasonable interposition in leavens and dwyer editors oopsla pages acm terrace beard and katta javascript in javascript sandboxing scripts in maximilien editor usenix conference on web application development webapps boston ma usa june pages usenix association van cutsem and miller proxies design principles for robust intercession apis in clinger editor dls pages acm van cutsem and miller trustworthy proxies virtualizing objects with invariants in castagna editor ecoop volume of lecture notes in computer science pages montpellier france july springer wagner gal wimmer eich and franz compartmental memory management in a modern web browser in boehm and bacon editors proceedings of the international symposium on memory management ismm san jose ca usa june pages acm warth stanojevic and millstein statically scoped object adaptation with expanders in proceedings of the acm sigplan conference on object oriented programming systems languages and applications pages portland or usa acm press new york weikum and vossen transactional information systems theory algorithms and the practice of concurrency control and recovery morgan kaufmann publishers san francisco ca usa keil and thiemann doctype html html en head libraries script script script body body of the page headline headline script headline changed headline figure motivating example the listing shows a snippet of an file the script tags load libraries to the application state before executing the body within the body tag it uses jquery to modify the dom a motivation javascript is the most important client side language for web pages javascript developers rely heavily on libraries for calenders maps social networking feature extensions and so on thus the code of a web page is usually composed of dynamically loaded fragments from different origins however the javascript language has no provision for namespaces or encapsulation management there is a global scope for variables and functions and every loaded script has the same authority on the one hand javascript developers benefit from javascript s flexibility as it 
enables to extend the application state easily on the other hand once included a script has the ability to access and manipulate every value reachable from the global object that makes it difficult to enforce any security policy in javascript as a consequence program understanding and maintenance becomes very difficult because side effects may cause unexpected behavior there is also a number of security concerns as the library code may access sensitive data for example it may read user input from the browser s dom browsers normally provide isolation mechanisms however as isolation is not always possible for all scrips the key challenges of a javascript developer is to manage untrusted code to control the use of data by included scrips and to reason about effects of included code sandboxing for javascript doctype html html en head decentjs script runs datejs in a fresh sandbox script var sbx new sandbox this new date function sbx effect return effect instanceof in date body figure execution of library code in a sandbox the first script tag loads the sandbox implementation the body of the second script tag instantiates a new sandbox and loads and executes datejs inside the sandbox later it commits intended effects to the native date object javascript issues as an example we consider a web application that relies on scripts from various sources figure shows an extract of such a page it first includes datejs a library extending javascript s native date object with additional methods for parsing formatting and processing of dates next it loads jquery and a jquery plugin that also simplifies formatting of javascript date objects at this point we want to ensure that loading the code datejs and jquery does not influence the application state in an unintended way encapsulating the library code in a sandbox enables us to scrutinize modifications that the foreign code may attempt and only commit acceptable modifications isolating javascript transactional sandboxing is inspired by the idea of transaction processing in database systems and software transactional memory each sandbox implements a transactional scope the content of which can be examined committed or rolled back isolation of code a decentjs sandbox can run javascript code in isolation to the https https https keil and thiemann application state proxies make external values visible inside of the sandbox and handle sandbox internal write operations an internal dom simulates the browser dom as needed this setup guarantees that the isolated code runs without noticing the sandbox providing transactional features a decentjs sandbox provides a transactional scope in which effects are logged for inspection policy rules can be specified so that only effects that adhere to the rules are committed to the application state and others are rolled back appendix gives a detailed introduction to decentjs s api and provides a series of examples explaining its facilities figure shows how to modify the from figure to load the code into a sandbox we first focus on datejs and consider jquery later in section the comment is a placeholder for unmodified code not considered in this initially we create a fresh sandbox line the first parameter is the sandboxinternal global object for scripts running in the sandbox whereas the second parameter is a configuration the sandbox global object acts as a mediator between the sandbox contents and the external world cf section it is placed on top of the scope chain for code executing inside the sandbox and it can be used to 
make outside values available inside the sandbox it is wrapped in a proxy membrane to mediate all accesses to the host program next we instruct the sandbox to load and execute the datejs library line inside the sandbox afterwards the proxy for javascript s native date object is modified in several ways among others the library adds new methods to the date object and extends with additional properties write operations on a proxy wrapper produce a shadow value cf section that represents the modification of an object initially this modification is only visible to reads inside the sandbox reads are forwarded to the target unless there are local modifications in which case the shadow value is returned the return values are wrapped in proxies again committing intended modifications during execution each sandbox records the effects on all objects that cross the sandbox the sandbox api offers access to the resulting lists for inspection and provides query methods to select the effects of a particular object after loading datejs the effect log reports reads and writes on three different however as the manual inspection of effects is impractical and requires a lot of effort decentjs allows us to register rules with a sandbox and apply them automatically a rule combines a sandbox operation with a predicate specifying the state under which the operation is allowed to be performed for example as we consider an extension to the date object as intended modification we install a rule that automatically commits new properties to the date object in line in general a rule commiton takes a target object date and a predicate the predicate function gets invoked with the sandbox object sbx and an effect object describing an effect on the target object in our example the predicate checks if the effect is a property appendix shows the full html code is a predefined configuration object for the standard use of the sandbox it consists of simple pairs verbose false the lists do not contain effects on values that were created inside of the sandbox appendix shows a readout of the effect lists sandboxing for javascript write operation extending javascript s native date object and that the property name is not already present if we construct a function inside of a sandbox and this function is written and committed to an outside object then the free variables of the function contain objects inside the sandbox and arguments of a call to this function are also wrapped that is calling this function on the outside only causes effects inside the sandbox furthermore committing an object in this way wraps the object in a proxy before writing it to its outside target both measures are required to guarantee that the sandbox never gets access to unwrapped references from the outside world at this point we have to mention that the data structure of the committed functions is constructed inside of sbx all bound references of those functions still point to objects inside of the sandbox and thus using them only causes effect inside of the sandbox furthermore committing an object wraps the object in a proxy before writing it to its target this intercepts the use of the committed object to wrap the arguments of a committed function before invoking the function this is as an illustration figure shows an extract of the membrane arising from javascript s native date object in appendix executing datejs in sbx shown on the left in the first box creates a proxy for each element accessed on date date and only date and are wrapped because 
proxies are created on demand as proxies forward each read to the target the structure visible inside of the sandbox is identical to the structure visible outside extending the native date object in sbx yields the state shown in the second box all modifications are only visible inside of the sandbox the new elements are not wrapped because they only exist inside of the sandbox however a special proxy wraps sandbox internal values whenever committing a value to the outside as shown in the last box this step mediates further uses of the sandbox internal value for example wrapping the this value and all arguments when calling a function defined in the sandbox the wrapping guarantees that the sandbox never gets access to unprotected references to the outside shadowing dom operations the example in figure omits the inclusion of jquery for simplification purposes however our initial objective is to sandbox all code to i reason about the modifications done by loading the code ii prevent the application state from unintended modifications isolating a library like jquery is more challenging as it needs access to the browser s dom calls to the native dom interface expose a mixture of public and confidential information so the access can neither be fully trusted nor completely forbidden to address this issue decentjs provides an internal dom that serves as a shadow for the actual dom when running a web library in the sandbox figure demonstrates loading the jquery library in a web sandbox as we extend the first example we create a new empty sandbox line and initialize the sandbox internal dom by loading an html template line using the configuration activates the shadow dom by instructing the sandbox to create a dom interface and to merge this interface with the global object the shadow dom initially see section for a more detailed discussion keil and thiemann contains an empty document it can be instantiated with the actual html body or with an html template as shown in line figure shows the template which is an extract of the original containing only the script tags for the jquery library and selected parts of the html body loading the template also loads and executes the code inside the sandbox afterwards the internal effect log reports two write operations to the fields and jquery of the global window object and one write operation to the htmlbodyelement interface a child of node both of which are part of the dom interface to automatically commit intended modifications to the global window object we install a suitable rule in lines and as jquery has been instantiated the sandbox internal dom using it modifies the sandbox internal dom instead of the browser s dom these modifications must be committed to the browser s dom to become visible line alternatively decentjs allows us to grant access to the browser s dom by white listing the window and document objects however white listing can only expose entire objects and can not restrict access to certain parts of the document model using transactions for wrapped objects decentjs supports a mechanism in the first examples figure we prevent the application state from unintended modification when loading untrusted code and commit only intended ones however datejs and might both modify javascript s native date object to avoid undesired overwrites decentjs allows us to inspect the effects of both libraries and to check for conflicts before committing to date the predicate in line checks for conflicts which arise in the comparison between different sandboxes 
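The conflict rules just described can be sketched in code. The following is a hedged reconstruction, not the library's verbatim listing: commitOn is the rule constructor named in the text, while the rollback-rule constructor (written here as rollbackOn), the conflict query inConflictWith, and the variable names sbxDate (the sandbox holding DateJS) and sbxPlugin (the sandbox holding the jQuery date plugin) are assumptions; the rollback rule anticipates the behaviour explained in the next paragraph.

    // hedged sketch: commit the plugin's changes to Date only if they do not
    // conflict with the effects recorded by the DateJS sandbox; otherwise
    // prefer DateJS and discard the plugin's modification
    sbxPlugin.commitOn(Date, function (sbx, effect) {
      return !sbx.inConflictWith(sbxDate, Date);   // assumed conflict query
    });
    sbxPlugin.rollbackOn(Date, function (sbx, effect) {
      return sbx.inConflictWith(sbxDate, Date);    // assumed rollback-rule constructor
    });
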
a conflict is flagged if at least one sandbox modifies a value that is accessed by the other sandbox later furthermore we prescribe that in case of conflicts the methods from datejs should be used to this end a second rule discards the modifications on date from the second sandbox when detecting conflicts the rollback operation undoes an existing manipulation and returns to its previous configuration such a partial rollback does not result in an inconsistent state as we do not delete objects and the references inside the sandbox remain unchanged full html example figure shows the full html code from the example in section effects lists this sections shows the resulting effect logs recorded by the sandboxes in section a see appendix for a detailed explanation of the output effects of sbx all read effects on this we consider only and conflicts conflicts are not handled because the hazard represents a problem that only occurs in concurrent executions sandboxing for javascript this function e print e has get has get has get has get this function e print e none all read effects on date date function e print e get get get get get get all write effects on this all write effects on date date function e print e set set set set set set set set set set set keil and thiemann set set set set set set set set set set set set set all read effects on function e print e get all write effects on function e print e set set set set set set set set set set set set set set set set set set set set set set sandboxing for javascript set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set set keil and thiemann set set set set set set set set set set set set set set set set set set set set set set set set set set set set set effects of all read effects on this this function e print e has get has get has get has get has get has get sandboxing for javascript has get has get has get has get this function e print e none all read effects on window window function e print e get getownpropertydescriptor getownpropertydescriptor get getownpropertydescriptor getownpropertydescriptor getownpropertydescriptor getownpropertydescriptor getownpropertydescriptor getownpropertydescriptor get get has get getownpropertydescriptor get getownpropertydescriptor get get all write effects on this all write effects on window window function e print e set set keil and thiemann date now date prototype tostring date now prototype date now prototype tostring isweekday date tostring now now prototype tostring isweekday date prototype tostring now prototype tostring isweekday figure shadow objects in the sandbox when loading datejs cf section the structure of javascrip s native date object is shown in solid lines on the left the shadow values are enclosed by a dashed line solid lines are direct references to objects whereas dashed lines are indirect references and proxy objects dotted lines connect to the target object the first box shows the sandbox after reading whereas the second box shows the sandbox after modifying the structure of date the third box shows the situation after committing the modifications on date sandboxing for javascript doctype html html en head decentjs script runs jquery in a fresh sandbox script var new sandbox this new this jquery new this body body of the page headline headline script headline changed headline headline getelementbyid headline figure execution of a web library in a sandbox 
the first script tag loads the sandbox implementation the body of the second script tag instantiates a new sandbox and initializes the sandbox with a predefined html template see figure later it commits intended effects to the application state and copies data from the sandbox internal dom doctype html html en head script script body body of the page headline headline figure file contains the script tags for loading the jquery code from in figure keil and thiemann doctype html html en head checks for conflicts with datejs script new date function sbx effect return date new date function sbx effect return date body figure checking for conflicts the html code first checks for conflicts between datejs and jquery before it commits the modification of the library or rolls back sandboxing for javascript doctype html html en head decentjs script runs datejs in a fresh sandbox script var sbx new sandbox this new date function sbx effect return effect instanceof in date runs jquery in a fresh sandbox script var new sandbox this new this jquery new this checks for conflicts with datejs script new date function sbx effect return date new date function sbx effect return date body body of the page headline headline script headline changed headline headline getelementbyid headline figure execution of library code in a sandbox cf section a in the paper the first script tag loads the sandbox implementation the second script tag instantiates a new sandbox sbx and loads and executes datejs inside the sandbox later it commits intended effects to the native date object the third script tag instantiates sandbox and initializes the sandbox with a predefined html template see figure in the paper later it commits intended modifications to the application state the last script tag checks for conflicts between datejs and jquery before it commits further modification on date or rolls back the script tag included in the body performs a modification of the sandbox internal dom and copies the changes to the global dom keil and thiemann b application scenarios this section considers some example scenarios that use the implemented system all examples are drawn from other projects and use this work s sandboxing mechanism to guarantee noninterference treatjs treatjs is a contract system for javascript which enforces contracts by monitoring treatjs is implemented as a library so that all aspects of a contract can be specified using the full javascript language for example the base contract typenumber checks its argument to be a number var typenumber function arg return typeof arg number asserting a base contracts to a value causes the predicate to be checked by applying the predicate to the value in javascript any function can be used as any return value can be converted to typenumber accepted treatjs relies on the sandbox presented in this work to guarantee that the execution of contract code does not interfere with the contract abiding execution of the host program as access to objects and functions is safe and useful in many contracts treatjs facilitates making external references visible inside of the sandbox for example the isarray contract below references the global object array var isarray array array function arg return arg instanceof array however treatjs forbids all write accesses and traps the unintended write to the global variable type in the following code var typenumberbroken function arg type typeof arg return type number treatjs online is a web frontend for experimentation with the treatjs contract system 
it enables the user to enter code fragments that run in combination with the treatjs code all aspects of treatjs are accessible to the user code however the user code should neither be able to compromise the contract system nor the website s functioning by writing to the browser s document or window objects without any precaution a code snippet like javascript programmers speak of truthy or falsy about values that convert to true or false http sandboxing for javascript function observer target handler var sbx new sandbox parameters omitted var controller get function target name receiver var trap var result trap trap target name receiver var raw target name return observerof raw result result raw return new proxy target controller figure implementation of an observer proxy excerpt the get trap evaluates the user specific trap in a sandbox to guarantee noninterference afterwards it performs the usual operation and compares the outcomes of both executions other traps can be implemented in the same way function arg return arg could change the contract objects to influence subsequent executions to avoid these issues the website creates a fresh sandbox environment builds a function closure with the user s input and executes the user code in the sandbox the sandbox grants access to the treatjs api and to javascript s objects like object function array and so on but it does not provide access to browser objects like document and window further each new invocation reverts the sandbox to its initial state observer proxies an observer is a restricted version of a javascript proxy that can not change the behavior of the proxy s target arbitrarily it implements a projection in that it either implements the same behavior as the target object or it raises an exception a similar feature is provided by racket s chaperones such an observer can cause a program to fail more often but in case it does not fail it would behave in the same way as if not observer were present figure contains the getter part of the javascript implementation of observer the constructor of an observer proxy it accepts the same arguments as the constructor of a normal proxy object it returns a proxy but interposes a different handler controller that wraps the execution of all user provided traps in a sandbox the controller s get trap evaluates the user s get trap if one exists in a sandbox next it performs a normal property access on the target value to produce the same side effects and to obtain a baseline value to compare the results observerof checks whether the sandboxed result is suitably related to the baseline value this observer proxy in this subsection should not be confused with the observer proxy mention in the paper the observer mentions in section is a normal proxy implementing a membrane keil and thiemann constant variable x y expression e f g c x op e f e f new e e f e f g figure syntax of value u v w c l closure dictionary object d c v d f v environment store x v l o figure semantic domains of c semantics of sandboxing this section first introduces an untyped lambda calculus with objects and object proxies that serves as a core calculus for javascript inspired by previous work it defines its syntax and describes its semantics informally later on we extends to a new calculus which adds a sandbox to the core calculus j core syntax of figure defines the syntax of a expression is either a constant a variable an operation on primitive values a lambds abstraction an application a creation of an empty object a property read 
or a property assignment variables x y are drawn from denumerable sets of symbols and constants c include javascript s primitive values like numbers strings booleans as well as undefined and null the syntax do not make proxies available to the user but offers an internal method to wrap objects sandbox expression e f g s fresh e term fresh s wrap v object values l b l u v w b s figure extensions of j sandboxing for javascript term t e op v f op v w l f l v new v l f l c l f g l c g l c w figure intermediate terms of semantic domains figure defines the semantic domains of its main component is a store that maps a location l to an object o which is a native object object represented by a triple consisting of a dictionary d a potential function closure f and a value v acting as prototype a dictionary d models the properties of an object it maps a constant c to a value an object may be a function in which case its closure consists of a lambda expression and an environment that binds the free variables it maps a variable x to a value a object is indicated by in this place a value v is either a constant c or a location evaluation of a semantics introduces intermediate terms to model partially evaluated expressions figure an intermediate term is thus an expression where zero or more subexpressions are replaced by their outcomes the evaluation judgment is similar to a standard evaluation judgment except that its input ranges over intermediate terms it states that evaluation of term t with initial store and environment results in a final store and value ti vi figure defines the standard evaluation rules for expressions e in the inference rules for expressions e are mostly standard each rule for a composite expression evaluates exactly one subexpression and then recursively invokes the evaluation judgment to continue once all subexpressions are evaluated the respective rule performs the desired operation sandboxing of this section extends the base calculus to a calculus which adds sandboxing of j function expressions the calculus describes only the core features that illustrates the principles of our sandbox further features of the application level can be implemented in top of the calculus figure defines the syntax and semantics of as an extension of expressions j now contain a sandbox abstraction s and a sandbox construction fresh e that instantiates a fresh sandbox terms are extended with a fresh s term a new internal wrap v term which did not occour in source programs wraps a value in a sandbox environment objects now contain object proxies a proxy object is a single location controlled by a proxy handler that mediates the access to the target location for simplification we represent handler objects by there so each handler is an sandbox handler that enforces viz by an secure location b l that acts as an shadow object for the proxies target object l and a single secure environment keil and thiemann const var c c x x e v op v f w op e f w f u op v u w op op v f w op v u w w op v u abs dom e l l f w l null l e f w f v l v w app d u l l f w x v e w l v w e v new v w new dom new e w l v new v l f c l c w e f w get d f v l l f w c dom d d f l dom d c v l c d c l c v d f c l e l l f w dom d l c undefined e l l f g w e f g w f c l c g w l f g w g v l c v w l c g w put d f u l l d c v f u l c v v figure inference rules for intermediate terms of sandboxing for javascript e fresh v fresh e v b fresh b b wrap v vb x vb e w b b v w b figure sandbox abstraction and application rules of j for clarity we write 
vb u b w b for wrapped values that are imported into a sandbox for a sandbox environment that only contains wrapped values and b l for locations of proxies and shadow objects consequently values are extended with sandboxes which represents an sandbox expression wrapped in a sandbox environment that is to be executed when the value is used in an application evaluation of j figure contains its inference rules for sandbox abstraction and sandbox application of the formalization employs semantics to model side effects while j keeping the number of evaluation rules manageable the rule for expression fresh e rule evaluates the subexpression and invokes the evaluation judgment to continue the other rules show the last step in a pretty big step evaluation once all subexpressions are evaluated the respective rule performs the desired operation sandbox execution happens in the context of a secure sandbox environment to preserve noninterference so a sandbox definition abstraction will evaluate to a sandbox closure containing the sandbox expression the abstraction together with an empty environment rule each sandbox executions starts from a fresh environment this guarantees that not unwrapped values are reachable by the sandbox sandbox abstraction rule proceeds only on secure environments which is either an empty set or an environment that contains only secure wrapped values sandbox execution rule applies after the first expression evaluates to a sandbox closure and the second expression evaluates to a value it wraps the given value and triggers the evaluation of expressions e in the sandbox environment after binding the wrapped value vb value vb acts as the global object of the sandbox it can be used to make values visible inside ob the sandbox sandbox encapsulation the sandbox encapsulation figure distinguishes several cases a primitive value and a sandbox closure is not wrapped to wrap a location that points to a object the location is packed in a fresh proxy along with a fresh shadow object and the current sandbox environment this packaging ensures that each further access to the wrapped location has to use the current environment keil and thiemann wrap c c wrap b s b s dom l b l compile l b l b l dom b l b l wrap l l b l l b wrap l b l b l l b wrap b l b l figure inference rules for sandbox encapsulation l null b l d v compile l b l b l dom l dom d b v b l b b l dom l b null l d v compile l b l b l d b v compile b l b l l b l compile b l b l l l compile l b l figure inference rules for object in case the location is already wrapped by a sandbox proxy or the location of a sandbox proxy gets wrapped then the location to the existing proxy is returned this rule ensures that an object is wrapped at most once and thus preserves object identity inside the sandbox the shadow object is build from recompiling figure the target object a shadow objects is a new empty object that may carry a sandboxed replication of its closure part for a object recompiling returns an empty object that later on acts as a sink for property assignments on the wrapped objects for a function object recompiling extracts the function body from the closure and redefines the body with respect to the current sandbox environment the new closure is put into a new empty object this step erases all external bindings of function closure and ensures that the application of a wrapped function happens in the context of the secure sandbox environment in case the function is already recompiled function recompilation returns the existing 
replication application read and assignment function application property read and property assignment distinguish two cases either the operation applies directly to a object or it applies to a proxy if the target of the operation is not a proxy object then the usual rules apply figure contains the inference rules for function application and property access for the sandboxing for javascript wrap v vb b l b v w b b l l l l v w b c dom b l l b l b l c vb l b l l c vb c dom b l l b l c v wrap v vb l b l l c vb l b l b l c v v l b l l c v v figure inference rules for function application property read and property assignment cases the application of a wrapped function proceeds by unwrapping the function and evaluating it in the sandbox environment contained in the proxy the function argument and its result are known to be wrapped in this case a property read on a wrapped object has two cases depending on if the accessed property has been written in the sandbox before or not the notation c dom l is defined as an shortcut of a dictionary lookup c dom d with l d f v a property read of an affected field reads the property from the shadow location otherwise it continues the operation on the target and wraps the resulting value an assignment to a wrapped object is continues with the operation on the shadow location b in javascript write operations do only change properties of the object s dictionary they do not affect the object s prototype therefor the shadow object did not contain any prototype informations it acts only a shadow that absorbs write operations keil and thiemann d technical results as javascript is a memory safe programming language a reference can be seen as the right ti modify the underlying object if an expressions body can be shown not to contain unprotected references to objects then it can not modify this data to prove soundness of our sandbox we show termination insensitive noninterference it requires to show that the initial store of a sandbox application is observational equivalent to the final store that remains after the application in detail the sandbox application may introduce new objects or even write to shadow objects only reachable inside of the sandbox but every value reachable from the outside remains unmodified as the calculus in appendix c did not support variable updates on environments the only way to make changes persistent is to modify objects thus proving noninterference relates different stores and looks for differences in the store with respect to all reachable values observational equivalence on stores first we introduce an equivalence relation on stores with respect to other semantic elements i definition two stores are equivalent constants c if the constants are equal c c c i definition two stores are equivalent locations l if they are equivalent on the location s target l l i definition two stores are equivalent objects d f v f v if they are equivalent on the objects s constituents d f v f v d f f v v i definition two stores are equivalent dictionaries d if they are equivalent on the dictionary s content d dom d dom dom d d c c i definition two stores are equivalent closures if the are equivalent on the closure s environment and both abstractions are equal i definition two stores are equivalent environments if the are equivalent on the environment s content dom dom dom x x sandboxing for javascript i definition two stores are equivalent proxy objects l b l b if they are equivalent on the objects s constituents l b l b l b l b i definition two stores 
are equivalent sandbox closures b b if the are equivalent on the sandbox s environment and both abstractions are equal b b now the observational equivalence for stores can be states as follows i definition two stores are observational equivalent under environment if they are equivalent on all values v x x dom dom x x i lemma suppose that ei vi i then for all with ei vj i with and v w proof proof by induction on the derivation of j noninterference i theorem for each and with fresh f i vi it holds that proof proof by induction on the derivation of j keil and thiemann e related work there is a plethora of literature on securing javascript so we focus on the distinguishing features of our sandbox and on related work not already discussed in the body of the paper sandboxing javascript the most closely related work to our sandbox mechanism is the design of access control wrappers for revocable references and membranes in a language a function can only cause effects to objects reachable from references in parameters and global variables a revocable reference can be instructed to detach from the objects so that they are no longer reachable and safe from effects however as membranes by themselves do not handle side effects every property access can be the call of a getter they do not implement a sandbox in the way we did agten et al implement a javascript sandbox using proxies and membranes as in our work they place wrappers around sensitive data dom elements to enforce policies and to prevent the application state from unprotected script inclusion however instead of encapsulating untrusted code they require that scripts are compliant with ses a subset of javascript s strict mode that prohibits features that are either unsafe or that grant uncontrolled access and use an to execute those scripts a javascript parser transforms scripts at run time but doing so restricts the handling of dynamic code compared to our approach treatjs a javascript contract system uses a sandboxing mechanism similar to the sandbox presented in this work to guarantee that the execution of a predicate does not interfere with the execution of a contract abiding host program as in our work they use javascript s dynamic facilities to traverse the scope chain when evaluating predicates and they use javascript proxies to make external references visible when evaluating predicates arnaud et al provide features similar to the sandboxing mechanism of treatjs both approaches focus on access restriction to guarantee free contract assertion however neither of them implements a sandbox because writing is completely forbidden and always leads to an exception our sandbox works in a similar way and guarantees access to target objects but redirects write operations to shadow objects such that local modifications are only visible inside the sandbox however access restrictions in all tree approaches affect only values that cross the border between two execution environments values that are defined and used inside local values were not restricted write access to those values is fine patil et al present jcshadow a reference monitor implemented as a firefox extension their tool provides access control to javascript resources similar to decentjs they implement shadow scopes that isolate scripts from each other and which regulate the granularity of object access unlike decentjs jcshadow achieves a better runtime performance while more efficient their approach is as it is tied to a specific engine and requires active maintenance to keep up with the 
development of the enigine decentjs in contrast is a javascript library based on the reflection api which is part of the standard most other approaches implement restrictions by filtering and rewriting untrusted code or by removing features that are either unsafe of that grant uncontrolled access for exampe caja compiles javascript code in a sanitized javascript subset that can safely be executed on normal engines because static guarantees do not apply to code created at run time using eval or other mechanisms caja restricts dynamic features sandboxing for javascript and rewrites the code to a cajoled version with additional checks that prevent access to unsafe function and objects static approaches come with a number of drawbacks as shown by a number of papers first they either restrict the dynamic features of javascript or their guarantees simply do not apply to code generated at run time second maintenance requires a lot of effort because the implementation becomes obsolete as the language evolves thus dynamic effect monitoring and dynamic access restriction plays an important role in the context of javascript security as shown by a number of authors effect monitoring richards et al provide a webkit implementation to monitor javascript programs at run time rather than performing syntactic checks they look at effects for access control and revoke effects that violate policies implemented in transcript a firefox extension by dhawan et al extends javascript with support for transactions and speculative dom updates similar to decentjs it builds a transactional scope and permits the execution of unrestricted guest code effects within a transaction are logged for inspection by the host program they also provide features to commit updates and to recover from effects of malicious guest code jscontest is a framework that helps to investigate the effects of unfamiliar javascript code by monitoring the execution and by summarizing the observed access traces to access permission contracts it comes with an algorithm that infers a concise effect description from a set of access paths and it enables the programmer to specify the effects of a function using access permission contracts jscontest is implemented by an offline source code transformation because of javascrip s flexibility it requires a lot of effort to construct an offline transformation that guarantees full interposition and that covers the full javascript language this the implementation of jscontest has known omissions no support for with and prototypes and it does not apply to code created at run time using eval or other mechanisms is a redesign and a reimplementation of jscontest using javascript proxies the new implementation addresses shortcomings of the previous version it guarantees full interposition for the full language and for all code regardless of its origin including dynamically loaded code and code injected via eval monitors read and write operations on objects through access permission contracts that specify allowed effects a contract restricts effects by defining a set of permitted access paths starting from some anchor object however the approach works differently has to encapsulate sensitive data instead of encapsulating dubious functions systems jsflow is a full javascript interpreter that enforces information flow policies at run time like decentjs jsflow itself is implemented in javascript compared to decentjs the jsflow interpreter causes a significantly higher impact than the our sandbox which only reimplements the 
javascript semantics on the membrane a similar slowdown is reported for another javascript interpreter conceived to execute untrusted javascript code its implementation provides a wealth of powerful features similar to decentjs access control support for the full javascript language and full browser compatibility however its average slowdown in the range of to is significantly higher than decentjs s keil and thiemann f evaluation results this section reports on our experience with applying the sandbox to select programs we focus on the influence of sandboxing on the execution time we use the google octane benchmark to measure the performance of the sandbox implementation octane measures a javascript engine s performance by running a selection of complex and demanding programs benchmark programs run between and times google claims that octane measure s the performance of javascript code found in large web applications running on modern mobile and desktop browsers each benchmark is complex and demanding we use octane as it is intended to measure the engine s performance benchmark programs run between and times we claim that it is the heaviest kind of benchmark every library jquery is less demanding and runs without an measurable runtime impact octane consists of that range from performance tests to web applications figure from an os kernel simulation to a portable pdf viewer each program focuses on a special purpose for example function and method calls arithmetic and bit operations array manipulation javascript parsing and compilation etc testing procedure all benchmarks were run on a machine with two amd opteron processors with ghz and gb memory all example runs and measurements reported in this paper were obtained with the spidermonkey javascript engine for benchmarking we wrote a new start script that loads and executes each benchmark program in a fresh sandbox by setting the sandbox global to the standard global object we ensure that each benchmark program can refer to properties of the global object as needed as sandboxing wraps the global object in a membrane it mediates the interaction of the benchmark program with the global application state all run time measurements were taken from a deterministic run which requires a predefined number of and by using a run results figure and figure contains the statistics for all benchmark programs in two different configurations which are explained in the figure s caption and lists the readouts of some internal counters multiple read effects to the same field of an object are counted as one effect as expected the run time increases when executing a benchmark in a sandbox while some programs like earleyboyer navierstrokes mandreel and are heavily affected others are only slightly affected richards crypto regexp and code loading for instance unfortunately deltablue and zlib do not run in our sandbox deltablue attempts to add a new property to the global object as our sandbox prevents unintended modifications to the new property is only visible inside of the https https programs run either for one second or for a predefined number of iterations if there are too few iterations in one second it runs for another second sandboxing for javascript benchmark richards deltablue crypto raytrace earleyboyer regexp splay splaylatency navierstokes mandreel mandreellatency gameboy emulator code loading zlib typescript total baseline time sec sandbox effects time sec slowdown sandbox w effects time sec slowdown sec sec sec sec sec sec sec sec sec sec sec sec sec 
sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec sec figure timings from running the google octane benchmark suite the first column baseline gives the baseline execution times without sandboxing the column sandbox effects shows the time required to complete a sandbox run without effect logging and the relative slowdown sandbox time the column sandbox w effects shows the time and slowdown baseline of a run with effect logging current sandbox and only to objects created with new object and but not to those created using object literals the zlib benchmark uses an indirect to eval to write objects to the global scope which is not allowed by the ecmascript specification another benchmark code loading also uses an indirect call to eval a small modification makes the program compatible with the normal eval which can safely be used in our sandbox in the first experiment we turn off effect logging whereas in the second one it remains enabled doing so separates the performance impact of the sandbox system proxies and shadow objects from the impact caused by the effect system from the running times we find that the sandbox itself causes an average slowdown of over all benchmarks our experimental setup wraps the global object in a membrane and mediates the interaction between the benchmark program and the global application state as each benchmark program contains every source required to run the benchmark in separation except global objects and global functions the only thing that influences the execution time is access to global elements in absolute times raw sandboxing causes a run time deterioration of per sandbox operation effects with effect logging enabled for example the benchmark requires seconds to complete and performs effects on its membrane its baseline requires seconds thus sandboxing takes an additional seconds hence there is an overhead of per operation with effect logging enabled an indirect call invokes the eval function by using a name other than eval keil and thiemann benchmark objects effects size of effect list reads writes calls richards deltablue crypto raytrace earleyboyer regexp splay splaylatency navierstokes mandreel mandreellatency gameboy emulator code loading zlib typescript total figure numbers from internal counters column objects shows the numbers of wrap objects and column effects gives the total numbers of effects column size of effect list lists the numbers of different effects after running the benchmark column reads shows the number of read effects distinguished from there number of write effects column writes and distinguished from there number of call effects column calls multiple effects to the same field of an object are counted as one effect the results from the tests also indicate that the garbage collector runs more frequently but there is no significant increase in the memory consumption for the benchmark we find that the virtual memory size increases from raw run to full run with effect logging however when looking at all benchmarks the difference in the virtual memory size compared with the baseline run ranges from to for a raw sandbox run without effect logging and from to for a full run with fine grained effect logging appendix shows the memory usage of the different benchmark programs and their difference compared with the baseline google octane scores values octane reports its result in terms of a score the octane explains the score as follows in a 
nutshell bigger is better octane measures the time a test takes to complete and then assigns a score that is inversely proportional to the run the constants in this computation are chosen so that the current overall score the geometric mean of the individual scores matches the overall score from earlier releases of octane and new benchmarks are integrated by choosing the constants so that the geometric mean remains the same the rationale is to maintain comparability https sandboxing for javascript benchmark richards deltablue crypto raytrace earleyboyer regexp splay splaylatency navierstokes mandreel mandreellatency gameboy emulator code loading zlib typescript isolation effects baseline figure scores for the google octane benchmark suite bigger is better block isolation contains the score values of a raw sandbox run without effect logging whereas block effects contains the score values of a full run with effect logging the last column baseline gives the baseline scores without sandboxing figure contains the scores of all benchmark programs in different configurations which are explained in the figure s caption all scores were taken from a deterministic run which requires a predefined number of and by using a run as expected all scores drop when executing the benchmark in a sandbox in the first experiment we turn off effect logging whereas the second run is with effect logging this splits the performance impact into the impact caused by the sandbox system proxies and shadow objects and the impact caused by effect system memory consumption figure figure and figure shows the memory consumption recorded when running the google octane benchmark suite the numbers indicate that there is no significant increase in the memory consumed for example the difference of the virtual memory size ranges from to for a raw sandbox run and from to for a full run with fine grained effect logging programs run either for one second or for a predefined number of iterations if there are to few iterations in one second it runs for another second keil and thiemann benchmark virtual size richards deltablue crypto raytrace earleyboyer regexp splay splaylatency navierstokes mandreel mandreellatency gameboy emulator code loading zlib typescript baseline resident size size shared size figure memory usage when running the google octane benchmark suite without sandboxing column virtual shows the virtual memory size column resident shows the resident set size column shows the segment size and column shows the segment size all values are in mbyte benchmark richards deltablue crypto raytrace earleyboyer regexp splay splaylatency navierstokes mandreel mandreellatency gameboy emulator code loading zlib typescript virtual size diff sandbox effects resident size diff size diff shared size diff figure memory usage of a raw sandbox run without effect logging column virtual shows the virtual memory size column resident shows the resident set size column shows the segment size and column shows the segment size size shows the size in mbyte and diff shows the difference to the baseline sandbox size baseline size in mbyte sandboxing for javascript benchmark richards deltablue crypto raytrace earleyboyer regexp splay splaylatency navierstokes mandreel mandreellatency gameboy emulator code loading zlib typescript virtual size diff sandbox w effects resident size diff size diff shared size diff figure memory usage of a full run with effect logging column virtual shows the virtual memory size column resident shows the resident set size 
the remaining columns show the respective segment sizes; size gives each value in mbyte and diff gives the difference from the baseline, that is sandbox size minus baseline size, in mbyte
6
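the per-effect overhead and the relative slowdown quoted in the sandbox evaluation above are both derived from three measured quantities per benchmark: the baseline time, the sandboxed time with effect logging, and the number of effects recorded on the membrane. the following python sketch shows that derivation; the benchmark names come from the octane suite, but every number in it is a hypothetical placeholder, since the measured values did not survive extraction.

# sketch: deriving slowdown and per-effect overhead from raw benchmark readings.
# benchmark names are from the octane suite; timing and effect counts below are
# hypothetical placeholders, not the values reported in the paper.

from dataclasses import dataclass

@dataclass
class Run:
    name: str
    baseline_s: float    # execution time without sandboxing (seconds)
    sandboxed_s: float   # execution time with sandbox and effect logging (seconds)
    effects: int         # number of effects recorded on the membrane

def slowdown(run: Run) -> float:
    # relative slowdown: sandboxed time divided by baseline time
    return run.sandboxed_s / run.baseline_s

def overhead_per_effect_us(run: Run) -> float:
    # additional time per logged effect, in microseconds
    return (run.sandboxed_s - run.baseline_s) / run.effects * 1e6

runs = [
    Run("richards", 1.2, 2.9, 180_000),   # placeholder numbers
    Run("deltablue", 1.5, 3.4, 210_000),  # placeholder numbers
]
for r in runs:
    print(f"{r.name}: slowdown x{slowdown(r):.2f}, "
          f"{overhead_per_effect_us(r):.1f} us per effect")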
enumeration of groups whose order factorises in at most primes feb bettina eick february abstract let n n denote the number of isomorphism types of groups of order we consider the integers n that are products of at most not necessarily distinct primes and exhibit formulas for n n for such introduction the construction up to isomorphism of all groups of a given order n is an old and fundamental problem in group theory it has been initiated by cayley who determined the groups of order at most many publications have followed cayley s work a history of the problem can be found in the enumeration of the isomorphism types of groups of order n is a related problem the number n n of isomorphism types of groups of order n is known for all n at most see and for almost all n at most see asymptotic estimates for n n have been determined in however there is no closed formula known for n n as a function in many details on the group enumeration problem can be found in and higman considered pm his porc conjecture suggests that n pm as a function in p is porc polynomial on residue classes this has been proved for all m see for m bagnera or girnat for m newman o brien for m and o brien for m to exhibit the flavour of the results we recall the explicit porc polynomials for n pm for m as follows theorem bagnera n for all primes n for all primes n for all primes n and n for all primes p n n and n gcd p gcd p for all primes p determined a formula for n n for all for m n let m denote the number of different primes dividing for m n and a prime p let cm p denote the number of prime divisors q of m with q mod the following is also proved in prop theorem let n n be square free then n n x y n m pcm p the aim here is to determine formulas for n n if n is a product of at most primes if n is a or is then such formulas follow from the results cited above hence it remains to consider the numbers n that factorise as q q q or qr for different primes p q and for each of these cases we determine an explicit formula for n n see theorems and each of these formulas is a polynomial on residue classes that is there are finitely many sets of conditions on the involved primes so that n n is a polynomial in the involved primes for each of the we summarise this in the following theorem theorem see theorems below let p q and r be different primes and n q q q qr then n n is a polynomial on residue classes the enumerations obtained in this paper overlap with various known results for example considered the groups of order q western those of order q le vavasseur and lin those of order q and glenn those of order qr moreover laue considered all orders of the form pa q b with a b and a and b as well as the orders dividing q r so why are these notes written there are two reasons first they provide a uniform and reasonably compact proof for the considered group numbers and they exhibit the resulting group numbers as a closed formula with few case distinctions laue s work also contains a unified approach towards the determination of its considered groups and this approach is similar to ours but it is not easy to read and to extract the results our second aim with these notes is to provide a uniform and reliable source for the considered group enumerations the reliability of our results is based on its proofs as well as on a detailed comparision with the small groups library acknowledgements we give more details on the results available in the literature that overlap with the results here in the discussions before the theorems and most of these 
details have been provided by mike newman the author thanks mike newman for this and also for various discussions on these notes divisibility for r s n we define the function wr s via wr s if s r and wr s otherwise the following remark exhibits the relation of wr s and the underlying gcd s remark for r s n it follows that y gcd r s d wr s counting subgroups of linear groups for r n and a group g we denote with sr g the number of conjugacy classes of subgroups of order r in we recall the following result remark let n n and p prime then gl n p has an irreducible cyclic subgroup of order m if and only if m pn and m pd for each d further if there exists an irreducible cyclic subgroup of order m in gl n p then it is unique up to conjugacy the next theorem counts the conjugacy classes of subgroups of certain orders in gl p with crs we denote the direct product of cyclic groups of order theorem let p q and r be different primes and let g gl p a g and for q it follows that sq g q q b g and for q it follows that g q c g r sqr g q q q r and for r q it follows that qr q r qr qr qr proof let m n with p as a preliminary step in this proof we investigate the number of conjugacy classes of cyclic subgroups of order m in gl p by remark an irreducible subgroup of this form exists if and only if m and m p and its conjugacy class is unique in this case a reducible cyclic subgroup of order m in gl p and thus gl p embeds into the group of diagonal matrices note that d has a reducible cyclic subgroup of order m if and only if m divides the exponent p of in this subgroup u if m p then there exists a unique subgroup u cm contains every cyclic subgroup of order m in the group gl p acts on d and on u by permutation of the diagonal entries of an element of a each group of prime order q is cyclic an irreducible subgroup of order q exists if and only if q and q p since p p and gcd p p a reducible subgroup of order q exists if and only q p in this case the number of conjugacy classes of reducible cyclic subgroups of order q in gl p can be enumerated as if q and q otherwise as there are q subgroups of order q in and all but the subgroups with diagonals of the form a a or a for a have orbits of length two under the action of gl p by permutation of diagonal entries b we first consider the cyclic subgroups of order q in gl p for the irreducible case note that if q and q p then either q and p or q and q p for the reducible case we note that if q p then there are q q conjugacy classes of reducible cyclic subgroups of order q in gl p as there are q q subgroups and all but those with diagonal of the form a a and a for a have orbits of length two thus the number of conjugacy classes of cyclic subgroups of order q in gl p is if q and q q q q if q it remains to consider the subgroups of type such a subgroup is reducible and exist if q p in this case there exists a unique conjugacy class of such subgroups c we first consider the cyclic subgroups of order qr in gl p if q then p thus and p if and only if r p as in the previous cases this yields that there are r r cyclic subgroups of order in gl p if q then r the number of cyclic subgroups of order qr in gl r in this case is qr r q qr qr qr using the same arguments as above it remains to consider the case of subgroups such a subgroup h is irreducible and satisfies q if h is imprimitive then cr is diagonalisable there is one such possibility if r p if h is primitive then cr is irreducible there is one such possibility if r p we extend theorem with the following remark let p and q be 
different primes and let g gl p if h is a subgroup of order q in g then ng h cg h the number of groups h of order q in g satisfying ng h cg h is for q and q q for q proof consider the groups h of order q in gl p if h is irreducible then h is a subgroup of a singer cycle and this satisfies ng h cg h if h is reducible then h is a subgroup of the group of diagonal matrices the group d satisfies ng d cg d where the normalizer acts by permutation of the diagonal entries as in the proof of theorem a only the group h with diagonal of the form a a for a satisfies ng h cg h theorem let p be a prime let g gl p and let q then g and for q it follows that sq g q q w q q proof we first consider the diagonalisable subgroups of order q in gl p these exist if q p if this is the case then the group d of diagonal matrices has a subgroup of the form and this contains all subgroups of order q in the group d has q q subgroups of order q and these fall under the permutation action of diagonal entries into orbits if q into q orbits if q and q and into q orbits if q and q next we consider the groups that are not diagonalisable these can arise from irreducible subgroups in gl p or in gl p in the first case there exists one such class if q and q p as in theorem in the second case by remark there exists one such class if q and q and q p note that the two cases are mutually exclusive in summary there exists an irreducible subgroup of order q in gl p if q and q p p and q p we note that gcd p p p for each prime thus for q it follows that w q w q q which simplifies the formula in theorem counting split extensions for two groups n and u let u n denote the set of all group homomorphisms u aut n the direct product aut u aut n acts on the set u n via g g for aut u aut n u n and g n if is the conjugation by in aut n then this action can be written in short form as given u n the stabilizer of in aut u n is called the group of compatible pairs and is denoted by comp if n is abelian then n is an u via for each u n in this case comp acts on u n induced by its action on u n via g h g h for u n comp and g h u these constructions can be used to solve the isomorphism problem for extensions in two different settings we recall this in the following extensions with abelian kernel suppose that n is abelian and that n is fully invariant in each extension of n by u this is for example the case if n and u are coprime or if n maps onto the fitting subgroup in each extension of n by u the following theorem seems to be folklore theorem let n be finite abelian and u be a finite group so that n is fully invariant in each extension of n by u let o be a complete set of representatives of the aut u aut n orbits in u n and for each o let denote the number of orbits of comp on u n then the number of isomorphism types of extensions of n by u is x the following theorem proved in th exploits the situation further in a special case again cl denotes the cyclic group of order theorem dietrich eick let p be a prime let n cp and let u be finite with sylow p so that p cp then there are either one or two isomorphism types of extensions of n by u there are two isomorphism types of extensions if and only if n and p are isomorphic as nu p a special type of split extensions in this section we recall a variation of a theorem by taunt as a preliminary step we introduce some notation let n and u be finite solvable groups of coprime order let s denote a set of representatives for the conjugacy classes of subgroups in aut n let k denote the set of representatives for 
the aut u of normal subgroups in u and let o s k s s k k with s for s k o let s denote a fixed isomorphism let ak denote the subgroup of aut induced by the action of stabaut u k on and denote with as the subgroup of aut induced by the action of naut n s on s and thus via on then the double cosets of the subgroups ak and as in aut are denoted by dc s k ak aut as theorem let n and u be finite solvable groups of coprime order then the number of isomorphism types of split extensions n u is x s k s k proof taunt s theorem claims that the number of isomorphism types of split extensions n u correspond to the orbits of aut u aut n on u n in turn these orbits correspond to the union of orbits of ak naut n s on the set of isomorphisms the latter translate to the double cosets dc s k we apply theorem in two special cases in the following again let cl denote the cyclic group of order theorem let q k be a let n cqk and let u be a finite group of order coprime to denote gcd q q for l let kl be a set of representatives of the aut u of normal subgroups k in u with cl for k kl let indk aut ak if k or q is odd then the number of isomorphism types of split extensions n u is x x indk proof we apply theorem the group aut n is cyclic of order p hence for each l there exists a unique subgroup s of order l in aut n and this subgroup is cyclic next as aut n is abelian it follows that naut n s aut n and aut n acts trivially on s by conjugation hence as is the trivial group and s k indk for each theorem let q k be a let u cqk and let n be a finite group of order coprime to q let s be a set of conjugacy class representatives of cyclic subgroups of order dividing q k in aut n then the number of isomorphism types of split extensions n u equals proof again we use theorem for each divisor pl of pk there exists a unique normal subgroup k in u with pl and is cyclic of order note that ak aut for each such hence s k in all cases groups of order q the groups of order q have been considered by by lin by laue and in various other places the results by lin and laue agree with ours lin s results have some harmless typos we also refer to prop for an alternative description and proof of the following result theorem let p and q be different primes a if q then n q b if q then n q q q q p proof the classification of groups of order yields that there are two nilpotent groups of order q the groups cq and cq it remains to consider the groups of the desired order note that every group of order q is solvable by burnside s theorem groups with normal subgroup these groups have the form n u with and u cq we use theorem to count the number of such split extensions thus we count the number of conjugacy classes of subgroups of order q in aut n c then aut n gl p and the number of conjugacy classes of subgroups p of order q in aut n is exhibited in theorem a n then aut n cp is cyclic thus there is at most one subgroup of order q in aut n and this exists if and only if q p groups with normal subgroup these groups have the form n u with n cq and we use theorem to count the number of such split extensions for this purpose we have to consider the aut u of proper normal subgroups k in u with cyclic u then u has a two proper normal subgroups k with cyclic quotient the case k arises if and only if q and the case k cp arises if and only if p q in both cases indk u then there exists one aut u of normal subgroups k with cyclic quotient in u and this has the form k cp and yields indk this case arises if p q groups without normal sylow subgroup let g be such 
a group and let f be the fitting subgroup of as g is solvable and it follows that f as g has no normal sylow subgroup we obtain that pq g f thus p and pq is the only option next acts on faithfully on f by conjugation since f is the fitting subgroup hence pq f p and this is a contradiction thus this case can not arise groups of order q the groups of order q have been determined by western and laue western s paper is essentially correct but the final summary table of groups has a group missing in the case that q mod p the missing group appears in western s analysis in section there are further minor issues in section of western s paper there are disagreements between our results and the results of laue for the case p and the case q we have not tried to track the origin of these in laue s work theorem let p and q be different primes a b c d there are two special cases n and n if q then n q for all p if p then n q for all q if p and q are both odd then n q q q p p q w q q q before we embark on the proof of theorem we note that the formula of theorem d can be simplified by distinguishing the cases q and q for q and p odd it follows that w q q thus theorem d for q and p odd reads n if q and p is odd then w q q w q holds again this can be used to simplify the formula of theorem d proof the proof follows the same strategy as the proof of theorem burnside s theorem asserts that every group of order q is solvable it is easy to see that there are five nilpotent groups of order q the groups g cq with it remains to consider the groups of the desired order groups with normal subgroup these groups have the form n u with and u cq using theorem they correspond to the conjugacy classes of subgroups of order q in aut n there are five isomorphism types of groups n of order for p the groups aut and aut gl have subgroups of order coprime to that is aut has one conjugacy class of subgroups of order and gl has one conjugacy class of subgroups of order and this leads to the special cases in a and shows that in all other cases on q this type of group does not exist for p it remains to consider the case p n then aut n is cyclic of order p thus aut n has at most one subgroup of order q and this exists if and only if q p this adds q n cp in this case aut n is solvable and has a normal sylow thus aut n contains a subgroup of order q if with of the form and only if q p in this case there are q such subgroups in and these translate to conjugacy classes of such subgroups in aut n this adds q q n is extraspecial of exponent p then aut n aut n gl p is surjective thus the conjugacy classes of subgroups of order q in aut n correspond to the conjugacy classes of subgroups of order q in gl p these are counted in theorem a hence this adds q q if q if q n is extraspecial of exponent then aut n is solvable and has a normal sylow with of the form thus there is at most one subgroup of order q in aut n and this exists if and only if q p this adds q n gl p the conjugacy classes of subgroups of order q in then aut n gl p are counted in theorem hence this adds q q w q q if q if q groups with normal subgroup these groups have the form n u with n cq and using theorem we have to determine the aut u of proper normal subgroups k in u with cyclic of order dividing q and then for each such k determine indk u in this case there are the options k cp and all of these have indk hence this adds p u cp in this case there are two normal subgroups k with cp and up to aut u all of these there is one normal subgroup k with c p have indk thus this adds p 
u extraspecial of exponent p or in this case there is one aut u of normal subgroups k with cp and this has indk hence this adds p u extraspecial of exponent or then there are two aut u of normal subgroups k with cp one has indk and the other has indk p thus this case adds p in this case there is one aut u of normal subgroups k with cp and this has indk thus this adds p groups without normal sylow subgroup let g be such a group and let f be the fittingsubgroup of since g is solvable it follows that f is not trivial further pq g f by construction thus p or are the only options recall that acts faithfully on f if f is cyclic of order p or then a group of order pq can not act faithfully on f hence f is the only remaining possibility in this case has order pq and embeds into aut f gl p thus q p p as f is the fitting subgroup of g of order it follows that can not have a normal subgroup of order hence cq cp and p q this is only possible for p and q and thus is covered by the special case for groups of order except for special cases it now remains to sum up the values ci for p and ci for p this yields the formulas exhibited in the theorem groups of order q the groups of order q have been determined by lin le vavasseur and laue lin s work is essentially correct it only has minor mistakes and it agrees with the work by laue and our results lin seems unaware of the work by le vavasseur we have not compared our results with those of le vavasseur theorem let p and q be different primes with p q a there is one special case n b if p then n q c if p then n q p p p p proof first all groups of this order are solvable by burnside s theorem and it is obvious that there are isomorphism types of nilpotent groups of this order next we consider sylow s theorems let mq denote the number of sylow in a group of order q and recall that p q then mq mod q and mq thus mq p if mq p then q p and this is impossible if mq then q p p thus either q p or q p again this is impossible unless p and q thus if p q then mq and g has a normal sylow groups with normal sylow these groups have the form n u with q and we consider the arising cases n and u then aut n cq and thus there are at most one subgroup of order p or in aut n we use theorem to determine that this case adds p and u this case is similar to the first case and adds p n let u aut n gl q and denote k ker if p then cp in both cases on the isomorphims type of u there is one aut u of normal subgroups of order p and this satisfies ak aut hence in this case it remains to count the number of conjugacy classes of subgroups of order p in gl q see theorem a thus this adds if p p p if p if then u clearly there is one aut u of such normal subgroups and it satisfies ak aut hence in this case it remains to count the number of conjugacy classes of subgroups of order in gl q see theorem b thus this adds if p p p p if p the number of groups of order q can now be read off as and this yields the above formulae groups of order qr the groups of order qr have been considered by glenn and laue glenn s work has several problems there are groups missing from the summary tables there are duplications and some of the invariants are not correct this affects in particular the summary table laue does not agree with glenn we have not compared our results with those of laue theorem let p q and r be different primes with q a there is one special case n b if q then n qr r r p c if q then n qr is equal to p p p p p p q q q q q q q q qr pq q qr qr r r q qr pq p q q p p p q p q r q proof the exists one 
group of order qr and this is the group of order further there are two nilpotent groups of order qr in the following we consider the solvable groups g of order qr let f be the fitting subgroup of then f and are both and acts faithfully on f f so that no normal subgroup of stabilizes a series through f this yields the following cases case p in this case f f and aut f hence is abelian and has a normal subgroup isomorphic to cp this is a normal subgroup which stabilizes a series through f as this can not occur this case adds case q in this case f f and aut f and thus r q and this is impossible since r q thus this case adds case r in this case f f and aut f and q hence g cr q since aut cr is cyclic it has at most one subgroup of order q and this exists if and only if q r by theorem this case adds q case and f then g cqr and cqr acts faithfully on note that aut is cyclic of order p p thus this case arises if and only if qr p and in this case there is a unique subgroup of aut f of order qr again by theorem this case adds qr case and f then g h with h of order qr and h embeds into aut cp gl p by theorem the number of groups g corresponds to the number of conjugacy classes of subgroups of order qr in gl p by theorem c this adds r r if q qr q r qr qr qr if q case pq in this case f and aut f thus is abelian and hence cp cr note that r q and thus acts on f in such a form that cp acts as subgroup of and cr acts as subgroup of this implies that r p and p q this is a contradiction to r q and hence this case adds case pr in this case f and aut f thus is abelian and hence cp cq it follows that p r and q p r there are two cases to distinguish first suppose that the sylow of acts on the sylow of f then q p and g splits over f by th thus the group g has the form f for a monomomorphism aut f as in theorem the number of such groups g is given by the number of subgroups of order pq in aut f as the image of the sylow of under is uniquely determined it remains to evaluate the number of subgroups of order q in aut f that act on the sylow of f this number is q q as second case suppose that cq acts trivially on cp then q r and the action of on f is uniquely determined it remains to determine the number of extensions of by f by th there exist two isomorphism types of extensions in this case in summary this case adds p q q q q case qr in this case g has the form g n u with n cq cr and p and u aut n a monomorphism by theorem we have to count the number of subgroups of order in aut n note that aut n thus the number of subgroups of the form in aut n is p p it remains to consider the number of cyclic subgroups of order in aut n this number depends on gcd q pa and gcd r pb if a b then this case does not arise thus suppose that a if b then this yields group if b then this yields p groups if b then this yields p p groups a similar results holds for the dual case b we obtain that this adds p p p p p p in summary this case adds p p p p p p n u with n case pqr in this case g has the form g cq cr and p and has a kernel k of order again we use theorem to count the number of arising cases the group u is either cyclic or u in both cases there is one aut u of normal subgroups k of order p in u and this satsifies that autk u maps surjectively on aut thus it remains to count the number of subgroups of order p in aut n as aut n this case adds p p p p p case q then g f u with f n cq and and u cr by theorem we have to count the number of conjugacy class representatives of subgroups of order r in aut f recall that r q and aut f aut n thus 
it remains to count the number of conjugacy classes of subgroups of order r in aut n if n is cyclic then aut n cp and this number is r if n cp then aut n gl p and this number is determined in theorem a in summary this case adds r r case r this case is dual to the previous case with the exception that now the bigger prime r is contained in a group of this type has the form n u with n nilpotent of order r and u cq we consider the two cases on n if n is cyclic then n cr and aut n cp in this case we can use theorem to count the desired subgroups as q q q q q now we consider the case that n cr in this case we use a slightly different approach and note that all groups in this case have the form m v with m and qr if v is cyclic then v aut m has a kernel k of order the other option is that v is and thus of the form v cr cq in this case the kernel k of has order r or qr we use theorem to count the number of arising cases if v is cyclic then it is sufficient to count the conjugacy classes of subgroups of order q in aut m by theorem this adds sq gl p if v is then q r we consider theorem in more detail first suppose that qr then is trivial and uniquely defined thus it remains to consider the case that then the subgroup ak aut v induced by stabaut v k is the trivial group let s be a set of conjugacy class representatives of subgroups of order q in gl p and for s s let as denote the subgroup of aut v induced by the normalizer of s in gl p then theorem yields that the number of groups arising in this case is x q aut v next note that as can be determined via remark as aut v it follows that if q q q q q q otherwise in summary this case adds and this can be evaluated to if q and if q then q q q q q q q q q it now remains to sum up the values for the different cases to determine the final result final comments the enumerations of this paper all translate to group constructions it would be interesting to make this more explicit and thus to obtain a complete and irredundant list of isomorphism types of groups of all orders considered here references bagnera works rend circ mat palermo with a preface by pasquale vetro edited by guido zappa and giovanni zacher besche and eick construction of finite groups symb besche eick and o brien smallgroups a library of groups of small order a gap package webpage available at besche eick and o brien a millenium project constructing small groups internat algebra blackburn neumann and venkataraman enumeration of finite groups cambridge university press cayley on the theory of groups as depending on the symbolic equation n philos conway dietrich and o brien counting groups gnus moas and other exotica math intelligencer dietrich and eick groups of order algebra eick horn and hulpke constructing groups of small order recent results and open problems submitted to dfg proceedings girnat klassifikation der gruppen bis zur ordnung staatsexamensarbeit tu braunschweig glenn determination of the abstract groups of order qr p q r being distinct primes trans amer math higman enumerating ii problems whose solution is porc proc london math die gruppen der ordnungen pq pqr math die gruppen mit quadratfreier ordnungzahl kl pages nachr ges wiss laue zur konstruktion und klassifikation endlicher gruppen le vavasseur les groupes d ordre q p un nombre premier plus grand que le nombre premier acad sci paris vie le vavasseur les groupes d ordre q p un nombre premier plus grand que le nombre premier ann de l norm lin on groups of order q q tamkang j newman o brien and groups and nilpotent lie rings 
whose order is the sixth power of a prime j o brien and the groups with order for odd prime algebra pyber group enumeration and where it leads us in european congress of mathematics vol ii budapest volume of progr pages basel robinson applications of cohomology to the theory of groups in campbell and robertson editors groups andrews number in lms lecture note series pages cambridge university press taunt remarks on the isomorphism problem in theories of construction of finite groups proc cambridge philos a western groups of order proc london mat
4
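the square-free enumeration theorem quoted in the introduction above is hoelder's classical formula. assuming the garbled statement is the standard form, namely that the number of groups of square-free order n is the sum over divisors m of n of the product over primes p dividing n/m of (p^{c_m(p)} - 1)/(p - 1), where c_m(p) counts the prime divisors q of m with q congruent to 1 mod p, the following self-contained python sketch evaluates it and checks a few well-known counts.

# sketch of hoelder's formula for square-free orders, as reconstructed from the
# theorem quoted above; the helper routines keep the example dependency-free.

def prime_factors(n):
    # distinct prime divisors of n
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def holder_count(n):
    # number of isomorphism types of groups of square-free order n
    total = 0
    for m in divisors(n):
        term = 1
        for p in prime_factors(n // m):
            c = sum(1 for q in prime_factors(m) if q % p == 1)
            term *= (p ** c - 1) // (p - 1)
        total += term
    return total

# sanity checks against well-known counts
assert holder_count(15) == 1   # only the cyclic group
assert holder_count(21) == 2   # c21 and the nonabelian group of order 21
assert holder_count(30) == 4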
asymmetric dynamics of outer automorphisms aug mark bell university of illinois mcbell march abstract we consider the action of an irreducible outer automorphism on the closure of outer space this action has dynamics and so under iteration points converge exponentially to for each n we give a family of outer automorphisms out fn such that as k goes to infinity the rate of convergence of goes to infinity while the rate of convergence of goes to one even k if we only require the rate of convergence of to remain bounded away from one no such family can be constructed when n this family also provides an explicit example of a property described by handel and mosher that there is no uniform upper bound on the distance between the axes of an automorphism and its inverse keywords outer automorphism rate of convergence asymmetric mathematics subject classification an irreducible outer automorphism out fn acts on cv n the closure of outer space section with dynamics theorem therefore its action has a pair of fixed points n and under iteration points in cv n other than converge to there is a natural embedding of cv n into rpc where c is the set of conjugacy classes of the free group fn in this embedding the w c coordinate of a marked graph g g cv n is given by the length of the shortest loop in g which is freely homotopic to g w page in this coordinate system the convergence to is exponential to measure the rate of this convergence we recall two definitions from definition definition suppose that f z x is a polynomial with roots ordered such that the spectral ratio of f is f definition suppose that out fn is an irreducible outer automorphism let z x denote the minimal polynomial of its stretch factor the spectral ratio of is as described in the rate of convergence of t to is determined by thus we state the main result of this paper a complete characterisation of when it is possible to build outer automorphisms which converge rapidly in one direct but slowly in the other in terms of the spectral ratio theorem there is a family of fully irreducible outer automorphisms out fn such that but k if and only if n this gives another difference between irreducible outer automorphisms and mapping classes of surfaces a mapping class h s has its spectral ratio h defined in terms of its dilatation definition however h and have the same dilatation proposition and so h automatically it is straightforward to prove the forward direction of theorem by considering its contrapositive when n there is nothing to check as all automorphisms of are finite order when n any irreducible outer automorphism out is geometric hence there is a mapping class on the oncepunctured torus which induces on however the spectral ratio of any mapping class of the torus is at least where is the golden ratio section therefore and so these are both bounded away from one thus we devote the remainder of this paper to constructing an explicit family of outer automorphisms of fn an i for fixed n to do this we start by considering the polynomials f x y xn yxn and g x y xn yxn in q x y since these are linear polynomials in y they are irreducible over q therefore by hilbert s irreducibility theorem chapter there are infinitely many integers k such that pk x f x k and qk x g x k are both irreducible lemma suppose that k the polynomial pk has n roots inside of the unit circle and one root in k k similarly the polynomial qk has n k k and one root in roots inside of the unit circle one root in k k for example see figure proof let h z n then when we have 
that pk h z n z z therefore by s theorem pk must have the same number of roots inside of the unit circle as h does hence pk has n roots inside of the unit circle furthermore since deg pk n there is only one more root to locate the intermediate value theorem now shows it must lie in k k applying the same argument with h z n to qk shows it must have n roots inside of the again we unit circle verify that the other two roots of qk lie in k k and k k by using the intermediate value theorem x x figure the polynomials and when n in fact knowing the positions of the roots allows us to show many of these polynomials are irreducible directly lemma whenever k the polynomial pk is irreducible over z proof first assume that there is a z such that and pk z and so n kz n however this would mean that which contradicts the fact that k hence pk can not have a root on the unit circle now assume that pk is reducible then one of its factors q z x must have all of its roots inside of the unit circle therefore the constant term of q which is the product of its roots must have modulus less than one however this means that the constant term of q must be zero and so zero is a root of pk which is false similarly by taking into account the symmetry of the roots the same argument also shows that qk is irreducible whenever k and n is even furthermore for many of the low odd values of n we can find a prime p and k such that the image of in x is irreducible it then follows that qk is irreducible whenever k k mod p some of these values are shown in table n p table qk is irreducible whenever k k mod p now following page let out fn be the outer automorphism given by an an an an we use this family to conclude the remaining direction of theorem theorem suppose that k is such that pk and qk are both irreducible the outer automorphisms and and fully irreducible k are furthermore k while proof we interpret as a homotopy equivalence f of the n petalled rose since this is a positive automorphism this is a train track map the transition matrix of this map is k a and up to a sign the characteristic polynomial of a is xn kxn pk x we chose k so that this polynomial is irreducible and so pk now by lemma this polynomial has n roots that lie in the interior of the unit circle and one root in k k hence k and pk on the other hand k is given by an an n again we consider this as a homotopy equivalence g of the n petalled rose direct calculation of the turns involved shows that this is again a train track map and that its transition matrix is b k up to a sign the characteristic polynomial of b is xn kxn qk x again we chose k so that this polynomial is irreducible and so qk thus by lemma this polynomial has n roots that lie in the interior of the unit circle in k k and one root in k k thus k and so q k k k as required it also follows from this computation that k and so is a automorphism now suppose that f has no periodic nielsen paths page note that the local whitehead graph of f is connected page and a is a frobenious matrix thus by the full irreducibility criterion lemma we have that is fully irreducible on the other hand if f has a periodic nielsen path then must be parageometric page thus k is not parageometric corollary and so g has no periodic nielsen paths again the local whitehead graph of g is connected and b is a matrix hence k is fully irreducible in either case and are both fully irreducible k handel and mosher described how there is no uniform upper bound depending only on n on the distance between the axes of and section we 
finish by noting that is an explicit family of such outer automorphisms theorem suppose that k is such that pk and qk are both there are axes for and k which are separated by at least n log k proof for ease of notation let and k begin by noting that the eigenvectors of the matrices a and b in the proof of theorem are n k and k n respectively let and be the metric graphs in outer space corresponding to n roses with these lengths assigned then and its images xi lie on an axis of similarly and its images yi k lie on an axis of k we note that d xi log and d yi log furthermore direct calculation of the lengths of candidate loops shows that d xi yi d xi yj for every i and j and that n n k k k n log k d xi yi log k therefore suppose that the distance between these two axes is realised by points x and y without loss of generality we may assume that x lies in the segment xi and that y lies in the segment yj however as shown in figure d x y d xi yj d xi x d y yj as k and k the right hand side of this inequality is bounded below by n log k log k n log k as required log k xi x n log k y yj k log k figure axes of and k we note that for each the turns ai aj are all illegal hence these are never lone axis automorphisms theorem similarly for each the k turns a where the indices wrap over so a a and a n n i are all illegal and so these are also never lone axis automorphisms question are there axes of and that remain within a uniformly k bounded distance of each other remark from the eigenvectors above we also see that as k goes to infinity these axes enter thinner and thinner parts of outer space acknowledgements the author wishes to thank yael and christopher leininger for many helpful discussions regarding this result references mark bell and schleimer slow dynamics on pml arxiv december mladen bestvina and michael handel train tracks and automorphisms of free groups ann of math benson farb and dan margalit a primer on mapping class groups volume of princeton mathematical series princeton university press princeton nj michael handel and lee mosher parageometric outer automorphisms of free groups trans amer math electronic michael handel and lee mosher parageometric outer automorphisms of free groups trans amer math electronic michael handel and lee mosher axes in outer space mem amer math serge lang fundamentals of diophantine geometry new york gilbert levitt and martin lustig irreducible automorphisms of fn have dynamics on compactified outer space inst math jussieu mosher and pfaff lone axes in outer space arxiv november pfaff constructing and classifying fully irreducible outer automorphisms of free groups arxiv may karen vogtmann automorphisms of free groups and outer space in proceedings of the conference on geometric and combinatorial group theory part i haifa volume pages
4
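both the root-location lemma and the spectral-ratio computation in the paper above reduce to numerics on the roots of a single integer polynomial. the sketch below counts, for a given coefficient list, how many roots lie inside the unit circle and evaluates the spectral ratio, taken here as the ratio of the largest to the second-largest root modulus, which is the natural reading of the garbled definition. the sample polynomial is only a placeholder of the same shape as p_k, monic with one dominant real root near k and the remaining roots inside the unit circle; the exact coefficients from the paper were lost in extraction.

# sketch: numerical check of root locations and spectral ratio for a polynomial,
# assuming the spectral ratio is (largest root modulus) / (second-largest root modulus).

import numpy as np

def spectral_ratio(coeffs):
    # coeffs are polynomial coefficients, highest degree first
    moduli = sorted(abs(r) for r in np.roots(coeffs))
    return moduli[-1] / moduli[-2]

def roots_inside_unit_circle(coeffs):
    return sum(1 for r in np.roots(coeffs) if abs(r) < 1)

# placeholder polynomial x^5 - k*x^4 - 1, standing in for p_k (exact coefficients
# not recoverable from the text): one root sits close to k, the other four stay
# inside the unit circle, so the spectral ratio grows with k
for k in (10, 100, 1000):
    coeffs = [1, -k, 0, 0, 0, -1]
    print(k, roots_inside_unit_circle(coeffs), spectral_ratio(coeffs))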
topology estimation using graphical models in power distribution grids mar deepjyoti deka michael chertkov and scott backhaus los alamos national laboratory new mexico usa institute of science and technology moscow russia grid is the medium and low voltage part of a large power system structurally the majority of distribution networks operate radially such that energized lines form a collection of trees forest with a substation being at the root of any tree the operational may change from time to time however tracking these changes even though important for the distribution grid operation and control is hindered by limited monitoring this paper develops a learning framework to reconstruct radial operational structure of the distribution grid from synchronized voltage measurements in the grid subject to the exogenous fluctuations in nodal power consumption to detect operational lines our learning algorithm uses conditional independence tests for continuous random variables that is applicable to a wide class of probability distributions of the nodal consumption and gaussian injections in particular moreover our algorithm applies to the practical case of unbalanced power flow algorithm performance is validated on ac power flow simulations over ieee distribution grid test cases networks power flow unbalanced threephase graphical models conditional independence computational complexity i ntroduction the operation of a large power grid is separated into different transmission grid that consists of high voltage lines connecting the generators to the distribution substations and distribution grid consisting of the medium and low voltage lines that connect distribution substations to loads the structure of the transmission grid is made loopy to reliable delivery of electricity to substations via multiple redundant paths on the other hand typical distribution grids are operationally radial with the substation at the root and loads positioned along the nodes of the tree the radial topology is selected from a subset of power distribution lines which overall structurally form a loopy graph by switching breakers an illustration of the radial operational topology is presented in fig topology estimation in the distribution grid thus refers to the problem of determining the current set of operational lines which are energized with respective switch statuses on these deepjyoti deka and scott backhaus are with los alamos national laboratory los alamos nm email deepjyoti corresponding author backhaus chertkov is with los alamos national laboratory los alamos nm and skolkovo institute of science and technology russia email chertkov this work was supported by department of energys office of electricity as part of the doe grid modernization initiative changes may conducted in an way without proper and timely reporting to the distribution system operator accurate topology estimation in the distribution grid is necessary for checking system failure detection and also for taking consecutive optimization and control decisions topology estimation is hindered by the limited presence of measurements flow breaker statuses in the distribution grid this reduced monitoring ability is often a legacy in the past and still today distribution grids have received much less attention as compared to the system however these practices are under review there has been an emerging effort in providing better observability at the distribution level through deployment of advanced nodal measurement devices like phasor measurement units 
pmus frequency monitoring network fnet systems and alike similarly smart devices like thermostats and electric vehicles have monitors for their performance regulation through participation in demand response programs such devices and smart meters are capable of providing real time measurements of voltages and frequency though often limited to the grid nodes the goal of this paper is to utilize nodal measurements of voltages captured by smart meters to efficiently estimate the operational radial topology of the distribution grids it is worth mentioning that given a loopy set of permissible edges one can construct a large set of candidate radial topologies thus brute force search of the current operating topology is computationally prohibitive in this work we overcome this combinatorial difficulty by utilizing the framework of graphical models to characterize nodal voltages in the operationally radial grid and design a topology learning algorithm based on it our learning framework is very general in particular applicable to both and systems characterized by possibly unbalanced power flows furthermore our learning algorithm does not require explicit information on the individual nodes injections or the value of line impedances this reconstruction with limited information is of practical significance as line parameters in the distribution grids while are seldom calibrated and may not be known with the precision sufficient for topology estimation a prior work topology learning in power grids in general and in distribution grids in particular is a fast growing area of research past research efforts in this area differ by the methodology for edge detection available measurements and the types of the flow model used in the case of loopy transmission grids uses a maximum likelihood estimator with and sparsityenforcing regularizers to estimate the grid structure from the information availaible via electricity prices uses a markov random field model for bus phase angles to build a dependency graph to detect faults in grids for linearized power flow models in radial grids with constant resistance to reactance ratio presents a topology identification algorithm using the sign of inverse covariance matrix present greedy structure learning algorithm using trends in second order moments of nodal voltage magnitudes for linearized power flow models where observations are known only at a subset of the grid nodes sets of line flow measurements have been used for topology estimation using maximum likelihood tests in further there have been efforts to identify topology and phase in distribution grids include machine learning schemes that compare available observations from smart meters with database of permissible signatures to identify topology changes phase recovery and parameter estimation the mentioned literature considers measurements from static power flow models for measurements arising from system dynamics swing equations a graphical model based reconstruction scheme has been proposed to identify the operational topology in radial grids loopy grids in the broader category of random distribution graphical models provide an important graphical tool to model the structure between different variables it has been used in learning estimation and prediction problems in diverse fields like natural languages genetic networks social interactions decoding in communication schemes etc a group lasso and graph lasso based approximate scheme for topology identification in loopy grids is discussed in in contrast to 
prior work a majority of distribution grids are unbalanced and have voltages and injections at different nodes the overarching goal of this work is to present an efficient algorithm for topology estimation using nodal voltages from a radial distribution grid using the graphical model framework well as additional edges relating neighbors in the gridgraph based on this factorization of the voltage distribution we present our learning algorithm that uses conditional independence based tests nodes per test which we also call quartet to identify the operational edges in the special case where nodal injection fluctuations are modelled by gaussian probability distributions the conditional independence quartet test reduces to a test of voltage covariances at the nodes of the quartet our learning framework has several computational and practical advantages first the framework is independent of the exact probability distribution for each individual node s power usage and voltage profile and hence applicable to general distributions second it does not require knowledge of line impedances and similar network parameters that are calibrated infrequently and hence may not be known accurately third the computational complexity of each edge detection test does not grow with the network size that is the test is local each test considers only a set of four nodes furthermore the tests can be conducted in a distributed fashion these results extend what was reported earlier in a conference version where a version of this paper s results were presented with limited simulations to the best of our knowledge this is the first work claiming a provable topology learning algorithm for distribution grids the material in the manuscript is organized as follows the next technical introduction section provides a brief discussion of the distribution grid topology power flow models and associated nomenclature the linearized power flow model and its counterpart are described in section iii section iv analyzes the graphical model of power grid voltage measurements we emphasize that the induced graphical model is built from but different from the itself conditional independence properties of the voltage distribution are introduced and utilized in section v to develop our main learning algorithm section vi is devoted to discussion of the details of the conditional independence quartet test experimental results on the algorithm s ieee radial networks test are presented in section viii the last section is reserved for conclusions b contribution in this paper we consider power flows in a distribution grid and aim to estimate the radial operational topology using measurements of nodal voltages the first contribution of this work is the development of a linearized coupled power flow model that generalizes our prior work in power flow model and also relates to similar linearized models specific to radial grids we analyze the probability distribution of the complex nodal voltages in the grid through the framework of graphical models the second contribution of this work consists of proving that under standard assumptions about fluctuations of power consumption the distribution of nodal voltages can be described by a specific chordal graphical model we show that the edges in the graphical model include both the actual operational of the as ii d istribution g rid s tructure p ower f lows a structure radial structure the distribution grid is represented by a radial graph g v e where v is the set of of the graph and e is the set of 
undirected the operational edge set e is realized by closing switches in an set of functional e f ull see fig the operational grid note a difference with the functional gridgraph is a collection of k disjoint trees k tk where each tree tk spans a subset of the nodes vtk with a substation at the root node and connected by the set of operational edges etk we denote nodes by letters i j and so on the undirected ledge line connecting nodes i and j is denoted by i j the terminal nodes with degree one are called leaf nodes node connected to a leaf node is called its parent node nodes with degree greater than are called fig schematic of a radial distribution grid with substations represented by large red nodes the operational grid is formed by solid lines black load nodes within each tree are marked with the same color nodes path pi j i j denotes the unique set of nodes such that edges j connect i and j to circumvent notational complications we limit in the remainder of the manuscript our topology learning problem to grids containing only one operational tree t g v e extension to the case of an operational forest layout is straightforward however prior to discussing the learning we will first review in the next subsection ac power flow models on a general which is not necessarily a tree b power flow models notations valued quantities are represented in case we use notation for a vector variable with components in multiple phases the per phase component is described without the hat with a particular phase as a superscript the value of a variable at a specific grid location bus or line is marked by a subscript if no subscript is mentioned it refers to the vector of values for the variable at all permissible locations for example represents a complex real variable at location i while wia wai represents its value at the node i and phase w a wa represents the vector with complex real values at all locations for phase power flow here the voltages currents and injections in the entire grid are defined over a say phase a that is skipped from notation in this paragraph notations for convenience according to the kirchhoff s laws the complex valued equations governing power flow leaving node i of the g v e is given by v pi pi vi v j j j i j where the real valued scalars vi pi and qi denote respectively the voltage magnitude phase active and reactive power injection at node vi vi exp and pi mark respectively the nodal complex voltage and injection zi j ri j j is the impedance of the line i j ri j xi j stands for the line resistance reactance the unbalanced power flow equations over three phases are described next unbalanced power flow in a setting ac power systems have three unbalanced note that even though we discuss solely a system our analysis extends to a general m phase system phases that operate over four wires one for each phase and a common ground consequently the active and reactive power injections at each bus are not scalars but have components each similarly the complex voltage magnitude and phase at each bus and the power flows on each line are of sizes with a component for each phase using the notation described earlier the complex power injections and complex voltages in the grid are a a a vi e i pi pi qi vi b v pbi c c c c c pi pi qi vi vbi here t stands for the voltage phase angles at node i reference phase angles in the three phase system all defined modulo are f t impedance on line i j is represented by a symmetric matrix which relates the three phase current j over line i j to the voltage 
difference according to j j j iiaj iibj iicj t j aa ab ac aa ab ac ri j ri j ri j xi j xi j xi j bc bb where j j j ribbj ribcj i j xi j xi j cc cb ca cc cb ca xi j xi j xi j ri j ri j ri j note that the diagonal values in j denote the impedances while the values denote the interphase impedances of the line the resistance j and reactances j have a similar structure the kirchoff laws in three phases for the configuration f are then given by v diag diag jh j j i j j i j where j the inverse of is the three phase admittance matrix with in and out of phase components as described for z note that eq is the generalization of the eq from one to three phases in the next section we introduce linear approximation to power flows it is done first for the case and then extended to the case of the three phase unbalanced grids the approximation is derived under assumptions that fluctuations in voltage magnitudes and phases between connected nodes are small the assumptions are realistic for distribution grids functioning in the operationally stable regime iii l inear p ower flow m odels in this section we discuss a linear coupled approximate model for three phase unbalanced power flow we show that it generalizes the linear power flow model lcpf that was discussed in our previous work for the case of the network in terms of functionality model is geared more towards medium and low voltage distribution grids rather than high voltage transmission grids we first describe the linear coupled power flow model in the format as it helps to transition to the case of the three phase model linear coupled power flow model in this model the pf eq is linearized jointly over the phase angle difference between neighboring buses j and deviations of the voltage magnitude vi from the reference voltage of both of which are considered to be small we arrive at the following set of lc equations pi ri j vi v j xi j j j j i j qi xi j vi v j ri j j j j i j one can conveniently express the linear equations of the lcpf model in the matrix form p p m t z m v where z is a diagonal matrix where the nonzero elements are complex conjugates of the respective line impedances and m is the edge to node incidence matrix for g every edge i j e is represented by a row mi j eti etj where ei is the standard basis vector associated with the vertex i note that in deriving eq we ignore losses of both active and reactive powers in lines thus getting conservation of the net active and reactive powers to make eq invertible one fixes the voltage magnitude to unity and phase to zero at the bus then injections at the reference bus are equal to the negative sum of injections at all other buses with these standard manipulations we effectively remove the reference bus from the system and without a loss of generality measure voltage magnitudes and phases at other buses relative to that at the reference bus removing entries corresponding to the reference bus from matrix m and vectors p q v we arrive at an invertible system resulting in t v v m z m p where v v next we describe linear approximations of the three phase power flow linear coupled three phase power flow consider the three phase pf described in eq with reference phase angle f in eq here we consider small deviations in voltage magnitude and phase angle from nominal at each bus and small angle difference between neighboring buses in three phases our assumption can be stated as follows i j e j k i v e i yi j e v v i j j i j a b c f eq states that for each node the angles at different phases are roughly 
separated by the same amount as the reference phase angles by under these small deviation assumptions we approximate pf eq at each phase for vi v j j yi j a b c j i j where eq is derived similar to eqs by ignoring second order terms in voltage magnitude and phase angle differences the linear coupled power flow model in three phases is given by eq note that equations reduce to the eqs if the number of phases is limited to one this linearization provides several important attributes first the contributions of voltage magnitudes and angles in are additive second either quantity s contribution is expressed in terms of differences in values at neighboring nodes for the same phase only third the is lossless in all phases individually sum of injections at all nodes for a given phase is zero we collect nodal voltage magnitudes and angles for each phase into vectors and line admittances for each pair of phases into diagonal matrices to express as a linear equation similar to the case in eq e f m t y m a b c combining the expressions for power injections in all three phases we arrive at aa pa y pb m ab y ac pc va y ac b b bc m e v y y cc vc y ab y bb y bc t m y m v pa va where pb v vb m diag m m m pc vc here y is a square block matrix where every block y is a diagonal matrix constructed from admittances of the respective pairs due to this specific structure inverse of y has a similar block sparse pattern as y in fact the following holds theorem let j and j be the three phase admittance and impedance matrices respectively for edge i j e where h j j define y y aa y ab y ac y ab y bb y bc y ac y bc y cc where each block is a diagonal matrix with admittances on all lines in e for a phase pair define z v k vai vbi vci t k f f f a b c node i v as follows z aa z ab z ac z ab z bb z bc z ac z bc z cc of y takes the following form h similarly then the inverse aa zh ab zh ac zh ab zh bb zh bc zh ac zh bc zh cc zh the proof is omitted it reduces to showing that y z equal to an identity matrix h is our next step consists in inverting eq and thus deriving expression for p via v since each m is blockdiagonal m is rank deficient by to resolve this apparent difficulty we follow the logic for single phase calculations of the specifically we reduce the system by considering a reference bus with reference voltage magnitudes and angles for all three phases furthermore as is lossless injections at the reference bus are given by the negative sum of injections at all other nodes therefore removing entries corresponding to the reference bus for each phase in m and we invert eq and express the three phase voltages in terms of the three phase injections in the reduced system as follows v h z t m note that both reduced eq and eq are defined over general grids possibly loopy proves that for distribution grids is equivalent to a first order approximation of the distflow model on the other hand by ignoring line resistances and voltage magnitude deviations one reduces the model to the dc model similarly the model over a radial grid appears equivalent to the lossless approximation of the three phase distflow model shows a different derivation of the which ignores components corresponding to line losses in the power flow equations we will see in the following that the learning algorithms naturally extend to other linear models including the model aside from being lossless another characteristic of the aforementioned models is that they represent nodal complex voltages as an invertible function of nodal injections at the buses this 
property is fundamental to determining the graphical model of nodal voltages discussed in the following section iv g raphical m odel from p ower f lows we now describe the probability distribution of nodal voltages in the distribution t see fig a considered under model or model first we make the following assumptions regarding statistics of the power injections in the distribution assumption at all nodes are modeled as pq nodes for any instance p and q at a node is kept constant loads at the nodes are assumed generated from a probability distribution modeling an exogenous process loads at different nodes are statistically independent note that the latter most important part of the assumption concerning the independence is a formalization of the observation that act switching their devices independently at short intervals similar assumptions of independence are reported in the literature note that assumption does not require that p and q components at the same node are uncorrelated in particular active and reactive injections at the same node are allowed to be dependent if each individual is by itself an of many random processes one would expect according to the law of large numbers that fluctuations of the load is well modeled by a gaussian process under assumption the continuous random vector of injections at the nodes within the t p under eq and under eq are described by the following probability distribution functions pdf p p pi pi p pi where pi pi and pi are the for injection at node i in single phase and three phases respectively note that and contain the exact information as the relation between them is governed by a phase specific constant rotation using the invertible relations between voltages and injections in the and models one arrives at the following pdf of the complex voltage vector v v for the nodes p p p v satisfy eq v p v satisfy eq p v v p v where v and v represent the determinants of the jacobian matrices for the invertible linear transformation from injections to voltages in the and models respectively note that as the transformation is linear the jacobian determinants are constant we now describe the graphical model gm representation of the probability distribution for nodal voltages that we use below in the following section for topology estimation graphical model a n dimensional random vector x xn t is described by an undirected graphical model gm with node set vgm n and edge set egm representing conditional dependence edge i j egm if and only if vgm i j p xi j xc p xi here xc represents random variables corresponding to nodes in the set stated differently the set of neighbors of node i is represented by random variables that are conditionally dependent it follows from this definition that if deletion of a set of nodes c separates the graphical model gm into two disjoint sets a and b then each node in a is conditionally independent of a node in b given all nodes in we now discuss the and models over the distribution t and analyze the structure of the respective gms of nodal voltages note that each node in the gm for single phase voltages represents two scalar variables the voltage magnitude and phase at the corresponding node similarly each node in the three phase gm corresponds to the complex voltage at each node in three phases six scalar variables consider model for tree t v e pdf of the single phase voltages is given by eq using eq one derives p v pi pi v v pi v v j i j where the is constant note that each term under the product sign at the right hand side of eq 
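as a small self-contained check of the graphical-model definition used here, the sketch below (invented numbers, not the paper's data) builds a gaussian vector whose precision matrix has a known chain sparsity and reads the edge set of its graphical model off the nonzero off-diagonal entries of the inverse covariance, which is exactly the gaussian property quoted later in the text.

import numpy as np

# for gaussian variables, (i, j) is an edge of the graphical model exactly when
# entry (i, j) of the precision (inverse covariance) matrix is nonzero
n = 5
prec_true = 2.0 * np.eye(n)
for i in range(n - 1):                         # chain 0-1-2-3-4, purely illustrative
    prec_true[i, i + 1] = prec_true[i + 1, i] = -0.8

cov = np.linalg.inv(prec_true)                 # plays the role of the voltage covariance
prec = np.linalg.inv(cov)                      # recover the precision matrix

tol = 1e-8
edges = sorted((i, j) for i in range(n) for j in range(i + 1, n) if abs(prec[i, j]) > tol)
print(edges)                                   # [(0, 1), (1, 2), (2, 3), (3, 4)]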
includes voltages corresponding to a node and all its neighbors in t consider two nodes i and j in v if i and j are neighbors then terms pi pi p j pj include voltages at both i and j similarly if i and j are two hops away and share a common neighbor k voltages at i and j appear in pk pk however for i and j that are three or more hops away no term includes voltages at i and voltages at j thus the pdf p v can be product separated into terms containing only i or j but not both this implies that voltages at two nodes i and j are conditionally independent given voltages at all other nodes if and only if the distance between them in t is greater than hops next consider the gm for the pdf of the three phase voltages eq using eq with eq the pdf of nodal voltages can be expanded as pi v v e re f via v j yi j a b c f b e vi v j yi j p i v j i j a b c f vic v j yi j p v a b c using the same analysis as that for the single phase case one observes that the pdf of nodal voltages in has a similar feature with voltages at nodes greater than two hops being conditionally independent using the aforementioned definition of conditional independence and gm structure we arrive at the following lemma lemma graphical models gm for the pdf of single phase voltages eq and gm for the pdf of three phase voltages eq contains edges between single and two hops nodes in t fig b shows an example construction of a gm correspondent to either of the aforementioned power flow models each node in gm represents the single or three phase voltage at its corresponding node in t a few properties of the gm of voltages are worth mentioning first the gm unlike the t itself is loopy due to edges between nodes separated by two hops in t note that each node and its immediate neighbors in t form a clique set in the gm as shown in fig b it can be shown that gm gm are chordal every cycle of size greater than has a chord and have based factorizations with edges between nodes as separators as we do not use the properties of chordal graphs in this paper we omit further discussion on these properties finally note that the structure of the gm requires independence of nodal injections but it is agnostic to the exact distribution of each nodal injection l l j u v t j u k a v t k b fig a load nodes and edges in distribution tree t b gm for or in t dotted lines represent two hop neighbors a the gaussian case in prior work loads have been modeled as independent gaussian random variables as linear functions of gaussian random variables are gaussian random variables too the distribution of nodal voltages in or under the gaussian assumption is a multivariate gaussian for gaussian distribution two properties are particularly useful the structure of the gm is given by the offdiagonal terms in the inverse covariance matrix of the random vector variables i and j are conditionally independent given variables in set c if the conditional covariance of i and j given set c is zero the first property can be used to validate lemma for the gaussian gm as shown in the supplementary material in the next section we use the specific structure of the gm described in this section to develop our topology learning algorithm for voltage measurements in particular arising from the gaussian distributions t opology l earning using voltage c onditional i ndependence consider the t with pdfs of nodal voltages p v and p v correspondent respectively to and let the corresponding gm for voltages be gm single phase and gm three phase as described in the previous section the gm 
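the lemma stated above says that the voltage graphical model connects every pair of nodes at distance one or two in the tree; the following sketch (using networkx, with a made-up toy feeder) constructs that graph explicitly, which also makes visible the cliques formed by a node and its immediate neighbors and the resulting loops.

import networkx as nx

# build the gm predicted by the lemma: an edge between every pair of nodes
# whose distance in the distribution tree T is one or two hops
T = nx.Graph()
T.add_edges_from([(0, 1), (1, 2), (1, 3), (3, 4), (3, 5)])   # toy radial feeder

gm = nx.Graph()
gm.add_nodes_from(T.nodes)
dist = dict(nx.all_pairs_shortest_path_length(T))
for i in T.nodes:
    for j in T.nodes:
        if i < j and dist[i][j] in (1, 2):
            gm.add_edge(i, j)

print(sorted(gm.edges()))
# tree edges plus two-hop pairs such as (0, 2), (0, 3), (1, 4), (1, 5), (2, 3), (4, 5)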
includes edges between true neighbors and two hop neighbors in t our topology learning algorithm is based on separability properties of the gms that are only satisfied for edges corresponding to true neighbors in t to learn the topology without ambiguity we make the following mild assumption on the structure of the operational assumption the depth length of the longest path of the t excluding the substation node is greater than three theorem i j is an operational edge between nodes i and j in t if and only if there exists distinct nodes k and l such that v for single phase and v for three phase the proof is given in the supplementary material theorem enables detection of edges between all nodes using their voltage measurements let tnl see fig a comprise of connections between all nodes vnl in grid t estimated using theorem let be the set of nodes of degree in tnl eg node i note that consists of nodes in grid t that are neighbors of only one node then the next theorem provides results helping to determine true parent of each leaf node in t the proof is presented in the supplementary material let us emphasize that the first result in theorem identifies connections between leaves and parents that have a single nonleaf neighbor eg parent i in fig the remaining leaves are children of nodes with two or more neighbors eg node j in fig such connections are identified by the second result in the theorem theorems and imply the topology learning steps in algorithm in the remainder of this section we discuss execution and complexity of algorithm execution our topology learning algorithm proceeds in three steps first edges between nodes are identified based on theorem in steps and radial network of nonleaf nodes tnl is constructed in step next leaves in t connected to nodes of degree set in tnl are identified using theorem in steps finally edges between leaves and nodes connected to two or more nodes set are identified using theorem in steps it is worth mentioning that the learning algorithm does not require any other information other than the voltage measurements at the grid nodes it does not require information e vnl for all i j e f ull do if l v i j vkin vlin viin v jin then e e i j vnl vnl i j end if end for tnl vnl e nodes of degree in tnl vnl for all k v vnl do for all i ki ine f ull do pick j l vnl with i j jl e if vkin vlin viin v jin then e e ik vnl vnl k end if end for end for for all k v vnl do for all i ki e f ull do if vkin vlin viin v jin j l vnl i j jl e then e e ik vnl vnl k end if end for end for t vnl e l theorem let tnl be of t with nodes and respective edges removed let be the set of nodes of degree in tnl then the following statement holds node i is the parent of leaf node k in t if and only if for any nodes j l vnl with edges i j jl v for single phase and v for three phase let leaf node k s parent be in set vnl then i is the parent of k if and only if for all nodes j l t i k with edges i j jl v for single phase and v for three phase input complex voltage observations v jin v single phase or v three phase at all nodes j v permissible edge set e f ull output operational edge set e y a b p x y b p b p y b x algorithm topology learning for grid tree t note that assumption is satisfied when t includes at least two nodes that are two or more hops away this is thus not restrictive for the majority of distribution grids real world and test cases that have long paths under assumption the next theorem lists conditional independence properties in voltage distributions that distinguish true 
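the listing of the learning algorithm above came through extraction badly garbled, so the following is a structural sketch of its first step and of the bookkeeping used by the later steps; it is not the paper's code, ci(k, l, i, j) is a stand-in for the conditional independence test described in the next section, and the data layout is illustrative.

# step 1 of the algorithm: keep (i, j) as an edge between non-leaf nodes of T iff
# voltages at some pair of other nodes k, l become conditionally independent given i, j
def nonleaf_edges(nodes, permissible_edges, ci):
    found = set()
    for (i, j) in permissible_edges:
        others = [x for x in nodes if x not in (i, j)]
        if any(ci(k, l, i, j) for a, k in enumerate(others) for l in others[a + 1:]):
            found.add((min(i, j), max(i, j)))
    return found

# nodes of degree one in the reconstructed non-leaf tree T_nl; the remaining steps then
# attach each leaf either to such a node (step 2) or to a higher-degree node (step 3),
# using the analogous quartet tests of the second theorem
def degree_one_nodes(nonleaf_edge_set):
    deg = {}
    for i, j in nonleaf_edge_set:
        deg[i] = deg.get(i, 0) + 1
        deg[j] = deg.get(j, 0) + 1
    return {i for i, d in deg.items() if d == 1}

leaves are simply the nodes that appear in no detected non-leaf edge, which matches the construction of the set of candidate leaves in the algorithm.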
edges we use the following notation for conditional independence of random variables x y given variables a b l j u t j u v k l v t j u k v t k a b c fig learning steps in algorithm for radial grid in fig a a learning edges between nodes b determining children of nodes that are neighbors of one node c determining children of nodes that have more than one neighbors of line impedances or statistics of nodal injections if the set of permissible lines is not available all node pairs are considered as permissible edges in set e f ull as an example the steps in reconstruction for the radial grid in fig a are shown in fig complexity the worst case computational complexity is o n where n is the number of nodes in the grid the derivation is included in the supplementary material next we describe the conditional independence test in algorithm and discuss specifically the effect of the gaussianity of loads gaussian voltage distribution we discuss the special case for gaussian nodal voltages in detail as they are used in our numerical simulations voltages are gaussian distributed if the are gaussian and relations between loads and voltages are linearwithin and as noted in section conditional independence of the gaussian random variables is equivalent to vanishing conditional covariance consider the following real covariance matrices for and h ih it in in in in where kl i j e xkl i j e xkl i j xkl i j e xkl i j t vk vl vi v j j in in xkl i vak val re im re im v t in re and im refer to the real and imaginary parts vector thus voltages at k l are conditionally where of complex independent given voltages at i j if the following hold for the th entry in the inverse h kl i j kl i j note that in and its inverse are of size while in is of size such conditional kl i j independence test per edge requires inversion of the matrix which is the task of o single phase or o three phase complexity as mentioned already in the previous section an important feature of the test is its independence from the size of the network thresholding note that due to numerical errors empirical estimates of true covariances may not be zero thus we use the fig layout of radial distribution grid red circle marks the bus black lines mark operational edges the additional permissible edges available to algorithm are represented by dotted green lines following thresholding in the test of the empirical conditional covariance to make decision on conditional independence of voltages in algorithm h vkin vlin viin v jin if kl i j we call this condabs with positive threshold however voltages at nodes k l that are far apart may have low correlation and appear uncorrelated given even pair i j thus we consider a relative test termed condrel with threshold i h h vkin vlin viin v jin if kl i j kl i j from the graphical model it is clear that removing a single node i or j does not make k l conditionally independent even if one of them exists in the path from k to empirically however covariance between k and l after conditioning on one of the two nodes i j may be significantly reduced despite i j not being an edge we thus consider a hybrid test for conditional covariance termed condmod where we also look at the effect of conditioning on both i j relative to only one of i or j vkin vlin viin v jin if h h in in kl i j kl i j min h h in in i j vi c onditional i ndependence t est algorithm performs the edge detection test for each edge by verifying if the complex voltages at nodes k l are conditionally independent given voltages at two other nodes i j to 
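the three threshold tests described above all reduce, in the gaussian case, to looking at an empirical conditional covariance computed as a schur complement of the sample covariance; the sketch below is a hedged reading of those tests (condabs, condrel, condmod), using one scalar per node for brevity whereas the paper groups the magnitude/angle scalars of each node, and with thresholds left as free tuning parameters.

import numpy as np

def cond_cov(samples, left, given):
    # empirical conditional covariance of the variables indexed by `left` given those
    # indexed by `given`, via the schur complement of the sample covariance matrix
    S = np.cov(samples, rowvar=False)
    L, G = list(left), list(given)
    S_ll = S[np.ix_(L, L)]
    S_lg = S[np.ix_(L, G)]
    S_gg = S[np.ix_(G, G)]
    return S_ll - S_lg @ np.linalg.solve(S_gg, S_lg.T)

def cond_abs(samples, k, l, i, j, tau):
    # condabs: small absolute conditional covariance between k and l given i, j
    return abs(cond_cov(samples, [k, l], [i, j])[0, 1]) <= tau

def cond_rel(samples, k, l, i, j, tau):
    # condrel (as read from the text): conditional covariance small relative to the
    # unconditional covariance between k and l
    S = np.cov(samples, rowvar=False)
    return abs(cond_cov(samples, [k, l], [i, j])[0, 1]) <= tau * abs(S[k, l])

def cond_mod(samples, k, l, i, j, tau):
    # condmod (as read from the text): conditioning on both i and j reduces the
    # covariance much more than conditioning on either one alone
    both = abs(cond_cov(samples, [k, l], [i, j])[0, 1])
    single = min(abs(cond_cov(samples, [k, l], [i])[0, 1]),
                 abs(cond_cov(samples, [k, l], [j])[0, 1]))
    return both <= tau * single

note that each call only ever inverts a matrix whose size is set by the handful of conditioning variables, which is the reason the per-edge test cost is independent of the network size.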
reduce complexity we check for conditional independence of voltage magnitudes in one phase at k l given complex voltages at i j note that complex voltage at a node in the single phase case consists of two scalars voltage magnitude and phase angle and correspondingly of scalars in three phase case the total number of scalar variables per conditional independence test is in the case and in the case general voltage distributions as voltage measurements are continuous random variables testing their conditional independence for general distributions is a task among tests for conditional independence distances between estimated conditional densities or between characteristic functions have been proposed one can also bin the domain of continuous values and use discrete valued conditional independence test another line of work focuses on conditional independence tests in these schemes conditional independence is characterized using vanishing norm of covariance operators in reproducing kernel hilbert spaces rkhs an advantage of the relative tests eq over the absolute test eq is through the feature that the thresholds used are less affected by network parameters nodal injection covariances and other features that can vary within the network vii s imulations r esults we test algorithm by extracting the operational edge set e of tree grid t from a loopy original edge set e f ull if e f ull is not available all node pairs are considered as potential edges we first discuss the choice of conditional independence test to be used in the algorithm we consider a tree distribution network with load nodes and one substation as shown in fig we simulate active and reactive load profiles to follow gaussian random variables uncorrelated across nodes with covariance values around of the means we generate nodal complex voltage samples from relative errors in topology estimation relative errors in topology estimation thres thres thres rel abs mod number of samples thres thres rel mod b a number of samples x a relative errors in topology estimation fig layouts of the modified ieee test distribution grids the red circles represent substations a network b three phase network c three phase network b fig accuracy of topology learning algorithm with different conditional independence tests and increasing number of voltage samples for bus test case in fig a e f ull has edges b e f ull has edges all node pairs are considered legitimate cov cov cov cov c number of samples x fig accuracy of topology learning algorithm with increasing number of voltage samples generated by and for bus test case in fig a injection covariances are taken to be and permissible edge set e f ull has edges the gaussian indepenedent through the lcpf model the voltage measurements are provided as input together with permissible edge set e f ull of edges true edges and additional edges as shown in fig we test the algorithm with three conditional tests described above in eqs for sample data set of varying size the average estimation errors relative to the number of operational edges generated by algorithm are presented in fig a note that while increasing the number of samples leads to lesser number of errors for all three tests the performance of condmod eq is the best at higher sample sizes this is further demonstrated in fig b where algorithm inputs all the node pairs in total as a permissible set of edges observe that the performance of condmod is better than that of condrel the tolerance values used in the three tests are manually optimized by trial 
and error in practice they can be selected from experiments conducted with historical data in the remainder of this section we use condmod for the conditional covariance estimation and edge detection next we discuss topology estimation using voltage samples generated by single phase lossy equations we consider a radial modification of the system shown in fig a the input set e f ull comprises of edges true and additional edges selected randomly as before we consider gaussian load fluctuations and generate samples using matpower we show performance of the algorithm for different sample sizes and two distinct injection covariance tests in fig for comparison we also present the performance of algorithm over voltage samples generated from the same injection samples as in the model observe that the performance over voltage samples is similar to the one observed in and it improves with sample size increase thus confirming that algorithm even though built on the linearization principles performance empirically well over the data generated from the ac power flow models finally we discuss performance of the three phase power flow models first we compare three phase voltages generated by linearized model with that of three phase power flow model we consider three phase and bus test cases that have been modified from ieee and test cases we modify all nodal loads to be three phase remove shunts and make all line impedances to be y with three phases the networks are depicted in figs b and c fig a and b show relative errors in bus voltage magnitudes in each phase for with respect to the true ac values generated by a conventional sweep method the results for each test network include two choices of the reference bus bus or bus note that the maximum relative error is less than for both networks and any choice of the reference bus this motivates us to evaluate the performance of topology identification using true three phase voltages generated by and compare it with we consider bus as reference node in both three phase test networks for our topology learning algorithm as it has degree we consider two different gaussian nodal injection covariances and in both networks and generate input voltage samples using and for the three phase network we include all node pairs as permissible edges in e f ull the performance of algorithm for different samples sizes for this case is depicted in fig a phase a ref bus phase b ref bus phase c ref bus phase a ref bus phase b ref bus phase c ref bus relative error in bus volt mag relative error in bus volt mag relative errors in topology estimation phase a ref bus phase b ref bus phase c ref bus phase a ref bus phase b ref bus phase c ref bus a bus number bus number cov cov cov cov number of samples x b a for the bus network we pick at random additional edges added to the true edges and input a permissible edge set e f ull of size to algorithm along with the three phase voltage samples the relative errors in topology estimation for different input sample sizes for this network are shown in fig b note that for either of three phase networks the errors for ac power flows are less or comparable to errors observed in linerized power flow model furthermore the errors decrease with increase in the sample size as before we optimize values of the thresholds for condmod used in algorithm using trial and error search that in practice can be determined from historical or simulated data viii c onclusion in this paper we develop algorithm which allows to estimate the radial topology of 
distribution grids in particular we derive linearized power flow model in single and unbalanced three phase cases and develop a graphical model based learning algorithm that is able to estimate operational topology of the networks from samples of nodal voltages our learning algorithm is very general as it does not require information on nodal injection statistics or line parameters to the best of our knowledge this is the first approach which develops algorithm with guarantees for topology estimation in both balanced effectively single phase and unbalanced three phase networks the learning algorithm uses conditional independence results for voltages at the quartets of nodes which reduces to a conditional covariance test over grids with gaussian statistics of computational complexity of the algorithm scales polynomially with the size of the networkprimarily due to the fact that complexity of each conditional independence test is independent of the network size we demonstrate empirical efficacy of our algorithm on a number of ieee test cases this work has a number of promising future extensions first realistic networks may have portions where the three phase layout is split into three lines of different lengths extension of our algorithm to this case is straightforward second the linear flow model based topology learning relative errors in topology estimation fig accuracy of voltages generated by linear relative to voltages from model for test systems in fig b and c with base load and selection of reference buses or cov cov cov cov number of samples x b fig accuracy of topology learning algorithm with increasing number of three phase voltage samples generated by and with injection covariances and a bus test case in fig b all node pairs are considered as permissible edges b bus test case in fig c permissible edge set e f ull has edges can be used jointly with phase identification and impendance estimation see for an example of the latter in the single phase case topology estimation in the presence of missing nodes in three phase layout is another important research direction that we plan to pursue in the near future finally we plan to extend our empirical ac nonlinear approach towards establishing rigorous bounds on the errors between linearized and flow models this shall enable us to extend theoretcial guarantees for our reconstruction algorithm from the linearized versions of the power flow models to the formulations r eferences deka chertkov and backhaus structure learning in power distribution networks ieee transactions on control of network systems hoffman practical state estimation for electric distribution networks in power systems conference and exposition psce ieee pes ieee pp phadke synchronized phasor measurements in power systems computer von meier culler mceachern and arghandeh microsynchrophasors for distribution systems pp zhong xu billian zhang tsai conners centeno phadke and liu power system frequency monitoring network fnet implementation power systems ieee transactions on vol no pp kekatos giannakis and baldick grid topology identification using electricity prices arxiv preprint he and zhang a dependency graph approach for fault detection and localization towards secure smart grid smart grid ieee transactions on vol no pp bolognani bof michelotti muraro and schenato identification of power distribution network topology via voltage correlation analysis in decision and control cdc ieee annual conference on ieee pp deka backhaus and chertkov learning topology of the power 
distribution grid with and without missing data in control conference ecc european ieee pp tractable structure learning in radial physical flow networks in decision and control cdc ieee conference on ieee pp learning topology of distribution grids using only terminal node measurements in ieee smartgridcomm sevlian and rajagopal feeder topology identification arxiv preprint cavraro arghandeh von meier and poolla approach for distribution network topology detection arxiv preprint peppanen reno thakkar grijalva and harley leveraging ami data for distribution system model calibration and situational awareness ieee transactions on smart grid vol no pp arya jayram pal and kalyanaraman inferring connectivity model from meter measurements in distribution networks in proceedings of the fourth international conference on future energy systems acm pp talukdar deka materassi and salapaka exact topology reconstruction of radial dynamical systems with applications to distribution system of the power grid in accepted in the american control conference acc talukdar deka lundstrom chertkov and salapaka learning exact topology of a loopy power grid from ambient dynamics in proceedings of the eighth international conference on future energy systems acm pp lokhov vuffray shemetov deka and chertkov online learning of power transmission ieee wainwright and jordan graphical models exponential families and variational inference foundations and trends r in machine learning vol no pp liao weng liu and rajagopal urban distribution grid topology estimation via group lasso arxiv preprint deka talukdar and salapaka topology estimation in bulk power grids theoretical guarantees and limits in accepted in the bulk power systems dynamics and control symposiumirep kersting distribution system modeling and analysis in electric power generation transmission and distribution third edition crc press pp gan and low convex relaxations and linear approximation for optimal power flow in multiphase radial networks in power systems computation conference pscc ieee pp chen chen hwang kotas and chebli distribution system power flow rigid approach ieee transactions on power delivery vol no pp deka backhaus and chertkov estimating distribution grid topologies a graphical learning based approach in power systems computation conference pscc ieee pp baran and wu optimal sizing of capacitors placed on a radial distribution system power delivery ieee transactions on vol no pp jan optimal capacitor placement on radial distribution systems power delivery ieee transactions on vol no pp jan network reconfiguration in distribution systems for loss reduction and load balancing power delivery ieee transactions on vol no pp apr abur and exposito power system state estimation theory and implementation crc press bolognani bof michelotti muraro and schenato identification of power distribution network topology via voltage correlation analysis in ieee decision and control cdc ieee pp lee and baldick wind power ensemble prediction based on gaussian processes and neural networks ieee transactions on smart grid vol no pp dvorkin lubin backhaus and chertkov uncertainty sets for wind power generation ieee transactions on power systems vol no pp zhu and giannakis sparse overcomplete representations for efficient identification of power line outages ieee transactions on power systems vol no pp su and white a nonparametric hellinger metric test for conditional independence econometric theory vol no pp a consistent test for conditional independence margaritis 
learning of bayesian network structure in continuous domains fukumizu bach and jordan dimensionality reduction for supervised learning with reproducing kernel hilbert spaces the journal of machine learning research vol pp gretton fukumizu teo song and smola a kernel statistical test of independence in advances in neural information processing systems pp zhang peters janzing and conditional independence test and application in causal discovery arxiv preprint eminoglu and hocaoglu a new power flow method for radial distribution systems including voltage dependent load models electric power systems research vol no pp online available http ieee standard for interconnecting distributed resources with electric power online available http kersting radial distribution test feeders in power engineering society winter meeting ieee vol ieee pp garces a linear load flow for power distribution systems ieee transactions on power systems vol no pp park deka and chertkov exact topology and parameter estimation in distribution grids with minimal observability in power systems computation conference pscc ieee ix s upplementary m aterial validation of lemma under gaussian injections as mentioned in section structure of gaussian gm is given by the entries in the inverse the steps in the proof of theorem voltages at k l are conditionally independent given voltages at i j for the converse let i be the true parent of then path pkl from k to l in t does not include node i as i is connected to only one node if pkl does not include j then k l are not disconnected in gm or gm after removing nodes i j otherwise if pkl k j l there exists a path from k to l in the gm containing the two hop i j neighbors of j after removing i j therefore the relation i k p k k k k k j k does not hold if i is not the parent of let node vnl be the true parent of leaf t where m z m is the reduced weighted laplacian node if edges j jl exist then using similar argument matrix for tree t with weight for each edge i j given by as theorem the conditional independence relation holds one arrives at for the converse consider i vnl i consider path pik i k in t if k i are separated by h i j j j p j j j j more than two hops voltages at k are not conditionally i i i j p i i i i if i j e i j independent given voltages at nodes i next consider i k j k p k k k k if ik jk e pik i k has exactly two hops as vnl otherwise has a neighbor r pik the assumption about we observe that the gm contains edges between nodes that k r violates the relation as edge kr belongs to the gm are separated by less than three hops in t as proposed above therefore the conditional independence relation is satisfied the inverse covariance matrix for voltages in model only by the true parent of k in vnl under gaussian injections can be derived in a similar way covariance matrix consider the eq where the vector of injection profiles follows an uncorrelated multivariate gaussian distribution with diagonal covariance matrices p pq denoting the variance of active reactive injections and covariance of active and reactive injections respectively the covariance matrix of complex voltages e v e v v e v satisfies b proof of theorem note gm single phase and gm three phase includes edges between node pairs in t that are one or two hops away see lemma below we present the proof for gm only as its extension to gm is straightforward for the if part consider nodes i j such that voltages at k l are conditionally independent given voltages at i j this means that removing nodes i j from gm separates 
nodes k l into disjoint groups we prove that i j is an edge between nonleaf nodes i j by contradiction let pkl be the unique path in t between nodes k for separability of k l at least one of nodes i j is included in pkl let node j be excluded and pkl k i edge exists in the gm as they are two hop neighbors thus removing i j does not disconnect k l when only one of i j is included in pkl finally consider pkl k i j l such that there is at least one node between i and j due to edges between two hop neighbors in the gm removing i j does not disconnect k note that as leaf nodes are not part of any path between two other distinct nodes i j are both nonleaf nodes hence i j has to be an edge between nodes in t by contradiction for the only if part consider neighbors i j in t there exits neighbor k of i and neighbor l of j in t as shown in fig b with corresponding edges ki k j il l j i j in graphical model gm every path from k to l in gm includes an edge in ki k j and il jl removing nodes i j thus disconnects nodes k l in gm and makes voltages at k l conditionally independent proof of theorem i is connected to one other node by assumption there exist nodes j l such that i j jl are edges in t if k is connected to i using computational complexity of algorithm edge detection in algorithm depends on the conditional independence tests each test is conducted over voltages at four nodes only thus quartet unlike in the case of a general gm the computational complexity c in each test is thus independent of the size of the network identifying the edge between a pair of nodes i j requires o n tests in the worst case as all combinations of k l in step are considered therefore total complexity of identifying the network of nodes is o n determining the nodes of degree in tnl has complexity o n edge between leaf k and degree one node i in tnl can be verified by conditional independent test with a single neighbor and two hop neighbor of i therefore complexity of the edge detection of nodes in tnl of degree one and leaves has complexity o n as number of leaves in t and tnl can be o n finally all combinations of neighbors and two hop neighbors are needed to verify leaves in vnl steps thus have complexity o n the overall complexity of the algorithm is o n where c is independent of the network size note that we do not assume any prior information of the number of edges or of a node for example if a set of permissible edges e f ull is given then edge detection tests can be restricted to that set the complexity will then reduce to o n f ull
3
jul groups that are over a extension philippe gille abstract let k be a discretly henselian field whose residue field is separably closed answering a question raised by prasad we show that a semisimple group g is if and only if it after a finite tamely ramified extension of keywords linear algebraic groups galois cohomology theory msc introduction let k be a discretly valued henselian field with valuation ring o and residue field we denote by knr the maximal unramified extension of k and by kt its maximal tamely ramified extension if is a semisimple simply connected groups theory is available in the sense of and the galois cohomology set h knr g can be computed in terms of the galois cohomology of special fibers of group schemes this permits to compute h k g when the residue field k is perfect on the other hand if k is not perfect wild cohomology classes occur that is h kt g is such examples appear for example in the study of bad unipotent elements of semisimple algebraic groups under some restrictions on g we would like to show that h kt g vanishes see corollary this is related to the following result theorem let g be a semisimple simply connected which is over kt if the residue field k is separably closed then g is g knr is date july the author is supported by the project anr geolie the french national research agency gille this theorem answers a question raised by gopal prasad who found another proof by reduction to the inner case of type a th our first observation is that the result is quite simple to establish under the following additional hypothesis if the variety of borel subgroups of g carries a of degree one then it has a point property holds away of section it is an open question if holds for groups of type for the case and actually for a strongly inner g of theorem our proof is a galois cohomology argument using buildings section we can make at this stage some remarks about the statement since knr is a discretly valued henselian field with residue field ks we observe that implies also a weak approximation argument prop reduces to the complete case if the residue field k is separably closed of characteristic zero we have then cd k so that the result follows from steinberg s theorem cor in other words the main case to address is that of characteristic exponent p acknowledgements we are grateful to prasad for raising this interesting question and also for fruitful discussions the variety of borel subgroups and of degree one let k be a field let ks be a separable closure and let gal ks be the absolute galois group of let q be a nonsingular quadratic form a celebrated result of springer states that the witt index of q is insensitive to odd degree field extensions in particular the property to have a maximal witt index is insensible to odd degree extensions and this can be rephrased by saying that the algebraic group so q is iff it is over an odd degree field extension of this fact generalizes for all semisimple groups without type theorem let g be a semisimple algebraic without quotient of type let kr be finite field extensions of k with coprime degrees then g is if and only if gki is for i the proof is far to be uniform hence gathers several contributions note that the split version in the absolutely almost simple case is th c we remind the reader that a semisimple g is isomorphic to an inner twist of a group gq and that such a gq is unique up to isomorphism denoting by gqad the adjoint quotient of gq this means that there exists a galois cocycle z gal ks gqad ks such that g is 
isomorphic to z gq we denote by gsc q gqad the simply connected cover of gq then z gsc q is the simply connected cover of z gq extension lemma the following are equivalent i g is ii z h k gqad if furthermore z z sc for a z sc gal ks gsc q ks i and ii also equivalent to iii z sc h k gsc q proof the isomorphism class of g is encoded by the image of z under the map h k gqad h k aut gq the right handside map has trivial kernel since the exact sequence gqad aut gq out gq is split or whence the implication ii i the reverse inclusion i ii is obvious now we assume that z lifts to a z sc the implication iii ii is then obvious the point is that the map h k gsc q h k gq has trivial kernel whence the implication ii iii we proceed to the proof of theorem proof let x be the variety of borel subgroups of g a projective the g is iff x has a point thus we have to prove that if x has a of degree one then x has a without loss of generality we can assume that g is simply connected according q to we have that g s rlj gj where gj is an absolutely almost simple simply connected group defined over a finite separable field extension lj of k the notation rlj gj stands as usual for the weilqrestriction to kk to k the variety of borel subgroup x of g is then isomorphic to s rlj xj where xj is the lj of borel subgroups of gj reduction to the absolutely almost simple case our assumption is that x ki for i r hence xj ki lj for i r and j since lj is separable ki lj is an lj for i r and it follows that xj carries a of degree one if we know to prove the case of each xj we have xj kj hence x k from now on we assume that g is absolutely almost simple we denote by the chevalley group over z such that g is a twisted form of reduction to the characteristic zero case if k is of characteristic p let o be a cohen ring for the residue field k that is a complete discrete valuation ring such that its fraction field k is of characteristic zero and for which p is an uniformizing parameter the isomorphism class of g is encoded by a galois cohomology class in h k aut since aut is a smooth affine scheme we can use hensel s lemma o aut h k aut so that g lifts in a semisimple simply connected group scheme g over o let x be the of borel subgroups of g it is smooth and projective for i r let ki be an unramified field extension of k of degree ki k and of residue gille field ki denoting by oi its valuation ring we consider the maps x ki x oi x ki the left equality come from the projectivity and the right surjectivity is hensel s lemma it follows that x ki for i r so that xk has a of degree one assuming the result in the characteristic zero case it follows that x k x o whence x k we may assume from now that k is of characteristic zero we denote by the center of g and by tg h k the tits class of g since the tits class of the form gq of g is zero the classical restrictioncorestriction argument yields that tg in other words g is a strong inner form of its form gq it means that there exists a galois cocycle z with value in gq ks such that g z gq that is the twist by inner conjugation of g by lemma shows that our problem is rephrased in serre s question on the triviality of the kernel of the map y h k gq h ki gq r that kernel is indeed trivial in our case th whence the result we remind the reader that one can associate to a semisimple g its set s g of torsion primes which depends only of its type since an algebraic group splits after an extension of degree whose primary factors belong to s g we get the following refinement corollary let g be a 
semisimple algebraic without quotient of type let kr be finite field extensions of k such that k kr k is prime to s g then g is if and only if gki is for i lemma together with the corollary implies the following statement corollary let g be a semisimple simply connected algebraic without factors of type let kr be finite field extensions of k such that k kr k is prime to s g then the maps y h k g h ki g and r h k gad y h ki gad r have trivial kernels we can proceed now on the proof of theorem away of since theorem shows that the condition is fullfilled in that case proof of theorem under assumption here k is a discretly valued henselian field we are given a semisimple g satisfying assumption and such that extension g becomes after a finite tamely ramified extension note that l k is prime to we denote by x the of borel subgroups of we want to show that x k we are then reduced to the following cases i k is perfect and the absolute galois group gal ks is a for a prime l ii gal ks is a by weak approximation prop we may assume that k is complete note that this operation does not change the absolute galois group ibid case i we have that cdl k cdl k so that cd k since k is perfect steinberg s theorem cor yields that g is case ii the extension k has no proper tamely ramified extension hence our assumption implies that g is remarks a in case i of the proof there is no need to assume that k is perfect and l can be any prime different from the point is that if gal ks is a then the separable cohomological dimension of k is less than or equal to and then any is see b it an open question whether a of type is split if it is split after coprime degree extensions ki a positive answer to this question would imply serre s vanishing conjecture ii for groups of type c serre s injectivity question has a positive answer for an arbitrary classical group simply connected or adjoint and holds for certain exceptional cases cohomology and buildings the field k is as in the introduction proposition assume that k is separably closed let g be a split semisimple connected then h kt g proof we can reason at finite level and shall prove that h g for a given finite tamely ramified extension of we put gal it is a cyclic group whose order n is prime to the characteristic exponent p of let b gl be the building of gl it comes equipped with an action of g l let b t be a killing couple for the split t defines an apartment a tl of b gl which is preserved by the action of ng t l we are given a galois cocycle z g l it defines a section uz g l of the projection map g l this provides an action of on b gl called the twisted action with respect to the cocycle z the fixed point theorem provides a point y b gl which is fixed by the twisted action this point belongs to an apartment and since g l acts transitively on the set of apartments of b gl there exists a suitable g g l such that g x a tl gille we observe that a tl is fixed pointwise by for the standard action so that x is fixed under we consider the equivalent cocycle g g and compute x x g g g y g g y x y is fixed under the twisted action without loss of generality we may assume that x for each we put px stabg l x since x is fixed by the group px is preserved by the action of let px the ol scheme attached to x we have px ol px and we know that its special fiber px k is smooth connected that its quotient mx px k by its split unipotent radical ux is split reductive an important point is that the action of on px ol arises from a semilinear action of on the ol px as explained in the 
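before the proof, it may help to restate two standard facts it invokes, since the relevant symbols were lost in extraction; this is a hedged reconstruction of general statements, not of the author's exact wording. For a finite Galois extension $L/k$ with group $\Gamma$, a map $z\colon\Gamma\to G(L)$ is a 1-cocycle when
\[
z_{\sigma\tau} \;=\; z_{\sigma}\cdot \sigma(z_{\tau}) \qquad (\sigma,\tau\in\Gamma),
\]
and for cyclic $\Gamma=\langle\theta\rangle$ of order $n$ this forces the norm relation
\[
z_{\theta}\,\theta(z_{\theta})\cdots\theta^{\,n-1}(z_{\theta}) \;=\; z_{\theta^{n}} \;=\; 1,
\]
which is the kind of relation that produces the finite-order element used in the argument below. Second, Hilbert's theorem 90 gives $H^{1}(\Gamma,L^{\times})=1$, so for any torus $T$ split over $k$, of rank $r$, one has $T(L)\cong (L^{\times})^{r}$ as $\Gamma$-modules and therefore
\[
H^{1}\bigl(\Gamma, T(L)\bigr)\;\cong\;H^{1}(\Gamma,L^{\times})^{r}\;=\;1 ,
\]
which is the vanishing invoked at the end of the proof.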
beginning of of it induces then a of the group on px k on ux and on mx since x belongs to a tl px carries a natural maximal split ol tx and tx tx k is a maximal torus of px k and its image in mx still denoted by tx is a maximal torus of mx we observe that acts trivially on the tx but tx mx aut mx idtx it follows that acts on mx by means of a group homomorphism tx ad k where tx ad tx mx mx mx mx ad for each m mx k we have m int now we take a generator of and denote by the image in mx k of px and by its image in mx mx k the cocycle relation yields and more generally observe that is fixed by j for j since n we get the relation n then is an element of order n of mx ad k so is semisimple but k is separably closed so that belongs to a maximal torus m tx ad with m mx k it follows that m tad x k since belongs to tad x k we have that tad x k hence m tad x k it follows that m tx k since the map px ol mx k is surjective we can then assume that tx k without loss of generality so that the cocycle a takes value in tx k but tx k is a trivial so that a is given by a homomorphism fa tx k this homomorphism lifts uniquely to a homomorphism fea tx ol the main technical step is claim the fiber of h px h mx k at fa is fea extension using the claim we have z fea h px its image in h g l belongs to the image of the map h tx l h g l but h tx l hilbert theorem thus z h g l as desired it remains to establish the claim we put ker px mx k and this group can be filtered by a decreasing filtration by normal subgroups u i such that for each i j there is a split unipotent u i j equipped with an action of such that u i j u i j k page we denote by fea px the px twisted by the cocycle fea there is a surjection h e px on the fiber at fa of the fa map h px h mx k cor it is then enough to show that h fea px it happens fortunately that the filtration is stable under the adjoint action of the image of fea by using the u lim u j and lemma in the next subsection we have that h fea px h fea u k since h fea u k maps onto the kernel of fiber of h px h mx k at fea cor we conclude that the claim is established this permits to complete the proof of theorem proof of theorem by the usual reductions the question boils down to the semisimple simply connected case and even the absolutely almost semisimple simply connected case taking into account the cases established in section it remains to deal with the case of type denote by the split group of type we have aut it follows that g z with z h k our assumption is that gkt is so that z h kt proposition states that h kt whence g is split we record the following cohomological application corollary let g be a semisimple algebraic which is over kt we assume that g is simply connected or adjoint then h kt g proof theorem permits to assume that g is we denote by g gad the adjoint quotient of since the map h k g h k gad has trivial kernel lem me we can assume that g is adjoint let z h kt g we consider the twisted knr z g of since is isomorphic to gkt is and theorem shows that is hence isomorphic to it means that z belongs to the kernel of the map h k g h k aut g int but the exact sequence of g aut g out g splits so that the above kernel is trivial thus z h knr g appendix galois cohomology of groups let k be a separably closed field let u be a algebraic equipped with an action of a finite group that is u admits a decreasing filtration gille u by normal pro unipotent which are stabilized by and such that ui is an unipotent algebraic for i lemma we assume that is invertible in k and that the ui s are 
smooth and connected then h u k proof we start with the algebraic case that is of a smooth connected unipotent according to u admits a central characteristic filtration u un such that ui is a twisted form of a gna i since is smooth and k is separably closed we have the following exact sequence of k ui k ui k the multiplication by on the abelian group ui k is an isomorphism so that h ui k the exact sequence above shows that the map h k h ui k is onto by induction it follows that h un k maps onto h u k whence h u k we consider now the case since the s are smooth we have that u k lim k therefore by successive approximations the kernel of the map h u k lim h k is trivial but according to the first case the right handside is trivial thus h u k references lenstra forms in odd degree extensions and normal bases amer j math j black zero cycles of degree one on principal homogeneous spaces algebra bourbaki commutative ch berlin bruhat tits groupes sur un corps local i inst hautes etudes sci publ math bruhat tits groupes sur un corps local ii existence d une radicielle pub math ihes bruhat tits groupes sur un corps local iii et application la cohomologie galoisienne fac sci univ tokyo gabber gille principaux sur les corps algebraic geometry garibaldi the rost invariant has trivial kernel for groups of low rank comment math helv gille la sur les groupes sur un corps global publications de l gille unipotent subgroups of reductive groups of characteristic p duke math j gille groupes sur un corps de dimension cohomologique monograph in preparation extension knus merkurjev rost tignol the book of involutions ams colloq publ providence prasad a new approach to unramified descent in theory preprint prasad finite group actions on reductive groups and descent in bruhattits theory preprint de de l en groupes par demazure et grothendieck lecture notes in math springer serre cohomologie galoisienne new york serre cohomologie galoisienne et bourbaki tits sur les des extensions de corps les groupes simples acad sci paris i math univ lyon claude bernard lyon cnrs umr institut camille jordan blvd du novembre villeurbanne cedex france
4
submitted to artificial life march a simple model of unbounded evolutionary versatility as a trend in organismal evolution peter turney institute for information technology national research council of canada ottawa ontario canada phone fax abstract the idea that there are any trends in the evolution of biological organisms is highly controversial it is commonly believed for example that there is a trend in evolution towards increasing complexity but empirical and theoretical arguments undermine this belief natural selection results in organisms that are well adapted to their local environments but it is not clear how local adaptation can produce a global trend in this paper i present a simple computational model in which local adaptation to a randomly changing environment results in a global trend towards increasing evolutionary versatility in this model for evolutionary versatility to increase without bound the environment must be highly dynamic the model also shows that unbounded evolutionary versatility implies an accelerating evolutionary pace i believe that unbounded increase in evolutionary versatility is a trend in evolution i discuss some of the testable predictions about organismal evolution that are suggested by the model keywords evolutionary trends evolutionary progress trends evolutionary versatility evolvability baldwin effect running head unbounded evolutionary versatility national research council canada turney a simple model of unbounded evolutionary versatility as a trend in organismal evolution introduction ruse argues that almost all evolutionary theorists before after and including darwin believe that there is progress in evolution progress implies that there is a trend and that the trend is good for example it is commonly believed by the layperson that there is a trend in evolution towards increasing intelligence and that this trend is good several scientists have suggested that we should focus on the scientific question of whether there are any trends without regard to the question of whether such trends are good mcshea presents an excellent survey of eight serious candidates live hypotheses for trends in evolution entropy energy intensiveness evolutionary versatility developmental depth structural depth adaptedness size and complexity complexity appears to be the most popular candidate the standard objection to trends in evolution is that natural selection is a local process that results in organisms that are well adapted to their local environments and there is no way for this local mechanism to yield a global trend on the other hand it does seem that complexity for example has increased steadily since life on earth began this seems to suggest that natural selection favours increasing complexity however many evolutionary theorists deny that there is any driving force such as natural selection behind any of the apparent trends in evolution gould has presented the most extensive arguments against a driving force gould admits that there may be trends in evolution but he argues that any such trends are in essence statistical artifacts for example if we consider the evolution of life since the first appearance of prokaryotes the mean level of complexity would necessarily increase with time because any organism with significantly less complexity than a prokaryote would not be able to live according to gould the apparent trend towards increasing unbounded evolutionary versatility plexity is due to random variation in complexity plus the existence of a minimum level of 
complexity required to sustain life there is no selective pressure that drives life towards increasing complexity i discuss gould s arguments in more detail in section of the eight live hypotheses for trends in evolution this paper focuses on evolutionary versatility i believe that there is indeed a selective advantage to increasing evolutionary versatility evolutionary versatility is the number of independent dimensions along which variation can occur in evolution it is possible that increasing evolutionary versatility may be the driving force behind other apparent evolutionary trends such as increasing complexity i discuss the concept of evolutionary versatility in section in section i introduce a simple computational model of unbounded evolutionary versatility as far as i know this is the first computational model of an evolutionary mechanism for one of the eight live hypotheses for trends in evolution in this model the population evolves in a series of eras during each era the fitness landscape is constant but it randomly changes from one era to the next era the model shows that there is a trend towards increasing evolutionary versatility in spite of the random drift of the fitness landscape in fact when the fitness landscape is constant evolutionary versatility is bounded in this model unbounded evolutionary versatility requires a dynamic fitness landscape the point of this model is to show that it is possible in principle for natural selection to drive evolution towards globally increasing evolutionary versatility without bound even though natural selection is a purely local process i discuss some related work in section my simple computational model of evolutionary versatility is related to bedau and seymour s model of the adaptation of mutation rates the primary focus of bedau and seymour was the adaptation of mutation rates but the primary focus of this paper is evolutionary versatility bedau and seymour s model does not address evolutionary versatility the core of this paper is the experimental evaluation of the model in section in the turney first two experiments i show that there are parameter settings for which evolutionary versatility can increase indefinitely in the third experiment i show that evolutionary versatility is bounded when the fitness landscape is static in the remaining experiments i examine a wide range of settings for the parameters in the model these experiments show that the behaviour of the model is primarily determined by the parameters that control the amount of change in the fitness landscape in section i discuss the implications of the model one of the most interesting implications of the model is that increasing evolutionary versatility implies an accelerating evolutionary pace this leads to testable predictions about organismal evolution i discuss limitations and future work in section and i conclude in section arguments against trends in evolution natural selection produces organisms that are well adapted to their local environments the major objection to trends in evolution is that there is no way for local adaptation to cause a trend for example although the environments of primates may favour increasing complexity the environments of most parasites favour streamlining and simplification there is no generally accepted theoretical explanation of how natural selection could cause a trend van valen s red queen hypothesis attempts to explain how natural selection could cause a trend towards increasing complexity based on coevolution although van valen 
s hypothesis has been criticized in this paper i propose an explanation of how natural selection could cause a trend towards increasing evolutionary versatility attractive features of my proposal are that it can easily be simulated on a computer and that it leads to testable in principle predictions if there were some constant property shared by all environments then it would be easy to see how there could be a trend due to adaptation to this constant property however the computational model in section shows that there can be a largescale trend even when the fitness landscape changes completely randomly over time unbounded evolutionary versatility aside from theoretical difficulties with trends there is the question of whether there is empirical evidence for any trend mcshea s survey of candidates for trends does not address the issue of evidence for the candidates but in another paper he finds that there is no solid evidence for a trend in most kinds of complexity gould argues that even if there were empirical evidence for a trend that does not imply that there is a driving force behind the trend gould argues that evolution is performing a random walk in complexity space but there is a constraint on the minimum level of complexity when the complexity of an organism drops below a certain level the level of prokaryotes it can no longer live gould s metaphor is that evolution is a drunkard s random walk but with a wall in the way a bounded diffusion process this wall of minimum complexity causes random drift towards higher complexity this random drift does not involve any active selection for complexity there is no push or drive towards increased complexity in summary it is not clear how local selection can produce a global trend and observation of a global trend does not imply that there is a driving force behind the trend however my model shows one way in which local selection can produce a global trend and the model makes testable in principle predictions evolutionary versatility evolutionary versatility is the number of independent dimensions along which variation can occur in evolution a species with high evolutionary versatility has a wide range of ways in which it can adapt to its environment vermeij has argued that there should be selection for increased evolutionary versatility because it can lead to organisms that are more efficient and better at exploiting their environments an important point is that evolutionary versatility requires not merely many dimensions along which variation can occur but also that the dimensions should be independent turney ropy is the condition in which a single gene affects two or more distinct traits that appear to be unrelated when n traits which appear to vary on n dimensions are linked by pleiotropy there is effectively only one dimension along which variation can occur several authors have suggested that it would be beneficial for the map to be modular since increasing modularity implies increasing independence of traits mcshea points out the close connection between evolutionary versatility and modularity evolutionary versatility seems to be connected to several of the seven other live hypotheses increasing evolutionary versatility implies increasing complexity since the organisms must have some new physical structures to support each new dimension of variation the dimensions are supposed to be independent so the new physical structures must also be at least partially independent the increasing accumulation of many independent new physical 
structures implies increasingly complex organisms among the other live hypotheses developmental depth structural depth adaptedness and perhaps energy intensiveness may be connected to evolutionary versatility evolutionary versatility also seems to be related to evolvability evolvability is the capacity to evolve an increasing number of independent dimensions along which variation can occur in evolution implies an increasing capacity to evolve so it would seem that any increase in evolutionary versatility must also be an increase in evolvability on the other hand some properties that increase evolvability may decrease evolutionary versatility for example selection can be expected to favour a constraint that produces symmetrical development for humans if a sixth finger were a useful mutation then it would likely be best if the new fingers appeared simultaneously on both hands instead of requiring two separate mutations one for the left hand and another for the right hand in general selection should favour any constraint that produces adaptive covariation such constraints increase evolvability but they appear to decrease evolutionary versatility unbounded evolutionary versatility increasing evolutionary versatility suggests an increasing number of independent dimensions but adaptive covariation suggests a decreasing number of independent dimensions vermeij reconciles these forces by proposing that increasing evolutionary versatility adds more dimensions which are then integrated by adaptive covariation so that new dimensions are added and integrated in an ongoing cycle evolutionary versatility also appears to be related to the baldwin effect the baldwin effect is based on phenotypic plasticity the ability of an organism the phenotype to adapt to its local environment during its lifetime examples of phenotypic plasticity include the ability of humans to tan on exposure to sunlight and the ability of many animals to learn from experience phenotypic plasticity can facilitate evolution by enabling an organism to benefit from or at least survive a partially successful mutation which otherwise in the absence of phenotypic plasticity might be detrimental this gives evolution the opportunity to complete the partially successful mutation in future generations evolution is not really free to vary along a given dimension if all variation along that dimension leads to death without children thus phenotypic plasticity increases the effective number of dimensions along which variation can occur in evolution the baldwin effect can therefore be seen as a mechanism for increasing evolutionary versatility a simple computational model of evolutionary versatility the following simple model of unbounded evolutionary versatility has three important features the fitness function is based on a shifting target to demonstrate that a trend is possible even when the optimal phenotype varies with time in fact in this model the target must shift if the model is to display unbounded evolutionary versatility the length of the genome can change there is no upper limit on the possible length of the genome this is necessary because if the length were bounded then there would be a finite number of possible genotypes and thus there would be a bound on the evolutionary versatility the mutation rate is encoded in the genome so that the mutation rate can adapt to the turney environment this allows the model to address the claim that mutation becomes increasingly harmful as the length of the genome increases some authors have argued 
that natural selection should tend to drive mutation rates to zero. of course, if the mutation rate goes to zero, this sets a bound on evolutionary versatility. the table below shows the parameters of the model and their baseline values. in the experiments that follow, i manipulate these parameters to determine their effects on the behaviour of the model. the meaning of the parameters should become clear as i describe the model.

table: the parameters of the model, with their baseline values (parameter name and description)
  pop size: number of individuals in the population
  run length: number of children born in one run of the model
  era length: number of children born in one era
  target change rate: fraction of the target that changes between eras
  tournament size: number of individuals sampled when selecting parents
  mutation code length: number of bits in the genome for encoding the mutation rate

the figure below is a description of the model of evolutionary versatility. in this model, a genome is a string of bits. the model is a steady-state genetic algorithm (as opposed to a generational genetic algorithm), in which children are born one at a time; in a generational genetic algorithm, the whole population is updated simultaneously, resulting in a sequence of distinct generations. parents are selected using tournament selection: the population is randomly sampled and the two fittest individuals in the sample are chosen to be parents (see the figure). the selective pressure can be controlled by varying the size of the sample. a new child is created by applying crossover to the parents. the new child then undergoes mutation, based on a mutation rate that is encoded in the child's genome. mutation can flip a bit from 0 to 1 or from 1 to 0 in the genome, or it can add or delete a bit, making the bit string longer or shorter. the initial section of a genome (the first mutation code length bits) encodes the mutation rate for that genome. the remainder of the genome, which may be null, encodes the phenotype. the phenotype is a bit string created from the genome by simply copying the bits of the genotype, beginning with the bit after the mutation code and continuing to the end of the genotype. if the length of the genome is exactly mutation code length, as it is when the simulation first starts running, then the phenotype is the null string. the fitness of the phenotype is determined by comparing it to a target. the target is a random string of bits. the fitness of the phenotype is the number of matching bits between the phenotype and the target; if the phenotype is null, the fitness is zero. the length of the target grows so that the target is always at least as long as the longest phenotype in the population. when a new child is born, if it is fitter than the least fit individual in the population, then it replaces the least fit individual. the target is held constant for an interval of time called an era; at the end of an era, the target is randomly changed. each time the target changes, it is necessary to recalculate the fitness of every individual. instead of dividing a run into a series of eras, the model could have been designed to have a small continuous change of the target for each new child that is born; this is a special case of the current model, where target change rate is small and era length is one. the main motivation for dividing the run into a series of eras is to increase the computational efficiency of the model, since it is computationally expensive to recalculate the fitness of every individual each time a new child is born. (actually, it would only really be necessary to recalculate the fitness of a few individuals each time a new child is born.) it could also be argued that organismal evolution is characterized by periods of stasis followed by rapid change (punctuated equilibria), so this feature of the model makes it more realistic.
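the figure that follows gives the step-by-step description of the model; as a complement, here is a minimal runnable python sketch of the same loop, under my own reading of that description. all names, the binary-fraction encoding of the mutation rate, and the default parameter values are illustrative assumptions rather than the paper's baseline settings.

import random

MUTATION_CODE_LENGTH = 8   # assumed; the paper's baseline value is not given here

def decode_mutation_rate(genome):
    # interpret the first MUTATION_CODE_LENGTH bits as a binary fraction in [0, 1)
    # (the binary-fraction encoding is an assumption; the paper's exact encoding is not shown)
    code = genome[:MUTATION_CODE_LENGTH]
    return sum(bit * 2.0 ** -(i + 1) for i, bit in enumerate(code))

def phenotype(genome):
    # everything after the mutation-rate code; may be empty
    return genome[MUTATION_CODE_LENGTH:]

def fitness(genome, target):
    # number of positions where the phenotype matches the target (0 for a null phenotype)
    return sum(1 for a, b in zip(phenotype(genome), target) if a == b)

# default arguments below are illustrative, not the paper's baseline values
def run_model(pop_size=20, run_length=20000, era_length=100,
              target_change_rate=0.3, tournament_size=3):
    # initial genomes contain only the mutation-rate code, so all phenotypes are null
    pop = [[random.randint(0, 1) for _ in range(MUTATION_CODE_LENGTH)]
           for _ in range(pop_size)]
    fits = [0] * pop_size
    target = []
    for child_num in range(1, run_length + 1):
        # tournament selection: sample with replacement, keep the two fittest as parents
        sample = random.choices(range(pop_size), k=tournament_size)
        sample.sort(key=lambda i: fits[i], reverse=True)
        parents = [pop[sample[0]], pop[sample[1]]]
        random.shuffle(parents)
        mom, dad = parents
        # one-point crossover; the child inherits dad's length
        cross = random.randint(1, min(len(mom), len(dad)) - 1)
        child = mom[:cross] + dad[cross:]
        # the mutation rate is read from the child's own genome
        rate = decode_mutation_rate(child)
        child = [1 - b if random.random() < rate else b for b in child]
        if random.random() < rate:
            # length mutation: add or remove one bit at the end, with equal probability
            if random.random() < 0.5:
                child.append(random.randint(0, 1))
            elif len(child) > MUTATION_CODE_LENGTH:
                child.pop()
        # the target grows so it is at least as long as the longest phenotype
        while len(target) < len(phenotype(child)):
            target.append(random.randint(0, 1))
        child_fit = fitness(child, target)
        # replace the least fit individual if the child is fitter
        # (the paper breaks ties by age; this sketch does not)
        worst = min(range(pop_size), key=lambda i: fits[i])
        if child_fit > fits[worst]:
            pop[worst], fits[worst] = child, child_fit
        # end of an era: partially randomize the target and recompute all fitnesses
        if child_num % era_length == 0:
            target = [1 - b if random.random() < target_change_rate else b
                      for b in target]
            fits = [fitness(g, target) for g in pop]
    return pop, fits, target

if __name__ == "__main__":
    pop, fits, target = run_model()
    versatility = [len(phenotype(g)) for g in pop]   # genome length minus code length
    print("mean evolutionary versatility:", sum(versatility) / len(versatility))
    print("mean fitness:", sum(fits) / len(fits))

the sketch omits the age-based tie-breaking when choosing which individual to replace, and any bookkeeping needed for the plots reported in the experiments below.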
figure: a description of the model of evolutionary versatility. the model is a genetic algorithm with crossover and mutation. mutation can flip a bit in the genome, or increase or decrease the genome length by one bit. the mutation rate is encoded in the genome. parents are chosen by tournament selection.

  set the parameter values: pop size (number of individuals in population), run length (number of children born in one run), era length (number of children born in one era), target change rate (fraction of target that changes between eras), tournament size (number of individuals sampled when selecting parents), mutation code length (number of bits in genome for encoding mutation rate)
  let pop be an array of pop size bit strings (the population)
  let each bit string pop(i) be a string of mutation code length randomly generated bits, where 0 and 1 are generated with equal probability (pop(i) is the i-th individual in pop)
  let target be an empty string (the goal string for determining fitness)
  let fit be an array of pop size integers (the fitness of pop)
  let each fit(i) be 0 (fit(i) is the initial fitness for pop(i))
  for child num = 1 to run length do (main loop)
    randomly sample tournament size individuals (bit strings) from pop, sampling with replacement, and take the two fittest individuals in the sample to be parents
    randomly let mom be one of the two parents and let dad be the other
    randomly pick a crossover point cross that falls inside the bit strings of both mom and dad (the parents may have different lengths)
    let child be the left side of mom's bit string, up to cross, followed by the right side of dad's bit string, after cross (thus length(child) equals length(dad))
    let mutate be set to the child's mutation rate, a fraction between 0 and 1, by interpreting the first mutation code length bits of child as an encoded fraction
    randomly flip bits in child, where the probability of flipping any bit is mutate
    randomly add or remove a bit at the end of child, with a probability of mutate, where adding and removing have equal probability, but do not remove a bit if length(child) is at the minimum required length (the mutation code length)
    if the phenotype of child is longer than target, then randomly add a bit to target (0 or 1 with equal probability)
    let child fit be the number of bits in child that match the bits in target, where the first bit in target is aligned with the first phenotype bit of child (if child is too short, child fit is 0)
    let worst be the oldest individual among the least fit individuals in pop, and let worst fit be the fitness of worst
    if child fit is greater than worst fit, then replace worst with child and replace worst fit with child fit
    if era length divides into child num with no remainder, then do
      randomly flip bits in target, where the probability of flipping any bit is target change rate
      recalculate the fitness fit(i) of every individual pop(i)
    end if
  end for

recall that evolutionary versatility is the number of independent dimensions along which variation can occur in evolution. in this model, the evolutionary versatility of a genome is the length of the genome minus the mutation code length; this is the length of the part of the genome that encodes the phenotype. the first mutation code length bits are not independent and they do not directly affect the phenotype, so i shall ignore them when counting the number of independent dimensions along which variation can occur. each remaining bit in the genome is an independent dimension along which variation can occur. the dimensions are independent because the fitness of the organism is defined as the number of matches between the phenotype and the target; that is, the fitness is the
sum of the fitnesses for each dimension fitness on one dimension a match on one bit has no impact on fitness on another dimension a match on another bit note that increasing evolutionary versatility increasing genome length does not necessarily imply increasing fitness because the additional bits do not necessarily match the target and a mutation rate that enables evolutionary versatility genome length to increase also makes the genome vulnerable to disruptive fitness reducing mutations however the design of the model implies that increasing genome length will tend to be correlated with increasing fitness related models the most closely related work is the model of bedau and seymour in bedau and seymour s model mutation rates are allowed to adapt to the demands of the environment they find that mutation rates adapt to an optimal level which depends on the evolutionary demands of the environment for novelty my model is similar in that mutation rates are also allowed to adapt other work with adaptive mutation rates includes bedau and seymour s model and my model are distinct from this other work in that we share an interest in the relationship between the adaptive mutation rates and the evolutionary demands turney of the environment for novelty the main difference between this paper and previous work is the different objective none of the previous papers were concerned with trends in evolution as far as i know this is the first model to show how it is possible for evolutionary versatility to increase without bound results of experiments with the model this section presents eight experiments with the model of evolutionary versatility the first experiment examines the behaviour of the model with the baseline parameter settings the second experiment runs the model for ten million births but is otherwise the same as the baseline case this experiment gives a lower resolution view of the behaviour of the model but over a much longer time scale these two experiments support the claim that the model can display unbounded evolutionary versatility given suitable parameter settings the third experiment uses the baseline parameter settings except that the target is held constant with a constant target the mutation rate eventually goes to zero and the population becomes static the results show that in this model unbounded evolutionary versatility requires a dynamically varying target the remaining experiments vary the parameters of the model one at a time these experiments show that the model is most sensitive to the parameters that determine the pace of change in the target in comparison the parameters that do not affect the target have relatively little influence on the behaviour of the model experiment baseline parameter values figure shows the results with the baseline parameter settings see table since the model is stochastic each run is different assuming the random number seed is different but the general behaviour is the same for all runs assuming the parameters are the same in this experiment i ran the model times and averaged the results across the runs for this experiment the length of an era is children at the start of each new era the unbounded evolutionary versatility figure experiment baseline parameter values these four plots show the fitness genome length mutation rate and fitness increase as functions of the number of children that have been born the target for the fitness function changes each time one hundred children are born the fitness increase is the increase in fitness since the 
most recent change in the target all values are averages over the whole population for one hundred separate runs of the baseline configuration individuals times runs yields samples per value fitness drops however the overall trend is towards increasing fitness see the first plot in figure although the probability that a mutation will increase the genome length is equal to the probability that a mutation will decrease the genome length there is a steady trend towards increasing genome length the second plot in figure the mutation rate decreases steadily third plot although the length of an era is fixed in each era the increase in fitness turney since the start of the era is greater than the corresponding increase for the previous era fourth plot this shows that the pace of evolution is accelerating evolutionary versatility mutation code length is given by the genome length minus the steady growth in the genome length in the second plot in figure shows that evolutionary versatility is increasing at least over the relatively short time span of this experiment experiment longer run length the steady decrease in the mutation rate in the first experiment suggests that the mutation rate might go to zero if the mutation rate is zero then the fitness can no longer increase without bound the fitness would vary randomly up and down as the target changed each era but the fitness would always be less than the genome length which would become a constant value in the second experiment i ran the model until children were born in order to see whether the trends in figure would continue over a longer time scale i ran the model times and averaged the results across the runs in the first experiment the population averages for fitness genome length mutation rate and fitness increase were calculated each time a new child was born in the second experiment to increase the speed of the model the population averages were only calculated each time children were born figure shows the results for the second experiment figure shows that the trends in figure continue in spite of the much longer time scale the only exception is the mutation rate which quickly falls from its initial value of to hover between and there is no indication that the mutation rate will go to zero however since the model is stochastic there is always a very small but probability that the mutation rate could go to zero in figure the fitness increase is calculated as the average fitness of the population at the end of an era minus the average fitness of the population at the start of the same era the unbounded evolutionary versatility figure experiment longer run length these four plots show the fitness genome length mutation rate and fitness increase as functions of the number of children that have been born as in the first experiment the target for the fitness function changes each time one hundred children are born all values are averages over the whole population for ten separate runs of the baseline configuration the values are calculated once for each ten thousand children that are born fitness increase is calculated each era and then the average fitness increase is calculated for each births since there are births in an era there are eras in each sample of births so each value in the plot of the fitness increase is the average of eras and runs the values in the other three plots fitness genome length and mutation rate are averages over runs turney experiment static target this experiment investigated the behaviour of the model when the 
target was static as in the second experiment the population averages for fitness genome length and mutation rate were calculated once every births i ran the model times and averaged the results across the runs i used the baseline parameter settings except for length and i set both length and to and i set to zero figure shows the results of the runs in all runs the mutation rate was zero for every member of the population long before children were born the longest run lasted for births the shortest run lasted for births and the average run lasted for births in comparison in the second experiment all runs ran for children with no sign that the mutation rate would ever reach zero these experiments support the claim that in this model unbounded evolutionary versatility requires a dynamic target the following two experiments investigate the amount of change in the target that is needed to ensure unbounded evolutionary versatility experiment varying rate of change of target in the fourth experiment the rate of change of the target was varied from to the run length was constant at the remaining parameters were set to their baseline values figure shows the behaviour of the model averaged over ten separate runs the time of the birth of the last novel child the time at which the mutation rate becomes zero for every member of the population was around the birth of the child when target change rate was but it quickly rose to around the child as target change rate approached see the first plot in figure it could not go past because was i conjecture that there is a threshold for target change rate at approximately where the average time of birth of the last novel child approaches infinity as approaches infinity unbounded evolutionary versatility figure experiment static target these three plots show the fitness genome length and mutation rate as functions of the number of children that have been born since the target is static the fitness increase is undefined all values are averages over the whole population for ten separate runs of the model the values are calculated once for each ten thousand children that are born when was of the ten runs made it all the way to the birth with a mutation rate above zero when the was this went to second plot in figure the average fitness of the population at the time of the birth of the child the final fitness rose steadily as target change rate increased from to third plot above it could not rise significantly because of the limit set by length i conjecture that it would rise to turney figure experiment varying rate of change of target in this experiment target change varies from its value in experiment to its value in experiments and the is when target change is about there is a qualitative change in the behaviour of the model this threshold appears to separate bounded evolutionary versatility as in experiment left of the vertical dotted line from unbounded evolutionary versatility as in experiment right of the vertical dotted line all values in the plots are based on ten separate runs of the model ity as rises to infinity the average mutation rate of the population at the time of the birth of the child the final mutation rate increased steadily as target change rate increased even past the threshold this experiment suggests that a relatively high amount of change is required to ensure unbounded evolutionary versatility that evolutionary versatility will increase without bound when the target is changed once every hundred children era the target must change by at 
least if there is less environmental change than this the mutation rate eventually drops to zero experiment varying length of era in the fifth experiment the length of an era was varied from to the was constant at the remaining parameters were set to their baseline values figure shows the behaviour of the model averaged over ten separate runs like experiment this experiment supports the hypothesis that a relatively high amount of change is required to ensure that evolutionary versatility will increase without bound when the target changes by each era the length of an era can not be more than children if the mutation rate is to stay above zero experiment varying tournament size in the sixth experiment the tournament size was varied from to the baseline value for was larger tournaments mean that there is more competition to become a parent so there is higher selective pressure the was constant at and the remaining parameters were set to their baseline values figure shows the behaviour of the model averaged over ten separate runs the results suggest that the model will display unbounded evolutionary versatility as long as tournament is more than about compared to era and target change rate tournament size the behaviour of the model is relatively robust with respect to the model displays unbounded evolutionary versatility for a relatively wide range of values of turney figure experiment varying length of era in this experiment era length varies from its value in experiments and to its length in experiment was the length is when length is about there is a qualitative change in the behaviour of the model this threshold appears to separate unbounded evolutionary versatility as in experiment left of the vertical dotted line from bounded evolutionary versatility as in experiment right of the vertical dotted line all values in the plots are based on ten separate runs of the model experiment varying population size in the seventh experiment the size of the population was varied from to the baseline value of population was the run was constant at and the remaining parameters were set to their baseline values figure shows the behaviour unbounded evolutionary versatility figure experiment varying tournament size in this experiment varies from to the run length tournament size is larger tournaments mean greater selective pressure the results suggest that there is unbounded evolutionary versatility as long as size is greater than about see the first two plots in the third plot the final fitness the average fitness of the population at the time of the birth of the child continues to rise even when tournament is greater than and of the runs reach the child with a mutation rate see the second plot this suggests that there is an advantage to higher selective pressure beyond what is needed to obtain unbounded evolutionary versatility of the model averaged over ten separate runs when the population is small the model is more susceptible to random variations with a turney figure experiment varying population size in this experiment varied from to the baseline value of population size population size is is these plots suggest that in most runs we will have unbounded evolutionary versatility even when the population size is only individuals however it appears that the model becomes less stable when the population size is below about with smaller populations there is more risk that the mutation rate could fall to zero by random chance large population the model will tend to behave the same way every time it runs 
figure suggests that the model becomes unstable when the population size is less than about although there is no sharp boundary at this is unlike experiment where there is a sharp boundary when is and experiment where there is a sharp boundary when is unbounded evolutionary versatility experiment varying number of bits for encoding mutation rate in the final experiment the number of bits in the genome used to encode the mutation rate was varied from to the baseline value of was the run length was constant at and the remaining parameters were set to their line values figure shows the behaviour of the model averaged over ten separate runs figure experiment varying number of bits for encoding mutation rate in this experiment the number of bits in the genome used to encode the mutation rate is varied from to the model displays unbounded evolutionary versatility when the number of bits was more than about when mutation length is less than it seems that quantization effects make the model unstable when the encoding is too short the ideal mutation rate may lie between zero and the smallest value that can be encoded so the genetic algorithm is forced to set the mutation rate to zero even though this is less than ideal turney the results suggest that the model will display unbounded evolutionary versatility when the is greater than about if the code length is less than bits the model becomes susceptible to quantization errors for example bits can only encode values if the ideal mutation rate is between and then the genome may be forced to set the mutation rate to zero although a value but less than would be better implications of the model i do not claim that the model shows that there is a trend towards increasing evolutionary versatility in organismal evolution i claim that the model supports the idea that under certain conditions it is possible for evolutionary versatility to increase without bound in this model there is active selection for increased evolutionary versatility there is a selective force that drives the increase it is not merely a statistical artifact due to a bounded diffusion process the model shows how a purely local selection process can yield a global trend the model also shows that the environment must be highly dynamic the target for the fitness function must change significantly and repeatedly to sustain increasing evolutionary versatility if the environment is not sufficiently dynamic the disruptive effects of mutation will outweigh the beneficial effects and selection will drive the mutation rates to zero when the mutation rate is zero throughout the population the genome length can no longer increase so the evolutionary versatility is bounded by the length of the longest genome in the population i believe that in fact there is a trend towards increasing evolutionary versatility in organismal evolution although the model does not and can not prove this belief the model suggests a way to test the belief because the model predicts that where there is increasing evolutionary versatility there should be an accelerating pace of evolution see the fourth plots in figures and therefore i predict that we will find evidence for an unbounded evolutionary versatility erating pace in the evolution of biological organisms it is difficult to objectively verify the claim that the pace of evolution is accelerating the natural measure of the pace of evolution is the historical frequency of innovations but the analysis is complicated by several factors one confounding factor is that the 
record of the recent past is superior to the record of the distant past which may give the illusion that there are more innovations in the recent past than the distant past another confounding factor is population growth we may expect more innovations in recent history simply because there are more innovators a third factor is difficulty of counting innovations there is a need for an objective threshold on the importance of the innovations so that the vast number of trivial innovations can be ignored i suggest some tests that avoid these objections i predict that the fossil record will show a decreasing recovery time from major catastrophes such as mass extinction events ice ages meteorite impacts and volcanic eruptions also i predict a decrease in the average lifetimes of species as they are by more recent species at an accelerating rate these two tests do not involve counting the frequency of innovations which makes them relatively objective limitations and future work there are several limitations to this work one limitation is that we can not run the model to infinity so we can not prove empirically that evolutionary versatility will grow to infinity i conjecture that with the baseline settings of the parameters table the expected mean average evolutionary versatility of the model will rise to infinity as rises to infinity this conjecture can be supported by empirical evidence figure but it can only be proven by theoretical argument i have not yet developed this theoretical argument another limitation of the model is its abstractness a more sophisticated model would include a mapping an internal implicit fitness function instead of the current external explicit fitness function a turney ping and fitness function that allow varying degrees of dependence and independence among the dimensions traits characteristics along which variation can occur in evolution the possibility of covariation coevolution multiple species relationships and so on however the point of this exercise was to make the model as abstract as possible in order to identify the minimum elements that are needed to display unbounded evolutionary versatility the abstractness of the model was intended to make it more clear and susceptible to analysis there might seem to be some conflict between this model and the no free lunch theorems informally the no free lunch theorems show that there is no universal optimization algorithm that is optimal for all fitness landscapes for example one no free lunch theorem theorem in shows that for any two optimization algorithms a and a the average fitness obtained by a equals the average fitness obtained by a when the average is calculated over all possible fitness landscapes sampled with uniform probability if my model can reach infinite fitness levels for some fitness landscapes does this violate a no free lunch theorem since then the average fitness must also be infinite there is no problem here because the no free lunch theorems are concerned with the fitness after a finite number of iterations not with the fitness after an infinite number of iterations in my case an infinite number of children the model that is presented here is not intended to be a new superior form of optimization algorithm the intent of the model is to show that it is possible under certain conditions for evolutionary versatility to increase without bound furthermore the model is intended to show that local selection in this case local to a certain period of time can drive a global trend global across all periods 
of time towards increasing evolutionary versatility the model is not universal it will only display unbounded increase in evolutionary versatility for certain parameter settings and for certain fitness landscapes the fitness landscape is defined by the parameters and and by the general design of the unbounded evolutionary versatility model figure experiments and show that the model appears to display unbounded evolutionary versatility for the baseline fitness landscape the fitness landscape that is defined by the parameter settings in table but experiments and show that there are neighbouring fitness landscapes for which evolutionary versatility is bounded the experiments here have only explored a few of the infinitely many possible fitness landscapes of the fitness landscapes that were explored here only a few appeared to display unbounded evolutionary versatility conclusions this paper introduces a simple model of unbounded evolutionary versatility the model is primarily intended to address the claim that natural selection can not produce a trend because it is a purely local process the model shows that local selection can produce a global trend towards increasing evolutionary versatility the model suggests that this trend can continue without bound if there is sufficient ongoing change in the environment for evolutionary versatility to increase without bound it must be possible for the lengths of genomes to increase if there is a bound on the length of the genomes then there must be a bound on the evolutionary versatility a model of unbounded evolutionary versatility must therefore allow mutations that occasionally change the length of a genome it seems possible that once genomes reach a certain length the benefit that might be obtained from greater length would be countered by the damage that mutation can do to the useful genes that have been found so far at this point evolutionary versatility might stop increasing to address this issue the model allows the mutation rate to adapt the experiments show that indeed if there is little change in the environment then the damage of mutation is greater than the benefit of mutation so the mutation rate goes to zero and evolutionary versatility stops increasing however if there is sufficient change in the environment it appears that the mutation rate reaches a stable value and evolutionary versatility continues to increase indefinitely turney perhaps the most interesting observation is that the fitness increase during an era grows over time see the fourth plots in figures and that is increasing evolutionary versatility leads to an accelerating pace of evolution one of the most interesting questions about this model is whether it plausible as a highly abstract model of the evolution of life on earth one test of its plausibility is to look for signs that the pace of organismal evolution is accelerating for example does the fossil record show a decreasing recovery time from major catastrophes is there a decrease in the average lifetimes of species if there is evidence that the pace of evolution is accelerating evolutionary versatility may be better able to account for this than the other seven live hypotheses it is not clear how any of the other hypotheses could be used to explain the acceleration although it seems to be a natural consequence of increasing evolutionary versatility acknowledgments thanks to the reviewers for their very helpful comments on an earlier version of this paper thanks to dan mcshea for many constructive criticisms and 
general encouragement references aboitiz lineage selection and the capacity to evolve medical hypotheses altenberg the evolution of evolvability in genetic programming in advances in genetic programming kinnear mit press anderson learning and evolution a quantitative genetics approach journal of theoretical biology ayala the concept of biological progress in studies in the philosophy of biology ed ayala new york macmillan ayala can progress be defined as a biological concept in evolutionary progress ed m nitecki pp chicago university of chicago press unbounded evolutionary versatility in genetic algorithms in varela and bourgine eds towards a practice of autonomous systems mit press pp baldwin a new factor in evolution american naturalist bedau and seymour adaptation of mutation rates in a simple model of evolution complexity international blickle and thiele a comparison of selection schemes used in genetic algorithms technical report no gloriastrasse zurich swiss federal institute of technology eth zurich computer engineering and communications networks lab tik blickle and thiele a mathematical analysis of tournament selection in proceedings of the sixth international conference on genetic algorithms eshelman morgan kaufmann san mateo ca pp davis adapting operator probabilities in genetic search proceedings of the third international conference on genetic algorithms morgan kaufmann san mateo ca pp dawkins the evolution of evolvability in artificial life langton dawkins climbing mount improbable new york norton and fogel fogel and atmar programming in chen ed proceedings of the asilomar conference on signals systems and computers pp california maple press gilinsky and good probabilities of origination persistence and extinction of families of marine invertebrate life paleobiology gould trends as changes in variance a new slant on progress and directionality in evolution journal of paleontology gould full house the spread of excellence from plato to darwin new turney york harmony hinton and nowlan how learning can guide evolution complex systems lewin red queen runs into trouble science mcshea metazoan complexity and evolution is there a trend evolution mcshea possible trends in organismal evolution eight live hypotheses annual review of ecology and systematics nitecki evolutionary progress edited collection chicago university of chicago press raup taxonomic survivorship curves and van valen law paleobiology riedl a approach to phenomena quarterly review of biology riedl order in living organisms a systems analysis of evolution translated by jefferies translation of die ordnung des lebendigen new york wiley ruse monad to man the concept of progress in evolutionary biology massachusetts harvard university press simon the architecture of complexity proceedings of the american philosophical society syswerda uniform crossover in genetic algorithms proceedings of the third international conference on genetic algorithms pp california morgan kaufmann syswerda a study of reproduction in generational and genetic algorithms foundations of genetic algorithms rawlins editor morgan kaufmann pp unbounded evolutionary versatility turney the architecture of complexity a new blueprint synthese turney how to shift bias lessons from the baldwin effect evolutionary computation turney increasing evolvability considered as a trend in evolution in wu proceedings of genetic and evolutionary computation conference workshop program workshop on evolvability pp van valen a new evolutionary law evolutionary theory 
vermeij adaptive versatility and skeleton construction american naturalist vermeij gastropod evolution and morphological diversity in relation to shell geometry journal of zoology vermeij biological versatility and earth history proceedings of the national academy of sciences of the united states of america vermeij adaptation versatility and evolution systematic zoology wagner and altenberg complex adaptations and the evolution of evolvability evolution whitley and kauth j genitor a different genetic algorithm proceedings of the rocky mountain conference on artificial intelligence denver pp whitley the genitor algorithm and selective pressure proceedings of the third international conference on genetic algorithms pp california morgan kaufmann whitley dominic das and anderson genetic reinforcement learning for neurocontrol problems machine learning turney williams adaptation and natural selection new jersey princeton university press wolpert and macready no free lunch theorems for optimization ieee transactions on evolutionary computation
crafting adversarial input sequences for recurrent neural networks
nicolas papernot and patrick mcdaniel (the pennsylvania state university, university park, pa), ananthram swami and richard harang (united states army research laboratory, adelphi, md)

abstract: machine learning models are frequently used to solve complex security problems, as well as to make decisions in sensitive situations like guiding autonomous vehicles or predicting financial market behaviors. previous efforts have shown that numerous machine learning models were vulnerable to adversarial manipulations of their inputs taking the form of adversarial samples. such inputs are crafted by adding carefully selected perturbations to legitimate inputs so as to force the machine learning model to misbehave, for instance by outputting a wrong class if the machine learning task of interest is classification. in fact, to the best of our knowledge, all previous work on adversarial sample crafting for neural networks considered models used to solve classification tasks, most frequently in computer vision applications. in this paper, we contribute to the field of adversarial machine learning by investigating adversarial input sequences for recurrent neural networks processing sequential data. we show that the classes of algorithms introduced previously to craft adversarial samples misclassified by neural networks can be adapted to recurrent neural networks. in an experiment, we show that adversaries can craft adversarial sequences misleading both categorical and sequential recurrent neural networks.

i. introduction

efforts in the machine learning and security communities have uncovered the vulnerability of machine learning models to adversarial manipulations of their inputs. specifically, approximations made by training algorithms, as well as the underlying linearity of numerous machine learning models, including neural networks, allow adversaries to compromise the integrity of their output using crafted perturbations. such perturbations are carefully selected so that they are often indistinguishable to humans, yet at the same time yield important changes of the output of the machine learning model. solutions making models more robust to adversarial perturbations have been proposed in the literature, but models remain largely vulnerable. the existence of this threat vector puts machine learning models at risk when deployed in potentially adversarial settings. a taxonomy of attacks against deep learning classifiers is introduced in previous work. to select perturbations changing the class label assigned by a neural network classifier, to any class different from the legitimate class or a specific target class chosen by the adversary, two approaches can be followed: the fast gradient sign method and the forward derivative method. both approaches estimate the model's sensitivity by differentiating functions defined over its architecture and parameters; the approaches differ in perturbation selection. these techniques were primarily evaluated on models trained to solve image classification tasks. such tasks simplify adversarial sample crafting because model inputs are linear and differentiable: images encoded as numerical vectors. thus, perturbations found for the model's input are easily transposed into the corresponding raw image. on the contrary, we study adversarial samples for models mapping sequential inputs, in a non-linear and non-differentiable manner, with categorical or sequential outputs. recurrent neural networks (rnns) are machine learning models adapted from neural networks to be suitable for learning mappings between
sequential inputs and outputs they are for instance powerful models for sentiment analysis which can serve the intelligence community in performing analysis of communications in terrorist networks furthermore rnns can be used for malware classification predicting sequential data also finds applications in stock analysis for financial market trend prediction unlike neural networks rnns are capable of handling sequential data of often rnns introduce cycles in their computational graph to efficiently model the influence of time the presence of cyclical computations potentially presents challenges to the applicability of existing adversarial sample algorithms based on model differentiation as cycles prevent computing gradients directly by applying the chain rule this issue was left as future work by previous work this is precisely the question we investigate in this paper we study a particular instance of adversarial we refer to as adversarial to mislead rnns into producing erroneous outputs we show that the forward derivative can be adapted to neural networks with cyclical computational graphs using a technique named computational graph unfolding in an experiment we demonstrate how using this forward derivative model jacobian an adversary can produce adversarial input sequences manipulating both the sequences output by a sequential rnn and classification predictions made by a categorical rnn such manipulations do not require the adversary to alter any part of the model s training process or data in fact perturbations instantly manipulate the model s output at test time after it is trained and deployed to make predictions on new inputs the contributions of this paper are the following we formalize the adversarial sample optimization problem in the context of sequential data we adapt crafting algorithms using the forward derivative to the specificities of rnns this includes showing how to compute the forward derivative for cyclical computational graphs we investigate transposing adversarial perturbations from the model s inputs to the raw inputs we evaluate the performance of our technique using rnns making categorical and sequential predictions on average changing words in a word movie review is sufficient for our categorical rnn to make wrong class predictions when performing sentiment analysis on reviews we also show that sequences crafted using the jacobian perturb the sequential outputs of a second rnn this paper is intended as a presentation of our initial efforts in an line of research we include a discussion of future work relevant to the advancement of this research topic ii a bout r ecurrent n eural n etworks to facilitate our discussion of adversarial sample crafting techniques in section iii we provide here an overview of neural networks and more specifically of recurrent neural networks along with examples of machine learning applications and tasks that can be solved using such models machine learning machine learning provides automated methods for the analysis of large sets of data tasks solved by machine learning are generally divided in three broad types supervised learning unsupervised learning and reinforcement learning when the method is designed to learn a mapping association between inputs and outputs it is an instantiation of supervised learning in such settings the output data nature characterizes varying problems like classification pattern recognition or regression when the method is only given unlabeled inputs the machine learning task falls under unsupervised learning 
common applications include dimensionality reduction or network finally reinforcement learning considers agents maximizing a reward by taking actions in an environment interested readers are referred to the presentation of machine learning in neural networks neural networks are a class of machine learning models that are useful across all tasks of supervised unsupervised and reinforcement learning they are made up of computing activation functions to their inputs in order to produce outputs typically processed by other neurons the computation performed by a neuron thus takes the following formal form h w where w is a parameter referred to as the weight vector whose role is detailed below in a neural network f neurons are typically grouped in layers fk a network always has at least two layers corresponding to the input and output of the model one or more intermediate hidden layers can be inserted between these input and output layers if the neuron link y t by w out w h t bh w in x t fig recurrent neural network the sequential input x is processed by time step value x t the hidden neuron evaluates its state h t at time step t by adding the result of multiplying the current input value x t with weight w in with the result of multiplying its previous state with weight w and the bias bh and finally applying the hyperbolic tangent t the output y multiplies the hidden neuron state by weight w out and adds bias by network possesses one or no hidden layer it is referred to as a shallow neural network otherwise the network is said to be deep and the common interpretation of the hidden layers is that they extract successive and hierarchical representations of the input required to produce the output neural networks are principally parameterized by the weights placed on links between neurons such weight parameters hold the model s knowledge and their values are learned during training by considering collections of inputs with their corresponding labels y in the context of supervised learning recurrent neural networks recurrent neural networks rnns are a variant of the vanilla networks described above that is adapted to the modeling of sequential data without such specificities vanilla neural networks do not offer the scalability required for the modeling of large sequential data the specificities of recurrent neural networks include most importantly the introduction of cycles in the model s computational graph which results in a form of parameter sharing responsible for the scalability to large sequences in other words in addition to the links between neurons in different layers recurrent neural networks allow for links between neurons in the same layer which results in the presence of cycles in the network s architecture cycles allow the model to share the are parameters of the links connecting neuron outputs and throughout successive values of a given input value at different time steps in the case of rnns equation thus becomes h t h w following the notation introduced in where h t is the neuron named time step t of the input sequence note that the cycle allows for the activation function to take into account the state of the neuron at the previous time step t thus the state can be used to transfer some aspects of the previous sequence time steps to upcoming time steps an example recurrent neural network throughout sections iii and illustrated in figure neuron iii c rafting a dversarial s equences in the following we formalize adversarial sequences we then build on techniques designed to craft 
adversarial samples for neural network classifiers and adapt them to the problem of crafting adversarial sequences for recurrent neural networks.

a. adversarial samples and sequences

adversarial samples: in the context of a machine learning classifier f, an adversarial sample x^* is crafted from a legitimate sample x by selecting the smallest perturbation \delta_x, according to a norm appropriate for the input domain, which results in the altered sample being misclassified in a class different from its legitimate class f(x). the adversarial target class can be a chosen class or any class different from the legitimate class. thus, an adversarial sample solves the following optimization problem, first formalized in earlier work:

x^* = x + \arg\min_{\delta_x} \{ \|\delta_x\| : f(x + \delta_x) \neq f(x) \}

in the case where the adversary is interested in any target class different from the legitimate class. finding an exact solution to this problem is not always possible, especially in the case of deep neural networks, due to their non-convexity and non-linearity; thus, previous efforts introduced techniques, discussed below, to find approximative solutions.

adversarial sequences: consider rnns processing sequential data. when both the input and output data are sequences, as is the case in one of our experiments, the formulation above does not hold, as the output data is not categorical. thus, the adversarial sample optimization problem needs to be generalized to specify an adversarial target vector y^*, which is to be matched as closely as possible by model f when processing the adversarial input. this can be stated as:

x^* = x + \arg\min_{\delta_x} \{ \|\delta_x\| : \|f(x + \delta_x) - y^*\| \leq \Delta \}

where y^* is the output sequence desired by the adversary, \|\cdot\| a norm appropriate to compare vectors in the rnn's input or output domain, and \Delta the acceptable error between the model output f(x^*) on the adversarial sequence and the adversarial target. an example norm to compare input sequences is the number of sequence steps perturbed. we detail below how approximative solutions to this problem can be found by computing the model's jacobian.

b. using the fast gradient sign method

fig.: unfolded recurrent neural network. this neural network is identical to the one depicted in the earlier figure, with the exception of its recurrence cycle, which is now unfolded. biases are omitted for clarity of the illustration.

the fast gradient sign method approximates the problem above by linearizing the model's cost function around its input and selecting a perturbation using the gradient of the cost function with respect to the input itself. this gradient can be computed by following the steps typically used for back-propagation during training; but instead of computing gradients with respect to the model parameters, with the intent of reducing the prediction error, as is normally the case during training, the gradients are computed with respect to the input. this yields the following formulation of adversarial samples:

x^* = x + \delta_x = x + \epsilon \, \mathrm{sgn}(\nabla_x c(f, x, y))

where c is the cost function associated with model f and \epsilon a parameter controlling the perturbation's magnitude. increasing the input variation parameter \epsilon increases the likeliness of x^* being misclassified, but simultaneously increases the perturbation's magnitude and therefore its distinguishability.

as long as the model is differentiable, the fast gradient sign method still applies if one inserts recurrent connections in the computational graph of the model. in fact, goodfellow et al. used the method to craft adversarial samples on a deep boltzmann machine, which uses recurrent connections to classify inputs of fixed size. the adversarial sample crafting method described above can thus be used with recurrent neural networks, as long as their loss is differentiable and their inputs continuous. we are, however, also interested in solving the sequence problem above for a model f processing input sequence steps; for this we turn to the forward derivative.
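as a concrete illustration of the fast gradient sign method applied to a recurrent model, the following python (pytorch) sketch computes the gradient of the loss with respect to an input sequence and shifts the sequence by \epsilon times the sign of that gradient. the toy lstm classifier, its dimensions, and the value of \epsilon are illustrative assumptions of mine, not the architecture or settings evaluated in this paper; the sketch only shows the test-time crafting mechanics.

import torch
import torch.nn as nn

# a small sequence classifier: an lstm followed by a linear layer (illustrative only)
class ToyRNN(nn.Module):
    def __init__(self, input_dim=32, hidden_dim=64, num_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                 # x: (batch, time, input_dim)
        h, _ = self.rnn(x)
        return self.out(h[:, -1, :])      # logits from the last time step

def fgsm_sequence(model, x, y, eps=0.05):
    # craft x* = x + eps * sign(grad_x loss), gradients taken w.r.t. the input, not the weights
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    model = ToyRNN()                       # untrained, so the effect is illustrative only
    x = torch.randn(1, 10, 32)             # one sequence of 10 time steps
    y = torch.tensor([1])                   # its (assumed) legitimate class
    x_star = fgsm_sequence(model, x, y)
    print("prediction before:", model(x).argmax(dim=1).item())
    print("prediction after: ", model(x_star).argmax(dim=1).item())

note that a perturbation crafted this way lives in the model's continuous input space; transposing such perturbations from the model's inputs back to raw discrete inputs is investigated separately in this paper.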
gradient sign method approximates the problem in equation by linearizing the model s cost function around its input and selecting a perturbation using the gradient of the cost function with respect to the input itself this gradient can be computed by following the steps typically used for during training but instead of computing gradients with respect to the model parameters with the intent of reducing the prediction error as is normally the case during training the gradients are computed with respect to the input this yields the following formulation of adversarial samples sgn c f y link where xi is the ith component of the input and fj the j th component of the output it precisely evaluates the sensitivity of output component fj to the input component xi it gives a quantified understanding of how input variations modify the output s value by component pair we leverage the technique known as computational graph unfolding to compute the forward derivative in the presence of cycles as is the case with rnns looking back at equation one can observe that to compute the neuronal state at time step t we can recursively apply the formula while decrementing the time step this yields the following h t h w w w which is the unfolded version of equation thus by unfolding its recurrent components the computational graph of a recurrent neural network can be made acyclic for instance figure draws the unfolded neural network corresponding to the rnn originally depicted in figure using this unfolded version of the graph we can compute the recurrent neural network s jacobian it can be defined as the following matrix j jf i j i where x i is the step i of input sequence y j is the step j of output sequence and i j t for input and output sequences of length using the definition of y j we have w out j j i i w out in j i w out in in i by unfolding recursively each time step of the hidden neuron s state until we reach j j we can write j i w out in i which can be evaluated using the as demonstrated by in the context of neural networks we can craft adversarial sequences for two types of rnn and the forward derivative previous work introduced adversarial saliency maps to select perturbations using the forward derivative in the context of classification neural networks due to space constraints we do not include an overview of saliency maps because we study a binary classifier in section iv thus simplifying perturbation selection indeed perturbing an input to reduce one class probability necessarily increases the probability given to the second class thus adversarial sequences are crafted by solely considering the jacobian jf j column corresponding to one of the output components j we now consider crafting adversarial sequences for models outputting sequences to craft an adversarial sequence from a legitimate input sequence we need to select a perturbation such that f is within an acceptable margin of the desired adversarial output hence approximatively solving equation consider the output sequence each jacobian s column corresponds to a step j of the output sequence we identify a subset of input components i with high absolute values in this column and comparably small absolute values in the other columns of the jacobian matrix these components will have a large impact on the rnn s output at step j and a limited impact on its output at other steps thus if we modify components i in the direction indicated by sgn jf i j the output sequence s step j will approach the desired adversarial output s component j this method 
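As a concrete illustration of computational graph unfolding and of the forward derivative, the following is a minimal NumPy sketch, not the paper's Theano implementation: a vanilla RNN is unrolled over time and its Jacobian J[i, j] = d y_j / d x_i is estimated by finite differences; the layer sizes, random weights and tanh non-linearity are illustrative assumptions. The last lines show the selection heuristic: to steer a chosen output step, perturb the input components with the largest corresponding Jacobian magnitudes.

import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h, d_out = 5, 3, 8, 2                    # sequence length and layer widths (toy values)
W_in  = rng.normal(scale=0.3, size=(d_h, d_in))
W     = rng.normal(scale=0.3, size=(d_h, d_h))
W_out = rng.normal(scale=0.3, size=(d_out, d_h))

def rnn_forward(x):
    # unfolded forward pass: x has shape (T, d_in); returns one output per step, shape (T, d_out)
    h = np.zeros(d_h)
    ys = []
    for t in range(T):                              # the recurrence is unrolled over time steps
        h = np.tanh(W_in @ x[t] + W @ h)
        ys.append(W_out @ h)
    return np.asarray(ys)

def jacobian(x, eps=1e-5):
    # finite-difference estimate of J[i, j] = d y_j / d x_i on the flattened input and output
    y0 = rnn_forward(x).ravel()
    J = np.zeros((x.size, y0.size))
    for i in range(x.size):
        xp = x.ravel().copy()
        xp[i] += eps
        J[i] = (rnn_forward(xp.reshape(x.shape)).ravel() - y0) / eps
    return J

x = rng.normal(size=(T, d_in))
J = jacobian(x)
j = 3 * d_out                                       # output step 3 (0-indexed), coordinate 0
i_star = int(np.argmax(np.abs(J[:, j])))            # most influential input component for that output
print(np.unravel_index(i_star, (T, d_in)))          # (input step, input coordinate)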
is evaluated in the second part of section iv iv e valuation we craft adversarial sequences for categorical and sequential rnns the categorical rnn performs a sentiment analysis to classify movie reviews in lieu of intelligence reports as positive or negative we mislead this classifier by altering words of the review the second rnn is trained to learn a mapping between synthetic input and output sequences the attack alters the model s output by identifying the contribution of each input sequence step recurrent neural networks with categorical output this rnn is a movie review classifier it takes as an input a sequence of performs a sentiment analysis to classify it as negative outputs or positive outputs we were able to achieve an error rate of on the training set by changing on average words in each of the reviews which are on average word long experimental setup we experiment with the long short term memory lstm rnn architecture lstms prevent exploding and vanishing gradients at training by introducing a memory cell which gives more flexibility to the selfrecurrent connections compared to a vanilla rnn allowing it to remember or forget previous states our rnn is composed of four lstm mean pooling and as shown in figure the mean pooling layer averages representations extracted by memory cells of the lstm layer while the softmax formats the output as probability vectors softmax layer mean pooling layer lst m x x t movie is terrific lstm layer lst m lst m lst m embeddings x x integers words this fig lst m x t rnn this recurrent model classifies movie reviews the rnn f is implemented in python with theano to facilitate symbolic gradient computations we train using a little over training and testing reviews reviews are sequences of words from a dictionary d that includes words frequently used in the reviews and a special keyword for all other words the dictionary maps words to integer keys we convert these integer sequences to matrices where each row encodes a word as a set of as word embeddings the matrices are used as the input to the rnn described above once trained the architecture achieves accuracies of and respectively on the training and testing tests the jacobian is a tensor and not a matrix because each word embedding is a vector itself so jf x i j is also a vector and jf has three dimensions as indicated in we consider the the softmax layer instead of its output probabilities to compute the jacobian because the gradient computations are more stable and the results are the same the maximum logit index corresponds to the class assigned to the sentence experimental setup the sequential rnn is described in figure we train on a set of synthetically generated input and output sequence pairs inputs have values per step and outputs values per step both sequences are steps long these values are randomly sampled from a standard normal distribution and for inputs and for outputs the random samples are then altered to introduce a strong correlation between a given step of the output sequence and the previous or last to previous step of the input sequence the model is trained for epochs at a learning rate of the cost is the mean squared error between model predictions and targets figure shows an example input sequence and the output sequence predicted input variable values adversarial sequences we now demonstrate how adversaries can craft adversarial sequences sentences misclassified by the model thus we need to identify dictionary words that we can use to modify the sentence in a way that 
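For orientation, a rough re-implementation sketch of the categorical architecture just described (word embeddings feeding an LSTM layer, mean pooling over time, then a softmax) is given below; it uses Keras rather than the paper's Theano code, and the vocabulary size, embedding width, number of LSTM units and padded length are placeholder values rather than the ones used in the experiments.

import numpy as np
import tensorflow as tf

vocab_size, embed_dim, lstm_units, seq_len = 20000, 128, 128, 400   # placeholder sizes

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim),          # word index -> embedding vector
    tf.keras.layers.LSTM(lstm_units, return_sequences=True),   # one representation per word
    tf.keras.layers.GlobalAveragePooling1D(),                   # mean pooling over the sequence
    tf.keras.layers.Dense(2, activation="softmax"),              # positive / negative probabilities
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# shape check on a dummy batch of three padded reviews
probs = model(np.zeros((3, seq_len), dtype="int32"))
print(probs.shape)   # (3, 2)

As noted above, the Jacobian used for crafting is taken with respect to the logits feeding this softmax rather than the probabilities it outputs.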
switches its predicted class from positive to negative or we turn to the attack described in section iii based on computing the model s jacobian we evaluate the jacobian with this respect to the embedding inputs jf i j i gives us a precise mapping between changes made to the word embeddings and variations of the output of the pooling for each word i of the input sequence sgn jf i f where f arg pj gives us the direction in which we have to perturb each of the word embedding components in order to reduce the probability assigned to the current class and thus change the class assigned to the sentence unlike previous efforts describing adversarial samples in the context of computer vision we face a difficulty the set of legitimate word embeddings is finite thus we can not set the word embedding coordinates to any real value in an adversarial sequence to overcome this difficulty we follow the procedure detailed in algorithm we find the word in dictionary d such that the sign of the difference between the embeddings of and the original input word is closest to sgn jf i f this embedding takes the direction closest to the one indicated by the jacobian as most impactful on the model s prediction by iteratively applying this heuristic to sequence words we eventually find an adversarial input sequence misclassified by the model we achieved an error rate of on the training set by changing on average words in each of the training reviews reviews are on average word long for instance we change the review i wouldn t rent this one even on dollar rental into the following misclassified adversarial sequence excellent wouldn t rent this one even on dollar rental the algorithm is inserting words with highly positive connotations in the input sequence to mislead the rnn model recurrent neural networks with sequential output this rnn predicts output sequences from input sequences although we use symthetic data models can for instance be applied to forecast financial market trends output variable values algorithm adversarial sequence crafting for the lstm model the algorithm iteratively modifies words i in the input sentence to produce an adversarial sequence misclassified by the lstm architecture f illustrated in figure require f d y f x x while f y do select a word i in the sequence w k arg sgn i jf i y k i w end while return input sequence output sequence time step fig example input and output sequences of our experimental setup in the input graph the solid lines indicate the legitimate input sequence while the dashed lines indicate the crafted adversarial sequence in the output solid lines indicate the training target output dotted lines indicated the model predictions and dashed lines the prediction the model made on the adversarial sequence adversarial sequences we compute the model s jacobian quantifies contributions of each input sequence step to each output sequence craft adversarial sequences for instance if we are interested in altering a subset of output steps j we simply alter the subset of input steps i with high jacobian values jf i j and low jacobian values jf i k for k j figure shows example inputs and outputs solid lines correspond to the legitimate input sequence and its target output sequence while small dotted lines in the output show model predictions which closely matches the target the adversarial crafted to modify value red of step and value blue of step it does so by only making important changes in the input sequence at value black of step and value red of step due to space constraints 
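The word-swap step of the algorithm can be sketched as follows. This is a simplified NumPy stand-in (toy embedding matrix and a random stand-in for the Jacobian sign) rather than the trained model from the experiments, but it shows the heuristic: pick the dictionary word whose embedding difference from the current word has the sign pattern closest to sgn(J_f[i, f(x)]).

import numpy as np

rng = np.random.default_rng(1)
vocab, embed_dim = 50, 16
E = rng.normal(size=(vocab, embed_dim))          # row w holds the embedding of dictionary word w

def best_substitute(current_word, jac_sign):
    # return the word whose embedding-difference sign agrees most with jac_sign
    diff_signs = np.sign(E - E[current_word])    # shape (vocab, embed_dim)
    agreement = (diff_signs == jac_sign).sum(axis=1)
    agreement[current_word] = -1                 # never return the word itself
    return int(np.argmax(agreement))

sentence = [3, 17, 42, 8]                        # toy sequence of word indices
i = 2                                            # position chosen for perturbation
jac_sign = np.sign(rng.normal(size=embed_dim))   # stand-in for sgn(J_f[i, f(x)])
sentence[i] = best_substitute(sentence[i], jac_sign)
print(sentence)

Iterating this substitution over word positions until the predicted class flips reproduces the loop structure of the algorithm above.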
completing these qualitative results with a detailed quantitative evaluation is left as future work d iscussion and r elated w ork this work is part of an active line of studies the behavior of machine learning models trained or deployed in adversarial settings the theoretical approach described in section iii is applicable to any neural network model with recurrent components independent of its output data type our experiments were performed on a lstm architecture with categorical outputs and a vanilla rnn model with sequential outputs as a preliminary validation of the approach albeit necessitating additional validation with other rnn model variants as well as datasets future work should also address the grammar of adversarial sequences to improve their semantic meaning and make sure that they are indistinguishable to humans in this paper we considered a threat model describing adversaries with the capability of accessing the model s computational the values of parameters learned during training in realistic environments it is not always possible for adversaries without some type of access to the system hosting the machine learning model to acquire knowledge of these parameters this limitation has been addressed in the context of deep neural network classifiers by the authors introduced a attack for adversaries targeting classifier oracles the targeted model can be queried for labels with inputs of the adversary s choice they used a substitute model to approximate the decision boundaries of the unknown targeted model and then crafted adversarial samples using this substitute these samples are also frequently misclassified by the targeted model due to a property known as adversarial sample transferability samples crafted to be misclassified by a given model are often also misclassified by different models however adapting such a attack method to rnns requires additional research efforts and is left as future work vi c onclusions models learned using rnns are not immune from vulnerabilities exploited by adversary carefully selecting perturbations to model inputs which were uncovered in the context of networks used for computer vision classification in this paper we formalized the problem of crafting adversarial sequences manipulating the output of rnn models we demonstrated how techniques previously introduced to craft adversarial samples misclassified by neural network classifiers can be adapted to produce sequential adversarial inputs notably by using computational graph unfolding in an experiment we validated our approach by crafting adversarial samples evading models making classification predictions and predictions future work should investigate adversarial sequences of different data types as shown by our experiments switching from computer vision to natural language processing applications introduced difficulties unlike previous work we had to consider the of data in our attack performing attacks under weaker threat models will also contribute to the better understanding of vulnerabilities and lead to defenses acknowledgments research was sponsored by the army research laboratory arl and was accomplished under cooperative agreement number arl cyber security cra the views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies either expressed or implied of the arl or the government the government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright 
notation r eferences szegedy zaremba sutskever bruna erhan goodfellow and fergus intriguing properties of neural networks in proceedings of the international conference on learning representations computational and biological learning society goodfellow et explaining and harnessing adversarial examples in proceedings of the international conference on learning representations computational and biological learning society papernot et the limitations of deep learning in adversarial settings in proceedings of the ieee european symposium on security and privacy ieee papernot mcdaniel goodfellow jha and practical attacks against deep learning systems using adversarial examples arxiv preprint papernot et distillation as a defense to adversarial perturbations against deep neural networks in proceedings of the ieee symposium on security and privacy ieee and goodfellow adversarial perturbations of deep neural networks in advanced structured prediction hazan papandreou and tarlow mcdaniel et machine learning in adversarial settings ieee security privacy vol no rumelhart hinton and williams learning representations by errors cognitive modeling vol pascanu et malware classification with recurrent networks in ieee icassp ieee pp goodfellow bengio and courville deep learning book in preparation for mit press murphy machine learning a probabilistic perspective mit press krizhevsky sutskever and hinton imagenet classification with deep convolutional neural networks in advances in neural information processing systems pp dahl stokes deng and yu malware classification using random projections and neural networks in ieee icassp ieee pp et deep neural network for traffic sign classification neural networks vol pp bishop pattern recognition machine learning freedman statistical models theory and practice cambridge university press goodfellow et deep boltzmann machines in advances in neural information processing systems pp mozer a focused algorithm for temporal pattern recognition complex systems vol no pp werbos generalization of backpropagation with application to a recurrent gas market model neural networks vol pp hochreiter and schmidhuber long memory neural computation vol no pp bergstra breuleux bastien lamblin and theano a cpu and gpu math expression compiler in proceedings of the python for scientific computing conference scipy vol austin tx maas et learning word vectors for sentiment analysis in proceedings of the annual meeting of the association for computational linguistics human language technologies portland oregon usa pp hinton learning distributed representations of concepts in proceedings of the eighth annual conference of the cognitive science society vol amherst ma mesnil x he deng and bengio investigation of architectures and learning methods for spoken language in interspeech pp barreno nelson et can machine learning be secure in proceedings of the acm symposium on information computer and communications security acm pp
9
a characterization for asymptotic dimension growth jan goulnara arzhantseva graham niblo nick wright and jiawen zhang abstract we give a characterization for asymptotic dimension growth we apply it to cat cube complexes of finite dimension giving an alternative proof of wright s result on their finite asymptotic dimension we also apply our new characterization to geodesic coarse median spaces of finite rank and establish that they have subexponential asymptotic dimension growth this strengthens a recent result of and wright introduction the concept of asymptotic dimension was first introduced by gromov in as a coarse analogue of the classical topological covering dimension it started to attract much attention in when yu proved that the novikov higher signature conjecture holds for groups with finite asymptotic dimension fad a lot of groups and spaces are known to have finite asymptotic dimension among those are for instance finitely generated abelian groups free groups of finite rank gromov hyperbolic groups mapping class groups cat cube complexes of finite dimension see for an excellent survey of these and other results recently behrstock hagen and sisto introduced the powerful new notion of hierarchically hyperbolic spaces and showed that these have finite asymptotic dimension recovering a number of the above results including notably mapping class groups and a number of cat cube complexes on the other hand there are many groups and spaces with infinite asymptotic dimension examples are the wreath product the grigorchuk group the thompson groups etc generalizing fad dranishnikov defined the asymptotic dimension growth for a space if the asymptotic dimension growth function is eventually constant then the space has fad dranishnikov showed that the wreath product of a finitely generated nilpotent group with a finitely generated fad group has polynomial asymptotic dimension growth he also showed that polynomial asymptotic dimension growth implies yu s property a and hence the coarse conjecture provided the space has bounded geometry later ozawa extended this result to spaces with subexponential growth see also bell analyzed how the asymptotic dimension function is affected by various constructions mathematics subject classification key words and phrases asymptotic dimension growth cat cube complex coarse median space mapping class group partially supported by the european research council erc grant of goulnara arzhantseva no and the fellowship trust by royal society goulnara arzhantseva graham niblo nick wright and jiawen zhang in this paper we give an alternative characterization for the asymptotic dimension growth function which is inspired by brown and ozawa s proof of property a for gromov s hyperbolic groups theorem which is in turn inspired by we use this to study two notable examples cat cube complexes of finite dimension and coarse median spaces of finite rank the techniques used to study these examples are developments of those used by and wright to establish property a for uniformly locally finite coarse median spaces of finite rank as a byproduct we obtain a new proof of finite asymptotic dimension for cat cube complexes which allows one to explicitly construct the required controlled covers this compares with wright s original proof which is discussed below cat cube complexes are a nice class of curved spaces first studied by gromov who gave a purely combinatorial condition for recognizing the curvature of cube complexes many groups act properly on cat cube complexes for 
instance artin groups many small cancellation groups and thompson s groups admit such actions this makes it possible to deduce properties of these groups from the corresponding properties of the cat cube complexes in wright proved that the asymptotic dimension of a cat cube complex x is bounded by its dimension he proved this by constructing a family of cobounded maps to cat cube complexes of at most the same dimension indexed by we use our characterization for finite asymptotic dimension to give a direct proof of this result namely we construct uniformly bounded covers with suitable properties being more explicit this proof loses however the sharp bound on the asymptotic dimension thus we give an alternative proof of the following variant of wright s theorem theorem let x be a cat cube complex of finite dimension then x has finite asymptotic dimension the key point in our approach is to analyse the normal cube path distance on the cube complex introduced by niblo and reeves we consider the ball with respect to the normal cube path distance rather than to the ordinary edgepath distance we decompose such a ball into intervals and use induction on the dimension in order to construct some separated net satisfying a suitable consistency property in the process we give a detailed analysis of normal balls and normal spheres balls and spheres with respect to the normal cube path distance see section for all details our second application is to coarse median spaces they were introduced by bowditch as a coarse variant of classical median spaces the notion of a coarse median group leads to a unified viewpoint on several interesting classes of groups including gromov s hyperbolic groups mapping class groups and cat cubical groups bowditch showed that hyperbolic spaces are exactly coarse median spaces of rank and mapping class groups are examples of coarse a characterization for asymptotic dimension growth median spaces of finite rank he also established interesting properties for coarse median spaces such as rapid decay the property of having quadratic dehn function etc intuitively a coarse median space is a metric space equipped with a ternary operator called the coarse median in which every finite subset can be approximated by a finite median algebra in these approximations the coarse median is approximated by an actual median with the distortion being controlled by the metric this extends gromov s observation that in a space finite subsets can be well approximated by finite trees recently and wright proved that a coarse median space with finite rank and at most exponential volume growth has property a following their proof and using our characterization for asymptotic dimension growth we obtain the following result theorem let x be a geodesic coarse median space with finite rank and at most exponential volume growth then x has subexponential asymptotic dimension growth hierarchically hyperbolic spaces are examples of coarse median spaces see hence our theorem is broader in scope though with a weaker conclusion than the finite asymptotic dimension result proven in we expect the following general result conjecture every geodesic coarse median space with finite rank has finite asymptotic dimension by a result of ozawa subexponential asymptotic dimension growth implies property a thus our theorem strengthens the result of the paper is organized as follows in section we give some preliminaries on asymptotic dimension growth cat cube complexes and coarse median spaces in section we provide a 
characterization of the asymptotic dimension growth function and as a special case give a characterization of finite asymptotic dimension sections and deal with cat cube complexes in section we study normal balls and spheres which are essential in our approach to prove theorem in section section deals with the coarse median case and we prove theorem there preliminaries asymptotic dimension the notion of asymptotic dimension was first introduced by gromov in as a coarse analogue of the classical lebesgue topological covering dimension see also let x d be a metric space and r we call a family u ui of subsets in x if for any u in u d u r where d u inf d x x u we write g ui joint goulnara arzhantseva graham niblo nick wright and jiawen zhang for the union of ui a family v is said to be uniformly bounded if mesh v sup diam v v v is finite let u ui be a cover of x and r we define the of u denoted by mr u to be the minimal integer n such that for any x x the ball b x r intersects at most n elements of u as usual m u denotes the multiplicity of a cover u that is the maximal number of elements of u with a intersection a number is called a lebesgue number of u if for every subset a x with diameter there exists an element u u such that a u the lebesgue number l u of the cover u is defined to be the infimum of all lebesgue numbers of u definition we say that the asymptotic dimension of a metric space x does not exceed n and we write asdim x n if for every r the space x can be covered by n subspaces xn and each xi can be further decomposed into some uniformly bounded subspaces n xi xi g r disjoint xij and sup diam xij i j we say asdim x n if asdim x n and asdim x is not less than here are basic examples of spaces and groups with finite asymptotic dimension example asdim zn n for all n n where z is the group of integers gromov s spaces word hyperbolic groups have finite asymptotic dimension from the definition it is easy to see that the asymptotic dimension of a subspace is at most that of the ambient space there are other equivalent definitions of asymptotic dimension we list one here for a later use and guide the reader to for others proposition let x be a metric space then asdim x n if and only if for any r there exists a uniformly bounded cover u of x such that mr u n asymptotic dimension growth llet us consider the direct sum of infinitely many copies of the integers g z since for any n n the group zn is contained in g by the above mentioned results g has infinite asymptotic dimension in order to deal with such dranishnikov studied the following concept as a generalization of the property of having a finite asymptotic dimension definition let x d be a metric space define a function adx min m u u is a cover of x l u which is called the asymptotic dimension function of x a characterization for asymptotic dimension growth note that adx is monotonic and lim adx asdim x like in the case of the volume function the growth type of the asymptotic dimension function is more essential than the function itself recall that for f g we write f g if there exists k n such that for any x k f x kg kx k we write f g if both f g and g f it is clear that is an equivalence relation we define the growth type of f to be the class of f define the asymptotic dimension growth of x to be the growth type of adx by a result of bell and dranishnikov the growth type of the asymptotic dimension function is a invariant proposition let x and y be two discrete metric spaces with bounded geometry if x and y are then adx ady in 
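In symbols, the two definitions above can be written as follows (a reconstruction, using m(U) for the multiplicity and L(U) for the Lebesgue number of a cover introduced above):

\[
\operatorname{asdim} X \le n
\;\Longleftrightarrow\;
\forall\, r>0:\;
X = X_0 \cup \cdots \cup X_n,
\quad
X_i = \bigsqcup_{j} X_i^{j}
\ \text{with the } X_i^{j}\ r\text{-disjoint and}\ 
\sup_{i,j} \operatorname{diam} X_i^{j} < \infty ,
\]
\[
\operatorname{ad}_X(\lambda) \;=\; \min\bigl\{\, m(\mathcal U) \;:\; \mathcal U \text{ is a cover of } X,\ L(\mathcal U) \ge \lambda \,\bigr\}.
\]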
particular the asymptotic dimension growth is for finitely generated groups we give an alternative equivalent definition of the asymptotic dimension growth that is used in our characterization lemma let x be a metric space and define f x adx then ad f x min u u is a cover of x ad proof given suppose u is a cover of x with l u for any u u define the inner of u to be u x x u where denotes the usual of the set and we define u u u u since l u u is still a cover of x by definition it is obvious that f x adx u m u which yields ad conversely suppose u is a cover of x consider u which has lebesgue number not less than it is easy to show m u u which implies fx adx ad f x as the definition for the by the preceding lemma we can use either adx or ad asymptotic dimension function recall that if there exists a polynomial subexponential function f such that adx f then x is said to have polynomial subexponential asymptotic dimension growth dranishnikov has shown that polynomial asymptotic dimension growth implies yu s property a and he gave a class of groups having such property proposition let n be a finitely generated nilpotent group and g be a finitely generated group with finite asymptotic dimension then the wreath product n g has goulnara arzhantseva graham niblo nick wright and jiawen zhang polynomial asymptotic dimension growth in particular z z has polynomial asymptotic dimension growth cat cube complexes we recall basic notions and results on the structure of cat cube complexes we omit some details and most of the proofs but direct the readers to for more information a cube complex is a polyhedral complex in which each cell is isometric to a euclidean cube and the gluing maps are isometries the dimension of the complex is the maximum of the dimensions of the cubes for a cube complex x we can associate it with the intrinsic dint which is the minimal on x such that each cube embeds isometrically when x has finite dimension dint is a complete geodesic metric on x see for a general discussion on polyhedral complex and the associated intrinsic metric there is also another metric associated with x let x be the of x that is a graph with the vertex set v x we equip v with the metric d which is the minimal number of edges in a path connecting two given vertices clearly when x is connected d is a geodesic metric on for x y v the interval is defined by x y z v d x y d x z d x y that is it consists of all points on any geodesic between x and y a geodesic metric space x d is cat if all geodesic triangles in x are slimmer than the comparative triangle in the euclidean space for a cube complex x dint gromov has given a combinatorial characterization of the cat condition x is cat if and only if it is simply connected and the link of each vertex is a flag complex see also another characterization of the cat condition was obtained by chepoi see also a cube complex x is cat if and only if for any x y z v the intersection x y y z z x consists of a single point x y z which is called the median of x y z in this case we call the graph x a median graph and v equipped with the ternary operator m is indeed a median algebra in particular the following equations hold y z u v v x x y x x y z x y z where is any permutation of x y z x y z u v x u v y u v z obviously x y z x y and x y z v x y z z lemma let x y z w v such that z w x y then z x w implies w z y proof since z x w and w x y we have z x w z and x w y so z w y z x w w y z w y x w y w z w y w w w which implies w z y lemma for x y z v and d z y x z x y or x y x z proof 
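Two pieces of notation from this paragraph are used constantly later on and are worth recording in symbols: for vertices of the 1-skeleton,

\[
[x,y] \;=\; \{\, z \in V \;:\; d(x,y) = d(x,z) + d(z,y) \,\},
\qquad
[x,y] \cap [y,z] \cap [z,x] \;=\; \{\, m(x,y,z) \,\},
\]

the second equality being Chepoi's characterization of the CAT(0) condition: the triple intersection is a single vertex, the median of x, y, z.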
by chepoi s result x is a median graph hence it is weakly modular see so d x y d x z which implies d x y d x z or d x z d x y x z x y or x y x z a characterization for asymptotic dimension growth a cat cubical complex x can be equipped with a set of hyperplanes each hyperplane does not intersect itself and divides the space into two halfspaces given two hyperplanes h k if the four possible intersections of halfspaces are all nonempty then we say h crosses k denoted by h this occurs if and only if h and k cross a common cube c also denoted by h c furthermore given a maximal collection of pairwise intersecting hyperplanes there exists a unique cube which all of them cross thus the dimension of x is the maximal number of pairwise intersecting hyperplanes we can also define intervals in the language of hyperplanes x y consists of points which lie in all halfspaces containing x and y we call a subset y v convex if for any x y y x y y obviously halfspaces are convex since any geodesic crosses a hyperplane at most once this also implies d x y hyperplane h h separates x from y coarse median spaces according to gromov hyperbolic spaces can be considered locally as a coarse version of trees in the sense that every finite subset can be approximated by a finite tree in a controlled way if one wants to approximate a space locally by finite median algebras graphs this would turn to the definition of coarse median spaces introduced by bowditch see for details definition let x be a metric space and x be a ternary operation we say that x is a coarse median space and is a coarse median on x if the following conditions hold there exist constants k h such that b c x a b c k a b c h there exists a function h n with the following property for a finite subset a x with p there exists a finite median algebra and maps a x such that y z a a x y z h p and a h p we refer to k h as the parameters of x furthermore if there exists d n such that we can always choose the median algebra in condition above of rank at most then we say x has coarse rank at most a finitely generated group is said to be coarse median if some cayley graph has a coarse median note that by definition a coarse median on a group is not required to be equivariant under the group action remark according to bowditch without loss of generality we may always assume that satisfies the median axioms and for all a b c x goulnara arzhantseva graham niblo nick wright and jiawen zhang a a b a a b c b c a b a c a large class of groups and spaces have been shown to be coarse median including gromov s hyperbolic groups artin groups mapping class groups cat cube complexes etc bowditch has proved that coarse median groups have property of rapid decay quadratic dehn s function etc this yielded a unified way to prove these properties for the groups recently and wright have proved that coarse median spaces of finite rank and of at most exponential volume growth have yu s property a characterization for asymptotic dimension growth in this section we establish a characterization for asymptotic dimension growth and obtain several interesting consequences of this main result for instance we get a characterization for a group to have finite asymptotic dimension theorem let x d be a discrete metric space and f be a function then the following are equivalent adx f there exists a function g which has the same growth type as f such that n x we can assign a subset s x k l x satisfying i n such that s x k l b x sl for all k ii n with k x we have s x k l s x l iii y x with d x y l 
we have s x k d x y l s x k l s y k l for k d x y s x k d x y l s x k l s y k l for k d x y iv n x we have x k l g l proof by lemma we can assume that there exists a function g with g f such that n there exists a uniformly bounded cover u of x with u g l suppose u ui i i and choose xi ui for k and x x we define s x k l xi b x k ui now let us check the four properties in condition i if b x k ui we can assume y b x k ui now d y xi mesh u so d x xi k mesh u mesh u in other words s x k l b x mesh u ii it is immediate by our definition of sets s x k l iii y x with d x y l d x y we have s x k d x y l xi b x k d x y ui a characterization for asymptotic dimension growth now if b x k d x y ui we can assume z b x k d x y ui z ui and d z x k d x y so d z y k z b y k ui so b y k ui which implies s x k d x y l s x k l s y k l on the other hand d x y suppose x j s x l s y l we can assume that x j s y l b y u j which implies b x d x y u j so we have s x d x y l s x l s y l iv n x we have x k l xi b x k ui u g l s s x l l also h we define ah y h n let h s y l l we define ul ah h h since x if we take h s x l l then x ah so ul is a cover of x since such that s x l l b x sl we know that d h y sl for all y ah which implies mesh ul sl finally let us analyse ml ul x consider h h with b x l ah take y b x l ah d y x l and h s y l l now by assumptions in condition we have s y l l s x l d x y l s x l so ml ul x l g l finally by lemma we have f x g adx ad taking in the preceding theorem a constant function f we obtain a characterization for finite asymptotic dimension corollary let x d be a discrete metric space n then the following are equivalent asdim x n n x we can assign a subset s x k l x satisfying i n such that s x k l b x sl for all k ii n with k x we have s x k l s x l iii y x with d x y l we have s x k d x y l s x k l s y k l for k d x y s x k d x y l s x k l s y k l for k d x y iv n x we have x k l n now we turn to the case when x is a graph and obtain a characterization for finite asymptotic dimension which is easier to check corollary given a graph x v e with vertices v and edges e and equipped with the length metric d then the following are equivalent asdim x n goulnara arzhantseva graham niblo nick wright and jiawen zhang n x we can assign a subset s x k l x satisfying i n such that s x k l b x sl for all k ii n with k x we have s x k l s x l iii y x with d x y with x and y connected by an edge we have s y k l s x k l for all k iv n x l n remark the only distinction between the above two corollaries is that in corollary assumption is required only for endpoints of an edge rather than for an arbitrary pair of points as in corollary we point out that the preceding corollaries can be generalized to the case of arbitrary asymptotic dimension growth we will not use such a generalization so we omit it proof of corollary is implied directly by corollary so we focus on s s x l l following the proof of in proposition n let h and h define ah y h s y l l define ul ah h h since x if we take h s x l l then x ah so ul is a cover of x since such that s x l l b x sl we know d h y sl for all y ah which implies mesh ul sl finally let us analyse ml ul x consider h h with b x l ah take y b x l ah d y x l and h s y l l by the definition of the length metric d we know that there exists a sequence of vertices y yk x such that yi v for all i k d yi for all i k and k now by the hypothesis we know s y l l s l l s l l s yk l k l s x k l l s x l so h h b x l ah s x l which implies ml ul x l n normal cube path and normal distance 
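Since conditions (i)-(iv) are the ones verified in sections 4-6, it may help to record the graph version in symbols; the following is a reconstruction of the corollary above, with the exact constants as in its statement: there exist n and an increasing sequence (s_l) such that for every l and every vertex x there are sets S(x,k,l), 0 <= k <= l, with

\[
\text{(i)}\ S(x,k,l) \subseteq B(x, s_l),
\qquad
\text{(ii)}\ S(x,k,l) \subseteq S(x,k',l)\ \text{ for } k \le k',
\]
\[
\text{(iii)}\ S(y,k-1,l) \subseteq S(x,k,l)\ \text{ whenever } x,y \text{ span an edge and } 1 \le k \le l,
\qquad
\text{(iv)}\ \#\, S(x,l,l) \le n+1 .
\]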
in the next two sections we focus on cat cube complexes and prove theorem we prove it by constructing a uniformly bounded cover with suitable properties such a construction relies deeply on the analysis of normal balls and spheres which we give in this section normal cube paths which were introduced by niblo and reeves in play a key role in the construction of the cover they determine a distance function on the vertices and the balls and spheres defined in terms of this distance are essential in our proof of theorem throughout this section we fix a cat cube complex x with a fixed vertex the x of x is a graph with vertex set v x and edge set e which give us the edge metric d on this is the restriction of the metric to the normal cube paths given a cube c x we denote by st c the union of all cubes which contain c as a subface a characterization for asymptotic dimension growth definition let ci be a sequence of cubes such that each cube has dimension at least and ci consists of a single point denoted by vi call ci a cube path if ci is the unique cube of minimal dimension containing vi and vi and are diagonally opposite vertices of ci define to be the vertex of diagonally opposite to and to be the vertex of cn diagonally opposite to vn the vertices vi are called the vertices of the cube path and we say the cube path is from to the length of a cube path is the number of the cubes in the sequence a cube path is called normal if ci st vi normal cube paths in cat cube complexes behave like geodesics in trees more precisely in the existence and uniqueness of normal cube paths connecting any pair of vertices is established see also proposition for any two vertices x y v there exists a unique normal cube path from x to y note that the order is important here since in general normal cube paths are not reversible proposition the intersection of a normal cube path and a hyperplane is connected in other words a normal cube path crosses a hyperplane at most once proposition let ci and d j be two normal cube paths in x and let vi and w j be the vertices of these normal cube paths if d and d then for all k we have d vk wk we omit the proofs for the above three propositions the readers can find them in the original paper however let us recall the construction of the normal cube path from x to y as follows consider all the hyperplanes separating x from y and adjacent to x the key fact is that these hyperplanes all cross a unique cube adjacent to x lying in the interval from x to y this cube is defined to be the first cube on the normal cube path then one proceeds inductively to construct the required normal cube path we will also need the following lemma abstracted from lemma let ci be the normal cube path and h be a hyperplane if h ci then a hyperplane k such that k and h does not intersect with proof otherwise we have h now by lemma in we know that there exists a cube c x such that all such k c and h c and is a face of moreover c contains an edge e of ci since h so st ci contains e which is a contradiction to the definition of normal cube path now for any two vertices of x we consider all the hyperplanes separating them with a partial order by inclusion more explicitly for any x y v let h x y be the set of hyperplanes separating x and y for any h h x y let be the halfspace containing x define h k if note that the definition depends goulnara arzhantseva graham niblo nick wright and jiawen zhang on the vertices we choose and we may change them under some circumstances but still write for abbreviation to avoid 
ambiguity we point out the vertices if necessary we write h k to mean a strict containment lemma for any h k h x y h and k do not intersect if and only if h k or k proof we only need to show the necessity let ci be the normal cube path from x to y and assume h ci k c j since h and k do not intersect i j assume i j obviously x and y since h ci and k c j by proposition since h does not intersect with k we have which implies combining the above two lemmas we have the following result on the existence of chains in h x y proposition let ci be the normal cube path from x to y and h be a hyperplane such that h cl then there exists a chain of hyperplanes hl h such that hi ci proof by lemma there exists a hyperplane k such that k and h does not intersect with define inductively we can define a sequence of hyperplanes as required then the conclusion follows by lemma finally we give a lemma used in the proof of the consistency part of our main theorem lemma let x y v with y x and let be the vertex on the normal cube path from to x and to y if then y proof otherwise y by the construction of the normal cube path we know is also the vertex on the normal cube path from to y since y x in other words which is a contradiction to the assumption normal metric we define a new metric on v x using normal cube paths definition for any x y v define dnor x y to be the length of the normal cube path from x to y we call dnor the normal metric on one needs to verify that dnor is indeed a metric it is easy to see that dnor x y and dnor x y if and only if x y note that the normal cube path from x to y is not the one from y to x in general so the symmetric relation is not that obvious in order to show the symmetric relation and the triangle inequality we give the following characterization lemma for x y v let be the relation defined as above then dnor x y sup m hm hi h x y a characterization for asymptotic dimension growth proof suppose ci is the normal cube path from x to y so dnor x y n denote the right hand side of the equality in the lemma by now for any chain hm in h x y by proposition hi intersects with just one cube denoted by ck i obviously if h k ci then h so k i k j if i j which implies m n so on the other hand for any h cn by proposition we have a chain of hyperplanes hn h such that hi ci which implies n proposition dnor is indeed a metric on proof by lemma h x y h y x and as posets they carry opposite orders one can thus deduce dnor x y dnor y x for x y z v h x y y z h x z where is the symmetric difference operation the inclusions of h x y h x z into h x y and h y z h x z into h y z are both order preserving and therefore by lemma we have dnor x z dnor x y dnor y z normal balls and normal spheres recall that for any two points x y in v x the interval between them is x y z v d x y d x z d z y in other words x y is the set of vertices on the union of all the edge geodesics from x to y a subset y v is called convex if for any x y y x y y now let b x n be the closed ball in the edge metric with centre x v and radius generally b x n is not convex for example take x however as we will see for the normal metric balls are convex more precisely we define the normal ball with centre x v and radius n to be bnor x n y v dnor x y n and the normal sphere with centre x v and radius n to be snor x n y v dnor x y n lemma bnor x n is convex for all x v and n proof given z w bnor x n and a geodesic from z to w if bnor x n we can assume u is the first vertex on which is not in bnor x n which implies dnor u n let be the vertex 
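In symbols, writing h^- for the halfspace of a hyperplane h in H(x;y) that contains x, the relation and the characterization just stated read (a reconstruction, using the convention that hyperplanes closer to x come earlier in a chain):

\[
h \le k \;:\Longleftrightarrow\; h^{-} \subseteq k^{-},
\qquad
d_{\mathrm{nor}}(x,y) \;=\; \max\bigl\{\, m \;:\; h_1^{-} \subsetneq h_2^{-} \subsetneq \cdots \subsetneq h_m^{-},\ h_i \in H(x;y) \,\bigr\}.
\]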
preceding u on so dnor n since dnor u since d u there exists a unique hyperplane h separating from u so h u h h now according to lemma there exists a chain h in h u with hi h since every geodesic intersects with any hyperplane at most once see for example w which implies h is also a chain in h w this is a contradiction to dnor w n by lemma since the intersection of two convex sets is still convex we have the following corollary corollary for any x v and n n the set x bnor n is convex goulnara arzhantseva graham niblo nick wright and jiawen zhang it is well known that for a convex subset y in a cat cube complex and a point v y there is a unique point in y which is closest to v see for example this statement is true both for the intrinsic cat metric on the cube complex and the edge metric on the vertex set and we have a similar statement for the normal distance proposition there exists a unique point v x bnor n such that x bnor n v the point v is characterized by d v max d x bnor n furthermore if dnor x n then v x snor n which implies that v is also the unique point in x snor n such that d v max d x snor n proof if there exist z w x bnor n such that d z d w attains the maximum consider the median m z w x by corollary m z w x bnor n so d m d z d w while m z x w x so m z w which is a contradiction by corollary v x bnor n conversely for any u x bnor n let m u v x u v by corollary m x bnor n while m v x so d m d v which implies m v by the choice of v u v x v so v u x now by lemma u v now for x n satisfying dnor x n if v x bnor n take a geodesic from v to x and let v yk x be the vertices on since dnor x n x v which implies k now for since v x d v d by the definition of v we know x bnor n so bnor n however since d v dnor v we have dnor dnor v dnor v n which is a contradiction to use the above proposition more flexibly we give another characterization of v which can also be viewed as an alternative definition of in the rest of this subsection we fix x v and n n with dnor x be the normal cube path from to x and vn be the proposition let ci n vertex on this normal cube path then v which is provided by proposition to prove this result let us focus on subsets in h x recall that h x is endowed with the relation as defined prior to lemma definition a subset a h x is called closed under if a and k h k a a characterization for asymptotic dimension growth lemma let be the vertex on the normal cube path from to x then h is maximal in the following sense for any closed a h x which contains chains only with lengths at most n a h proof we proceed by induction on suppose that the lemma holds for n and let be the n vertex on the normal cube path from to x given a closed a h x containing chains only with lengths at most n and a maximal chain hm in a if m n then the closed set h a h hm contains chains only with lengths at most n by induction it is contained in h h now for m n similarly h a h h which implies hi ci for i n so ck for some k n if k by proposition and the closeness of a we get a chain in a with length greater than n which is a contradiction so h proof of proposition by proposition x bnor n v which implies h h v however h v is closed and contains chains only with lengths at most n according to lemma so h v h by lemma which implies h v h so h v h v which implies v finally we characterize those points in x which lie in the intersection x snor n this will be used in the next subsection to decompose x snor n into a union of intervals let be the cube on the normal cube path from to x and v is the vertex on 
the cube path as above let hn be the set of all hyperplanes intersecting with proposition for w x the following are equivalent w x snor n hn such that h crosses the last cube on the normal cube path from to w hn such that h separates w from and w v proof by proposition w v since dnor w n by lemma the maximum length of chains in h w is take such a chain in h w h v obviously hi intersects with different cubes which implies hi ci so hn and it separates w from since h separates w from h must cross some cube c on the normal cube path from to since h hn we know there is a chain h in h v which is also a chain in h w so h can not cross the first n cubes of the normal cube path from to if h does not cross the last cube then dnor w however w v implies h w h v by lemma dnor v n which is a contradiction this is immediate by lemma goulnara arzhantseva graham niblo nick wright and jiawen zhang we have another description for hn which is implied by proposition directly lemma for h h x h hn if and only if the maximal length of chains in k h x k h is decomposition of x snor n we want to decompose the set x snor n so that we can proceed by the induction on dimension in the proof of theorem throughout this subsection we fix x v and n n with dnor x n and let v be as defined in proposition at the end of the preceding subsection we have defined hn to be the set of all hyperplanes intersecting with where ci is the normal cube path from to x now we decompose x n into a union of intervals with dimensions lower than x and the number of these intervals can be controlled by the dimension of x this will make it possible to do induction on the dimension definition for h hn we define fh w x snor n h separates w from by proposition we immediately obtain the following two lemmas lemma fh w x h crosses the last cube on the normal cube path from to w w v h separates w from s lemma x snor n fh by definition we know fh x bnor n h separates from which implies that fh is convex moreover we will show that fh is actually an interval lemma let xh fh be the point minimising d xh then fh xh v proof since fh is convex and xh v fh so xh v fh on the other hand fh let m z xh so m fh and d m d xh by the choice of xh we know that d m d xh which implies m xh so xh z by proposition xh z v thus by lemma z xh v s proposition x snor n xh v and dim xh v dim x proof we only need to show dim xh v dim x for any hyperplane k crossing xh v by proposition k so dim xh v dim x now we give another characterization for xh which is useful in the proof of the consistency condition of theorem a characterization for asymptotic dimension growth lemma let xh be the closest point to on fh then xh is the unique point in bnor n such that h separates from xh and for any hyperplane k h k does not separate xh from proof since xh fh we have xh x bnor n and h separates from xh now for any hyperplane k h if k separates xh from we have xh and choose xh such that since k does not separate from so d d xh however by lemma fh which is a contradiction it remains to show that xh is the unique point satisfying these conditions otherwise let be another point satisfying the hypothesis in the lemma and xh let k be a hyperplane separating from xh and assume xh obviously k if k h by hypothesis k does not separate xh from as well as from which is a contradiction since k separates from xh so k does not cross h which implies by lemma however so by lemma dnor xh dnor this is a contradiction since dnor xh n as h separates from xh and wright s construction we conclude this section with a 
recent application of normal cube paths which were invoked by and wright in order to provide a new proof that finite dimensional cat cube complexes have yu s property a the key to their proof was the construction of a family of maps hl with the property that for any interval and any neighbourhood of an endpoint of the interval the maps push that neighbourhood into the interval itself these maps were defined in terms of the normal cube paths as follows definition the h maps given l n we define hl x x as follows for x x let hl x be the vertex on the normal cube path from x to if dnor x and let it be if dnor x lemma let hl be defined as above and y b x then hl y x proof we only need to show that every halfspace containing x and contains also z hl y for any hyperplane h such that one of the associated halfspaces say contains x and either y or y in the former case z so we only need to check the case that h separates x from denote by cm the normal cube path from y to and denote by y vm the vertices on this cube path we shall argue that any hyperplane separating y from x is used within the first d x y steps on the cube path suppose that the cube ci does not cross any hyperplane h with h separating y from x hence every hyperplane k ci separates y x from if there was a hyperplane l separating y from x before ci then necessarily l separates y from x hence l crosses all the hyperplanes k crossing ci this contradicts the maximality of this step on the normal cube path thus there is no such l and so all the hyperplanes h separating y from x must be crossed within the first d x y steps goulnara arzhantseva graham niblo nick wright and jiawen zhang since z is the vertex on the cube path and d x y all the hyperplanes h separating y from x must have been crossed before z thus any such h actually also separates y from x z we will use the remarkable properties of the h maps to construct the s sets defined in our characterization of finite asymptotic dimension in the next section finite dimensional cat cube complexes throughout this section we fix a cat cube complex x of finite dimension and equipped with a basepoint x we will make use of the characterization obtained in corollary in order to prove theorem constructing the sets s x k l by corollary in order to prove x has finite asymptotic dimension we need to find a constant n n such that n x we can assign a subset s x k l x satisfying i ii iii iv n such that s x k l b x sl for all k n with k x s x k l s x l y x with d x y s y k l s x k l for all k n x l n now for l n k and x x we define e s x k l hl b x k it is easy to show that e s x k l satisfies i to iii but it does not satisfy iv above so we need some modification intuitively we construct s x k l as a uniformly separated net in e s x k l to be more precise we require the following lemma lemma there exist two constants n k only depending on the dimension such that n v there are subsets cx x and maps px x p cx where p cx denotes the power set of cx satisfying if d x y and y x then cx y cy and px y p y for z x w px z we have d z w kl x b z ml cx n where m we postpone the proof of the above lemma and first show how to use it to construct s x k l and hence to conclude the proof of theorem proof of theorem let n k be the constants in lemma n x let e s x k l hl b x k be as above and by lemma e we know s x k l x now we define s x k l px e s x k l and the only thing left to complete the proof is to verify the conditions in corollary a characterization for asymptotic dimension growth i by the definition of hl we 
know b x k d y hl y so for any s x k l d z x for such z and any w px z by lemma we know d z w kl which implies s x k l b x k l b x ml ii n with k x we have e s x k l e s x l now immediately by the definition s x k l s x l iii y x with d x y by lemma y x or x y assume the former let k obviously e s y k l e s x k l so we have e e s y k l p y s y k l px y s y k l px e s y k l px e s x k l s x k l here we use the first part of lemma in the second equation on the other hand e s x k l e s y k l so we have s x k l px e s x k l px e s y k l e px y s y k l p y e s y k l s y k l here we use the first part of lemma in the fourth equality iv by i we know that s x k l b x ml for all k hence by definition s x k l b x ml cx now by the third part of lemma we have x k l the last thing is to prove lemma we use the analysis in section to construct cx and px inductively recall that in section proposition for any l n n and any x x we have x snor nl xh v with and dim xh v dim x in order to carry out induction on the dimension of x we require a stronger version of lemma which is more flexible on the choice of endpoints of intervals more explicitly we have lemma there exist two constants n k only depending on the dimension such that n x v x x and a map x x p x satisfying if d x y and y x then x y y and x y y for z x w x z we have d z w kl x b z ml x n where m it is obvious that lemma is implied by lemma one just needs to take now we prove lemma proof of lemma fix an l we will carry out induction on dim x given any x v with dim x we define x y x dnor y ln goulnara arzhantseva graham niblo nick wright and jiawen zhang where ln l since dim x x is indeed isometric to an interval in we define x x p x as follows for any y x x y consists of a single point which is at distance y from in y where is the function of taking integer part now it is obvious that if d x y and y x then x y y and x y y for any z x and w x z we have d z w l x b z ml x suppose for any x v with dim x we have defined x x and a map x x p x satisfying if d x y and y x then x y y and x y y for z x and w x z we have d z w l x b z ml x now we focus on x v with dim x for any n n with nl dnor x by proposition xh vxnl fxh x snor nl x x x where vxnl is the farthest point from in x nl hnl is the set of hyperplanes crossing the cube of the normal cube path from to x and we also have dim xh vxnl dim x by induction cxh vxnl and pxh vxnl have already been defined now we define x cxh vxnl x and x x x for any z x let be the vertex on the normal cube path from to z where n z so dnor z l which implies d z and xh vxnl x snor nl x now define n o x x z pxh vxnl h hnl and xh vxnl and we need to verify the requirements hold for x and x first suppose d x y and y x and let be the hyperplane separating x from y given n n such that y snor nl by proposition y vxnl is the vertex on the normal cube path from to x and vnl is the vertex on the normal cube path from to y due to the property y proposition d vxnl vnl by proposition we have y vnl snor nl y snor nl x vxnl a characterization for asymptotic dimension growth recall that h z w denotes the set of all hyperplanes separating z from obviously h x h y y y x which implies hnl hnl hnl by lemma and lemma y x if hnl then y by proposition on the other hand hnl by lemma yh is the unique point in bnor nl such that h separates from yh and for any hyperplane k h k does not separate y from this implies yh xh y x since hnl hnl so we can do induction for the new base point yh xh and y y y x vnl vnl since d vxnl vnl and vnl xh vxnl this 
implies y cxh vxnl xh vnl cxh vy nl since cxh vxnl xh vxnl we have cxh vxnl y cxh vxnl xh vxnl y cxh vxnl xh xh vxnl y y y y claim xh vxnl y vnl indeed if vxnl vnl then it holds naturally if vxnl vnl y y y then by lemma vxnl y since d vxnl vnl so vxnl vnl y or vnl vxnl y y y while the former can not hold since vnl y y so vnl vxnl y which implies y vnl xh y xh vxnl vxnl y y vnl xh vxnl y by the claim y cxh vxnl y cxh vxnl xh vnl cxh vy nl now for the above n we have cxh vxnl y x y cxh vxnl y y x cxh vy y nl cyh vy y nl y since x x snor nl we have x y x x y x y n x nl x y n y nl y y n y nl y one need to show that x z y z let be the vertex on the normal cube path from to z where n z by the analysis above we know y y y y x hnl d vxnl vnl vnl vxnl y xh yh hnl hnl y x for h hnl with xh vxnl then h hnl h since y now for such h y y y xh vxnl xh vnl yh vnl goulnara arzhantseva graham niblo nick wright and jiawen zhang where the first equation comes from the claim above inductively we know for such h pxh vxnl p yh vy nl now by definition n o x pxh vxnl h hnl and xh vxnl n o y pxh vxnl h hnl and xh vxnl n o y y p yh vy h hnl and yh vnl x z nl y z second for any z x and w x z assume that w pxh vxnl for some x h hnl and xh vxnl as in the definition by induction we know d w since dim xh vxnl so l third for any z x consider b z ml x suppose n n satisfying s x b z ml x snor nl so b z ml x xh v which means nl x x there exists some h hnl such that b z ml xh vnl for such n and h let z xh vxnl xh vxnl obviously d z w d z d w b z ml xh vxnl b ml xh vxnl by induction we have b z ml cxh vxnl b ml cxh vxnl now for the above z there exist at most values of n such that b z ml x x snor nl and for such n since there exist at most hyperplanes h x such that b z ml x xh vnl so we have x b z ml x b z ml x n b z ml x nl x x b z ml cxh vxnl x n as above now we take k and n then the lemma holds for these constants coarse median spaces in this section we discuss the coarse median case and prove theorem we fix a coarse median space x with geodesic metric and coarse median with parameters k h and finite rank the definitions and notations are the same as in section according to remark we also assume that the coarse median satisfies and we recall a characterization for asymptotic dimension growth theorem any geodesic uniformly locally finite coarse median space of finite rank and at most exponential growth has property a our result theorem says that any coarse median space as above has subexponential asymptotic dimension growth thus combining with ozawa s result our theorem yields a strengthening of theorem to prove theorem we use several notations and lemmas from we use the notation x y for x y given r and a b x the coarse interval a b r is defined to be a b r z x a b z z by a result of bowditch there exists a constant depending only on the parameter k h such that for all x y z x x y z x y also recall that the median axiom holds in the coarse median case up to a constant depending only on the parameters k h for all x y z u v x we have x y z u v x u v y u v z actually we can take h h given r t denote r k r r k r h and r t rt we need the following lemmas from lemma let x be a coarse median space r and let a b x x a b then a x r a b r lemma let x be a geodesic coarse median space of rank at most for every and t there exists rt such that for all r rt a b x there exists h a b r such that a h r t and b a rt a b a h r lemma let x be a coarse median space fix there exist constants depending only on the parameters of the coarse median 
structure and such that the following holds let a b h m x and r satisfy m a h r h a b r then p m b h satisfies h p proof of theorem the proof is based on the construction used in to prove property a and for the readers convenience we give a sketch of their proof in fact we will verify the stronger conditions on the s sets required to apply theorem fix a base point x and let be the constants from lemma first apply lemma for and all t n to obtain a sequence rt n such that the conclusion of the lemma holds furthermore we can choose the rt tr inductively to arrange the sequence n t lt t is increasing goulnara arzhantseva graham niblo nick wright and jiawen zhang now fix x x t n and k for any y b x k lemma applied for a y b and r rt produces a point h y y rt we define s x k lt h y x y b x k we need to verify these sets satisfy condition in the statement of proposition we need to show there exists a subexponential function f r r satisfying i n such that s x k lt b x st for all k ii n with k x we have s x k lt s x lt iii y x with x y lt we have s x k x y lt s x k lt s y k lt for k x y s x k x y lt s x k lt s y k lt for k x y iv n x we have x k lt f lt by the construction ii and iii hold naturally for i by lemma we know s x k lt b x lt rt t the only thing left is to find a subexponential function f such that condition iv holds the following argument follows totally from the proof in and we omit some calculation the readers can turn to their original paper for more details take y b x k with the notation as above denote m y x y then by lemma one can deduce that m y y h y rt now since h y y rt lemma implies the point p y m y h y m y satisfies h y p y as m y x y x lemma now implies p y x consequently we have x p y trt rt which depends linearly on lt now by proposition in the number of possible points p y is bounded by p lt for some polynomial p depending only on h k and uniform local finiteness of x since x has at most exponential growth it follows that x k lt is at most p lt crt for some constants c take f lt p lt crt and recall that in the limit rt we extend f to a function on by setting f r f lt for r lt this completes the proof references behrstock hagen and sisto asymptotic dimension and for hierarchically hyperbolic spaces and groups behrstock hagen and sisto hierarchically hyperbolic spaces ii combination theorems and the distance formula bell and dranishnikov asymptotic dimension in topology proc bell growth of the asymptotic dimension function for groups arxiv bestvina bromberg and fujiwara the asymptotic dimension of mapping class groups is finite bowditch coarse median spaces and groups pacific journal of mathematics bowditch embedding median algebras in products of trees geometriae dedicata a characterization for asymptotic dimension growth bridson and metric spaces of curvature volume of grundlehren der mathematischen wissenschaften fundamental principles of mathematical sciences springerverlag berlin brown and ozawa and approximations volume of graduate studies in mathematics american mathematical society providence ri campbell and niblo hilbert space compression and exactness of discrete groups journal of functional analysis chatterji and niblo from wall spaces to cat cube complexes international journal of algebra and computation chepoi graphs of some cat complexes advances in applied mathematics dranishnikov groups with a polynomial dimension growth geometriae dedicata gromov hyperbolic groups in essays in group theory pages springer gromov asymptotic invariants of infinite groups 
Geometric group theory, London Math. Soc. Lecture Note Series.
Isbell. Median algebra. Transactions of the American Mathematical Society.
Kaimanovich. Boundary amenability of hyperbolic spaces. In Discrete Geometric Analysis, Contemp. Math., Amer. Math. Soc., Providence, RI.
Niblo and Reeves. The geometry of cube complexes and the complexity of their fundamental groups. Topology.
Nica. Cubulating spaces with walls. Algebr. Geom. Topol.
Nowak and Yu. Large Scale Geometry.
Oppenheim. An intermediate invariant between subexponential asymptotic dimension growth and Yu's property A. Internat. J. Algebra Comput.
Ozawa. Metric spaces with subexponential asymptotic dimension growth. International Journal of Algebra and Computation.
Reeves. Biautomatic structures and combinatorics for cube complexes. PhD thesis, University of Melbourne.
Roe. Hyperbolic groups have finite asymptotic dimension. Proceedings of the American Mathematical Society.
Roller. Poc sets, median algebras and group actions: an extended study of Dunwoody's construction and Sageev's theorem. Southampton Preprint Archive.
Sageev. Ends of group pairs and non-positively curved cube complexes. Proceedings of the London Mathematical Society.
Smith. The asymptotic dimension of the first Grigorchuk group is infinity. Revista Matemática Complutense.
Špakula and Wright. Coarse medians and property A.
Wright. Finite asymptotic dimension for CAT(0) cube complexes. Geometry & Topology.
Yu. The Novikov conjecture for groups with finite asymptotic dimension. The Annals of Mathematics.
Yu. The coarse Baum-Connes conjecture for spaces which admit a uniform embedding into Hilbert space. Inventiones Mathematicae.
Zeidler. Coarse median structures on groups. Master's thesis, University of Vienna, Vienna, Austria.
Address: Universität Wien, Fakultät für Mathematik, Wien, Austria.
Address: School of Mathematics, University of Southampton, Highfield, United Kingdom. E-mail address: wright
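A remark on the h_l maps used above: their defining feature, that a small neighbourhood of a point x is pushed into the interval between the basepoint and x, can already be seen in the rank-one toy case X = Z with basepoint 0, where the normal cube path from a vertex to the basepoint is simply the geodesic and h_l moves a vertex at most l steps towards 0. The following Python sketch is only an illustration of that special case under our reading of the definition and lemma; the function names are ours and this is not the construction used for higher-dimensional complexes.

def h(l, y):
    # Rank-one toy model on X = Z with basepoint 0: the normal cube path
    # from y to 0 steps one vertex at a time towards 0, so h_l moves y
    # at most l steps towards the basepoint (hypothetical illustration only).
    if y >= 0:
        return max(y - l, 0)
    return min(y + l, 0)

def in_interval(z, x):
    # z lies in the interval [0, x], i.e. between the basepoint 0 and x.
    return min(0, x) <= z <= max(0, x)

# Brute-force check of the lemma in this toy case: h_l maps the ball B(x, l)
# into the interval [0, x].
ok = all(
    in_interval(h(l, y), x)
    for l in range(1, 8)
    for x in range(-30, 31)
    for y in range(x - l, x + l + 1)  # y ranges over the ball B(x, l)
)
print(ok)  # expected: True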
4
normalizing flows on riemannian manifolds nov mevlana gemici google deepmind mevlana danilo rezende google deepmind danilor shakir mohamed google deepmind shakir abstract we consider the problem of density estimation on riemannian manifolds density estimation on manifolds has many applications in optics and plasma physics and it appears often when dealing with angular variables such as used in protein folding robot limbs and in general directional statistics in spite of the multitude of algorithms available for density estimation in the euclidean spaces rn that scale to large n normalizing flows kernel methods and variational approximations most of these methods are not immediately suitable for density estimation in more general riemannian manifolds we revisit techniques related to homeomorphisms from differential geometry for projecting densities to and use it to generalize the idea of normalizing flows to more general riemannian manifolds the resulting algorithm is scalable simple to implement and suitable for use with automatic differentiation we demonstrate concrete examples of this method on the sn in recent years there has been much interest in applying variational inference techniques to learning large scale probabilistic models in various domains such as images and text one of the main issues in variational inference is finding the best approximation to an intractable posterior distribution of interest by searching through a class of known probability distributions the class of approximations used is often limited approximations implying that no solution is ever able to resemble the true posterior distribution this is a widely raised objection to variational methods in that unlike mcmc the true posterior distribution may not be recovered even in the asymptotic regime to address this problem recent work on normalizing flows inverse autoregressive flows and others referred collectively as normalizing flows focused on developing scalable methods of constructing arbitrarily complex and flexible approximate posteriors from simple distributions using transformations parameterized by neural networks which gives these models universal approximation capability in the asymptotic regime in all of these works the distributions of interest are restricted to be defined over high dimensional euclidean spaces there are many other distributions defined over special homeomorphisms of euclidean spaces that are of interest in statistics such as beta and dirichlet gaussian wrapped cauchy and fisher which find little applicability in variational inference with large scale probabilistic models due to the limitations related to density complexity and gradient computation many such distributions are unimodal and generating complicated distributions from them would require creating mixture densities or using auxiliary random variables mixture methods require further knowledge or tuning number of mixture components necessary and a heavy computational burden on the gradient computation in general with quantile functions further mode complexity increases only linearly with mixtures as opposed to exponential increase with normalizing flows conditioning on auxiliary variables on the other hand constrains the use of the created distribution due to the need for integrating out the auxiliary factors in certain scenarios in all of these methods computation of gradients is difficult due to the fact that simulation of random variables can not be in general reparameterized rejection sampling in this work we present 
methods that generalizes previous work on improving variational inference in rn using normalizing flows to riemannian manifolds of interest such as spheres sn tori tn and their product topologies with rn like infinite cylinders figure left construction of a complex density on sn by first projecting the manifold to rn transforming the density and projecting it back to sn right illustration of transformed densities corresponding to an uniform density on the sphere blue empirical density obtained by monte carlo red analytical density from equation green density computed ignoring the intrinsic dimensionality of sn these special manifolds m rm are homeomorphic to the euclidean space rn where n corresponds to the dimensionality of the tangent space of m at each point a homeomorphism is a continuous function between topological spaces with a continuous inverse bijective and bicontinuous it maps point in one space to the other in a unique and continuous manner an example manifold is the unit the surface of a unit ball which is embedded in and homeomorphic to see figure in normalizing flows the main result of differential geometry that is used for computing the density updates is given by and represents the relationship between differentials infinitesimal volumes between two equidimensional euclidean spaces using the jacobian of the function rn rn that transforms one space to the other this result only applies to transforms that preserve the dimensionality however transforms that map an embedded manifold to its intrinsic euclidean space do not preserve the dimensionality of the points and the result above become obsolete jacobian of such transforms rn rm with m n are rectangular and an infinitesimal cube on rn maps to an infinitesimal parallelepiped on the manifold the relation between these volumes is given by det g where g is the metric induced by the embedding on the tangent space tx m the correct formula for computing the density over m now becomes z z f f z f det g rn rn det the density update going from the manifold to the euclidian space sn rn is then given by q q p f det f det as an application of this method on the sn we introduce inverse stereographic transform and define it as u rn sn ut u ut u which maps rn to sn in a bijective and bicontinuous manner the determinant of the metric g x associated with this transformation is given by t det g det x x xt x using these formulae on the left side of figure we map a uniform density on to enrich this density using normalizing flows and then map it back onto to obtain a or arbitrarily complex density on the original sphere on qthe right side of figure we show that the density update based on the riemannian metric det red is correct and closely follows the kernel density estimate based on samples blue we also show that using the generic volume transformation formulation for dimensionality preserving transforms green leads to an erroneous density and do not resemble the empirical distributions of samples after the transformation references rezende mohamed and wierstra stochastic backpropagation and approximate inference in deep generative models in icml kingma and welling variational bayes in iclr karol gregor ivo danihelka alex graves danilo jimenez rezende and daan wierstra draw a recurrent neural network for image generation in icml sm eslami nicolas heess theophane weber yuval tassa koray kavukcuoglu and geoffrey e hinton attend infer repeat fast scene understanding with generative models arxiv preprint danilo jimenez rezende shakir mohamed ivo 
Danihelka, Karol Gregor, and Daan Wierstra. One-shot generalization in deep generative models. In ICML.
Matthew Hoffman, David Blei, Chong Wang, and John Paisley. Stochastic variational inference. Journal of Machine Learning Research.
Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint.
Diederik Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. CoRR.
Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP.
Tim Salimans, Diederik Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: bridging the gap. In Francis Bach and David Blei, editors, ICML, JMLR Workshop and Conference Proceedings.
Arindam Banerjee, Inderjit Dhillon, Joydeep Ghosh, and Suvrit Sra. Clustering on the unit hypersphere using von Mises-Fisher distributions. Mach. Learn., December.
Siddharth Gopal and Yiming Yang. Von Mises-Fisher clustering models. In Tony Jebara and Eric Xing, editors, Proceedings of the International Conference on Machine Learning, JMLR Workshop and Conference Proceedings.
Marco Fraccaro, Ulrich Paquet, and Ole Winther. Indexable probabilistic matrix factorization for maximum inner product search. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February, Phoenix, Arizona.
Arindam Banerjee, Inderjit Dhillon, Joydeep Ghosh, and Suvrit Sra. Generative model-based clustering of directional data. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), New York, NY, USA, ACM.
Alex Graves. Stochastic backpropagation through mixture density distributions. CoRR.
Lars Maaloe, Casper Kaae Sonderby, Soren Kaae Sonderby, and Ole Winther. Auxiliary deep generative models. CoRR.
Scott Linderman, David Blei, Christian Naesseth, and Francisco Ruiz. Rejection sampling variational inference.
Adi Ben-Israel. The change-of-variables formula using matrix volume. SIAM Journal on Matrix Analysis and Applications.
Adi Ben-Israel. An application of the matrix volume in probability. Linear Algebra and its Applications.
Marcel Berger and Bernard Gostiaux. Differential Geometry: Manifolds, Curves, and Surfaces. Springer Science & Business Media.
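To make the density bookkeeping in the body of the paper concrete: for the inverse stereographic projection phi(u) = (2u, ||u||^2 - 1) / (||u||^2 + 1) from R^n to the unit sphere S^n, the induced metric is G(u) = (2 / (1 + ||u||^2))^2 I_n, so sqrt(det G(u)) = (2 / (1 + ||u||^2))^n, and a density q on R^n induces the density p(phi(u)) = q(u) / sqrt(det G(u)) on the sphere. The NumPy sketch below is a minimal illustration of this correction term; the function names and the choice of a standard normal base density are our own assumptions rather than part of the paper, and a flow-enriched base density would simply replace log_q while leaving the correction unchanged.

import numpy as np

def inverse_stereographic(u):
    # Map u in R^n to the unit sphere S^n in R^{n+1}:
    # phi(u) = (2u, ||u||^2 - 1) / (||u||^2 + 1).
    sq = np.sum(u ** 2, axis=-1, keepdims=True)
    return np.concatenate([2.0 * u, sq - 1.0], axis=-1) / (sq + 1.0)

def log_sqrt_det_G(u):
    # log sqrt(det G(u)) = n * log(2 / (1 + ||u||^2)) for the induced metric.
    n = u.shape[-1]
    return n * (np.log(2.0) - np.log1p(np.sum(u ** 2, axis=-1)))

def log_density_on_sphere(u, log_q):
    # Push a log-density log q(u) on R^n forward to the sphere:
    # log p(phi(u)) = log q(u) - log sqrt(det G(u)).
    return log_q - log_sqrt_det_G(u)

rng = np.random.default_rng(0)
n = 2                                    # target manifold S^2 embedded in R^3
u = rng.normal(size=(5, n))              # assumed standard normal base samples
log_q = -0.5 * np.sum(u ** 2, axis=-1) - 0.5 * n * np.log(2.0 * np.pi)
x = inverse_stereographic(u)
print(np.allclose(np.linalg.norm(x, axis=-1), 1.0))  # mapped points lie on the sphere
print(log_density_on_sphere(u, log_q))               # corresponding log-densities on S^2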
10
sep the annals of statistics vol no doi c institute of mathematical statistics estimation in nonlinear regression with harris recurrent markov by degui dag and jiti university of york university of bergen and monash university in this paper we study parametric nonlinear regression under the harris recurrent markov chain framework we first consider the nonlinear least squares estimators of the parameters in the homoskedastic case and establish asymptotic theory for the proposed estimators our results show that the convergence rates for the estimators rely not only on the properties of the nonlinear regression function but also on the number of regenerations for the harris recurrent markov chain furthermore we discuss the estimation of the parameter vector in a conditional volatility function and apply our results to the nonlinear regression with i processes and derive an asymptotic distribution theory which is comparable to that obtained by park and tribute while this paper was in the process of being published we heard that professor peter hall one of the most significant contributors to the areas of nonlinear regression and time series analysis sadly passed away the fundamental work done by professor peter hall in the area of martingale theory represented by the book with christopher heyde hall and heyde martingale limit theory and its applications academic press enables the authors of this paper in using martingale theory as an important tool in dealing with all different types of estimation and testing issues in econometrics and statistics in a related annals paper by gao king lu and ann statist theorem of hall and heyde plays an essential role in the establishment of an important theorem in short we would like to thank the for including our paper in this dedicated issue in honour of professor peter hall s fundamental contributions to statistics and theoretical econometrics received january revised august supported by the norwegian research council supported in part by an australian research council discovery early career researcher award supported by two australian research council discovery grants under grant numbers and ams subject classifications primary secondary key words and phrases asymptotic distribution asymptotically homogeneous function recurrent markov chain harris recurrence integrable function least squares estimation nonlinear regression this is an electronic reprint of the original article published by the institute of mathematical statistics in the annals of statistics vol no this reprint differs from the original in pagination and typographic detail li and gao phillips econometrica some numerical studies including simulation and empirical application are provided to examine the finite sample performance of the proposed approaches and results introduction in this paper we consider a parametric nonlinear regression model defined by yt g xt et g xt et t n where is the true value of the parameter vector such that rd and g r is assumed to be known throughout this paper we assume that is a compact set and lies in the interior of which is a standard assumption in the literature how to construct a consistent estimator for the parameter vector and derive an asymptotic theory are important issues in modern statistics and econometrics when the observations yt xt satisfy stationarity and weak dependence conditions there is an extensive literature on the theoretical analysis and empirical application of the above parametric nonlinear model and its extension see for example 
jennrich malinvaud and wu for some early references and severini and wong lai skouras and li and nie for recent relevant works as pointed out in the literature assuming stationarity is too restrictive and unrealistic in many practical applications when tackling economic and financial issues from a time perspective we often deal with nonstationary components for instance neither the consumer price index nor the share price index nor the exchange rates constitute a stationary process a traditional method to handle such data is to take the difference to eliminate possible stochastic or deterministic trends involved in the data and then do the estimation for a stationary model however such differencing may lead to loss of useful information thus the development of a modeling technique that takes both nonstationary and nonlinear phenomena into account in time series analysis is crucial without taking differences park and phillips hereafter pp study the nonlinear regression with the regressor xt satisfying a unit root or i structure and prove that the rates of convergence of the nonlinear least squares nls estimator of depend on the properties of g for an integrable g the rate of convergence is as slow as and for an asymptotically homogeneous g the rate of convergence can achieve the and even of convergence more recently chan and wang consider the same model nonlinear and nonstationary regression structure as proposed in the pp paper and then establish some corresponding results under certain technical conditions which are weaker than those used in the pp paper as also pointed out in a recent paper by myklebust karlsen and the null recurrent markov process is a nonlinear generalization of the linear unit root process and thus provides a more flexible framework in data analysis for example gao and yin show that the exchange rates between british pound and us dollar over the time period between january and february are nonstationary but do not necessarily follow a linear unit root process see also bec rahbek and shephard for a similar discussion of the exchange rates between french franc and german mark over the time period between december and april hence gao and yin suggest using the nonlinear threshold autoregressive tar with stationary and unit root regimes which can be proved as a recurrent markov process see for example example in section and example in the empirical application section under the framework of null recurrent markov chains there has been an extensive literature on nonparametric and semiparametric estimation karlsen and karlsen myklebust and lin li and chen schienle chen gao and li gao et al by using the technique of the split chain nummelin meyn and tweedie and the generalized ergodic theorem and functional limit theorem developed in karlsen and as far as we know however there is virtually no work on the parametric estimation of the nonlinear regression model when the regressor xt is generated by a class of harris recurrent markov processes that includes both stationary and nonstationary cases this paper aims to fill this gap if the function g is integrable we can directly use some existing results for functions of harris recurrent markov processes to develop an asymptotic theory for the estimator of the case that g belongs to a class of asymptotically homogeneous functions is much more challenging as in this case the function g is no longer bounded in nonparametric or semiparametric estimation theory we do not have such problems because the kernel function is usually assumed 
to be bounded and has a compact support unfortunately most of the existing results for the asymptotic theory of the null recurrent markov process focus on the case where g is bounded and integrable chen hence in this paper we first modify the conventional nls estimator for the asymptotically homogeneous g and then use a novel method to establish asymptotic distribution as well as rates of convergence for the modified parametric estimator our results show that the rates of convergence for the parameter vector in nonlinear cointegrating li and gao models rely not only on the properties of the function g but also on the magnitude of the regeneration number for the null recurrent markov chain in addition we also study two important issues which are closely related to nonlinear mean regression with harris recurrent markov chains the first one is to study the estimation of the parameter vector in a conditional volatility function and its asymptotic theory as the estimation method is based on the the rates of convergence for the proposed estimator would depend on the property of the volatility function and its derivatives meanwhile we also discuss the nonlinear regression with i processes when g is asymptotically homogeneous by using theorem in section we obtain asymptotic normality for the parametric estimator with a stochastic normalized rate which is comparable to theorem in pp however our derivation is done under markov perspective which carries with it the potential of extending the theory to nonlinear and nonstationary autoregressive processes which seems to be hard to do with the approach of pp the rest of this paper is organized as follows some preliminary results about markov theory especially harris recurrent markov chain and function classes are introduced in section the main results of this paper and their extensions are given in sections and respectively some simulation studies are carried out in section and the empirical application is given in section section concludes the paper the outline of the proofs of the main results is given in an appendix the supplemental document li and gao includes some additional simulated examples the detailed proofs of the main results and the proofs of some auxiliary results preliminary results to make the paper in this section we first provide some basic definitions and preliminary results for a harris recurrent markov process xt and then define function classes in a way similar to those introduced in pp markov theory let xt t be a markov chain on the state space e e with transition this means that for any pprobability t x a for x we further set a e with a we have p assume that the markov chain xt is harris recurrent definition a markov chain xt is harris recurrent if for any set b and given x for all x e xt returns to b infinitely often with probability one where is defined as in karlsen and nonlinear and nonstationary regression the harris recurrence allows one to construct a split chain which decomposes the partial sum of functions of xt into blocks of independent and identically distributed parts and two asymptotically negligible remaining parts let be the regeneration times n the number of observations and n n the number of regenerations as in karlsen and where they use the notation t n instead of n n for the process g xt t defining x g xt k x g xt k n n zk n x g xt k n n n where g is a real function defined on r then we have sn g n x n n g xt x zk zn n from nummelin we know that zk k is a sequence of random variables and and zn n converge to 
zero almost surely when they are divided by the number of regenerations n n using lemma in karlsen and the general harris recurrence only yields stochastic rates of convergence in asymptotic theory of the parametric and nonparametric estimators see theorems and below where distribution and size of the number of regenerations n n have no a priori known structure but fully depend on the underlying process xt to obtain a specific rate of n n in our asymptotic theory for the null recurrent process we next impose some restrictions on the tail behavior of the distribution of the recurrence times of the markov chain definition a markov chain xt is recurrent if there exist a small nonnegative function f an initial measure a constant and a slowly varying function lf such that n x f xt lf n where stands for the expectation with initial distribution and is the gamma function with parameter li and gao the definition of a small function f in the above definition can be found in some existing literature page in nummelin assuming recurrence restricts the tail behavior of the recurrence time of the process to be a regularly varying function in fact for all small functions f by lemma in karlsen and we can find an ls such that holds for the recurrent markov chain with lf f ls r where is an invariant measure of the markov chain xt f f x dx and s is the small function in the minorization inequality of karlsen and letting ls n lf n f and following the argument in karlsen and we may show that the regeneration number n n of the recurrent markov chain xt has the following asymptotic distribution n n d ls n where t t is the process with parameter kasahara since n n n for the null recurrent case by the rates of convergence for the nonparametric kernel estimators are slower than those for the stationary time series case karlsen myklebust and gao et al however this is not necessarily the case for the parametric estimator in our model in section below we will show that our rate of convergence in the null recurrent case is slower than that for the stationary time series for integrable g and may be faster than that for the stationary time series case for asymptotically homogeneous g in addition our rates of convergence also depend on the magnitude of which measures the recurrence times of the markov chain xt examples of recurrent markov chains for a stationary or positive recurrent process we next give several examples of recurrent markov chains with example recurrent markov chain process be defined as xt xt i let a random walk t where xt is a sequence of random variables with e e and e and the distribution of xt is absolutely continuous with respect to the lebesgue measure with the density function satisfying inf x for all compact sets some existing papers including kallianpur and robbins have shown that xt defined by is a recurrent markov chain ii consider a parametric tar model of the form xt i s i sc xt nonlinear and nonstationary regression where s is a compact subset of r sc is the complement of s xt satisfies the corresponding conditions in example i above recently gao and yin have shown that such a tar process xt is a recurrent markov chain furthermore we may generalize the tar model to xt h i s i sc xt where x and is a parameter vector according to and granger the above autoregressive process is also a recurrent markov chain example recurrent markov chain with let xt be a sequence of random variables taking positive values and xt be defined as xt xt for t and for some positive constant myklebust karlsen and 
prove that xt is recurrent if and only if p n n where is the integer function and l is a slowly varying positive function from the above examples the recurrent markov chain framework is not restricted to linear processes see example ii furthermore such a null recurrent class has the invariance property that if xt is recurrent then for a transformation t t xt is also recurrent and granger such invariance property does not hold for the i processes for other examples of the recurrent markov chain we refer to example in schienle for some general conditions on diffusion processes to ensure the harris recurrence is satisfied we refer to and and bandi and phillips function classes similar to park and phillips we consider two classes of parametric nonlinear functions integrable functions and asymptotically homogeneous functions which q include many commonlypq pq used functions in nonlinear regression let kak aij for a aij and kak be the euclidean norm of vector a a function h x r rd is if z kh x dx r li and gao where is the invariant measure of the harris recurrent markov chain xt when is differentiable such that dx ps x dx h x is integrable if and only if h x ps x is integrable where ps is the invariant density function for xt for the random walk case as in example i the reduces to the conventional integrability as dx dx definition a vector function h x is said to be integrable on if for each h x is and there exist a neighborhood of and m r r bounded and such that kh x h x k x for any the above definition is comparable to definition in pp however in our definition we do not need condition b in definition of their paper which makes the integrable function family in this paper slightly more general we next introduce a class of asymptotically homogeneous functions definition for a vector function h x let h h x r x where is nonzero h is said to be asymptotically homogeneous on if the following two conditions are satisfied i h is locally bounded uniformly for any and continuous with respect to ii the remainder term r x is of order smaller than as for any as in pp is the asymptotic order of h and h is the limit homogeneous function the above definition is quite similar to that of an function in pp except that the regularity condition a in definition of pp is replaced by the local boundness condition i in definition following definition in pp as r x is of order smaller than we have either r x a ar x or r x b ar x br where a o b o as ar is locally bounded and br is bounded and vanishes at infinity note that the above two definitions can be similarly generalized to the case that h is a d d matrix of functions details are omitted here to save space furthermore when the process xt is positive recurrent an asymptotically homogeneous function h x might be also integrable on as long as the density function of the process ps x is integrable and decreases to zero sufficiently fast when x diverges to infinity nonlinear and nonstationary regression main results in this section we establish some asymptotic results for the parametric estimators of when g and its derivatives belong to the two classes of functions introduced in section integrable function on we first consider estimating model by the nls approach which is also used by pp in the unit root framework define the loss function by n x yt g xt ln g bn by minimizing ln g over we can obtain the resulting estimator that is bn arg min ln g for let x x x g x bn when g and its derivabefore deriving the asymptotic properties of tives are integrable on we give some 
regularity conditions assumption i xt is a harris recurrent markov chain with invariant measure ii et is a sequence of random variables with mean zero and finite variance and is independent of xt i g x is integrable on and for all r assumption g x g x dx ii both x and x are integrable on and the matrix z x x dx is positive definite when is in a neighborhood of remark in assumption i xt is assumed to be harris recurrent which includes both the positive and null recurrent markov chains the restriction on et in assumption ii may be replaced by the condition that et is an irreducible ergodic and strongly mixing process with mean zero and certain restriction on the mixing coefficient and moment conditions theorem in karlsen myklebust and hence under some mild conditions et can include the ar and li and gao arch processes as special examples however for this case the techniques used in the proofs of theorems and below need to be modified by noting that the compound process xt et is harris recurrent furthermore the homoskedasticity on the error term can also be relaxed and we may allow the existence of certain heteroskedasticity structure that is et xt where is the conditional variance function and satisfies assumption ii with a unit variance however the property of the function would affect the convergence rates given in the following asymptotic results for example to ensure the validity of theorem we need to further assume that is which indicates that x x is integrable on as in the literature karlsen myklebust and we need to assume the independence between xt and assumption is quite standard and similar to the corresponding conditions in pp in particular assumption i is a key condition to derive bn the global consistency of the nls estimator bn the following theorem is we next give the asymptotic properties of applicable for both stationary positive recurrent and nonstationary null recurrent time series theorem let assumptions and hold bn which minimizes the loss function ln g over is a the solution consistent that is bn op bn has an asymptotically normal distribution of the b the estimator form p d bn n n n where is a null vector bn is asymptotically normal with remark theorem shows that a stochastic convergence rate n n for both the stationary and nonstationary cases however n n is usually unobservable and its specific rate depends on and ls if xt is recurrent see corollary below we next discuss how to link n n with a directly observable hitting time indeed if c e and ic has a support the number p of times that the process visits c up to the time n is defined by nc n ic xt by lemma in karlsen and we have nc n c n n nonlinear and nonstationary regression if c ic r c dx a possible estimator of is ln nc n ln n which is strongly consistent as shown by karlsen and however it is usually of somewhat limited practical use due to the slow convergence rate remark of karlsen and a simulated example is given in appendix b of the supplemental document to discuss the finite sample performance of the estimation method in by and theorem we can obtain the following corollary directly corollary suppose that the conditions of theorem are satisfied and let c e such that ic has a support and c bn has an asymptotically normal distribution of the form then the estimator p d bn nc n n c where c remark in practice we may choose c as a compact set such that c and c in the additional simulation study example given in the supplemental document for two types of recurrent markov processes we choose c a with the positive 
constant a carefully chosen which works well in our setting if has a continuous derivative function ps we can show that z x x pc x dx with pc x ps x c the density function pc x can be estimated by the kernel method then replacing by the nls estimated value we can obtain a consistent estimate for note that nc n is observable and can be estimated bn hence for by calculating the variance of the residuals ebt yt g xt inference purposes one may not need to estimate and ls when xt is recurrent as and nc n in can be explicitly computed without knowing any information about and ls from in theorem and in section above we have the following corollary corollary suppose that the conditions of theorem are satisfied furthermore xt is a recurrent markov chain with then we have b n op p ls n li and gao where ls n is defined in section remark as and ls n is a slowly varying positive function bn is slower than for the integrable case the rate of convergence of the rate of convergence of the parametric nls estimator in the stationary time series case combining and theorem the result can be strengthened to q d bn ls n nd where nd is a normal distribution with mean zero and covariance matrix being the identity matrix which is independent of a similar result is also obtained by chan and wang corollary and complement the existing results on the rates of convergence of nonparametric estimators in recurrent markov processes karlsen myklebust and gao et al for the random walk case which corresponds to recurrent markov chain the rate of convergence is which is similar to a result obtained by pp for the processes that are of i type asymptotically homogeneous function on we next establish an asymptotic theory for a parametric estimator of when g and its derivatives belong to a class of asymptotically homogeneous functions for a unit root process xt pp establish the consistency and limit distribution of bn by using the local time technique their method relies the nls estimator on the linear framework of the unit root process the functional limit theorem of the partial sum process and the continuous mapping theorem the harris recurrent markov chain is a general process and allows for a possibly nonlinear framework however in particular the null recurrent markov chain can be seen as a nonlinear generalization of the linear unit root process hence the techniques used by pp for establishing the asymptotic theory is not applicable in such a possibly nonlinear markov chain framework meanwhile as mentioned in section the methods used to prove theorem can not be applied here directly because the asymptotically homogeneous functions usually are not bounded and integrable this leads to the violation of the conditions in the ergodic theorem when the process is null recurrent in fact most of the existing limit theorems for the null recurrent markov process h xt chen only consider the case where h is bounded and integrable hence it is quite challenging to extend theorem in pp to the case of general null recurrent markov chains and establish an asymptotic theory for the nls estimator for the case of asymptotically homogeneous functions nonlinear and nonstationary regression bn to address the above concerns we have to modify the nls estimator let mn be a positive and increasing sequence satisfying mn as n but is dominated by a certain polynomial rate we define the modified loss function by qn g n x yt g xt i mn the modified nls mnls estimator n can be obtained by minimizing qn g over arg min qn g the above truncation technique 
enables us to develop the limit theorems for the parametric estimate n even when the function g or its derivatives are unbounded a similar truncation idea is also used by ling to estimate the model when the second moment may not exist it is called as the method by ling however assumption in ling indicates the stationarity for the model the harris recurrence considered in the paper is more general and includes both the stationary and nonstationary cases as mn for the integrable case discussed in section we can easily show that n has the same asymptotic bn under some regularity conditions in example below distribution as we compare the finite sample performance of these two estimators and find that they are quite similar furthermore when xt is positive recurrent as mentioned in the last paragraph of section although the asymptotically homogeneous g x and its derivatives are unbounded and not integrable on it may be reasonable to assume that g x ps x and its derivatives with respect to are integrable on in this case theorem and corollary in section still hold for the estimation n and the role of n n or nc n is the same as that of the sample size which implies that the consistency in the stationary time series case can be derived hence we only consider the null recurrent xt in the remaining subsection let bi i i i mn bi i mn b mn mn mn b mn mn it is easy to check that bi k i mn k are disjoint s s m and mn bi k define mn mn x li and gao and mn x i i mn mn i i hg mn mn mn x e n mn x hg mn x hg mn mn hg mn mn where hg and will be defined in assumption i below some additional assumptions are introduced below to establish asymptotic properties for n assumption i g x x and x are asymptotically homogeneous on with asymptotic orders and and limit homogeneous functions hg and respectively furthermore the asymptotic orders and are independent of ii the function hg is continuous on the interval for all e which achieves unique for all there exist a continuous e n such that minimum at and a sequence of positive numbers m lim e n m e n e for in a neighborhood of both and are continuous on the interval and there exist a continuous and positive definite matrix and a sequence of positive numbers mn such that lim mn e n and mn mn are bounded and furthermore both mn m ln mn o for ln but ln o mn iii the asymptotic orders and are positive and nondecreasing such that n n o n as n iv for each x mn nx y x y x is a small set and the invariant density function ps x is bounded away from zero and infinity nonlinear and nonstationary regression remark assumption i is quite standard see for example condition b in theorem in pp the restriction that the asymptotic orders are independent of can be relaxed at the cost of more complicated assumptions and more lengthy proofs for example to ensure the global consistency of n we need to assume that there exist and a neighborhood of for any such that inf inf n n for and to establish the asymptotic normality of n we need to impose additional technical conditions on the asymptotic orders and limit homogeneous functions similar to condition b in theorem of pp the e n in assumption ii e and m explicit forms of mn can be derived for some special cases for example when xt is generated by a random walk process we have dx dx and mn o x mn o mn z i i mn mn x x dx which implies that mn mn and x x dx in e n can be derived similarly for the e and m the explicit forms of above two cases and details are thus omitted define jg n mn mn we next establish an asymptotic theory for n when xt is null 
recurrent theorem let xt be a null recurrent markov process assumptions ii and hold a the solution n which minimizes the loss function qn g over is consistent that is n op b the estimator n has the asymptotically normal distribution d n n g n n n id where id is a d d identity matrix li and gao remark from theorem the asymptotic distribution of n for the asymptotically homogeneous regression function is quite different from bn for the integrable regression function when the process is null that of recurrent such finding is comparable to those in pp the choice of mn in the estimation method and asymptotic theory will be discussed in corollaries and below remark as in corollary we can modify for inference purposes define jg c n mn c where mn c x i i mn mn c mn x mn mn c where c satisfies the conditions in corollary then by and we can show that d nc n jg c n n n id when xt is recurrent we can use the asymptotically normal distribution theory to conduct statistical inference without knowing any information of as nc n is observable and jg c n can be explicitly computed through replacing c by the estimated value from in theorem and in section above we have the following two corollaries the rate of convergence in below is quite general for recurrent markov processes when it is the same as the convergence rate in theorem of pp corollary suppose that the conditions of theorem are satisfied furthermore let xt be a recurrent markov chain with taking mn s n for some positive constant we have n op mn corollary suppose that the conditions of theorem are satisfied let g x xt be a random walk process and mn for some positive constant then we have n op where is the mnls estimator of furthermore q d n n n n nonlinear and nonstationary regression remark for the simple linear regression model with regressors generated by a random walk process and imply the existence of super consistency corollaries and show that the rates of convergence for the parametric estimator in nonlinear cointegrating models rely not only on the properties of the function g but also on the magnitude of in the above two corollaries we give the choice of mn for some special cases in fact for the random walk process xt defined as in example i with e we have nr x x nr xi b r n n where b r is a standard brownian motion and denotes the weak convergence furthermore by the continuous mapping theorem billingsley sup x nr sup b r n which implies that it is reasonable to let mn where may be chosen such that p sup b r p where the second equality is due to the reflection principle and x rx du this implies that c can be obtained when is given e such as for the general recurrent markov process the choice of the optimal mn remains as an open problem we conjecture that it may with defined in and m f chosen by be an option to take mn m a method and will further study this issue in future research discussions and extensions in this section we discuss the applications of our asymptotic results in estimating the nonlinear heteroskedastic regression and nonlinear regression with i processes furthermore we also discuss possible extensions of our model to the cases of multivariate regressors and nonlinear autoregression nonlinear heteroskedastic regression we introduce an estimation method for a parameter vector involved in the conditional variance function for simplicity we consider the model defined by yt xt for rp where satisfies assumption ii with a unit variance r is positive and is the true value of the parameter vector li and gao involved in the 
conditional variance function estimation of the parametric nonlinear variance function defined in is important in empirical applications as many scientific studies depend on understanding the variability of the data when the covariates are integrated han and park study the maximum likelihood estimation of the parameters in the arch and garch models a recent paper by han and kristensen further considers the quasi maximum likelihood estimation in the models with stationary and nonstationary covariates we next consider the general harris recurrent markov process xt and use a robust estimation method for model letting be a positive number such that e log log we have log log xt log log xt log log log log xt where e since our main interest lies in the discussion of the asymptotic theory for the estimator of we first assume that is known to simplify our discussion model can be seen as another nonlinear mean regression model with parameter vector to be estimated the logtransformation would make data less skewed and thus the resulting volatility estimator may be more robust in terms of dealing with such transformation has been commonly used to estimate the variability of the data in the stationary time series case peng and yao gao chen cheng and peng however any extension to harris recurrent markov chains which may be nonstationary has not been done in the literature our estimation method will be constructed based on noting that is assumed to be known define xt xt and xt log xt case i if xt and its derivatives are integrable on the logb n can be obtained transformed nonlinear least squares lnls estimator by minimizing ln over where ln n x log xt letting assumptions and be satisfied with et and g replaced by and respectively then the asymptotic results developed in section bn still hold for nonlinear and nonstationary regression case ii if xt and its derivatives are asymptotically homogeneous on the modified nonlinear least squares lmnls estimator n can be obtained by minimizing qn over where qn n x log xt i mn where mn is defined as in section then the asymptotic results developed in section still hold for n under some regularity conditions such as a slightly modified version of assumptions and hence it is possible to achieve the result for n when xt is null recurrent in practice however is usually unknown and needs to be estimated we next briefly discuss this issue for case ii we may define the loss function by qn n x log log xt i mn then the estimators n and can be obtained by minimizing qn over and a simulated example example is given in appendix b of the supplemental document to examine the finite sample performance of the lnls and lmnls estimations considered in cases i and ii respectively nonlinear regression with i processes as mentioned before pp consider the nonlinear regression with the regressors xt generated by xt xt xt x where is a sequence of random variables and satisfies some summability conditions for simplicity we assume that throughout this subsection pp establish a suite of asymptotic results for the nls estimator of the parameter involved in when xt is defined by an open problem is how to establish such results by using the recurrent markov chain framework this is quite challenging as xt defined by is no longer a markov process except for some special cases for example for j we next consider solving this open problem for the case where g is asymptotically homogeneous on and derive an asymptotic theory for n by using theorem the discussion for the integrable case is more li and 
gao complicated and will be considered in a future study our main idea is to approximate xt by which is defined by t x x and then show that the asymptotically homogeneous function of xt is asymptotically equivalent to the same function of as is a random walk process under the assumption see appendix e of the supplemental document we can then make use of theorem define n mn mn x x dx we next give some asymptotic results for n for the case where xt is a unit root process and the proof is provided in appendix e of the supplemental document theorem let assumptions and in appendix e of the supplemental document hold and mn where is defined in assumption i a the solution n which minimizes the loss function qn g over is consistent that is n op b the estimator n has the asymptotically normal distribution d n n n n id where n is the number of regenerations for the random walk remark theorem establishes an asymptotic theory for n when xt is a unit root process our results are comparable with theorems and in pp however we establish asymptotic ity in with stochastic rate n n and pp establish their asymptotic mixed normal distribution theory with a deterministic rate as n n n in probability if we take mn n as in corollary we will find that our rate of convergence of n is the same as that derived by pp nonlinear and nonstationary regression extensions to multivariate regression and nonlinear autoregression the theoretical results developed in section are limited to nonlinear regression with a univariate markov process a natural question is whether it is possible to extend them to the more general case with multivariate covariates in the unit root framework it is well known that it is difficult to derive the limit theory for the case of multivariate unit root processes as the vector brownian motion is transient when the dimension is larger than or equal to in contrast under the framework of the harris recurrent markov chains it is possible for us to generalize the theoretical theory to the multivariate case with certain restrictions for example it is possible to extend the theoretical results to the case with one nonstationary regressor and several other stationary regressors we next give an example of vector autoregressive var process which may be included in our framework under certain conditions example defined by consider a var process xt which is xt b xt t where a is a q q matrix b is a vector and xt is a sequence of random vectors with mean zero if all the eigenvalues of the matrix a are inside the unit circle under some mild conditions on xt theorem in myklebust karlsen and shows that the var process xt in is geometric ergodic which belongs to the category of positive recurrence on the other hand if the matrix a has exactly one eigenvalue on the unit circle under some mild conditions on xt and b theorem in myklebust karlsen and shows that the var process xt in is recurrent with for this case the asymptotic theory developed in section is applicable however when a has two eigenvalues on the unit circle under different restrictions xt might be null recurrent but not recurrent or transient if a has three or more eigenvalues on the unit circle the var process xt would be transient which indicates that the limit theory developed in this paper would be not applicable we next briefly discuss a nonlinear autoregressive model of the form g xt t for this autoregression case et is not independent of xt and thus the proof strategy developed in this paper needs to be modified following the argument in karlsen 
and in order to develop li and gao an asymptotic theory for the parameter estimation in the nonlinear autoregression we may need that the process xt is harris recurrent but not that the compound process xt is also harris recurrent this is because we essentially have to consider sums of products like xt xt g xt which are of the general form treated in karlsen and the verification of the harris recurrence of xt has been discussed by lu and example given in section above how to establish an asymptotic theory for the parameter estimation of in model will be studied in our future research simulated examples in this section we provide some simulation studies to compare the finite sample performance of the proposed parametric estimation methods and to illustrate the developed asymptotic theory example consider the generalized linear model defined by yt exp et t n where xt is generated by one of the three markov processes i ar process xt xt ii random walk process xt xt iii tar process xt i i and xt is a sequence of standard normal random variables for the above three processes the error process et is a sequence of n random variables and independent of xt in this simulation study we compare the finite sample behavior of the nls estimator with that of the mnls estimator and the sample size n is chosen to be and the aim of this example is to illustrate the asymptotic theory developed in section as the regression function in is integrable when following the discussion in section the ar process defined in i is positive recurrent and the random process defined in ii and the tar process defined in iii are recurrent we generate replicated samples for this simulation study and calculate the means and standard errors for both of the parametric estimators in simulations in the mnls estimation procedure we choose mn with where is defined in for case i and for cases ii and iii it is easy to find that the simulation results are reported in table where the numbers in the parentheses are the standard errors of the nls or mnls estimator in the replications from table we have the following interesting findings a the parametric estimators perform better in the stationary case i than in the nonstationary cases ii and iii this is consistent with the asymptotic nonlinear and nonstationary regression table means and standard errors for the estimators in example sample size nls mnls the regressor xt is generated in case i nls mnls the regressor xt is generated in case ii nls mnls the regressor xt is generated in case iii results obtained in section such as theorem and corollaries and which indicate that the convergence rates of the parametric estimators can achieve op in the stationary case but only op in the recurrent case b the finite sample behavior of the mnls estimator is the same as that of nls estimator since means little sample information is lost c both of the two parametric estimators improve as the sample size increases d in addition for case i the ratio of the standard errors between and is close to the theoretical ratio for case iii the ratio of the standard errors between and is close to the theoretical ratio hence this again confirms that our asymptotic theory is valid example consider the quadratic regression model defined by yt et t n where xt is generated either by one of the three markov processes introduced in example or by iv the unit root process xt xt xt vt in which vt is a sequence of n random variables and the error process et is defined as in example in this simulation study we are interested in 
the finite sample behavior of the mnls estimator to illustrate the asymptotic theory developed in section as the regression function in is asymptotically homogeneous for the comparison purpose we also investigate the finite sample behavior of the nls estimation although we do not establish the related asymptotic theory under the framework of null recurrent markov chains the sample size n is chosen to be and as in example and the replication number is r in the mnls estimation procedure as in the previous example we choose mn where for case i and for cases ii iv li and gao table means and standard errors for the estimators in example sample size nls mnls the regressor xt is generated in case i nls mnls the regressor xt is generated in case ii nls mnls the regressor xt is generated in case iii nls mnls the regressor xt is generated in case iv the simulation results are reported in table from which we have the following conclusions a for the regression model with asymptotically homogeneous regression function the parametric estimators perform better in the nonstationary cases ii iv than in the stationary case i this finding is consistent with the asymptotic results obtained in sections and b the mnls estimator performs as well as the nls estimator in particular for the nonstationary cases both the nls and mnls estimations improve as the sample size increases empirical application in this section we give an empirical application of the proposed parametric model and estimation methodology example consider the logarithm of the uk to us export and import data in these data come from the website https spanning from january to august monthly and with the sample size n let xt be defined as log et log puk t us log pt where et is the monthly average of the nominal exchange rate and pit denotes the consumption price index of country i in this example we let yt denote the logarithm of either the export or the import value the data xt and yt are plotted in figures and respectively meanwhile the real data application considered by gao and yin suggests that xt may follow the threshold autoregressive model proposed in that paper which is shown to be a recurrent markov process furthermore an application of the estimation method by gives this further supports that xt roughly follows a recurrent markov chain with nonlinear and nonstationary regression fig plot of the real exchange rate xt to avoid possible confusion let yex t and yim t be the export and import data respectively we are interested in estimating the parametric relationship between yex t or yim t and xt in order to find a suitable parametric relationship we first estimate the relationship nonparametrically based on yex t mex xt and yim t mim xt karlsen myklebust and fig plot of the logarithm of the export and import data yt li and gao where mex and mim are estimated by pn k xt x yex t and m b ex x p n k xt x pn k xt x yim t m b im x p n k xt x where k is the probability density function of the standard normal distribution and the bandwidth h is chosen by the conventional method then a parametric calibration procedure based on the preliminary nonparametric estimation suggests using a polynomial relationship of the form yex t xt eex t for the export data where the estimated values by using the method in section of and are and respectively and yim t xt eim t for the import data where the estimated values of and are and respectively their plots are given in figures and respectively while figures and suggest some relationship between the exchange rate 
and either the export or the import variable the true relationship may fig plot of the polynomial model fitting nonlinear and nonstationary regression fig plot of the polynomial model fitting also depend on some other macroeconomic variables such as the real interest rate in the uk during the period as discussed in section we would like to extend the proposed models from the univariate case to the multivariate case as a future application we should be able to find a more accurate relationship among the export or the import variable with the exchange rate and some other macroeconomic variables conclusions in this paper we have systematically studied the nonlinear regression under the general harris recurrent markov chain framework which includes both the stationary and nonstationary cases note that the nonstationary null recurrent process considered in this paper is under markov perspective which unlike pp indicates that our methodology has the potential of being extended to the nonlinear autoregressive case in this paper we not only develop an asymptotic theory for the nls estimator of when g is integrable but also propose using a modified version of the conventional nls estimator for the asymptotically homogeneous g and adopt a novel method to establish an asymptotic theory for the proposed modified parametric estimator furthermore by using the we discuss the estimation of the parameter vector in a conditional volatility function we also apply our results to the nonlinear regression with i processes which may be and establish an asymptotic distribution theory which is comparable to that obtained by pp the simulation studies and empirical applications have been provided to illustrate our approaches and results li and gao appendix a outline of the main proofs in this appendix we outline the proofs of the main results in section the detailed proofs of these results are given in appendix c of the supplemental document the major difference between our proof strategy and that based on the unit root framework pp and kristensen and rahbek is that our proofs rely on the limit theorems for functions of the harris recurrent markov process lemmas and below whereas pp and kristensen and rahbek s proofs use the limit theorems for integrated time series we start with two technical lemmas which are crucial for the proofs of theorems and the proofs for these two lemmas are given in appendix d of the supplemental document by li and gao lemma let hi x be a integrable function on and suppose that assumption i is satisfied for xt a uniformly for we have z n x hi xt hi x dx op n n b if et satisfies assumption ii we have uniformly for n n p hi xt et op n n hi x x dx is positive definite we have z n x d hi xt et n hi x hi x dx furthermore if p r n x lemma let hah x be a asymptotically homogeneous function on with asymptotic order independent of and limit homogeneous function hah suppose that xt is a null recurrent markov process with the invariant measure and assumption iv are satisfied and hah is continuous on the interval for all furthermore letting mn x i i hah n hah mn mn mn x hah hah mn mn nonlinear and nonstationary regression with bi and bi defined in section there exist a continuous and positive definite matrix and a sequence of positive numbers mn such that mn mn is bounded ln mn o for ln but ln o mn and n mn lim where is defined in section a uniformly for we have n n jah n n x hah xt xt i mn id op where jah n mn mn b if et satisfies assumption ii we have uniformly for jah n n x and furthermore n p hah 
xt i mn et op n n n jah n n x d hah xt i mn et n id proof of theorem for theorem a we only need to verify the following sufficient condition for the weak consistency jennrich for a sequence of positive numbers ln g ln g op uniformly for where is continuous and achieves a unique minimum at this sufficient condition can be proved by using and in lemma and in theorem a is thus proved combining the device in billingsley and in lemma b we can complete the proof of the asymptotically normal distribution in details can be found in appendix c of the supplementary material proof of corollary the asymptotic distribution can be proved by using and theorem b li and gao proof of corollary the convergence result can be proved by using and and following the proof of lemma in gao et al a detailed proof is given in appendix c of the supplementary material proof of theorem the proof is similar to the proof of theorem above to prove the weak consistency similar to we need to verify the sufficient condition for a sequence of positive numbers qn g qn g op uniformly for where is continuous and achieves a unique minimum at using assumption ii and following the proofs of and in lemma see appendix d in the supplementary material we may prove and thus the weak consistency result combining the device and in lemma b we can complete the proof of the asymptotically normal distribution for n in more details are given in appendix c of the supplementary material proof of corollary by using theorem b and and following the proof of lemma in gao et al we can directly prove proof of corollary the convergence result follows from in corollary and can be proved by using in theorem b acknowledgments the authors are grateful to the professor runze li an associate editor and two referees for their valuable and constructive comments and suggestions that substantially improved an earlier version of the paper thanks also go to professor peter phillips and other colleagues who commented on this paper and the participants of various conferences and seminars where earlier versions of this paper were presented this work was started when the first and third authors visited the second author at department of mathematics university of bergen in supplementary material supplement to estimation in nonlinear regression with harris recurrent markov chains doi we provide some additional simulation studies the detailed proofs of the main results in section the proofs of lemmas and and theorem nonlinear and nonstationary regression references bandi and phillips b nonstationary processes in handbook of financial econometrics and hansen eds bec rahbek and shephard the acr model a multivariate dynamic mixture autoregression oxf bull econ stat billingsley convergence of probability measures wiley new york chan and wang q nonlinear cointegrating regressions with nonstationary time series working paper school of mathematics univ sydney chen x how often does a harris recurrent markov chain recur ann probab chen x on the limit laws of the second order for additive functionals of harris recurrent markov chains probab theory related fields chen cheng and peng conditional variance estimation in heteroscedastic regression models statist plann inference chen gao and li estimation in regression with regressors bernoulli gao j nonlinear time series semiparametric and nonparametric methods monographs on statistics and applied probability chapman boca raton fl gao and yin j estimation in threshold autoregressive models with a stationary and a unit root regime 
econometrics gao kanaya li and uniform consistency for nonparametric estimators in null recurrent time series econometric theory han and kristensen asymptotic theory for the qmle in garchx models with stationary and nonstationary covariates j bus econom statist han and park y with persistent covariate asymptotic theory of mle econometrics and limit theorems for null recurrent markov processes mem amer math soc jennrich i asymptotic properties of least squares estimators ann math statist kallianpur and robbins the sequence of sums of independent random variables duke math j karlsen myklebust and nonparametric estimation in a nonlinear cointegration type model ann statist karlsen myklebust and nonparametric regression estimation in a null recurrent time series statist plann inference karlsen and nonparametric estimation in null recurrent time series ann statist kasahara y limit theorems for processes and poisson point processes and their applications to brownian excursions j math kyoto univ kristensen and rahbek a testing and inference in nonlinear cointegrating vector error correction models econometric theory li and gao lai asymptotic properties of nonlinear least squares estimates in stochastic regression models ann statist li and nie efficient statistical inference procedures for partially nonlinear models and their applications biometrics li and gao j supplement to estimation in nonlinear regression with harris recurrent markov lin li and chen j local linear m in null recurrent time series statist sinica ling and local likelihood estimators for models econometrics lu z on the geometric ergodicity of a autoregressive model with an autoregressive conditional heteroscedastic term statist sinica malinvaud the consistency of nonlinear regressions ann math statist meyn and tweedie markov chains and stochastic stability ed cambridge univ press cambridge myklebust karlsen and null recurrent unit root processes econometric theory nummelin general irreducible markov chains and nonnegative operators cambridge tracts in mathematics cambridge univ press cambridge park and phillips b asymptotics for nonlinear transformations of integrated time series econometric theory park and phillips b nonlinear regressions with integrated time series econometrica peng and yao q least absolute deviations estimation for arch and garch models biometrika schienle nonparametric nonstationary regression with many covariates working paper berlin severini and wong profile likelihood and conditionally parametric models ann statist skouras strong consistency in nonlinear stochastic regression models ann statist and granger j modelling nonlinear economic time series oxford univ press oxford wu asymptotic theory of nonlinear least squares estimation ann statist li department of mathematics university of york heslington campus york united kingdom department of mathematics university of bergen post box bergen norway gao department of econometrics and business statistics monash university at caulfield caulfield east victoria australia
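as a concrete illustration of the estimation procedures discussed above, the following minimal python sketch simulates a nonlinear regression whose regressor is a random walk, and hence a null recurrent markov chain, and compares the ordinary nls estimator with the modified nls estimator that restricts the objective to observations with |x_t| <= m_n. this is only a sketch under stated assumptions, not the authors' code: the regression function g(x, theta) = |x|^theta, the error scale, and the truncation level m_n = sqrt(n) are illustrative choices that do not appear in the paper.

    # minimal simulation sketch (illustrative assumptions, not the paper's design):
    # nonlinear least squares with a random-walk regressor, and the truncated
    # ("modified") nls variant that keeps only observations with |x_t| <= M_n.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    n = 5000
    x = np.cumsum(rng.normal(size=n))      # random-walk (null recurrent) regressor
    theta0 = 1.5                           # true parameter (assumed value)
    y = np.abs(x) ** theta0 + rng.normal(scale=0.5, size=n)

    def nls_loss(theta):
        # ordinary nls objective: squared residuals over the full sample
        return np.sum((y - np.abs(x) ** theta) ** 2)

    M_n = np.sqrt(n)                       # illustrative truncation level
    keep = np.abs(x) <= M_n                # the indicator 1{|x_t| <= M_n}

    def mnls_loss(theta):
        # modified nls objective: squared residuals over the truncated sample
        return np.sum((y[keep] - np.abs(x[keep]) ** theta) ** 2)

    theta_nls = minimize_scalar(nls_loss, bounds=(0.1, 3.0), method="bounded").x
    theta_mnls = minimize_scalar(mnls_loss, bounds=(0.1, 3.0), method="bounded").x
    print("nls estimate:", theta_nls, "mnls estimate:", theta_mnls)

both printed estimates should lie close to the true value used to generate the data; the gap between them reflects how much sample information the truncation discards, mirroring the comparison between the nls and mnls estimators in the simulation study above.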
10
memcapacitive neural networks yuriy pershin and massimiliano di ventra jul abstract we show that memcapacitive memory capacitive systems can be used as synapses in artificial neural networks as an example of our approach we discuss the architecture of an neural network based on memcapacitive synapses moreover we demonstrate that the plasticity can be simply realized with some of these devices memcapacitive synapses are a alternative to memristive synapses for neuromorphic computation pershin is with the department of physics and astronomy and usc nanocenter university of south carolina columbia sc pershin di ventra is with the department of physics university of california san diego la jolla california diventra memcapacitive neural networks electronic devices with memory such as memristive memcapacitive and meminductive systems are promising components for unconvential computing applications because of their ability to store and process information at the same space location moreover if these devices are used as memory and computing elements in neural networks density and consumption can be easily achieved so far only memristive devices have been considered as electronic synapses in artificial neural networks in this article we show instead that memcapacitive systems could play a similar role thus offering the benefit of low power dissipation and in some instances full compatibility with cmos technology an added benefit for electronic realizations of smart electronics according to their definition memcapacitive systems are described by the equations vc t c x q t q t f x q t where q t is the charge on the capacitor at time t vc t is the applied voltage c is the memcapacitance which depends on the state of the system and can vary in time x is a set of n state variables describing the internal state of the system and f is a continuous vector function it is currently well established that in biological neural networks the synapse strength encodes memories in electronic circuits the memory feature of memcapacitive systems provided by their internal states characterized by x can play a similar role in figure we show an example of memcapacitive neural network in this leaky network n input neurons are connected to the rc block of the output neuron with the help of memcapacitive synapses each memcapacitive synapse contains a memcapacitive system and two diodes it is assumed that the switching of memcapacitive system involves a voltage threshold which is above voltage pulse amplitudes involved in this network subjected to a voltage pulse from the input neuron the memcapacitive system ci charges the integrating capacitor c in proportion to its capacitance ci as soon as the voltage across c reaches the threshold of nout nout fires a forward voltage pulse and uses the controllable switch s to reset c the diodes connected to the ground discharge cn after the pulse disappearance and can be replaced by resistors fig presents simulations of the network in the ltspice environment assuming the firing of only one input neuron here we compare the circuit response subjected to the same input pulse sequence periodic firing of at different values of corresponding synaptic connection memcapacitance the stronger synaptic connection fig a results in faster charging of the integrating capacitor c and higher rate of the output neuron nout firing this result is compatible with operation of excitatory synapses in order to model inhibitory synapses one can use the synaptic connection similar to that shown in fig with 
diodes connected with opposite polarity to the integrating capacitor and power supply voltage v instead of the ground the inset in fig shows the schematics of the inhibitory synapse explicitly moreover the inhibitory synapse should be driven by inverted pulses to evaluate the strengths and weaknesses of using memcapacitive systems as synapses we compare the energy dissipation in memcapacitive and memristive neural networks indeed the circuit depicted in fig can also operate with memristive synapses replacing the memcapacitive ones let us then estimate the amount of energy lost in both cases for this purpose we consider the situation when a single voltage pulse fired by charges c by a small amount of charge q from vc in the case of memcapacitive network the dissipated energy is the energy stored in due to the pulse namely uc qv if c in the case of memristive network ur i qv consequently in this application the memcapacitive synapses are two times more energy efficient however the memristive networks require less components as the diodes connected to the ground in fig are not required when memristive synapses are not used let us now consider a realization of the plasticity stdp with memcapacitive v gnd gnd nn fig v v nj vn cn c cj s r vout nout memcapacitive synapses in an neural network n input electronic neurons are connected to the output neuron nout using memcapacitive synapses the inset schematics of the inhibitory synapse vc a fig time vc b vout voltage v voltage v vout time simulation of the memcapacitive network fig with only one spiking neuron the regular spikes of the input neuron trigger the output neuron nout with voltage threshold shown by the horizontal dashed line at different frequencies depending on the strength of memcapacitive synapse in a and in b this plot was obtained with c r and diode model here vc is the voltage across c and the lines are shifted by for clarity synapses for this purpose we consider a bipolar memcapacitive system with threshold which is also suitable for the network considered above in biological neural networks when a postsynaptic signal reaches the synapse before the action potential of the presynaptic neuron the synapse shows depression ltd namely its strength decreases weaker connection between the neurons depending on the time difference between the postsynaptic and presynaptic signals conversely when the postsynaptic action potential reaches the synapse after the presynaptic action potential the synapse undergoes a longtime potentiation ltp namely the signal transmission between the two neurons increases in proportion to the time difference between the presynaptic and postsynaptic signals in electronic circuits stdp can be implemented using double voltage pulses as shown in fig see also ref in this case the pulse overlap provides opposite voltage polarities across the synapse depending on timing of presynaptic and postsynaptic pulses using a spice model of memcapacitive system with threshold we simulate the dynamics of a memcapacitive synapse subjected to ltp and ltd pulses the bottom line in fig clearly shows the corresponding increase and decrease of the synaptic strength memcapacitance in conclusion we have presented an alternative to simulate synaptic behavior that uses memcapacitive systems instead of memristive systems the corresponding memcapacitive neural networks can operate presynaptic voltage v postsynaptic fig memcapacitance pf ltp ltd time stdp with memcapacitive synapses this plot was obtained using a model of bipolar memcapacitive 
system with threshold with vt clow chigh c t here the lines are shifted by for clarity at low energy consumption and in some cases they are compatible with cmos technology making them promising candidates for neuromorphic computing this work has been partially supported by nsf grant the center for magnetic recording research at ucsd and burroughs wellcome fund collaborative research travel grant references chua and kang memristive devices and systems proc ieee vol pp di ventra pershin and chua circuit elements with memory memristors memcapacitors and meminductors proc ieee vol no pp di ventra and pershin the parallel approach nature physics vol snider cortical computing with memristive nanodevices scidac review vol pp jo chang ebong b bhadviya mazumder and lu nanoscale memristor device as synapse in neuromorphic systems nano vol pp pershin and di ventra experimental demonstration of associative memory with memristive neural networks neural networks vol pershin and di ventra neuromorphic digital and quantum computation with memory circuit elements proc ieee vol traversa bonani pershin and di ventra dynamic computing random access memory cowan sudhof and stevens synapses the johns hopkins university press biolek di ventra and pershin reliable spice simulations of memristors memcapacitors and meminductors
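to make the behaviour of the memcapacitive network described above concrete, the short python sketch below implements a toy integrate-and-fire model of one input neuron driving the output neuron through a single memcapacitive synapse: each input pulse is assumed to transfer roughly q = c_syn * v_pulse of charge onto the integrating capacitor, and the output neuron fires and resets the integrator once its voltage reaches the threshold. this is a behavioural sketch under stated assumptions, not the ltspice simulation used above; the component values, the charge-sharing approximation and the function names are illustrative.

    # toy behavioural model (illustrative values, not the ltspice netlist used above):
    # an input neuron fires periodic pulses through a memcapacitive synapse of
    # capacitance c_syn onto an integrating capacitor c_int; the output neuron
    # fires and resets the integrator whenever its voltage reaches v_th.
    def output_spikes(c_syn, c_int=1.0, v_pulse=1.0, v_th=0.5, n_pulses=50):
        v_c = 0.0
        spike_times = []
        for t in range(n_pulses):
            # each pulse transfers roughly q = c_syn * v_pulse of charge,
            # raising the integrator voltage by q / c_int
            v_c += c_syn * v_pulse / c_int
            if v_c >= v_th:
                spike_times.append(t)
                v_c = 0.0  # the controllable switch resets the integrating capacitor
        return spike_times

    # a larger synaptic memcapacitance charges c_int faster and therefore gives a
    # higher output firing rate, as for the excitatory synapses described above
    for c_syn in (0.05, 0.1, 0.2):
        print(c_syn, len(output_spikes(c_syn)))

the same toy picture is consistent with the energy comparison made above: delivering the charge q through a capacitive synapse dissipates an energy of order qv/2 per pulse, roughly half of the qv dissipated when the same charge is delivered through a resistive, memristive synapse.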
9
minimum cuts and shortest cycles in directed planar graphs via noncrossing shortest mar march abstract let g be an simple directed planar graph with nonnegative edge weights we study the fundamental problems of computing a global cut of g with minimum weight and a cycle of g with minimum weight the best previously known algorithm for the former problem running in o n n time can be obtained from the algorithm of nussbaum sankowski and for maximum flows the best previously known result for the latter problem is the o n n algorithm of by exploiting duality between the two problems in planar graphs we solve both problems in o n log n log log n time via a algorithm that finds a shortest cycle the kernel of our result is an o n log log n algorithm for computing noncrossing shortest paths among nodes well ordered on a common face of a directed plane graph which is extended from the algorithm of italiano nussbaum sankowski and for an undirected plane graph introduction let g be an simple graph with nonnegative edge weights g is unweighted if the weights of all edges of g are identical let c be a subgraph of the weight w c of c is the sum of edge weights of let g c denote the graph obtained from g by deleting the edges of paths are allowed to repeat nodes throughout the paper for nodes s and t an of g is a path of g from s to t and an of g is a subgraph c of g such that there are no in g a global cut of g is an of g for some nodes s and t of a cycle of g is an of g for some node s of the problem on g seeks a cut of g with minimum weight for instance the consisting of edge is the minimum cut of the graph in figure a the best known algorithm on directed g due to hao and orlin runs in o mn log nm time on undirected g nagamochi and ibaraki and stoer and wagner solved the problem in o mn log n time and karger solved the problem in expected a preliminary version of this paper appeared as the master s thesis of the first author the journal version appeared in siam journal on discrete mathematics graduate institute of computer science and information engineering national taiwan university email sirbatostar corresponding author department of computer science and information engineering national taiwan university this author also holds joint appointments in the graduate institute of networking and multimedia and the graduate institute of biomedical electronics and bioinformatics national taiwan university address roosevelt road section taipei taiwan roc research of this author is supported in part by most grant email hil web a g b c figure a a simple planar graph b a simple bidirected plane graph obtained from g by adding edges with weights respectively if we are seeking a minimum cut respectively shortest cycle of c the dual of o m n time kawarabayashi and thorup recently announced the first known o mn algorithm on undirected unweighted g improving upon the algorithm of gabow designed twenty years ago the problem on g seeks a cycle of g with minimum weight for instance cycle with weight is the shortest cycle of the graph in figure a since a shortest directed cycle containing edge ts is obtainable from a shortest the problem on directed graphs can be reduced to computing shortest paths in o mn log n time vassilevska williams and williams argued that finding a truly subcubic algorithm for the problem might be hard for directed respectively undirected unweighted g itai and rodeh solved the problem in o n log n respectively o min mn n time where n o is the time for multiplying two n n matrices if g is 
undirected and planar chalermsook fakcharoenphol and nanongkai showed that the time complexity of both aforementioned problems on g is o log n times that of finding an of g with minimum weight for any given nodes s and plugging in the o n log n algorithms of frederickson borradaile and klein and erickson the reduction of chalermsook et al solved both problems in o n n time plugging in the o n log log n time algorithm of italiano nussbaum sankowski and the reduction of chalermsook et al solved both problems in o n log n log log n time the best known result for both problems on g is the o n log log n algorithm of and sankowski relying upon the oracle of italiano et al this paper addresses both problems for the case that g is directed and planar while the problem has been thoroughly studied for undirected planar graphs surprisingly no prior work is specifically for directed planar graphs djidjev claimed that his technique for unweighted undirected planar graphs solves the problem on unweighted directed planar g in o time and left open the problem of finding a shortest cycle in unweighted directed planar g in o time weimann and yuster gave an o algorithm for the problem which should be adjustable to solve the problem also in o time via similar techniques to our proof for lemma in to handle degeneracy in shortest cycles reduced the time for the problem on g to o n n but it is unclear how to adjust his algorithm to solve the problem without increasing the required time by too much the algorithm of baum sankowski and for maximum flows solves the problem on directed planar g in o n n time below is our result theorem it takes o n log n log log n time to solve the and problems on an simple directed planar graph with nonnegative edge weights as pointed out by anonymous reviewers mozes nikolaev nussbaum and weimann recently announced an o n log log n algorithms for the problem however unlike our theorem their algorithm requires the condition that there is a unique shortest path between any two nodes for general directed planar graphs with nonnegative edge weights they apply an isolation lemma to perturb the edge weights to meet the condition with high probability thus their results are monte carlo randomized algorithms related work the only known nontrivial algorithm for the problem due to chang and lu works on undirected unweighted planar graphs for undirected g if g is embedded on an orientable surface of genus g erickson fox and nayyeri solved the problem in go g n log log n time based on the algorithm of and sankowski for undirected planar graphs if g is undirected and unweighted and is embedded on an orientable surface of genus g o with djidjev solved the problem in o log n time on undirected unweighted o g weimann and yuster solved the problem in o n log n time for directed planar g even if g is unweighted our theorem remains the best known algorithm if g is unweighted and embedded on a surface the technique of djidjev solved the problem in o g time the problem on g with negative edge weights can be reduced to one with nonnegative edge weights using the standard reweighting technique via a tree in g cygan gabow and sankowski studied the problem on graphs whose edge weights are bounded integers yuster studied the version on undirected g asking for each node a shortest cycle containing the node see for algorithms that compute shortest cycles with prescribed topological properties see for approximation algorithms of the problem the closely related problem that seeks a minimum for given nodes s and 
t and its dual problem that seeks a maximum have been extensively studied even for only planar graphs see a minimum of g can be obtained in o m n time from a maximum f of g by identifying the edges from the nodes of g reachable from s to the nodes of g not reachable from s in the residual graph of g with respect to f no efficient reductions for the other direction are known orlin gave the only known o mn algorithms for the maximum problem on general graphs with integral edge weights for undirected planar g reif gave an o n n algorithm for the minimum problem frederickson improved the time complexity of reif s algorithm to o n log n the best known algorithms for both problems due to italiano et al run in o n log log n time the attempt of janiga and koubek to generalize reif s algorithm to directed planar g turned out to be flawed borradaile and klein and erickson gave o n log n time algorithms for both problems on directed planar graphs on directed planar unweighted g brandes and wagner and eisenstat and klein solved both problems in o n time the algorithm of kaplan and nussbaum is capable of exploiting the condition that nodes s and t are close for directed planar g the o n n algorithm of et al obtains the minimum weights of for any given s and all nodes t of for any given node subsets s and t of directed planar g the o n n algorithm of borradaile klein mozes nussbaum and computes a subgraph c of g with minimum weight such that there is no in g c for any s s and t t on undirected planar g borradaile sankowski and gave an o n n algorithm to compute a gomoryhu tree a compact representation of with minimum weights for all nodes s and the kernel of our result is an o n log log n algorithm for computing noncrossing shortest paths among nodes well ordered on a common face of a directed plane graph which is extended from the algorithm of italiano et al for an undirected plane graph a closely related problem seeks noncrossing paths between k given terminal pairs on h faces with minimum total weight in a plane graph takahashi suzuki and nishizeki solved the problem for undirected plane graphs with h in o n log k time papadopoulou addressed the geometric version of the problem where the terminal pairs are on the boundaries of h polygonal obstacles in the plane with complexity n and gave an o n algorithm for the case h erickson and nayyeri generalized the result of takahashi et solving the problem for undirected planar graphs in h n log k time they also generalized the result of papadopoulou to solve the geometric version in h n time each of these algorithms computes an implicit representation of the answers which may have total size kn polishchuk and mitchell addressed the problem of finding noncrossing thick paths with minimum total weight takahashi suzuki and nishizeki also considered the rectilinear version of the problem technical overview and outline our proof for theorem consists of a series of reductions based upon the duality between simple cycles and minimal cuts in plane graphs section gives an o n reduction from the and problems in an planar graph to the problem of finding a shortest cycle in an o plane graph g lemma let c be a balanced separator of g that corresponds to a fundamental cycle with respect to a tree of a shortest cycle that does not cross c can be recursively computed from the subgraphs of g separated by although we can not afford to compute a shortest cycle that crosses c section reduces the problem of finding a shortest cycle to finding a cycle a cycle that crosses c with 
the property that if it is not shortest then a shortest cycle that does not cross c has to be a shortest cycle of g lemma this reduction is a recursive algorithm using the balanced separator c and thus introduces an o log n overhead in the running time a cycle of g that crosses a shortest path p of g can be shortcutted into a cycle that crosses p at most once section reduces the problem of finding a cycle to that of finding a c p cycle a cycle whose weight is no more than that of any cycle that crosses a shortest path p of g in c exactly once lemma by the technique of reif that incises g along p section further reduces the problem of finding a c p cycle to that of finding shortest noncrossing paths among nodes well ordered on the boundary of external face lemma as a matter of fact this problem can be solved by the o n log n algorithm of klein already yielding improved o n n algorithms for the and problems mozes et al also mentioned that o n n algorithms can be obtained by plugging in the o n log n minimum algorithm of borradaile and klein into a directed version of the reduction algorithm of chalermsook et al to achieve the time complexity of theorem section solves the problem in o n log log n time by extending the algorithm of italiano et al for an rected plane graph section concludes the paper reduction to finding shortest cycles directed graph g is bidirected if for any two nodes s and t of g st is an edge of g if and only if ts is an edge of the graph in figure a is not bidirected the degree of node v in bidirected g is the number of neighbors of v in the degree of bidirected g is the maximum degree of the nodes in a bidirected plane graph is a bidirected planar graph equipped with a plane embedding in which edges between two adjacent nodes are bundled together figures b and c show two bidirected plane graphs and a cycle passing each node at most once is simple a cycle is degenerate if it is a node or passes both edges st and ts for two nodes s and a cycle not simple respectively degenerate is respectively cycle in figure a is and in the graph g of figure a cycle is degenerate and simple cycle is and simple and cycle is degenerate and the shortest degenerate cycle of g is with weight the shortest cycle of g is with weight theorem can be proved by the following lemma lemma it takes o n log n log log n time to compute a shortest cycle in an o simple bidirected plane graph with nonnegative edge weights proof of theorem adding edges with weights respectively to the input graph does not affect the weight of minimum cuts respectively shortest cycles hence we may assume without loss of generality that the input graph has at least four nodes and is a simple bidirected plane graph such that each face of is a triangle see figures a and b for examples let the dual of be the simple bidirected plane graph on the faces of sharing the same set of edges with that is obtainable in o n time from as follows for any two adjacent nodes s and t of there are directed edges f g st and gf ts in where f and g are the two faces of incident with the bundled edges between s and t such that face g immediately succeeds face f in clockwise order around node s of see figure c for an example observe that c is a minimal cut of if and only if c is a simple cycle of by nonnegativity of edge weights a shortest cycle of is a minimum cut of for instance the shortest cycle of the graph in figure c is with weight it corresponds to the of which in turn corresponds to the minimum cut of although the degenerate cycle is a 
shortest cycle of it does not correspond to a cut of g in the above manner since each node of has exactly three neighbors the statement of the theorem for the problem follows from applying lemma on by nonnegativity of edge weights it takes o n time to obtain a shortest degenerate cycle of by examining the o n degenerate cycles of on exactly two nodes by lemma the statement of the theorem for the problem is immediate from the following claim it takes o n time to obtain from an o n o simple bidirected plane graph g such that a shortest cycle of can be computed from a shortest cycle of g in o n time let and be simple bidirected plane graphs with nonnegative edge weights such that is obtained from by the following o d operation on a node v of with d if ud are the neighbors of v in in clockwise order around v then s plit v v a and b and figure bidirected plane graphs and and their edge weights are in black solid lines shortest cycles and are in blue dashed lines shortest cycles and are in red dotted lines adds path vd with new nodes vd replaces edge ui v by edge ui vi with the same weight for each i d replaces edge vui by edge vi ui with the same weight for each i d and deletes see figure for an example of and an o n o simple bidirected plane graph g can be obtained in o n time from by iteratively applying s plit on each node v of with degree d to prove the claim it suffices to ensure the following statement a shortest cycle of is obtainable in o d time from a shortest nondegenerate cycle of for each ui uj p of with i j d such that p has at least two edges and all internal nodes of p are in vd we replace p by path ui vuj by w p w ui vuj we have w w since is so is the resulting o d obtainable cycle of since may pass v more than once could be see figure for an example of and it remains to show w w for any shortest simple cycle of by nonnegativity of edge weights we have w w even if is let be the cycle of that is obtained from as follows if there is a path ui vuj with i j d in then replace it by path ui vi vj uj by w ui vuj w ui vi vj uj we have w w otherwise let since is simple there is at most one path ui vuj in since is so is see figure for an example of and by w w w w is a shortest nondegenerate cycle of the rest of the paper proves lemma via balanced separating cycles let c be a simple cycle of a bidirected plane graph g with nonnegative edge weights let intg c respectively extg c denote the subgraph of g consisting of the nodes and edges on the boundary of the faces of g inside respectively outside of a nondegenerate cycle of g is if one of and is a shortest cycle b h a g c and c figure a the bidirected plane graph g and its edge weights are in black the blue dashed cycle c is a segmented cycle of g whose segments are shortest paths and of the shortest cycle of intg c is with weight the shortest cycle of extg c is with weight the red dotted cycle c with weight is the unique cycle of g and the unique shortest cycle of b a bidirected plane graph of g where respectively is a shortest cycle of intg c respectively extg c we say that c is segmented if it consists of the following three paths in order a shortest path an edge and the reverse of a shortest path where one of and is allowed to be a node let shortest paths and be the segments of see figure a for an example this section proves lemma using lemmas and section proves lemma lemma let g be an o simple bidirected plane graph with nonnegative edge weights given a segmented simple cycle c of g together with its segments it takes o n log log n 
time to compute a cycle of lemma henzinger klein rao and subramanian it takes o n time to compute a tree rooted at any given node of an connected simple directed planar graph with nonnegative edge weights lemma lipton and tarjan goodrich klein mozes and sommer lemma let be an simple undirected plane triangulation with nonnegative face weights summing to such that the weight of each face of is at most given any spanning tree t of it takes o n time to obtain an edge e of t such that the total weight of the faces of inside respectively outside of the simple cycle in t e is no more than proof of lemma we give a recursive algorithm on the input graph h which can be assumed to be connected without loss of generality for each node y whose neighbors x and z are we replace y and its incident edges by edges xz and zx with weights w xy w yz and w zy w yx respectively the resulting graph g obtainable in o n time from h remains an o simple connected bidirected plane graph see figure for an example of h and let be the number of faces in since each maximal simple path on the nodes of g has o edges g has o nodes a shortest cycle of h can be obtained in o n time from a shortest cycle of g which can be found in o time for the case with a g and t a and figure a the bidirected plane graph g having faces and its edge weights are in black a tree t rooted at is in blue dashed lines b the plane triangulation consists of all edges the numbers are weights of the faces of the undirected version of g consists of the black solid edges and the blue dashed edges the undirected version of t is in blue dashed lines the edges in are in red dotted lines to obtain a shortest cycle of g for the case with let t be an o n obtainable tree of g rooted at an arbitrary node as ensured by lemma for each face f of the simple undirected unweighted version of g having size k let f be triangulated into k faces via adding k edges without introducing multiple edges let an arbitrary one of the k faces be assigned weight and let the remaining k faces be assigned weights let be the resulting simple plane triangulation the undirected version of t is a spanning tree of and see figure for an example lemma ensures an edge xy of obtainable in o n time such that the face weights of inside respectively outside of the simple cycle of xy sum to at most for instance such an edge xy in the example in figure is if x and y are adjacent in g then let e otherwise let e consist of edges xy and yx with weights we union g and e to obtain a simple bidirected plane graph let s be the least common ancestor of x and y in t let c be the segmented simple cycle of consisting of the of t edge xy and the reverse of the of t by lemma it takes o n log log n time to compute a cycle of let c e and c no matter e or not and are subgraphs of g we recursively compute a shortest cycle respectively in respectively which is also a shortest cycle of c respectively c by definition of a cycle c in with minimum weight is a shortest cycle of if c passes an edge in e then the weight of each cycle of and g is otherwise we return c as a shortest cycle of the algorithm runs in o n log log n time without accounting for the time for its subsequent recursive calls by the number implying that there respectively of faces in respectively is at most are o log n levels of recursion by the overall number of faces in each recursion level is o n implying that the overall number of nodes in each recursion level is o n the algorithm runs in o n log n log log n time s t a g v u v t s u b h figure a 
with p t and c tvs the red dotted cycle and the blue dashed cycle are c p of g with minimum weight b h is obtained from incising g along p cycles that cross the separating cycle this section proves lemma by lemma which is proved by lemmas and section proves lemma if graph g has then let dg u v denote the weight of a shortest of if g has no then let dg u v lemma let g be an simple connected bidirected plane graph with nonnegative edge weights let be o n nodes on the boundary of the external face of g in order it takes overall o n log log n time to compute dg ui vi for each i let g be a simple bidirected plane graph a simple path q of g aligns with subgraph h of g if q or the reverse of q is a path of a simple path q of g passing at least one edge deviates from subgraph h of g if the edges and the internal nodes of q are not in for any simple path p of g a cycle of g is a p if it consists of a path aligning with p and a path deviating from p for any simple cycle of g and any path p of g aligning with c a p is a c p if the first edge of its path deviating from p is in intg c if and only if the last edge of its path deviating from p is in extg c for instance the c in figure a is a of g whose path aligning with is node the first edge respectively last edge of its path deviating from is in extg c respectively intg c so c is a c of c is also a c a cycle of g is c p if its weight is no more than that of any c p of lemma let g be an o simple bidirected plane graph with nonnegative edge weights let c be a simple cycle of given a path p of g aligning with c it takes o n log log n time to compute a c p cycle of proof let c be a c p of g with minimum weight for instance the red and blue cycles in figure a are two c p with minimum weight let be a shortest nondegenerate cycle of g passing at least one endpoint of p which can be obtained in o n time via examining shortest in uv vu by lemma for all o edges uv of g incident to at least one endpoint of p if c passes some endpoint of p then w w c implying pi a b c figure a the red dotted not intersecting is in intg c the blue dashed cycle not intersecting is in extg c b the red dotted cycle consists of and in order is a c c the degenerate cycle c is obtained from the red dotted cycle c by replacing the of c with the blue dashed of pi the green cycle c is a cycle contained by c that is a cycle ensured by the lemma the rest of the proof assumes that c does not pass any endpoint of p thus p has internal nodes let h be an o n o simple bidirected plane graph obtainable in o n time as follows suppose that with are the nodes of p in order let s and t we incise g along p by adding new nodes a new path p t and the reverse of p for each i letting each edge vui respectively ui v incident to ui in intg c p be replaced by vvi respectively vi v with the same weight letting the weight of each edge in p and the reverse of p be and embedding the resulting graph h such that p and p are on the external face see figure for an example by lemma it takes overall o n log log n time to compute dh ui vi and dh vi ui for each i let respectively be an i that minimizes dh ui vi respectively dh vi ui by lemma it takes o n time to obtain a simple shortest of h and a simple shortest of the weight of respectively is minimum over all ui vi respectively vi ui of h with i let respectively be the cycle of g corresponding to respectively let q be the path of c that deviates from p let ui and uj with i j be the first and last nodes of q respectively if the first edge of q is in intg c then c 
corresponds to a vi ui of h implying w w c if the last edge of q is in intg c then c corresponds to a uj vj of h implying w w c for instance the red respectively blue cycle of g in figure a corresponds to the red respectively blue of h in figure b thus one of and with minimum weight is a cycle ensured by the lemma proof of lemma let intg c and extg c let and be the given segments of let c be a shortest cycle of g whose number of edges not in is minimized over all shortest cycles of if c is a cycle of or then any cycle of g is including the one ensured by lemma the rest of the proof assumes that neither nor contains c by lemma it suffices to ensure that c is a c we need the following claim for each i if c pi then c is a pi of by the claim c intersects both and or else c would be a cycle of or as illustrated by figure a contradicting the assumption since c is a and a c consists of four paths and in order such that qi aligns with pi and ri deviates from for each i by the assumption if gi and gj then i j thus c is a c see figure b for an illustration it remains to prove the claim assume for contradiction that c intersects pi but is not a pi for an index i there are nodes of pi with and such that precedes in pi succeeds in pi the and the of c deviate from pi and the of c deviates from the of c let c be the cycle of g obtained from c by replacing the of c with the path of pi since pi is a shortest path of g w c w c since c is the reverse of each of the of c is not in c thus even if c is degenerate there is a nondegenerate cycle c in c see figure c for an illustration by nonnegativity of edge weights w c w c by w c w c c is a shortest cycle of g whose number of edges not in is fewer than the number of edges of c not in contradicting the definition of c noncrossing shortest paths among nodes on external face this section proves lemma via extending techniques of reif and italiano et al for undirected planar graphs algorithms for lemma and graphs lemma are reviewed in data structures for algorithm lemma are given in data structures that enables efficient partition of boundary nodes via noncrossing paths lemma are given in tools involving noncrossing shortest paths lemma are given in lemma is proved by lemmas and in graph let g be a simple bidirected plane graph a division d of g is an partition of g into bidirected plane subgraphs each of which is a piece of the multiplicity of node v of g in d is the number of pieces of d containing a node of g with multiplicity two or more in d is a boundary node of a face of a piece of d is a hole of the piece if it is not a face of for any r an see of g is a division of g with o pieces each having o r nodes o r boundary nodes and o holes lemma klein mozes and sommer for any r it takes o n time to compute an for an simple bidirected plane graph each of whose faces contains at most three nodes b k h a h figure a a piece h in which and are the boundary nodes in one hole and and the boundary nodes in the other hole b k h let d be an of for any connected component h of any piece of d let k h denote the complete directed graph on the boundary nodes of d in h in which w uv dh u v see figure for an example the dense distance graph see k d of d is the o n simple directed graph on the o r boundary nodes of d simplified from the union of k h over all connected components h of all pieces of d by keeping exactly one copy of parallel edges with minimum weight for any edge uv of k d an underlying is a in some connected component h of some piece of d with weight equal to w uv 
in k d for any path of k d an underlying path of consists of an underlying for each edge uv of lemma klein for any given d of an simple bidirected plane graph with nonnegative edge weights it takes o n log r time to compute k d and a data structure from which for any path of k d the first c edges of an underlying path of can be obtained in o c log log r time algorithm consider the following equation w w w w for distinct nodes of a simple directed graph h with edge weights a monge unit is a complete h equipped with a cyclic ordering for the nodes of h such that equation holds for any distinct nodes of h in order a monge unit is a complete bipartite h equipped with an ordering for each of the two maximal independent sets of h such that equation holds for any distinct nodes and of one independent set in order and any distinct nodes and of the other independent set in order a monge decomposition of a simple directed graph k with edge weights is a set m of monge units on node subsets of k such that k is the graph simplified from the union of the monge units in m the multiplicity of a node v of k in m is the number of monge units in m that contain the size of m is the sum of the multiplicities of all nodes of k in m an equivalent a b figure each of the two graphs can be simplified from the union of two monge units form of the following lemma is proved by mozes and using the algorithm of klein and used by kaplan mozes nussbaum and sharir specifically for any hole c of a piece h of d the complete graph on the nodes of c with w uv dh u v for any nodes u and v in c equipped with the cyclic ordering of c is a monge unit for instance the subgraphs of k h in figure b induced by and equipped with their cyclic orders on the holes are two monge units for any two holes and of a piece h of d mozes et al showed that the complete bipartite graph on the nodes of and with w uv dh u v for nodes u and v such that each of and contains exactly one of u and v can be simplified from the union of o monge units for instance the subgraph of k h in figure consisting of edges between and can be simplified from the union of the graphs in figures a and b the edges of the graph in figure a from i respectively i to i respectively i form a monge unit the edges of the graph in figure b from i respectively i to i respectively i form a monge unit lemma for any given d of an simple bidirected plane graph with nonnegative edge weights it takes o n log r time to obtain a monge decomposition m d of k d such that the multiplicity of a node of k d in m d is o times its multiplicity in as summarized in the following lemma given a monge decomposition of graph k there are o m m obtainable data structures for range minimum queries see kaplan et al and gawrychowski mozes and weimann with which the algorithm of fakcharoenphol and rao outputs a tree of k in o m m time lemma given a monge decomposition of a simple strongly connected directed graph k with nonnegative edge weights it takes o m m time to compute a tree of k rooted at any given node lemma let d be a given of an simple plane graph with nonnegative edge weights it takes o n log r time to compute a data structure from which for any subset x of the boundary nodes of d such that the subgraph k of k d induced by x is strongly connected it takes o m m time to compute a tree of k rooted at any given node where m is the sum of the multiplicities of the nodes of x in proof let m d be a monge decomposition of k d as ensured by lemma let m consist of the subgraph h x of h induced by x for 
each monge unit h in m d each h x remains x y z x z y b g a g and figure a and are noncrossing shortest paths of b g a monge unit with the induced cyclic ordering respectively orderings of the nodes in h x for the first respectively second type thus m is a monge decomposition of k preserving the property that the multiplicity of a node of k in m is o times its multiplicity in d implying that the size of m is o m it takes overall o m time to obtain the induced cyclic ordering or the two induced orderings of the nodes of h x from h for each monge unit h in m d since the weight of each edge of h x can be obtained in o time from its weight in h we have an implicit representation of m in o m time the lemma follows from lemma noncrossing paths let g be a simple connected bidirected plane graph let be distinct nodes on the boundary of the external face of connected plane graph g in order a simple and a simple of g are noncrossing if is empty or is a path for instance in figure in red and in blue are noncrossing for noncrossing and let g denote the connected bidirected plane subgraph of g enclosed by and the and on the boundary of the external face of g following the order of see figure for an example let d be an of our proof of lemma needs a data structure b d with the following property for distinct nodes on the external face of g in order any disjoint simple and of g and any simple of g such that and are noncrossing given x and it takes o log r time to obtain x and x where x i j with i j consists of the boundary nodes of d in g pi pj is the sum of multiplicities of the nodes of x in d and is the number of edges in see figure for an illustration lemma it takes o n time to compute a data structure b d for any given d of any simple connected bidirected plane graph proof given x and the edge set e of it takes o time to obtain the nodes of x in e which belongs to x x let x consist of the nodes of x not in if x then x x x the rest of the proof assumes x let respectively consist of the pieces h of d such that h contains nodes of x and no respectively some edges of we have since g is connected and e let a be the o obtainable undirected bipartite graph on the nodes x in x and the pieces h of d in such that h and x are adjacent in a if and only if h contains x in the nodes of x in the same connected component of a either all belong to x or all belong to x since g is connected each connected component of a contains a node of x h h h h b g a g c g figure an illustration for the definition of b d where is the blue solid is the green and is the red dotted and are disjoint and are noncrossing a g in which the boundary nodes of d form x b g in which the boundary nodes of d form x c g in which the boundary nodes of d form x h that belongs to a piece of h in it takes overall o time to obtain h e c e and c x for each hole c of each piece h of d in since each piece of d has o holes it remains to show that with the b d defined below for each hole c of each piece h of d in it takes o m log r time to determine the nodes of c x in x where m is the number of nodes in h x plus the number of edges in h assume without loss of generality that the external face of each piece h of d is a hole of the o n obtainable data structure b d consists of the cyclic ordering of the incident edges around each node of g and the following items for each hole c of each piece h of d h h an arbitrary simple path q of h from a node of c to a node q on the external face of the ordering indices of the nodes on q the cyclic ordering indices of 
the nodes on h it takes overall o time to obtain q e for each hole c of each piece h of d in with the first part of b d if uv is an edge of g with u and v then it takes o time to determine whether v g with the second part of b d for any subset u of any piece h of d and any hole c of h it takes o k time to determine the ordering indices of the nodes of u q in q and the cyclic ordering indices of the nodes of u c in case c e as illustrated by figure a it takes overall o m log r time via sorting their ordering indices to compute for each node x of c the first node u e in the traversal of c starting from x following the order of and the node v of c preceding u in the traversal we have x x if and only if v g which can be determined in o time case c as illustrated by figure b if then let v be the node preceding the first node u of q in let c be the boundary of the external face of as illustrated by figure c if q e then let v be the node of c preceding the first node u of c in e on the traversal of c starting from q following the order of either way it takes o m v x q c c q q e u e v c e v e u u q e q a c b figure illustrations for the proof of lemma u v figure an illustration for the proof of lemma time to obtain v and determine whether v g if v g then c x x otherwise c x x noncrossing shortest paths lemma let g be a simple connected bidirected plane graph with nonnegative edge weights if nodes are on the boundary of the external face of g in order then for any shortest path of g there is a shortest of g such that and are noncrossing proof as illustrated by figure suppose that is a shortest of g with let u respectively v be the first respectively last node of in let be obtained from by replacing its with the of by the order of on the boundary of the external face of g is well defined and is a shortest of g such that and are noncrossing lemma let g be an simple connected bidirected plane graph with nonnegative edge weights let uk vk be distinct nodes on the boundary of the external face of g in order for each i k let pi be a simple shortest ui vi of g such that and pk are noncrossing let h be the number of nodes of g pk not in pk given pk and pk consider the problem of computing dg ui vi for all i if pk then the problem can be solved in o h log k time if pk and we are given a set z of o nodes such that for each i k at least one shortest ui vi passes at least one node of z then the problem can be solved in o h time if pk and we are given w pk then the problem can be solved in o h time proof since pk and pk are given it takes o h time to obtain g pk excluding the edges and internal nodes of statements and follow from lemmas and as for statement under the assumption that a simple shortest ua va pa and a simple shortest ub vb pb of g are given and disjoint below is the recursive algorithm m easure a b with a b k for solving the a b of computing dg ui vi for all indices i with a i b let i a b by lemma it takes time linear in the number of nodes in g pa pb to obtain dg ui vi and a simple shortest ui vi pi of g pa pb that is noncrossing with both pa and pb for the a i if pa pi then call m easure a i otherwise apply statement with z consisting of an arbitrary node in pa pi for the i b if pi pb then call m easure i b otherwise apply statement with z consisting of an arbitrary node in pi pb the algorithm for the statement obtains dg and dg uk vk from and pk and calls m easure k since each dg ui vi with i k is computed by lemma or statement the correctness holds trivially by the choice of i m easure k runs 
in o log k levels of recursion since pa pb holds for each call to m easure a b each node of g pk appears in at most two subgraphs g pa pb in the same level of recursion thus the overall running time for each level of recursion is o h the algorithm runs in o h log k time proving lemma proof of lemma for each i let di dg ui vi with the modification below each di with i equals the weight of a shortest in the resulting g which remains an o n simple connected bidirected plane graph for each i add new nodes and in the external face edges ui and vi and edges ui and vi contract each strongly connected subgraph into a single node delete all and delete all except one copy of each set of multiple edges with minimum weight thus the rest of the proof assumes that are distinct and g does not have any cycles implying that all shortest paths of g are simple let be an o n bidirected plane graph obtainable in o n time from g by identifying nodes ui and vi into a new node zi for each i and then triangulating each face of size larger than let r max by lemma an for can be computed in o n time let be the division of g induced by each piece of is obtained from a piece of by deleting the edges added to triangulate faces of size larger than each piece of has o r nodes o r boundary nodes and o holes so does each piece of let i consist of indices and and the indices i such that at least one of ui and vi is a boundary node of since each zi with i i is a boundary node in the cardinality of i is o r to turn both of ui and vi with i i subroutine s olve a b if i a b then solve the a b by lemma and return if i a b then let i be a median of i a b and let p respectively p be a shortest ui vi whose first respectively last c edges can be obtained in o c log log r time case p pa pb let pi p call l abel pi s olve a i and s olve i b return case p pa pb call l abel p ui x where x is the first node of p in pa pb call l abel p y vi where y is the last node of p in p ui x pa pb case y pa pb let j be the index in a b with x pj if y pj then solve the a b by lemma with z x y return if y pj then let pi p ui x pj x y p y vi implying w pi pj y x if x pa then solve the a i by lemma and call s olve i b return if x pb then solve the i b by lemma and call s olve a i return case y pa pb implying y p ui x and y x let pi p ui y p y vi let z x if x pa then solve the a i by lemma and call s olve i b return if x pb then solve the i b by lemma and call s olve a i return figure subroutine s olve a b into boundary nodes we introduce o new o r pieces which form a partition of the nodes ui and vi with i i let d be the resulting division of each new piece of d has o r nodes and no edges so it has o r boundary nodes and o holes thus d is an of g such that each ui with i is a boundary node in d if and only if so is vi let be the simple bidirected plane graph with edge weights obtained from g by reversing the direction of each edge let d be the of corresponding to by equation it takes o n log log n time to compute k d and k d and the data structures ensured by lemmas and for any nodes x and y in a shortest path p of g let p x y denote the of p we need a subroutine l abel p to compute label z for each node z of a shortest path p of g under the assumption that z for at most one node of p is let z be the node with z if there is no such a node then let z be an arbitrary node of p and let z for each node z that precedes z in p let z z w p z z for each node z that succeeds z in p let z z w p z z subroutine l abel p runs in o time per node of p and does not 
overwrite z for any z with z after running l abel p for any nodes x and y of p w p x y can be obtained from y x in o time for any indices a and b let set i a b consist of the indices i i with a i b for each i let pi be a shortest ui vi of g obtainable in o n time by lemma if then the lemma follows from lemma with z x for an arbitrary node x the rest of the proof assumes the algorithm proving the lemma calls l abel l abel pk and s olve where the main subroutine s olve a b as defined in figure and elaborated below solves the a b of computing di for all indices i with a i b under the condition that ua x va ua vi vb y va ua ui vi ui vi ub vb ub vb x x va y ui ub y a b c figure illustrations for the proof of lemma all pa and pb are in black each p ui x is in red dots each p y vi is in blue dashes shortest ua va pa of g and shortest ub vb pb of g are disjoint z is for each node z pa pb and the set x a b of boundary nodes of d in g pa pb is given by equation it remains to prove that s olve correctly solves the in o n log r time if i a b then all ui with a i b are not boundary nodes in since these ui induce a connected subgraph of g they belong to a common piece of d implying b a o r the a b can be solved by lemma in o h a b log r time where h a b is the number of nodes in g pa pb that are not in pa pb for the case with i a b we can not afford to directly compute a shortest ui vi pi of g for a median i of i a b by lemma instead in the subgraph of k d induced by the given set x a b of boundary nodes of d in g pa pb we compute a shortest ui vi respectively of k d respectively k d the first respectively last c edges of whose underlying path p respectively p can be obtained in o c log log r time by lemma by lemma g pa pb contains at least one shortest ui vi of g implying that the subgraph of k d induced by x a b contains at least one shortest ui vi of k d therefore p and p are shortest ui vi of g in g pa pb if p does not intersect pa pb then it takes o log log r time per node to obtain p as in case of figure the subroutine lets pi p and calls l abel pi s olve a i and s olve i b if p intersects pa pb it takes o log log r time per node to obtain p ui x and p y vi where x is the first node of p in pa pb and y is the last node of p in p ui x pa pb as stated by the first two bullets in case of figure the subroutine calls l abel p ui x and l abel p y vi as illustrated by figure a if each of pa and pb contains exactly one of x and y then the a b is solved in o h a b time by lemma with z x y as stated by the first bullet in case of figure as illustrated by figure b if x y pa then let pi p ui x pa x y p y vi the i b is solved by calling s olve i b the a i is solved by lemma with w pa pi y x in o h a b time the case with x y pb is similar the second bullet of case in figure states these two cases as illustrated by figure c if x pa and y pa pb then the shortest ui vi pi p ui y p y vi is disjoint with pa pb the i b is solved by calling s olve i b since at least one shortest ui vi of g pa pi passes x the a i subproblem can be solved in o h a b time by lemma with z x the case pa pb is similar case in figure states these two cases with x pb and y the correctness holds trivially since each di with i is computed somewhere during the execution of s olve by lemma since i is chosen to be a median of i a b in each subroutine call to s olve a b there are o log n levels of recursion in executing s olve let m a b be the sum of the multiplicities of the nodes of x a b in by lemma the time for computing and is o m a b log m a 
b in order to maintain the condition that x a b is given whenever s olve a b is called we apply lemma to obtain x a i and x i b in o m a b mi log r time before calling s olve a i or s olve i b where mi is the number of edges in pi pa pb since pa and pb are disjoint each boundary node of d is contained by one or two subgraphs g pa pb of the same recursion level since there are o pieces of d and each piece of d has o r boundary nodes the sum of m a b over all subgraphs g pa pb at the same recursion level is o r since each edge of g appears in at most one pi pa pb for all subroutine calls to s olve a b the sum of all mi throughout the execution of s olve is o n by equation the overall time for computing and is n o log n log n o n r the overall time of finding all paths p p ui x and p y vi is o n log log r since their edges are disjoint and all of them are obtainable in o log log r time per node therefore the running time of s olve is dominated by the sum of the o h a b log r time for solving the a b subproblems by lemmas and at the bottom of recursion since the sum of h a b over all these a b is o n the running time of s olve is o n log r the lemma is proved concluding remarks we give the first known o n log n log log n algorithms for finding a minimum cut and a shortest cycle in an simple directed planar graph g with nonnegative edge weights for the case that g is restricted to be unweighted our algorithm remains the best known result for the problem the best algorithm for the problem running in o n log n time is obtained by plugging in the o n minimum stcut algorithm of brandes and wagner and eisenstat and klein to a directed version of the reduction algorithm of chalermsook et al thus an interesting future direction is to further reduce the running time of our algorithms on both problems for this special case extending our results to graphs is also of interest acknowledgment we thank the anonymous reviewers for helpful comments references borradaile and klein an o n log n algorithm for maximum in a directed planar graph journal of the acm borradaile klein mozes nussbaum and maximum flow in directed planar graphs in time in proceedings of the annual ieee symposium on foundations of computer science pages borradaile sankowski and min oracle for planar graphs with preprocessing time acm transactions on algorithms brandes and wagner a linear time algorithm for the arc disjoint menger problem in planar directed graphs algorithmica cabello finding shortest contractible and shortest separating cycles in embedded graphs acm transactions on algorithms cabello chambers and erickson shortest paths in embedded graphs siam journal computing cabello colin de and lazarus finding shortest cycles in directed graphs on surfaces in proceedings of the acm symposium on computational geometry pages chalermsook fakcharoenphol and nanongkai a deterministic time algorithm for finding minimum cuts in planar graphs in proceedings of the annual symposium on discrete algorithms pages chang and lu computing the girth of a planar graph in linear time siam journal on computing cormen leiserson rivest and stein introduction to algorithms mit press edition cygan gabow and sankowski algorithmic applications of s theorem shortest cycles diameter and matchings in proceedings of the annual ieee symposium on foundations of computer science pages djidjev a faster algorithm for computing the girth of planar and bounded genus graphs acm transactions on algorithms eisenstat and klein algorithms for max flow and shortest paths in 
planar graphs in proceedings of the acm symposium on theory of computing pages erickson maximum flows and parametric shortest paths in planar graphs in proceedings of the annual symposium on discrete algorithms pages erickson fox and nayyeri global minimum cuts in surface embedded graphs in proceedings of the annual symposium on discrete algorithms pages erickson and optimally cutting a surface into a disk discrete computational geometry erickson and nayyeri minimum cuts and shortest cycles via homology covers in proceedings of the annual symposium on discrete algorithms pages erickson and nayyeri shortest walks in the plane in proceedings of the annual symposium on discrete algorithms pages erickson and worah computing the shortest essential cycle discrete computational geometry fakcharoenphol and rao planar graphs negative weight edges shortest paths and near linear time journal of computer and system sciences fox shortest cycles in directed and undirected surface graphs in proceedings of the annual symposium on discrete algorithms pages fox fast algorithms for surface embedded graphs via homology phd thesis university of illinois at frederickson fast algorithms for shortest paths in planar graphs with applications siam journal on computing gabow a matroid approach to finding edge connectivity and packing arborescences journal of computer and system sciences gabow and tarjan faster scaling algorithms for network problems siam journal on computing gawrychowski mozes and weimann submatrix maximum queries in monge matrices are equivalent to predecessor search in speckmann editor proceedings of the international colloquium on automata languages and programming pages goldberg scaling algorithms for the shortest paths problem siam journal on computing gomory and hu network flows journal of the siam goodrich planar separators and parallel polygon triangulation journal of computer and system sciences hao and j orlin a faster algorithm for finding the minimum cut in a directed graph journal of algorithms henzinger klein rao and subramanian faster algorithms for planar graphs journal of computer and system sciences itai and rodeh finding a minimum circuit in a graph siam journal on computing italiano nussbaum sankowski and improved algorithms for min cut and max flow in undirected planar graphs in proceedings of the acm symposium on theory of computing pages janiga and koubek minimum cut in directed planar networks kybernetika kaplan mozes nussbaum and sharir submatrix maximum queries in monge matrices and monge partial matrices and their applications in proceedings of the annual symposium on discrete algorithms pages kaplan and nussbaum minimum cut in undirected planar graphs when the source and the sink are close in schwentick and editors proceedings of the international symposium on theoretical aspects of computer science pages karger minimum cuts in time journal of the acm kawarabayashi and thorup deterministic global minimum cut of a simple graph in time in proceedings of the acm symposium on theory of computing pages khuller and naor flow in planar graphs a survey of recent results in planar graphs dimacs series in discrete math and theoretical computer science pages ams klein shortest paths in planar graphs in proceedings of the annual symposium on discrete algorithms pages klein mozes and sommer structured recursive separator decompositions for planar graphs in linear time in proceedings of the acm symposium on theory of computing pages klein mozes and weimann shortest paths in directed 
planar graphs with negative lengths a o n n algorithm acm transactions on algorithms nussbaum sankowski and single source all sinks max flows in planar digraphs in proceedings of the annual ieee symposium on foundations of computer science pages and sankowski and shortest cycles in planar graphs in o n log log n time in proceedings of the annual european symposium on algorithms pages liang minimum cuts and shortest cycles in directed planar graphs via shortest paths master s thesis national taiwan university july liang and lu minimum cuts and shortest cycles in directed planar graphs via noncrossing shortest paths siam journal on discrete mathematics lingas and lundell efficient approximation algorithms for shortest cycles in undirected graphs information processing letters lipton and tarjan a separator theorem for planar graphs siam journal on applied mathematics monien the complexity of determining a shortest cycle of even length computing motwani and raghavan randomized algorithms cambridge university press mozes nikolaev nussbaum and weimann minimum cut of directed planar graphs in o n log log n time computing research repository december http mozes and shortest paths in planar graphs with real lengths in o n log log n time in de berg and meyer editors proceedings of the annual european symposium on algorithms lecture notes in computer science pages springer mulmuley vazirani and vazirani matching is as easy as matrix inversion combinatorica nagamochi and ibaraki computing in multigraphs and capacitated graphs siam journal on discrete mathematics j orlin max flows in o nm time or better in proceedings of the acm symposium on theory of computing pages papadopoulou shortest paths in a simple polygon international journal of computational geometry and applications polishchuk and mitchell thick paths and flows in polygonal domains in proceedings of the acm symposium on computational geometry pages reif minimum cut of a planar undirected network in o n n time siam journal on computing roditty and tov approximating the girth acm transactions on algorithms roditty and vassilevska williams subquadratic time approximation algorithms for the girth in proceedings of the annual symposium on discrete algorithms pages stoer and wagner a simple algorithm journal of the acm takahashi suzuki and nishizeki finding shortest rectilinear paths in plane regions in proceedings of the international symposium on algorithms and computation pages takahashi suzuki and nishizeki shortest noncrossing paths in plane graphs algorithmica vassilevska williams multiplying matrices faster than in proceedings of the acm symposium on theory of computing pages vassilevska williams and williams subcubic equivalences between path matrix and triangle problems in proceedings of the annual ieee symposium on foundations of computer science pages weihe s t in undirected planar graphs in linear time journal of algorithms weimann and yuster computing the girth of a planar graph in o n log n time siam journal on discrete mathematics algorithms for planar graphs and graphs in metric spaces phd thesis university of copenhagen yuster a shortest cycle for each vertex of a graph information processing letters yuster and zwick finding even cycles even faster siam journal on discrete mathematics
8
a survey of algorithms amgad walid faizan ur mohamed abdur saleh purdue university west lafayette usa umm university makkah ksa may may abstract a algorithm finds a path containing the minimal cost between two vertices in a graph a plethora of algorithms is studied in the literature that span across multiple disciplines this paper presents a survey of algorithms based on a taxonomy that is introduced in the paper one dimension of this taxonomy is the various flavors of the problem there is no one general algorithm that is capable of solving all variants of the problem due to the space and time complexities associated with each algorithm other important dimensions of the taxonomy include whether the algorithm operates over a static or a dynamic graph whether the algorithm produces exact or approximate answers and whether the objective of the algorithm is to achieve or is to only be goal directed this survey studies and classifies algorithms according to the proposed taxonomy the survey also presents the challenges and proposed solutions associated with each category in the taxonomy introduction the problem is one of the topics in computer science specifically in graph theory an optimal is one with the minimum length criteria from a source to a destination there has been a surge of research in algorithms due to the problem s numerous and diverse applications these applications include network routing protocols route planning traffic control path finding in social networks computer games and transportation systems to count a few there are various graph types that algorithms consider a general graph is a mathematical object consisting of vertices and edges an aspatial graph contains vertices where their positions are not interpreted as locations in space on the other hand a spatial graph contains vertices that have locations through the edge s a planar graph is plotted in two dimensions with no edges crossing and with continuous edges that need not be straight there are also various settings in which a can be identified for example the graph can be static where the vertices and the edges do not change over time in contrast a graph can be dynamic where vertices and edges can be introduced updated or deleted over time the graph contains either directed or undirected edges the weights over the edges can either be negative or weights the values can be real or integer numbers this relies on the type of problem being issued the majority of algorithms fall into two broad categories the first category is singlesource sssp where the objective is to find the from a vertex to all other vertices the second category is apsp where the objective is to find the between all pairs of vertices in a graph the computation of can generate either exact or approximate solutions the choice of which algorithm to use depends on the characteristics of the graph and the required application for example approximate algorithms objective is taxonomy to produce fast answers even in the presence of a large input graph a special called a spanner can also be created from the main graph that approximates the distances so that a can be computed over that given the large body of literature on algorithms for computing the the objective of this survey is to present a breakdown of these algorithms through an appropriate taxonomy the taxonomy aims to help researchers practitioners and application developers understand how each algorithm works and to help them decide which type or category of algorithms to use given a specific scenario 
or application domain figure illustrates the proposed taxonomy where each branch describes a specific category of problem figure taxonomy of algorithms taxonomy as in figure the proposed taxonomy classifies the various algorithms into multiple highlevel branches the static branch in figure lists algorithms that operate over graphs with fixed weights for each edge the weights can denote distance travel time cost or any other weighting criteria given that the weights are fixed some static algorithms perform precomputations over the graph the algorithms related work try to achieve a between the query time compared to the precomputation and storage requirements static algorithms consists of two classical algorithms for fall under the two main categories sssp and apsp the sssp algorithms compute the from a given vertex to all other vertices the apsp algorithms compute the between all pairs of vertices in the graph hierarchical algorithms break the problem into a linear complexity problem this can lead to enhanced performance in computation by orders of magnitude algorithms optimize in terms of distance or time toward the target solution distance oracle algorithms include a preprocessing step to speed up the query time distance oracle algorithms can either be exact or approximate the dynamic branch in figure lists algorithms that process update or query operations on a graph over time the update operation can insert or delete edges from the graph or update the edge weights the query operation computes the distance between source and destination vertices dynamic algorithms include both apsp and sssp algorithms algorithms target graphs that change over time in a predictable fashion stochastic algorithms capture the uncertainty associated with the edges by modeling them as random variables parametric algorithms compute a solutions based on all values of a specific parameter replacement path algorithms computes a solution that avoids a specified edge for every edge between the source vertex and the destination vertex replacement paths algorithms achieve good performance by reusing the computations of each edge it avoids on the other hand alternative path algorithms also computes a shortest path between vertices that avoids a specified edge the distinguishing factor between both categories is that replacement paths are not required to indicate a specific vertex or edge on the other hand alternative avoids the specified edge on the the problem finds the approximate on weighted planar divisions related work zwick survey adopts a theoretical with regards to the exact and approximate shortest paths algorithms zwick s survey addresses sssp all pairs apsp spanners a weighted graph variation and distance oracles the survey illustrates the various variations that each category adopts when handling negative and edge weights as well as directed and undirected graphs sen surveys approximate algorithms with a focus on spanners and distance oracles sen s survey discusses how spanners and distance oracles algorithms are constructed and their practical applicability over a static setting sommer surveys query processing algorithms that the index size and the query time sommer s survey also introduce the transportation network class of algorithms and include algorithms for general graphs as well as planar and complex graphs many surveys focus on algorithms that target traffic applications especially route planning methods in such related work a network denotes a graph holzer et al classify variations of dijkstra s 
algorithm according to the adopted speedup approaches their survey emphasizes on techniques that guarantee correctness it argues that the effectiveness of techniques highly relies on the type of data in addition the best speedup technique depends on the layout memory and tolerable preprocessing time in contrast to optimal algorithms fu et al survey algorithms that target heuristic algorithms to quickly identify the heuristic algorithms aim is to minimize computation time the survey proposes the main distinguishing features of heuristic algorithms as well as their computational costs goldberg investigates the performance of shortestpath algorithms over road networks from a theoretical standpoint goldberg reviews algorithms dijkstra and and illustrates heuristic techniques for computing the given a subset of the graph the survey proves the good and bounds over a graph also it discusses pruning and illustrates how algorithms can be altered to compute reaches while maintaining the same time bound as their original counterparts delling and wagner survey route planning speedup techniques over some problems including dynamic and timedependent variants for example the authors argue that shortcuts used in static networks can not work in problem definition a network in essence they investigate which networks can existing techniques be adopted to bast illustrates techniques for fast routing between road networks and transportation networks bast s survey argues that the algorithms for both networks are different and require specialized techniques for each also the survey presents how the technique performs against dijkstra s algorithm moreover the survey presents two open questions namely how to achieve despite the lack of a hierarchy in transportation networks and how to efficiently compute local searches as in neighborhoods demetrescu and italiano survey algorithms that investigate fully dynamic directed graphs with emphasis on dynamic and dynamic transitive closures the survey focuses on defining the algebraic and combinatorial properties as well as tools for dynamic techniques the survey tackles two important questions namely whether dynamic achieve a space complexity of o and whether shortest path algorithms in a setting be solved efficiently over general graphs nannicini and liberti survey techniques for dynamic graph weights and dynamic graph topology they list classical and recent techniques for finding trees and in large graphs with dynamic weights they target two versions of the problem namely and what they refer to as cost updates of the weights dean s survey focuses on techniques in a dynamic setting it surveys one special case namely the fifo network as it exposes structural properties that allow for the development of efficient algorithms this survey presents these aspects that are different from all its predecessors first it presents a taxonomy that can aid in identifying the appropriate algorithm to use given a specific setting second for each branch of the taxonomy the algorithms are presented in chronological order that captures the evolution of the specific ideas and algorithms over time moreover our survey is more comprehensive we cover more recent algorithms that have been invented after the publication of the other surveys problem definition given a set of vertices v a source vertex s a destination vertex d where s d v and a set of weighted edges e over the set v find the between s and d that has the minimum weight the input to the algorithm is a graph g that consists of a set of 
vertices v and edges the graph is defined as g v e the edges can be directed or undirected the edges have explicit weights where a weight is defined as w e where e e or unweighted where the implicit weight is considered to be when calculating the algorithm complexity we refer to the size of the set of vertices v as n and the size of the set of edges e as static algorithms in this section we review algorithms for both the sssp and shortestpath apsp problems sssp definition given a graph g v e and source s v compute all distances s v where v v the simplest case for sssp is when the graph is unweighted cormen et al suggest that breadthfirst search can be simply employed by starting a scan from a root vertex and inspecting all the neighboring vertices for each neighboring vertex it probes the vertices until the path with the minimum number of edges from the source to the destination vertex is identified static algorithms dijkstra s algorithm solves the single source sssp problem from a given vertex to all other vertices in a graph dijkstra s algorithm is used over directed graphs with weights the algorithm identifies two types of vertices solved and unsolved vertices it initially sets the source vertex as a solved vertex and checks all the other edges through unsolved vertices connected to the source vertex for to the destination once the algorithm identifies the shortest edge it adds the corresponding vertex to the list of solved vertices the algorithm iterates until all vertices are solved dijkstra s algorithm achieves a time complexity of o one advantage of the algorithm is that it does not need to investigate all edges this is particularly useful when the weights on some of the edges are expensive the disadvantage is that the algorithm deals only with weighted edges also it applies only to static graphs dijkstra s algorithm performs a search in order to find the optimum and as such is known to be a greedy algorithm dijkstra s algorithm follows a successive approximation procedure based on bellman ford s optimality principle this implies that dijkstra s algorithm can solve the dynamic programming equation through a method called the reaching method the advantage of dynamic programming is that it avoids the search process by tackling the dynamic programming algorithms probe an exponentially large set of solutions but avoids examining explicitly all possible solutions the greedy and the dynamic programming versions of dijkstra s algorithm are the same in terms of finding the optimal solution however the difference is that both may get different paths to the optimal solutions fredman and tarjan improve over dijkstra s algorithm by using a fibonnaci heap this implementation achieves o nlogn m running time because the total incurred time for the heap operations is o n log n m and the other operations cost o n m fredman and willard introduce an extension that includes an o log n variant of dijkstra s algorithm through a structure termed the the provides constant amortized costs for most heap operations and o log n amortized cost for deletion driscoll and gabow propose a heap termed the relaxed fibonacci heap a relaxed heap is a binomial queue that allows heap order to be violated the algorithm provides a parallel implementation of dijkstra s algorithm another line of optimization is through improved priority queue implementations boas and boas et al implementations are based on a stratified binary tree the proposed algorithm enables online manipulation of a priority queue the algorithm has a 
processing time complexity of o loglog n and storage complexity of o n loglog n a study by thorup indicates the presence of an analogy between sorting and the sssp problem where sssp is no harder than sorting edge weights thorup describes a priority queue giving a complexity of o loglog n per operation and o m loglog n complexity for the sssp problem the study examines the complexity of using a priority queue given memory with arbitrary word size following the same analogy han proposes a deterministic integer sorting algorithm in linear space that achieves a time complexity of o m loglog n logloglog n for the sssp problem the approach by han illustrates that sorting arbitrarily large numbers can be performed by sorting on very small integers thorup proposes a deterministic linear space and time algorithm by building a hierarchical bucketing structure that avoids the sorting operation a bucketing structure is a dynamic set into which an element can be inserted or deleted the elements from the buckets can be picked in an unspecified manner as in a list the algorithm by thorup works by traversing a component tree hagerup improves over the algorithm of thorup achieving a time complexity of o n m log w where w is the width of the machine word this is done through a deterministic linear time and space algorithm bellman ford and moore develop an sssp algorithm that is capable of handling negative weights unlike dijkstra s algorithm it operates in a similar manner to dijkstra s where it attempts to compute the but instead of selecting the shortest distance neighbor edges with shortest distance it selects all the neighbor edges then it proceeds in n cycles in order to guarantee that all changes have been propagated through the graph while it provides a faster solution than bellmanford s algorithm dijkstra s algorithm is unable to detect negative cycles or operate with negative weights static algorithms however if there is a negative cycle then there is no that can be computed the reason is due to the lower total weight incurred due to the traversal cycle s algorithm achieves a complexity of o nm its strong points include the ability to operate on negative weights and detect negative cycles however the disadvantages include its slower when compared to dijkstra s algorithm also s algorithm does not terminate when the iterations do not affect the graph weights any further karp addresses the issue of whether a graph contains a negative cycle or not he defines a concept termed minimum cycle mean and indicates that finding the minimum cycle mean is similar to finding the negative cycle karp s algorithm achieves a time complexity of o nm yen proposes two performance modifications over bellman ford and moore the first involves the relaxation of edges an edge is relaxed if the value of the vertex has changes the second modification is dividing the edges based on a linear ordering over all vertices then the set of edges are partitioned into one or more subsets this is followed by performing comparisons between the two sets according to the proposed partitioning scheme a slight improvement to what yen proposes has been introduced by bannister and eppstein where instead of using an arbitrary linear ordering they use a random ordering the result is fewer number of iterations over both subsets apsp definition given a graph g v e compute all distances between a source vertex s and a destination v where s and v are elements of the set v the most general case of apsp is a graph with edge weights in this case 
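, all-pairs distances can be obtained simply by repeating a single-source computation from every vertex. As a rough illustration of that repeated-Dijkstra approach (a minimal sketch with our own names and conventions, assuming nonnegative weights and an adjacency-list input; it is not taken from any specific paper surveyed here):

    import heapq

    def dijkstra(adj, source):
        # Binary-heap Dijkstra; adj maps u to a list of (v, w) pairs with w >= 0.
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue  # stale entry
            for v, w in adj.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    def apsp_by_repeated_dijkstra(adj):
        # All-pairs shortest distances: one Dijkstra run per vertex,
        # roughly O(n (m + n) log n) in total with a binary heap.
        vertices = set(adj) | {v for nbrs in adj.values() for v, _ in nbrs}
        return {s: dijkstra(adj, s) for s in vertices}

    # Usage: the distances from 'a' come out as {'a': 0.0, 'b': 2.0, 'c': 3.0}.
    adj = {"a": [("b", 2.0), ("c", 5.0)], "b": [("c", 1.0)], "c": []}
    print(apsp_by_repeated_dijkstra(adj)["a"])

As noted next, this only beats cubic-time all-pairs methods when the graph is sparse.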
dijkstra s algorithm can be computed separately for each vertex in the graph the time complexity will be o mn logn a vast number of algorithms has been proposed that handle real for the shortestpath problem algorithm tries to find all pairs apsp in a weighted graph containing positive and negative weighted edges their algorithm can detect the existence of cycles but it does not resolve these cycles the complexity of algorithm is o where n is the number of vertices the detection of cycle is done by probing the diagonal path matrix algorithm can not find the exact between vertices pairs because it does not store the intermediate vertices while calculating however using a simple update one can store this information within the algorithm steps the space complexity of the algorithm is o however this space complexity can reach o by using a single displacement array the strong point of the algorithm is that it can handle edges and can detect cycles the main drawback though is that the timing complexity for running dijkstra s algorithm on all vertices to convert it from sssp to apsp will be o mn logn this timing complexity is lower than o if and only if m having a sparse graph many studies have been proposed better running time over s algorithm on edge weights a notable enhancement has been proposed by fredman that relies on a approach his approach relies on the theorem proposed by aho and hopcroft the complexity of an n xn matrix multiplication using a multiplication approach is similar to that of shortestpaths he shows that o n comparisons suffices to solve the apsp problem the algorithm achieves a complexity of o loglogn table summarizes the enhancements proposed for edges up to this date table algorithms and complexities for edges static algorithms time complexity n loglogn logn n n loglogn loglogn logn loglogn logn author the best result by han and takaoka achieve o loglogn reduction factor when compared to the result of their approach focuses on the distance product computation first an nxn matrix js divided into m each having dimensions where m is determined based on a specific criterion then the algorithm proceeds in a series of matrix manipulations index building encoding and partitioning steps until it reaches the proposed bound the best edge weight complexity is o logn first the algorithm sorts all adjacency lists in an increasing weight fashion then it performs an sssp computation n times and proceeds in iterations in the first phase it uses the notion of potential over the edges of vertices and selects and labels the edge with the minimum potential potential derived from the is defined as a probability distribution on complete directed graphs with arbitrary edge lengths that contain no negative cycles the algorithm runs in two main phases each with a specific invariant and has an o logn complexity the best positive integer edge weight complexity is o c where is the exponent being proposed by coppersmith and winograd their proposed algorithm provides a transition between the fastest exact and approximate algorithms with a linear error rate the algorithm focuses on directed graphs with small positive integer weights in order to obtain additive approximations the approximations are polynomial given the actual distance between pairs of vertices distance oracles definition given a graph g v e a distance oracle encompasses a data structure or index that undergoes preprocessing and a query algorithm the term distance oracle has been proposed by thorup and zwick it proposes a faster 
alternative to the sssp and apsp algorithms this can be achieved by preprocessing the graph and creating an auxiliary data structure to answer queries distance oracle operates in two phases namely a preprocessing phase and a query phase in the preprocessing phase information such as data structures or indexes are computed in contrast the query processing phase processes queries efficiently using the outcome from the preprocessing phase distance oracles may return exact or approximate distances a distance oracle provides an efficient between space in terms of data structure or index storage and query time exact distances fakcharoenphol and rao propose an algorithm for planar graphs that balances the between preprocessing and query time the preprocessing complexity for both space and time is n and the complexity is n their proposed approach creates a graph given a subset of vertices followed by the computation of the tree first the graph is divided into a set static algorithms of bipartite graphs the distance matrices of the bipartite graph need to comply with a condition referred to as the monge condition the proposed result of o n holds as long as the noncrossing condition is enforced klein et al propose a algorithm with a fast preprocessing complexity of o nlog n over a directed planar graph the graph can include both positive and negative edges given a planar directed graph g and a source vertex the algorithm finds a curve known as a jordan curve a jordan curve c is identified if it passes through o n vertices a boundary vertex is one that passes through cutting the graph and duplicating the boundary vertices creates subgraphs gi the algorithm passes through five stages recursively compute the distances from r within a graph where r is an arbitrary boundary vertex compute all distances between boundary vertices use a variant of to compute the graph distances from the boundary vertex r to all other boundary vertices use dijkstra s algorithm to compute the graph distances from the boundary vertex r to all other vertices use dijkstra s algorithm to compute graph distances from the source vertix this requires time of o nlogn djidjev proposes a faster query time algorithm and proves that for any s n a distance oracle can have a space complexity during preprocessing of o s and query time complexity of o djidjev s objective is to have an algorithm in which the product of and is than those of sssp and apsp problems the proposed algorithm provides a complexity of o n for any class of directed graphs where the separator theorem holds cabello improves the preprocessing time and provides a theoretical proof that for any s a distance oracle can have o s preprocessing space complexity and o s query time complexity this is slower than the algorithm proposed by djidjev by a logarithmic factor but still covers a wider range of the proposed approach constructs a data structure between any pair of vertices that can answer queries then the algorithm queries the data structure with those pairs proposes a constant algorithm for unweighted graphs and proves that for any s a distance oracle can have a space complexity of o the algorithm relies on the wiener index of a graph the weiner index defines the sum of distances between all pairs of vertices in a graph the proposed technique shows the existence of subquadratic time algorithms for computing the wiener index computing the wiener index has the same complexity as computing the average vertex pairs distances henzinger et al propose a sssp algorithm requiring 
o log nl time where l is the absolute value of an edge with the smallest negative value the proposed algorithm also achieves a similar bound for planar graphs and planar bipartite graphs they also propose a parallel and dynamic variant of the algorithm the key component of their approach is the use of based on planar separators mozes and sommer propose an algorithm to answer distance queries between pairs of vertices in planar graphs with edge weights they prove that for any s nloglogn a distance oracle can have s time complexity and o s space complexity distance queries can be answered in s the graph can be preprocessed in n and the generated data structure will have a size of o nloglogc the query time will be c where c is a cycle with c o n vertices approximate distances approximate distance oracles algorithms attempt to compute by querying only some of the distances it is important to note that algorithms that deal with finite metric spaces produce only approximate answers some algorithms create spanners where a spanner is a sparse that approximates the original graph they can be regarded as a spanning tree that maintains the locality aspects of the graph these locality aspects defines a stretch where a stretch is a multiplicative factor static algorithms that indicates the amount distances increase in the graph the stretch is a result of utilizing the spanner edges only other algorithms approximate distances by triangulation using a concept called landmark or beacon that is selected by random sampling where each vertex stores distances to all landmarks note that given the definition of approximate distance oracles the actual is still not guaranteed to be retrieved zwick presents an apsp algorithm for directed graphs that utilizes a matrix multiplication where the approximate distance is computed in o log where for any they define the stretch as and w represents the largest weighted edge identified in the graph aingworth et al propose an apsp algorithm for undirected graphs with unweighted edges that does not adopt a matrix multiplication approach a of not using fast matrix multiplication is a error they propose two algorithms one that achieves an additive error of in time o log n they also provide an estimate of graph paths and distances in o log n and another algorithm that achieves a query time of o m n log n dor et al improve on previous surplus results by proposing an apsp algorithm that computes the surplus estimate in they also show that for any k a surplus estimate takes to be computed their work relies on the one main observation that there is a set of vertices that represent vertices with high degree value in other words a set of vertices x is said to represent a set of y if all vertices in x have a neighbor in y cohen and zwick improve the work proposed by dor et al for weighted undirected graphs by proposing an algorithm that computes the surplus estimate of all distances in and estimate in they show that finding the estimated distances between in directed graphs is a hard problem similar to the boolean matrix multiplication this makes their proposed approximation algorithm only valid for undirected graphs their algorithm relies on two important aspects partitioning of the graph with the assumption that it is directed and the use of an sssp algorithm dijkstra s patrascu and roditty further improve the stretch bound of intermediate vertices on the expense of increasing the space requirements and achieve this approach defines the notion of balls defined as b where balls around 
each vertex grow geometrically and stop based on a specific criteria given the vertices s and t the happens when the balls do not intersect agarwal et al also propose a estimate approach that can be implemented in a distributed fashion the approach is mainly meant for compact routing protocols it aims to characterize the space and time for approximate distance queries in sparse graphs for both approaches above and the space versus query time depends on the number of edges for spanners elkin and peleg propose a general with space complexity of o where is a constant when and are also constants they claim that the stretch and spanners can be minimized in a simultaneous evaluation fashion baswana and sen propose a spanner with a stretch that can be computed in o km and with a size of o where k they provide a theoretical proof that a spanner with a stretch can be computed without distance computation in linear time through a novel clustering technique the proposed approach can take o k rounds each round explores an adjacency vertex list in order to determine the edges that need to be removed the advantage of this approach is its applicability to various computational environments the synchronous distributed model the external memory model and the crcw pram model for planar graphs thorup proposes an distance oracle this approach provides a constant number of through separators in contrast to lipton et al for each vertex it stores the distances to a set of o landmarks per level this process is performed recursively for o logn levels static algorithms kawarabayashi et al propose a planar graph algorithm that provides tunable where a polylogarithmic query time can be achieved while maintaining a linear space requirement with respect to the graph size the proposed approach achieves a preprocessing time complexity of o nlog n and query time of o log n it achieves faster running time than thorup s approach that computes a set c of connections that covers all vertices of a graph with every vertex containing o connections in contrast only a subset of vertices is covered using kawarabayashi et al approach the approach is o times the number of paths in space complexity for complex networks chen et al proposes a distance oracle over random graphs with estimate that has a space complexity of o their approach adopts the distance oracle proposed by thorup and zwick where they use vertices as landmarks the adaptation includes selecting vertices with the highest degree as landmarks it encodes the in the vertex labels a search algorithm is based on adding annotations to vertices or edges of the graph that consist of additional information this information allows the algorithm to determine which part of the graph to prune in the search space simple search hart et al propose a simple algorithm termed the algorithm proposes a heuristic approach in finding the unlike dijkstra s algorithm is an informed algorithm where it searches the routes that lead to the final goal is an optimal greedy algorithm but what sets aside from other algorithms is its ability to maintain the distance it traveled into account always finds the if an admissible heuristic function is used the strong point of the algorithm is that it is meant to be faster than dijkstra since it explores less number of vertices on the downside if does not use a good heuristic method it will not reach the some of the variants of the algorithm use landmarks and other techniques in order to achieve better performance than under various setups goldberg and werneck 
propose a preprocessing phase where initially a number of landmarks are selected followed by the computation of the where it is stored between the vertices of all these landmarks they propose a technique using the computed distances in addition to the triangle inequality property the technique is based on the algorithm the landmark chosen and the triangle inequality gutman offers a comparable solution to the problem where his work is based on the concept of reach gutman s technique relies on storing a reach value and the euclidean coordinates of all vertices the advantage of gutman s approach is that it can be combined with the algorithm when compared to the work by goldberg and werneck gutman s outperforms their proposed technique given one landmark while it performs worse given sixteen landmarks on the downside gutman s approach depends on assumptions longer preprocessing complexity and inapplicability in a dynamic setting potamias et al propose an approximate technique for distance estimation over large networks a theoretical proof is presented to indicate that the problem is and they propose heuristic solutions in specific they propose a smart landmark selection technique that can yield higher accuracy reaching times less space than selecting landmarks at random among their evaluated strategies the centrality is more robust than the degree strategy also strategies based on partitioning exhibit better computational cost across datasets kleinberg et al propose an algorithm with provable performance guarantees for triangulation and embedding the algorithms are basically designed for triangulation where static algorithms they use the triangle inequality to deduce the unmeasured distances they indicate that a multiplicative error of on a fraction of distances can be achieved by reconstruction given a constant number of beacons the algorithm also achieves a constant distortion over of distances et al claim that dijkstra s algorithm can be enhanced by precomputing the shortestpath distances they propose to partition the graph into k clusters and perform two operations store the start and end point store the shortest connection between each pair of clusters the proposed algorithm achieves a scaling factor of k in contrast to dijkstra s algorithm advanced search edge labels is an approach that relies on precomputing the information for an edge e and vertices m the superset m e represents all the vertices on a that start with an edge the graph is first partitioned into a set of regions of the same size alongside a precomputed set of boundary vertices in order to compute the edge flags an sssp computation is done on the regions for all the boundary vertices various work kohler et al schulz et al and lauther further present some of the variations et al propose an algorithm for sparse directed graphs with edge weights termed the approach the approach preprocesses graph data to generate information that speeds up queries by dividing the graph into regions and determining if an arc in a specific region lies on the given a suitable partitioning scheme and a search the approach times faster than the standard dijkstra s algorithm over a large graph schilling et al present a further improvement by searching once for each region their approach achieves of more than on a subnetwork of million vertices goldberg and werneck propose an based search landmarks alt algorithm that uses the triangle inequality they show that precomputing the distances to a set of landmarks can bound the shortest path computational 
cost they propose an average of landmarks that are over the corners of the graph in turn their approach leads to speed up for route planning bauer et al study how to systematically combine techniques proposed for dijkstra s algorithm adding approaches to hierarchical approaches they present generalized technique that demonstrates how performance can be improved their results show that highway vertex routing and achieves the best while maintaining an adequate preprocessing cost they also present a hierarchical search landmarks alt algorithm on dense graphs delling et al present an algorithm termed public transit router raptor raptor is not based on dijkstra s algorithm as it probes each route in the graph at most once raptor works in fully dynamic scenarios and can be extended to handle for example flexible departure times bauer and delling uses hierarchical based techniques to extend the edge flag approach using contraction hierarchies during preprocessing and hence tackling a main processing drawback of edge flags the proposed work is termed shortcuts or sharc for short the key observation about sharc is that it is enough to set edge flags to most edges and this focuses the preprocessing on important edges only another observation is that sharc incorporates hierarchical aspects implicitly sharc also extends the edge flag approach of et al to achieve a fast unidirectional query algorithm maue et al propose a algorithm that utilizes precomputed cluster distances pcd the proposed approach first partitions the graph into clusters this is followed by precomputing the shortest connections between the pairs of clusters u and v pcds produce bounding factors for distances that can be used to prune the search when compared with the algorithm in turn this achieves a static algorithms comparable to alt while using less space hierarchical hierarchical algorithms deal with generating a vertex hierarchy in the preprocessing stage a hierarchical structure is prominent in areas road networks where it exhibits hierarchical properties ordering important streets motorways and urban streets in general methods using contraction hierarchies provide low space complexity contraction hierarchies contain many variants such as methods and highway hierarchies and vertex routing on the other hand routing and hub labels provide fast the following sections discuss various algorithms that follow a hierarchical approach highway hierarchies highway hierarchies capture properties for example highway edges exhibit a better representation for shortest paths although they may not be located between the source and the destination vertices the algorithm generates a hierarchy of graphs that enables fast query time with correctness guarantees sanders and schultes propose a static undirected highway hierarchies algorithm around the notion of correctly defining local search and highway network appropriately they define local search as one that visits h tuning parameter closest vertices from the source or target a highway edge is created if it lies on the path from the source vertex to the destination vertex with that edge not being within the h closest vertices from the source or destination nannixini et al propose an algorithm that relies on lengths they extend the original algorithm by sanders and schultes to the case of directed graphs their aim is to find the fastest paths on a large dynamic road network that have quasi updates contraction hierarchies a contraction hierarchy has a level for each vertex reaching up to n levels 
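To fix ideas before the constructions described in the following paragraphs, here is a minimal sketch of the upward, shortcut-based contraction-hierarchy query. It is an illustration rather than the implementation of any surveyed paper: it assumes preprocessing has already produced a vertex importance order and inserted the corresponding shortcut edges, and the names ch_query, up_adj and down_adj are purely illustrative.

```python
import heapq

def ch_query(up_adj, down_adj, s, t):
    """Bidirectional upward query over a contraction hierarchy.

    up_adj[v]   -- (neighbor, weight) pairs for edges from v to MORE important
                   vertices in the forward graph (original edges plus shortcuts).
    down_adj[v] -- the same, taken from the reverse graph; for an undirected
                   graph this can simply be up_adj.
    Because shortcuts preserve shortest-path distances, searching "upward"
    from both endpoints and meeting at a common vertex is sufficient.
    """
    def upward_dijkstra(adj, source):
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, v = heapq.heappop(heap)
            if d > dist.get(v, float("inf")):
                continue  # stale queue entry
            for w, weight in adj.get(v, ()):
                nd = d + weight
                if nd < dist.get(w, float("inf")):
                    dist[w] = nd
                    heapq.heappush(heap, (nd, w))
        return dist

    forward = upward_dijkstra(up_adj, s)      # vertices reachable upward from s
    backward = upward_dijkstra(down_adj, t)   # vertices reachable upward from t
    meeting = set(forward) & set(backward)
    return min((forward[v] + backward[v] for v in meeting), default=float("inf"))
```

Practical implementations prune the two searches (for example with stall-on-demand) and stop them early once the best meeting vertex can no longer improve; the unpruned version above only illustrates why restricting both searches to upward edges suffices once shortcuts are in place.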
hierarchical models can improve query performance as search can be conducted in an upwards manner only over the graph this reduced the space complexity as edges are stored at their lower endpoints only geisberger et al propose contraction hierarchies where vertices are initially ordered by importance and then a hierarchy is generated by contracting the least important vertices in an iterative manner contracting is the process of replacing the passing a vertex by what they call shortcuts they propose a hierarchical algorithm that utilizes a bidirectional search technique batz et al propose a version of the algorithm it tackles road networks where it proposes a fast and exact route planning algorithm the issue it faces is space complexity they tackle this problem by using approximations of functions that lead to significant space reduction while preserving correctness the proposed approach relies on approximating shortcuts and to acquire edge weights then these weights can then be used with their bidirectional search algorithm to create a corridor of shortcuts that can be searched kieritzcite et al propose a distributed memory parallelization of contraction hierarchies the algorithm identifies vertices that can be contracted in every iteration parallelization is achieved when each process contracts its vertices independently and the vertices contractions do not overlap with each other they attempt to approximate the ordering of the sequential algorithms used static algorithms geisberger et al devise an algorithm based on contraction hierarchies to calculate the preprocessing step relies on the hierarchical properties of road networks in order to add shortcut edges they use a modified version of dijkstra s algorithm that visits only a few hundred vertices that in turn makes it suitable to implement on mobile devices graphs in a overlay graph if a set of vertices lie at a specific level then the in that level do not use vertex from the upper levels in turn this method depends on the correct selection of vertices to act as landmarks on the higher levels schulz et al propose a decomposition method that targets space reduction this method precomputed the and replaces the weights of single edges with a weight equal to the length the result is a subgraph that is smaller in size when compared with the original graph the subgraph distances between a set of vertices is the same as the graph distance between the same set of vertices in the original graph holzer et al introduce several vertex selection criteria on overlay graphs these include criteria to determine a representative subset of the original graph they investigate the criteria s effectiveness over multilevel overlay graphs and the achieved for computation transit vertex routing transit vertex routing precomputed the shortest paths to and from all landmarks identified in a graph the algorithm requires extensive preprocessing but exhibits very fast query time as it requires a limited number of between landmarks located in different locations bast et al propose transit vertex routing they suggest that a vertical and horizontal sweep are sufficient to compute the set of transit vertices they also illustrate some techniques to make the approach more arz et al propose a variant of contraction hierarchies that achieves an order of magnitude speeds up similar to the time needed to find contraction hierarchies they propose a locality filter that does not affect the query time hub labeling modeling road networks as a graph is a method used for 
computing the shortest paths one method used for such modeling is the process of labeling algorithms for labeling have been introduced in the distributed computing field in the labeling preprocessing stage each vertex v is computed and assigned a forward label and a reverse label the forward label encompasses a set of vertices w where each vertex contains a computed distance dist v w from the reverse label consists of a set of vertices u where each vertex contains a computed distance dist u v to these labels are later used in the query stage to determine the vertices that minimize the distance from source to destination a label can be perceived as a set of hubs that a vertex v has a direct connection to the labeling algorithm ensures that any two vertices have one hub in common when computing the shortest path hub labeling starts by preprocessing the vertices where for each vertex v it precomputes the distance to a set of landmarks l v in the vertex label the query algorithm is fast as long as the number of landmarks of the source and destination vertices is small storing the labels in a consecutive manner allows the algorithm to exhibit good locality abraham and delling propose a labeling scheme that given a vertex s and t it considers the dynamic algorithms sets of vertices visited by the forward contraction hierarchy from s and the reverse contraction hierarchy of the contraction hierarchies algorithm computes for the the intersection of the forward and reverse sets that contain the vertex babenko et al propose an approximation algorithm for producing small labels their main target is to reduce the size of the maximum this reduction process leads to unbalanced solutions as vertices will have a skewed label sizes they propose an approximation algorithm for the maximum label size that runs in o logn the proposed approach reduces the the problem to a problem cohen et al propose a data structure for storing the reachability label using a cover of all the paths in a graph each vertex t precomputes the label lin and lout v such that for any pair s and t at least one vertex is in lout s lin t the distance labeling query finds the from source s totdestination t by finding the minimum distance from lout s x to x lin t for each label lout s lin t the size of a label l is not guaranteed and the polynomial preprocessing time is approximately o logn for finding a cover of the invariant paths whose size is larger than the set of all chang et al propose a distance labeling with a size smaller than another labeling approach in the preprocessing phase the algorithm stores a parent function p that assigns the parent vertex to each vertex by avoiding the preprocessing of the the proposed approach performs vertex separation on the graph g that divides g into multiple connected subgraphs the graph is further decomposed into a minimal tree t i f where i v represents the set of vertices and f is the set of edges the approach uses the distance query to compute the minimum distance the time complexity of query processing is o tw h where tw represents the width and h represents the height of the decomposed tree t highway node routing the motivation behind using highway node routing is that prominent vertices that overlap various will generate sparse overlay graphs the result would be faster query processing and lower space overhead schultes and sanders proposes a dynamic algorithm that is and allows query time to be thousand times faster when compared to dijkstra s algorithm the choice of vertices is achieved by 
capitalizing on previous results in addition to using the required vertex sets defined by highway hierarchies algorithms they simplify the complications of computation into the prepreprocessing step this also leads to simplification of the query processing algorithm especially the dynamic variants abraham suggests that road networks do not necessarily have a significant highway dimension the proposed algorithm relies on realizing balls of a specific radius for every r there exits a sparse set sr where of length more than r will have a vertex from the set sr if every ball having radius o r contains less number of vertices than sr then the set sr is sparse dynamic algorithms the main requirement of dynamic algorithms is to process updates and query operations efficiently in an online fashion in the update operation edges are inserted or deleted from the graph in the query operation the distance between vertices is computed fully dynamic algorithms are those that can process insertions and deletions incremental algorithms can process insert operations but not delete operations decremental algorithms can process delete operations but not insert operations this implies that incremental and decremental algorithms are dynamic algorithms partially dynamic the following section illustrates the algorithms that demonstrate the aforementioned differences apsp the algorithms reports the distances between any two vertices in a graph the algorithms attempt to answer distance queries between any two vertices while dynamically maintaining changes that can occur to the graph such as inserts deletes and updates demetrescu and italiano propose a fully dynamic algorithm over directed graphs for with edge weights every edge can have a predefined number of values their algorithm achieves an amortized time complexity of o log n for update operations while achieving an optimal for query processing time the proposed algorithm for the update operation inserts or deletes a vertex in addition to all its possible edges the algorithm also maintains a complete distance matrix between updates thorup improves over demetrescu and italiano by reducing the graph problem to a smaller set of decremental problems thorup adopts the idea of a minimum spanning tree by utilizing the efficiency of the decremental algorithm to solve the shortestpaths problem bernstein presents a algorithm for apsp over an undirected graph with positive edge weights bernstein s algorithm achieves an update time that is almost linear and a query time of o loglogn the proposed query algorithm is deterministic while the update procedure is randomized the algorithm behavior depends on the distance from the source vertex to the destination vertex since d x y is not known beforehand the algorithm relies on guessing several different values for d x y roditty and zwick propose a fully dynamic apsp algorithm for unweighted directed graphs the algorithm is randomized and the correctness of the returned results are claimed to be high the proposed algorithm passes through a set of phases that rely on the ideas of a decremental algorithm they demonstrate how the incremental and decremental versions of the sssp problems are similar in terms of complexity to the the static problem over directed or undirected graphs bernstein proposes an approximate algorithm that improves over existing studies with respect to the delete operation and edge weight increase the algorithm computes the decremental on weighted graphs the approach achieves an update time of o using a 
randomized algorithm henzinger et al enhances over the fastest deterministic algorithm by shiloach and even by achieving an update time of o it also achieves a constant query time also they propose a deterministic algorithm with with an update time of o mn and a query time of o loglogn they introduce two techniques namely a lazy tree algorithm the proposed approach maintains a tree that is bounded by distance with a tree based technique the algorithm reports the distances from a given source vertex the dynamic algorithm computes the update and query operations in an online fashion the update operation inserts deletes or modify the edge s weight the query operation probes for the distance from the source vertex to a given target vertex fakcharoenphol and rao propose an algorithm for planar graphs with edge weights algorithms it achieves a time complexity of o nlog n it performs update and query operations in o log n amortized time the proposed algorithm uses monge matrices with a combination of and dijkstra s algorithms for searching in time bernstein and roditty propose a dynamic algorithm that can achieve an update time better than o n without sacrificing query time in specific they obtain o total update time and constant query time the main type of graphs that it can achieve this result on is moderately sparse graphs bernstein and roditty propose two randomized decremental algorithms that operate over unweighted undirected graph for two approximate problems henzinger et al improve the update operation time of bernstein and roditty to o m while maintaining a constant query time the algorithm utilizes the data structure where given a parameter h and a constant maintains o h vertices referred to as centers the main property of the data structure is that every vertex within a specific distance is in a tree termed tree the proposed algorithm has the same property of the data structure and is fastest when h is moderately small algorithms a algorithm processes graphs that have edges associated with a function known as an function the function indicates how much time is needed to travel from one vertex to another vertex the query operation probes for the the path from the source to the destination vertex over graph the returned result represents the best departure time found in a given time interval algorithms kanoulas et al propose an algorithm that finds a set of all fastest paths from source to destination given a specified time interval the specified interval is defined by the user and represents the departure or arrival time the query algorithm finds a partitioning scheme for the time interval and creates a set of where each is assigned to a set of fastest paths unlike the algorithm the proposed algorithm probes the graph only once instead of multiple times ding et al propose an algorithm that finds the departure time that minimizes the travel time over a road network also the traffic conditions are dynamically changing in the road network the algorithm is capable of operating on a variety of graphs george et al propose a graph tag graph that changes its topology with time in tag vertices and edges are modeled as time series apart from time dependence it is also responsible for managing the edges and vertices that are absent during any instance in time they propose two algorithms to compute using network and best best finds the at the time of given query using a greedy algorithm on the other hand best algorithm finds out the best earliest travel time over the entire period using tag the 
time complexity of and best are o e logt and o et respectively where e represents edges n represents vertices and t represents the time instance ding et al propose an algorithm for the problem over a large graph gt each edge has a delay function that denotes the time taken from the source vertex to the destination vertex at a given time the user queries the least travel time ltt the proposed algorithm achieves a space complexity of o n m t and a time complexity of o nlogn m t stochastic algorithms algorithms nannicini et al propose a bidirectional algorithm that restricts the search to a set of vertices that are defined by a algorithm the bidirectional algorithm operates in two modes where the first mode namely theforward search algorithm is run on the graph weighted by a specific cost function while the second mode namely the backward search is run on the graph weighted by a function delling and wagner reanalyzes various technique the concluded that the most of the techniques that operate over graphs guarantee correctness by augmenting the preprocessing and query phases subroutines foschini et al study the computational complexity of the problem over timedependent graphs they conclude that linear functions causes the shortest path to the destination changes logn times they study the complexity of the arrival time by mapping the problem to a parametric problem in order for it to be analyzed correctly demiryurek et al propose a technique to the computation over timedependent spatial graphs they propose a technique based on the bidirectional algorithm that operates in two main stages the first stage is where it partitions the graph into a set of partitions that do not overlap next they calculate a distance label for vertices and borders the second state is online where it probes for the fastest path by utilizing a heuristic function based on the computed distance labels the results indicate that the proposed technique decreases the computation time and reduces the storage complexity significantly stochastic algorithms a stochastic attempts to capture the uncertainty associated with the edges by modeling them as random variables then the objective becomes to compute the based on the minimum expected costs the two notable lines of research in this problem are adaptive and nonadaptive algorithms the adaptive algorithms determine what the next best next hop would be based on the current graph at a certain time instance the algorithms focus on minimizing the length of the path adaptive algorithms and mahmassani propose an algorithm to determine the apriori from all source vertices to a single destination vertex this computation is for done for each departure time during busy time of the graph they also propose a over these apriori nikolova et al propose an algorithm that maximizes the probability without exceeding a specific threshold for the length they define a probabilistic model where edge weights are drawn from a known probability distribution the optimal path is the one with the maximum probability indicating a path that does not pass a specific threshold algorithms loui proposes using a utility function with the length of the path where the utility function is monotone and when the utility function exhibits a linear or an exponential behavior it parametric algorithms becomes separable into the edge lengths this allows the utility function to be identified using classical algorithms via paths that maximize the utility function nikolova et al propose an algorithm for optimal route planning 
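To make the separability point above concrete: with a linear (risk-neutral) utility, the expected cost of a fixed path equals the sum of the expected edge costs by linearity of expectation, so the non-adaptive problem reduces to an ordinary deterministic shortest-path computation on the mean weights. The sketch below assumes nonnegative expected edge costs supplied directly as means; the function and argument names are illustrative and not taken from the surveyed papers.

```python
import heapq

def expected_cost_shortest_paths(adj_mean, source):
    """Non-adaptive stochastic shortest paths under a linear utility.

    adj_mean[u] -- iterable of (v, expected_cost) pairs, where expected_cost
                   is E[c_uv] of the (nonnegative) random edge cost c_uv.
    Since E[cost of a fixed path] = sum of E[c_uv], minimizing expected cost
    is ordinary Dijkstra run on the mean edge weights.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, mean_cost in adj_mean.get(u, ()):
            nd = d + mean_cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Nonlinear utilities, threshold objectives of the kind studied by Nikolova et al., and adaptive policies do not separate over the edges this way and require the specialized algorithms discussed in this section.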
under uncertainty they define the target as a function of both the path length and the departure time starting from the source they indicate that path and start time are jointly optimizable due to the penalizing behavior that they exhibit for late and early arrivals they also indicated that this joint optimization is reducible to classic shortestpath algorithms parametric algorithms parametric objective is to compute the for all vertices based on a specific parameter it probes for the parameter values known as breakpoints where the tends to change the edge value varies based on a linear function of the parameter value mulmuley and shah propose a model for computation it is a variant of the parallel random access machine the proof starts with a definition about the parametric complexity of the problem plotting the weights of the as a function results in an optimal cost graph that is and concave breakpoints are defined as a fixed set of linear weight functions over a fixed graph young et al propose a model where the computed edge values makes it more tractable than its predecessors this tractability allows obtaining in polynomial time they use the algorithm proposed by karp and orlin and modify it to use fibonacci heaps instead in order to improve its performance erickson proposes an algorithm for computing the maximum flow in planar graphs the algorithm maintains three structures namely an edge spanning tree a predecessor dual vertex set and the slack value of dual edge set they compute the initial predecessor pointers and slacks in o nlogn using dijkstra s algorithm replacement algorithms consider a graph g v e where v is the set of vertices and e is the set of edges for every edge e e on the from source s v to destination d v the replacement path algorithm calculates the from s to d that avoids emek et al propose an algorithm that computes the replacement path in time the algorithm requires o nlog n time during the preprocessing stage and o hloglogn time to answer the replacement path query where h is the number of hops in a weighted planar directed graph roditty and zwick propose a randomized algorithm that replacement path in an unweighted directed graph the complexity of the algorithm is m n the monte carlo algorithm improves the of the and vickrey pricing problems by a factor of bernstein proposes an approximate algorithm that computes the paths in o log n m nlog m nlogn mlog time where is the of largest and smallest in the graph bernstein s algorithm achieves a running time of km n when applied over the simple problem alternative algorithms alternative algorithms the alternative problem reports paths that avoid a given vertex or edge termed the unwanted vertex or the unwanted edge the key difference between the and the alternative is that the user is not required to specify the unwanted vertex or edge for replacement paths the goal of the alternative path problem is reusing the previously computed results of the unwanted vertex or edge in turn this achieves better performance existing algorithms dynamic do not solve the alternative problem because of the high complexity of the update operation xie et al propose a storage schemed termed ispqf it is an extension of the that further reduces the number of at each vertex the space complexity of the into forest spqf is o the spqf algorithm can find the alternative over a single source from source s to destination d that avoids vertex v as well as all pairs from set of sources x to set of destinations y that avoid vertex v in o n weighted 
region algorithms mitchell and papadimitriou define the weighted region problem wrp as a generalization of the shortest path problem with obstacles the problem assumes that the plane is subdivided into weighted polygonal regions the objective is to minimize the cost according to a weighted euclidean metric the study by mitchell and papadimitriou sheds light on the discriminating properties of the weighted region problem over planar divisions and proposes an algorithm that runs in o l where n is the number of vertices and l is the number of bits required to encode the problem instance in specific l o log nn where n is the maximum integer representing vertices of the triangulation and is a error value that can be tolerated mata and mitchell propose an algorithm to compute the approximate for the weighted planar subdivision problem by constructing a sparse graph termed the the approach uses snell s law of refraction to divide the vertices into cones that bound the path of a vertex the complexity to build the graph with o kn vertices is o where k is the number of cones after being scanned it produces the paths that are within a factor of from the optimal solution conclusion in this paper we devise a taxonomy for the problem for each branch of the taxonomy we illustrate the discriminating features and highlight the research the taxonomy provides investigators of the problem with a guideline on where a required problem definition maps within the current related work acknowledgements walid aref s research has been supported in part by the national science foundation under grant iis references references abraham and delling a labeling algorithm for shortest paths in road networks experimental algorithms abraham delling goldberg and werneck hierarchical hub labelings for shortest paths algorithmsesa abraham fiat goldberg and werneck highway dimension shortest paths and provably efficient algorithms proceedings of the annual symposium on discrete algorithms pages agarwal godfrey and approximate distance queries and compact routing in sparse graphs ieee infocom pages aho and hopcroft the design and analysis of computer algorithms addisonwesley longman publishing boston ma usa edition aiello chung and lu a random graph model for massive graphs stoc aingworth chekuri and motwani fast estimation of diameter and shortest paths without matrix multiplication soda pages arz luxen and sanders transit node routing reconsidered sea babenko goldberg gupta and nagarajan algorithms for hub label optimization automata languages and programming bannister and eppstein randomized speedup of the algorithm analco bast car or public transport two worlds efficient algorithms pages bast funke matijevic sanders and schultes in transit to constant time queries in road networks alenex baswana and a simple and linear time randomized algorithm for computing sparse spanners in weighted graphs random structures and algorithms pages batz geisberger neubauer and sanders contraction hierarchies and approximation experimental algorithms pages bauer and delling sharc fast and robust unidirectional routing journal of experimental algorithmics jea bauer delling sanders schieferdecker schultes and wagner combining hierarchical and techniques for dijkstra s algorithm journal of experimental algorithmics pages bellman dynamic programming princeton university press bellman on a routing problem quarterly of applied mathematics bernstein fully dynamic epsilon approximate shortest paths with fast query and close to linear update time annual 
ieee symposium on foundations of computer science pages bernstein a nearly optimal algorithm for approximating replacement paths and k shortest simple paths in general graphs proceedings of the annual acmsiam symposium on discrete algorithms pages references bernstein maintaining shortest paths under deletions in weighted directed graphs stoc page bernstein and roditty improved dynamic algorithms for maintaining approximate shortest paths under deletions symposium on discrete algorithms pages boas preserving order in a forest in less than logarithmic time pages boas kaas and zijlstra design and implementation of an efficient priority queue mathematical systems theory pages cabello many distances in planar graphs algorithmica pages cechlrov and szab on the monge property of matrices discrete mathematics chan shortest paths for unweighted undirected graphs in o mn time proceedings of the seventeenth annual symposium on discrete algorithm pages chan more algorithms for shortest paths in weighted graphs proceedings of the annual acm symposium on theory of computing pages chang yu qin cheng and qiao the exact distance to destination in undirected world the vldb journal chen sommer teng and wang a compact routing scheme and approximate distance oracle for graphs acm transactions on algorithms pages cohen halperin kaplan and zwick reachability and distance queries via labels siam journal on computing cohen and zwick paths journal of algorithms pages coppersmith and winograd matrix multiplication via arithmetic progressions journal of symbolic computation pages cormen stein rivest and leiserson introduction to algorithms higher education edition b dean shortest paths in fifo networks theory and algorithms rapport technique delling pajor and werneck public transit routing alenex delling and wagner route planning robust and online optimization demetrescu and italiano a new approach to dynamic all pairs shortest paths journal of the acm jacm pages demetrescu and italiano dynamic shortest paths and transitive closure algorithmic techniques and data structures journal of discrete algorithms pages demiryurek and shahabi online computation of fastest path in sstd pages denardo dynamic programming models and applications dover publications references dijkstra a note on two problems in connexion with graphs numerische mathematik pages ding yu and qin finding shortest paths over large graphs proceedings of the international conference on extending database technology advances in database technology edbt page djidjev efficient algorithms for shortest path queries in planar digraphs concepts in computer science pages dobosiewicz a more efficient algorithm for multiplication internat comput math dor halperin and zwick almost shortest paths siam journal on computing driscoll and gabow relaxed heaps an alternative to fibonacci heaps with applications to parallel computation communications of the acm pages elkin and peleg beta constructions for general graphs siam journal on computing pages emek peleg and roditty a algorithm for computing replacement paths in planar directed graphs acm transactions on algorithms erickson maximum flows and parametric shortest paths in planar graphs siam fakcharoenphol and rao planar graphs negative weight edges shortest paths and near linear time journal of computer and system sciences pages floyd algorithm shortest path communications of the acm pages ford network flow theory report the rand corporation foschini hershberger and suri on the complexity of shortest paths 
algorithmica fredman new bounds on the complexity of the shortest path problem siam pages fredman and tarjan fibonacci heaps and their uses in improved network optimization algorithms journal of the acm jacm pages fredman and willard algorithms for minimum spanning trees and shortest paths proceedings annual symposium on foundations of computer science pages fredman and willard surpassing the information theoretic bound with fusion trees journal of computer and system sciences pages fredman and willard blasting through the information theoretic barrier with fusion trees proceedings of the annual acm symposium on theory of computing stoc pages fu sun and rilett heuristic shortest path algorithms for transportation applications state of the art computers operations research pages gavoille peleg and raz distance labeling in graphs algorithms pages geisberger sanders schultes and delling contraction hierarchies faster and simpler hierarchical routing in road networks experimental algorithms pages references geisberger sanders schultes and vetter exact routing in large road networks using contraction hierarchies transportation science pages george kim and shekhar network databases and routing algorithms a summary of results spatial and temporal databases pages george and shekhar graphs for modeling advances in conceptual modelling pages goldberg shortest path algorithms with preprocessing sofsem pages goldberg and werneck computing shortest paths from external memory gutman routing a new approach to shortest path algorithms optimized for road networks hagerup improved shortest paths on the word ram automata languages and programming pages han improved fast integer sorting in linear space information and computation pages han improved algorithm for all pairs shortest paths information processing letters pages han an o time algorithm for all pairs shortest paths proceedings of the conference on annual european symposium volume pages han and takaoka an o log log n time algorithm for all pairs shortest paths proceedings of the scandinavian conference on algorithm theory pages hart nilsson and raphael formal basis for the heuristic determination of minimum cost paths systems science and cybernetics pages henzinger krinninger and nanongkai dynamic approximate shortest paths breaking the o mn barrier and derandomization ieee annual symposium on foundations of computer science pages henzinger krinninger and nanongkai a algorithm for decremental shortest paths soda pages henzinger and king fully dynamic biconnectivity focs pages henzinger klein rao and subramanian faster algorithms for planar graphs journal of computer and system sciences pages hershberger and suri vickrey prices and shortest paths what is an edge worth pages holzer schulz and wagner engineering multilevel overlay graphs for queries journal of experimental algorithmics holzer schulz wagner and willhalm combining techniques for computations journal of experimental algorithmics kanoulas y du xia and zhang finding fastest paths on a road network with speed patterns international conference on data engineering icde pages references karger koller and phillips finding the hidden path time bounds for shortest paths proceedings annual symposium of foundations of computer science pages karp a characterization of the minimum cycle mean in a digraph discrete mathematics pages karp and orlin parametric shortest path algorithms with an application to cyclic staffing discrete applied mathematics pages kawarabayashi klein and sommer approximate 
distance oracles for planar and graphs automata languages and pages kieritz luxen sanders and vetter distributed contraction hierarchies experimental algorithms pages klein mozes and weimann shortest paths in directed planar graphs with negative lengths a o n log n algorithm acm transactions on algorithms kleinberg slivkins and wexler triangulation and embedding using small sets of beacons annual ieee symposium on foundations of computer science pages and schilling acceleration of shortest path and constrained shortest path computation experimental and efficient algorithms pages lauther an extremely fast exact algorithm for finding short test paths in static networks with geographical background geoinformation undmobilit at von der forschung zur praktischen anwendung pages lipton rose and tarjan generalized nested dissection siam journal on numerical analysis loui optimal paths in graphs with stochastic or multidimensional weights communications of the acm mata and mitchell a new algorithm for computing shortest paths in weighted planar subdivisions proc annu acm sympos comput pages maue sanders and matijevic queries using precomputed cluster distances journal of experimental algorithmics and mahmassani least expected time paths in stochastic transportation networks transportation science pages mitchell and papadimitriou the weighted region problem finding shortest paths through a weighted planar subdivision moffat and takaoka an all pairs shortest path algorithm with expected running time o log n siam j computing page schilling wagner and willhalm partitioning graphs to speedup dijkstra s algorithm journal of experimental algorithmics moore the shortest path through a maze proceedings of the international symposium of switching theory part ii mozes and sommer exact distance oracles for planar graphs soda references mulmuley and shah a lower bound for the shortest path problem journal of computer and system sciences nannicini baptiste barbier krob and liberti fast paths in dynamic road networks computational optimization and applications nannicini delling liberti and schultes bidirectional a search for timedependent fast paths experimental algorithms pages nannicini and liberti shortest paths on dynamic graphs international transactions in operational research pages nikolova brand and karger optimal route planning under uncertainty icaps nikolova kelner brand and mitzenmacher stochastic shortest paths via quasiconvex maximization algorithmsesa patrascu and roditty distance oracles beyond the bound foundations of computer science potamias bonchi castillo and gionis fast shortest path distance estimation in large networks proceeding of the acm conference on information and knowledge management cikm page roditty and shapira shortest paths with a sublinear additive error acm transactions on algorithms pages roditty and zwick on dynamic shortest paths problems algorithmica pages roditty and zwick simple shortest paths in unweighted directed graphs acm transactions on algorithms sanders and schultes highway hierarchies hasten exact shortest path queries algorithmsesa pages sanders and schultes engineering highway hierarchies algorithmsesa pages sankaranarayanan alborzi and samet efficient query processing on spatial networks pages schilling and heiko fast shortest path computations with dimacs challenge schultes fast and exact shortest path queries using highway hierarchies des saarlandes schultes and sanders dynamic routing experimental algorithms pages schulz wagner and weihe dijkstra s 
algorithm on line an empirical case study from public railroad transport vitter zaroliagis schulz wagner and zaroliagis using graphs for timetable information in railway systems algorithm engineering and experiments pages approximating shortest paths in graphs walcom algorithms and computation pages references shiloach and even an problem acm sniedovich dijkstra s algorithm revisited the dynamic programming connexion journal of control and cybernetics pages sniedovich dynamic programming foundations and principles francis and taylor sommer queries in static networks acm computing surveys takaoka a new upper bound on the complexity of the all pairs shortest path problem information processing letters pages takaoka a faster algorithm for the shortest path problem and its application cocoon pages thorup on ram priority soda pages thorup undirected shortest paths with positive integer weights in linear time journal of the acm jacm pages thorup compact oracles for reachability and approximate distances in planar digraphs journal of the acm thorup shortest paths faster and allowing negative cycles algorithm pages thorup and zwick approximate distance oracles journal of the acm warntz transportation social physics and the law of refraction the professional geographer pages warshall a theorem on boolean matrices journal of the acm jacm constant time distance queries in planar unweighted graphs with subquadratic preprocessing time computational geometry pages xie deng shang zhou and zheng finding alternative shortest paths in spatial networks acm transactions on database systems pages y yen an algorithm for finding shortest routes from all source nodes to a given destination in general networks quarterly of applied mathematics young tarjant and orlin faster parametric shortest path and minimumbalance algorithms networks zwick all pairs shortest paths in weighted directed and almost exact algorithms proceedings annual symposium on foundations of computer science cat pages zwick exact and approximate distances in graphsa survey esa zwick a slightly improved algorithm for the all pairs shortest paths problem with real edge lengths pages
stabilization of disturbed linear systems over digital channels jan mohammad javad khojasteh mojtaba hedayatpour jorge massimo franceschetti we present an control strategy for stabilizing a scalar linear system over a digital communication channel having bounded delay and in the presence of bounded system disturbance we propose an scheme and determine lower bounds on the packet size and on the information transmission rate which are sufficient for stabilization we show that for small values of the delay the timing information implicit in the triggering events is enough to stabilize the system with any positive rate in contrast when the delay increases beyond a critical threshold the timing information alone is not enough to stabilize the system and the transmission rate begins to increase finally large values of the delay require transmission rates higher than what prescribed by the classic theorem the results are numerically validated using a linearized model of an inverted pendulum index control under communication constraints control quantized control rate of transmission this apparently counterintuitive result can be explained by noting that the act of triggering essentially reveals the state of the system which can then be perfectly tracked by the controller our previous work quantifies the information implicit in the timing of the triggering events as a function of the communication delay and for a given triggering strategy showing a phase transition behavior when there are no system disturbances and the delay in the communication channel is small enough a positive rate of transmission is all is needed to achieve exponential stabilization when the delay in the communication channel is larger than a critical threshold the implicit information in the act of triggering is not enough for stabilization and the transmission rate must increase these results are compared with a implementation subject to delay in i ntroduction the literature however has not considered to what extent the implicit information in the triggering events is still valuable in the presence of system disturbances these disturbances add an additional degree of uncertainty in the state estimation process beside the one due to the unknown delay and their effect should be properly accounted for with this motivation we consider stabilization of a linear timeinvariant system subject to bounded disturbance over a communication channel having a bounded delay in comparison with we consider here a weaker notion of stability requiring the state to be bounded at all times beyond a fixed horizon but without imposing exponential convergence guarantees this allows to simplify the treatment and to derive a simpler control strategy we design an scheme for this strategy and show that when the size of the packet transmitted through the channel at every triggering event is above a certain fixed value then for small values of the delay our strategy achieves stabilization using only implicit information and transmitting at a rate arbitrarily close to zero in contrast for values of the delay above a given threshold the transmission rate must increase and eventually surpasses the one prescribed by the classic theorem it follows that for small values of the delay we can successfully exploit the implicit information in the triggering events and compensate for the presence of system disturbances on the other hand large values of the delay imply that information has been excessively aged and corrupted by the disturbance so that increasingly 
higher communication rates are required all results are numerically validated by implementing our strategy to stabilize an inverted pendulum linearized about its equilibrium point over a communication channel proofs are omitted for brevity and will appear in full elsewhere networked control systems ncs where the feedback loop is closed over a communication channel are a fundamental component of systems cps in this context theorems state that the minimum communication rate to achieve stabilization is equal to the entropy rate of the system expressed by the sum of the logarithms of the unstable modes early examples of datarate theorems appeared in key later contributions appeared in and these works consider a communication channel capable of noiseless transmission of a finite number of bits per unit time evolution of the system extensions to noisy communication channels are considered in stabilization over channels including the erasure channel as a special case are studied in additional formulations include stabilization of systems with random open loop gains over channels stabilization of switched linear systems systems with uncertain parameters multiplicative noise optimal control and stabilization using strategies this paper focuses on the case of stabilization using eventtriggered communication strategies in this context a key observation made in is that if there is no delay in the communication process there are no system disturbances and the controller has knowledge of the triggering strategy then it is possible to stabilize the system with any positive khojasteh and franceschetti are with the department of electrical and computer engineering of university of california san diego hedayatpour is with the faculty of engineering applied science university of regina sk canada is with the department of mechanical and aerospace engineering university of california san diego mkhojasteh massimo cortes hedayatm notation throughout the paper r and n represent the set of real and natural numbers respectively also log and ln represent base and natural logarithms respectively for a function f r rn and t r we let f denote the limit of f at t namely f s in addition resp denote the nearest integer less resp greater than or equal to x we denote the modulo function by mod x y whose value is the remainder after division of x by sign x denotes the sign of ii p roblem we also define the k th triggering interval as tks s when referring to a generic triggering or reception time for convenience we skip the k in tkr and tkc in this setting the classical theorem states that the controller can stabilize the plant if it receives information at least with rate ln let bs t be the number of bits transmitted by the sensor up to time we define the information transmission rate as formulation the block diagram of a networked control system as a tuple is represented in figure the plant is described by a scalar rs lim sup bs t t since at every triggering interval the sensor sends g ts bits we have pn g tks rs lim sup n n at the controller the estimated state is represented by and evolves during the times as t t bu t fig system model linear model as ax t bu t w t where x t r and u t r for t are the plant state and control input respectively and w t r represents the process disturbance the latter is upper bounded as t m where m is a positive real number in a is a positive real number b r and l for some positive real number we assume that the sensor measures the system state exactly and the controller acts with infinite 
precision and without delay however the measured state is sent to the controller through a communication channel that only supports a finite data rate and is subject to bounded delay more precisely when the sensor transmits packet via the communication channel the controller will receive the packet entirely and without any error but with unknown bounded delay the sequence of triggering times at which the sensor transmits a packet of length g tks bits is denoted by tks and the sequence of times at which the controller receives the corresponding packet and decodes it is denoted by tkc communication delays are uniformly by a finite real number as follows where is the k th tkc tks communication delay for all k t tkc c starting from c with we assume that the sensor has knowledge of the time the actuator performs the control action this is to ensure that the sensor can also compute t for all time in practice this corresponds to assuming an instantaneous acknowledgment from the actuator to the sensor via the control input as discussed in to obtain such causal knowledge one can monitor the output of the actuator provided that the control input changes at each reception time in case the sensor has only access to the system state one can use a narrowband signal in the control input to excite a specific frequency of the state that can signal the time at which the control action has been applied the state estimation error is defined as z t x t t where z x we use this error to determine when a triggering event occurs in our controller design to ensure a property similar to practical stability for the system in iii c ontrol d esign this section proposes our control strategy along with a quantization policy to generate and send packets at every triggering event to stabilize the scalar continuoustime linear system described in section ii along the way we also characterize a sufficient information transmission rate to accomplish this assume a triggering event occurs when t j where j is a positive real number if the controller knows the triggering time ts then it also knows that x ts ts it follows that it may compute the exact value of x ts by just transmitting one single bit at every triggering time in general however the controller does not have knowledge of ts because of the delay but only knows the bound in let tc be an estimate of z tc constructed by the controller knowing that ts v ts and using and the decoded packet received through the communication channel we define the following updating procedure called jump strategy c tc tc at triggering time ts the sensor encodes the system state in packet p ts of size g ts consisting of the sign of z ts and a quantized version of ts which we denote by q ts and send it to the controller using the bound in and by decoding the received packet the controller reconstructs the quantized version of ts finally the controller can estimate z tc as follows tc sign z ts jea tc ts using and the quantization policy described in theorem if the sensor has causal knowledge of delay in the communication channel then the sensor can calculate t at each time next we show that the proposed scheme has triggering intervals that are uniformly lower bounded and consequently does not show zeno behavior namely infinitely many triggering events in a finite time interval lemma consider the model with plant dynamics estimator dynamics triggering strategy and jump strategy if the packet m then size satisfies for all k n and j for all k n tks s x tc c c n tc tc j ln a m ja e then holds we 
next propose our quantization algorithm and rely on lemma to lower bound the packet size to ensure theorem consider the model with plant dynamics estimator dynamics triggering strategy and jump strategy if the control has enough information about x such that state estimation error satisfies j there exists a quantization policy that achieves for all k n with a packet size k g ts max log ln e m where b and j next we show that using our encoding and decoding scheme if the sensor has a causal knowledge of the delay in the communication channel it can compute the state estimated by the controller proposition consider the model with plant dynamics estimator dynamics triggering strategy and jump strategy using lemma we deduce that rtr where is a constant design parameter to find a lower bound on the size of the packet so that is ensured the next result bounds how large the difference q ts of the triggering time and its quantized version can be lemma for the model with plant dynamics estimator dynamics triggering strategy and jump strategy using with j m if e q ts n rtr lim sup pn z tc tc the sensor chooses the packet size g ts large enough to satisfy the following equation for all possible tc ts ts the frequency with which transmission events are triggered is captured by the triggering rate noting that with the jump strategy we have z c a ln a j m a a m ln a for all initial conditions and possible delay and process noise values combining this bound and theorem we arrive at the following result corollary consider the model with plant dynamics estimator dynamics triggering strategy and jump strategy if the control has enough information about x such that state estimation m there exists error satisfies j with j a quantization policy that achieves for all k n and for all delay and process noise realization with an information transmission rate rs a m ln a max log ln figure shows the sufficient transmission rate as a function of the bound on the channel delay as expected the rate starts from zero and as increases goes above the theorem the next result ensures a property similar to practical stability for the system in theorem consider the model with plant dynamics estimator dynamics triggering strategy and jump strategy assume the pair a b is stabilizable if the control has enough information about x such that state estimation error satisfies m and if the sensor use j with j the quantization policy proposed in theorem then there exists a time and a real number such that t for all t provided that the packet size is lower bounded by in addition we add the process noise w t to the linearized system model w t is a vector of length four and we assume that all the elements of w t are upper bounded m also a simple feedback control law can be derived for as u where k is such that a bk is hurwitz we let k be as follows k note that although theorem holds for the linear system with any delay the linearizion is only valid for sufficiently small values of sufficient rate theorem rate channel delay upperbound sec fig illustration of sufficient transmission rate as a function of here m b m and j from corollary it follows that a transmission rate lower bounded by is sufficient to ensure the property similar to practical stability stated in theorem iv s imulation we now implement the proposed control scheme on a dynamical system such as a linearized inverted pendulum in this section initially a mathematical model of an inverted pendulum mounted on a cart is presented then the nonlinear equations are linearized about 
the equilibrium state of the system in addition a canonical transformation is applied to the linear system to decouple the equations of motion we consider the problem where motion of the pendulum is constrained in a plane and its position can be measured by angle we assume that inverted pendulum has mass length l and moment of inertia i also the pendulum is mounted on top of a cart of mass constrained to move in y direction nonlinear equations governing the motion of the cart and pendulum can be written as follows i l where is the damping coefficient between the pendulum and the cart and is the gravitational acceleration linearizion we define as the equilibrium position of the pendulum and as small deviations from we derive the linearized equations of motion using small angle approximation let s define state variable s y t where y and are the position and velocity of the cart respectively assuming kg kg l m i one can write the evolution of s in time as follows where t p s t and t p w t moreover t t where kp that is design for the first three coordinates of the diagonalized system which are stable the state estimation at the controller simply constructs as follows t t cos f as t bu t w t diagonalization the eigenvalues of the of the system a are e hence three of the four modes of the system are stable and do not need any actuation also the gain of the system a is diagonalizable all eigenvalues of a are distinct as a result diagonalization of the matrix a enables us to apply theorem to the unstable mode of the system and consequently stabilize the whole system using the eigenvector matrix p we diagonalize the system to obtain t t t b starting from the unstable mode of the system is as follow t t t then using the problem formulation in section ii the estimated state for the unstable mode evolves during the times as t t t t tkc c starting from c and the triggering occurs when t t t j where t is the estate estimation error for the unstable mode let be the eigenvalue corresponding to the unstable mode which is equal to then using theorem we choose m exceeds the triggering function depends on the random channel delay with upper bound in the second row of figure the evolution of the unstable state and its state estimation are presented finally the last row in figure represents the evolution of all actual states of the linearized system in time finally figure presents the simulation of information transmission rate versus the delay in communication channel for stabilizing the linearized model of the inverted pendulum simulations theorem rate channel delay upperbound sec fig information transmission rate in simulations compared to the datarate theorem note that the rate calculated from simulations does not start at zero delay because the minimum channel delay upper bound is equal to one sampling time seconds in this example m is chosen to be in these simulations and simulation time is t seconds and the size of the packet for all ts to be g ts max log ln e where b the packet size for the simulation has two differences from the lower bound provided in theorem because the packet size should be an integer we used the ceiling operator and since we should have at least one bit to send a packet we take the maximum between and the result of the ceiling operator simulation results the following simulation parameters are chosen for the system simulation time t seconds sampling time seconds p t and p t theorem is developed based on a continuous system but the simulation environments are all digital we tried to 
make the discrete model as close to the continuous model by choosing a very small sampling time however the minimum upper bound for the channel delay will be equal to one sampling time a set of three simulations are carried out as follows for simulation a we assumed the process disturbance is zero and channel delay upper bounded by sampling time in simulation b we assumed that the process disturbance upper bounded by m and channel delay upper bounded by sampling time finally for simulation c we assumed that the process disturbance upper bounded by m and channel delay upper bounded by simulation results for simulation a b and c are presented in figure each column represents a different simulation the first row shows the triggering function for and the absolute value of state estimation error for the unstable coordinate that is t t t as soon as the absolute value of this error is equal or greater than the triggering function sensor transmit a packet and the jumping strategy adjusts at the reception time to practically stabilize the system the amount this error c onclusions we have presented an control scheme for the stabilization of noisy scalar continuous linear timeinvariant systems over a communication channel subject to random bounded delay we have also developed an algorithm for the quantized version of the estimated states leading to the characterization of a sufficient transmission rate for stabilizing the system we have illustrated our results on a linearization of the inverted pendulum for different channel delay bounds future work will study the identification of necessary conditions on the transmission rate the investigation of the effect of delay on nonlinear systems and the implementation of the proposed control strategies on real systems acknowledgements this research was partially supported by nsf award r eferences hespanha naghshtabrizi and xu a survey of recent results in networked control systems proceedings of the ieee vol no pp kim and kumar systems a perspective at the centennial proceedings of the ieee vol special centennial issue pp murray astrom boyd brockett and stein future directions in control in an world ieee control systems vol no pp wong and brockett systems with finite communication bandwidth constraints ii stabilization with limited information feedback ieee transactions on automatic control vol no pp baillieul feedback designs for controlling device arrays with communication channel bandwidth constraints in aro workshop on smart structures pennsylvania state univ pp tatikonda and mitter control under communication constraints ieee transactions on automatic control vol no pp nair and evans stabilizability of stochastic linear systems with finite feedback data rates siam journal on control and optimization vol no pp sahai and mitter the necessity and sufficiency of anytime capacity for stabilization of a linear system over a noisy communication link part i scalar systems ieee transactions on information theory vol no pp tatikonda and mitter control over noisy channels ieee transactions on automatic control vol no pp matveev and savkin estimation and control over communication networks springer science business media khina halbawi and hassibi almost practical tree codes in ieee international symposium on information theory isit july pp sukhavasi and hassibi linear anytime codes for control over noisy channels ieee transactions on automatic control vol no pp m sec g ts bit m sec g ts bit triggering function m sec g ts bits triggering function triggering function 
fig simulation results for cases a b and c (horizontal axes: time in seconds) the first row represents the absolute value of state estimation error for the unstable mode of the system the second row represents the unstable mode and its state estimate and the last row represents the evolution of all actual states of the real system in time
minero franceschetti dey and nair data rate theorem for stabilization over feedback channels ieee transactions on automatic control vol no minero coviello and franceschetti stabilization over markov feedback channels the general case ieee transactions on automatic control vol no pp kostina peres and ranade control of systems with uncertain gain annual allerton conference on communication control and computing pp yang and liberzon feedback stabilization of switched linear systems with unknown disturbances under constraints ieee transactions on automatic control ranade and sahai control capacity in information theory isit ieee international symposium on ieee pp ding peres ranade and zhai when multiplicative noise stymies control arxiv preprint ranade and sahai in estimation and control in communication control and computing allerton annual allerton conference on ieee pp tatikonda sahai and mitter stochastic linear control over a communication channel ieee transactions on automatic control vol no pp kostina and hassibi tradeoffs in control in communication control and computing allerton annual allerton conference on ieee pp khina nakahira su and hassibi algorithms for optimal control with feedback in ieee annual conference on decision and control cdc dec pp khina pettersson kostina and hassibi control over awgn channels via analog joint coding in decision and control cdc ieee conference on ieee pp khojasteh tallapragada and franceschetti the value of timing information in control the scalar case annual allerton conference on communication control and computing pp tallapragada and stabilization of linear systems under bounded bit rates ieee transactions on automatic control vol no pp li wang and lemmon stabilizing in quantized event triggered control systems in proceedings of the acm international conference on hybrid systems computation and control acm pp pearson hespanha and liberzon control with minimal encoding and of encoders ieee transactions on automatic control vol no pp ling bit rate conditions to stabilize a scalar linear system based on event triggering ieee transactions on automatic control to appear linsenmayer blind and data rate bounds for containability of scalar systems vol no pp kofman and braslavsky level crossing sampling in feedback stabilization under constraints in ieee conference on decision and control cdc ieee pp khojasteh tallapragada and franceschetti the value of timing information in control arxiv preprint khojasteh tallapragada and franceschetti versus control over communication channels in ieee annual conference on decision and control cdc dec pp ling bit rate conditions to stabilize a linear system with feedback dropouts ieee transactions on automatic control vol pp no pp lakshmikantham leela and a martynyuk practical stability of nonlinear systems world scientific
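To make the linearization, diagonalization, and event-triggered transmission rule described above concrete, here is a minimal numerical sketch. It uses the standard cart-pendulum linearization about the upright equilibrium; the masses, lengths, damping coefficient, the triggering-envelope constants, and the helper names (`triggering_function`, `packet_size`, `should_transmit`) are illustrative placeholders, not the values or exact expressions used in the simulations above.

```python
import numpy as np

# Illustrative cart-pendulum parameters (placeholders, not the paper's simulation values).
M_cart = 0.5   # cart mass (kg)
m = 0.2        # pendulum mass (kg)
b = 0.1        # cart-pendulum damping coefficient
l = 0.3        # distance to the pendulum centre of mass (m)
I = 0.006      # pendulum moment of inertia (kg m^2)
g = 9.81       # gravitational acceleration (m/s^2)

# Standard linearization about the upright equilibrium, state s = (y, ydot, theta, thetadot),
# so that ds/dt = A s + B u + w for small deviations theta from the upright position.
p = I * (M_cart + m) + M_cart * m * l ** 2
A = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, -(I + m * l ** 2) * b / p, (m ** 2 * g * l ** 2) / p, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, -(m * l * b) / p, m * g * l * (M_cart + m) / p, 0.0],
])
B = np.array([[0.0], [(I + m * l ** 2) / p], [0.0], [m * l / p]])

# A has distinct eigenvalues, hence is diagonalizable: A = P diag(lam) P^{-1}.
# The coordinate of P^{-1} s with the largest real-part eigenvalue is the unstable
# mode to which the event-triggered scheme is applied.
lam, P = np.linalg.eig(A)
k_unstable = int(np.argmax(lam.real))
a = float(lam[k_unstable].real)      # unstable eigenvalue of the linearized system

def triggering_function(t, J=0.1, rho=0.2):
    # A simple decaying envelope standing in for the paper's triggering function,
    # whose constants also depend on the channel-delay upper bound; J and rho are
    # illustrative choices only.
    return J * np.exp(-rho * t)

def packet_size(a, delay_bound, b0=1.0):
    # Sketch of a rule of the form g(ts) = max(1, ceil(...)): enough bits per packet
    # so the quantization cell of the unstable coordinate shrinks faster than
    # exp(a * delay) grows during the channel delay.  The paper's exact expression
    # is not reproduced here; b0 is a placeholder constant.
    return max(1, int(np.ceil(a * delay_bound / np.log(2) + b0)))

def should_transmit(z_err, t):
    # Sensor-side rule: transmit (and let the controller jump its estimate at the
    # reception time) only when the estimation error of the unstable coordinate
    # reaches the triggering envelope.
    return abs(z_err) >= triggering_function(t)
```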
3
mar and numbers enochs and estrada and iacob abstract we define and invariants for modules over a commutative noetherian local ring then we show the periodicity of these invariants provided that r is a hypersurface in case r is also gorenstein we see that a finitely generated m and its matlis dual have the same and numbers introduction we consider a commutative noetherian local ring r m k it is known that a module m has a complete projective resolution t p m if and only if m has finite gorenstein projective dimension we prove that when m is a finitely generated of finite gorenstein projective dimension we can construct a complete projective resolution t p m with both t and p homotopically minimal complexes and so unique up to isomorphism then each of the modules pn n and tn n z are free modules of finite ranks the ranks of the modules pn are usually denoted m and are called the betti numbers of the boundedness of the sequence of betti numbers of a module m as well as the interplay between the boundedness of the betti numbers and the eventual periodicity of the module m have been studied intensively see for example and we focus here on the invariants m where for each n z m is the rank of the module tn we call these invariants the numbers of m see for another way to define these invariants for an arbitrary module n we can use an analogous procedure to construct a complete injective resolution n i u where both i and u are homotopically minimal complexes and hence unique up to isomorphism then we can define the invariants bn p n for n z and p r a prime ideal of mathematics subject classification key words and number complete projective resolution eventually periodic complex matlis duality enochs and estrada and iacob our main results theorem and theorem give sufficient conditions on the residue field k that guarantee the periodicity of the numbers m where m is finitely generated of finite gorenstein projective dimension and respectively the periodicity of the numbers bn m n we prove theorem that if k has an eventually periodic minimal projective resolution with period s then for every finitely generated m we have that the invariants of m are periodic of period we also prove theorem that under the same hypothesis on k we have that the invariants bn m n are periodic of period s for every module n of finite gorenstein injective dimension in the second part of the paper we consider a commutative local gorenstein ring r m k and a finitely generated we prove that if t p m is a minimal complete projective resolution of m then m p t is a minimal complete injective resolution of m it follows that for each n we have bn m m also m m see section for definitions preliminaries we recall that a module g is gorenstein projective if there is an exact and hom p roj exact complex of projective modules such that g ker definition a module r m has finite gorenstein projective dimension if there exists an exact sequence gn m with all gi gorenstein projective modules if the integer n is the least with this property then m has gorenstein projective dimension n in short m n if no such n exists then m has infinite gorenstein projective dimension the gorenstein injective modules and gorenstein injective dimension are defined dually the tate cohomology modules are defined by means of complete resolutions we recall the definition definition a module m has a complete projective resolution if there u exists a diagram t p m with p m a projective resolution t a totally acyclic complex and u t p a map of complexes such that un 
tn pn is an isomorphism for all n and numbers it is known that a module m has a complete projective resolution if and only if and only if it has finite gorenstein projective dimension in particular when r is gorenstein every r m has a complete projective resolution complete injective resolutions are defined dually it is known that a module r n has a complete injective resolution if and only if n has finite gorenstein injective dimension over a gorenstein ring r every module n has such a complete injective resolution numbers and numbers let r be a commutative local noetherian ring and let m be an rmodule of finite gorenstein projective dimension then there is a complete projective resolution of m t p if m is finitely generated then we can choose p to be a minimal projective resolution of m we recall that a complex c is said to be homologically minimal if any homology isomorphism f c c is an isomorphism in c r mod a complex c is said to be homotopically minimal if each homotopy isomorphism f c c is an isomorphism so if c is homologically minimal it is also homotopically minimal thus a minimal projective resolution p of m as above is homotopically minimal and in fact homologically minimal see page in chapter of and so such a p is unique up to isomorphism we show first of all that when m is finitely generated we can also get t to be homotopically minimal and so also unique up to isomorphism we will use the following lemma let k be a finitely generated gorenstein projective reduced k has no nontrivial projective direct summands then there exists an exact and hom p roj exact complex k with each qn a finitely generated free module proof let k be finitely generated gorenstein projective and reduced then the dual k hom k r is also such also if k p k is exact where p k is a projective cover of k then k is also finitely generated gorenstein projective and reduced since the dual module k is a finitely generated gorenstein projective module that is also reduced there exists a short exact sequence enochs and estrada and iacob l p k with p k a projective cover and with l gorenstein projective finitely generated and reduced this gives an exact sequence k p with p finitely generated projective and with finitely generated gorenstein projective and reduced then k p is a projective preenvelope and therefore if k q is the projective envelope then k q is an injective map also since coker k q is a direct summand of it is finitely generated gorenstein projective and reduced so k has a projective envelope k q where k q is an injection and where coker k q is also finitely generated gorenstein projective and reduced also if k q is exact then q is a projective cover proposition so there exists a short exact sequence k q with k q a projective preenvelope and with a finitely generated reduced gorenstein projective continuing we obtain an exact and hom p roj complex k q with each qn a finitely generated free we can prove now that when m is a finitely generated of finite gorenstein projective dimension we can construct a complete projective resolution t p m with both t and p unique up to isomorphism to see this let km be a partial minimal projective resolution of m but where m then km is gorenstein projective and since the resolution is minimal km is also reduced has no nontrivial projective direct summands using lemma above it is not hard to see that there exists an exact and hom p roj exact complex t with each tn finitely generated free module and with km ker since for each d the complex kd is a minimal projective 
resolution it follows that t is a homologically minimal complex page so for a finitely generated m we can construct a complete projective resolution t and numbers where both t and p are homotopically minimal and so unique up to isomorphism we call such a diagram a minimal complete projective resolution of then each of pn n and tn n arbitrary are free modules of finite rank as usual the ranks of the pn are denoted m we denote the ranks of the tn by m the numbers m are called the betti invariants of we call the invariants m the invariants of m see for another way to define these invariants for an arbitrary module n we can use an analogous procedure to construct a complete injective resolution n where both i and u are homotopically minimal complexes and hence unique up to isomorphism then using matlis and bass we can define the bass invariants p n for n and p r a prime ideal of r and then the invariants bn p n for arbitrary n and p a prime ideal our main results are about the periodicity of these invariants we recall first the following definition a complex c cn is said to be eventually periodic of period s if for some we have that for all n that cn saying that c is periodic of period s will have the obvious meaning remark if t p m is a minimal complete projective resolution of a finitely generated m and if p is eventually periodic then trivially t is also eventually periodic but using the minimality of t it can be seen that in fact t is periodic if this is the case where the period is s then we see that m m for all so we can say that the invariants are periodic of period however it may happen that the invariants of m are periodic without t being periodic so we can speak of the invariants being periodic without the associated complex being periodic enochs and estrada and iacob we prove that when the residue field k has an eventually periodic minimal projective resolution the numbers of any finitely generated r m of finite gorenstein projective dimension are periodic then we prove that under the same hypothesis on k the numbers bn m n are periodic for any r n of finite gorenstein injective dimension we will use to deduce the following balance result see also section theorem let r be any commutative ring if t p m and n i u are complete projective and injective resolutions of m and n respectively equivalently m and n are finite then the homologies of hom t n and of hom m u are naturally isomorphic the analogous result for t n and m u also holds proof without lost of generality let us assume m m and n n and m h i hom t n h hom t h hom km u h i hom m u where ker km ker tm and follows from corollary note that if case m n we are using the fact that the class of gorenstein injective modules is closed under cokernels of monomorphisms the second statement follows in the same way we can prove now our main results theorem if the residue field k of r as an has an eventually periodic minimal projective resolution with the period being s then for every finitely generated module m of finite gorenstein projective dimension we have that the invariants of m are periodic of period proof we let t p k be a minimal complete projective resolution of since p is eventually periodic of period s we have that the complex t is periodic of period consequently the complex t m is periodic of period this gives that for every n hn t m t m but by hn t m hn k t for all n where t p m is and numbers a minimal complete projective resolution of n is a homologically since t minimal complex it follows proposition of that im for all k 
consider the complex k t k k we have x m y x m y by the above tn y rz with r m and z so x m y x m rz xr m z m z thus im and ker k for all then the homology module of k t b b ker hn k t im k k k is a vector space of dimension over since hn k t hn t m and since hn t m t m b b for all n we see that m m for all theorem if the residue field k of r as an has an eventually periodic minimal projective resolution with period s then for any module n of finite gorenstein injective dimension the invariants bn m n are periodic of period proof again let t p k be a minimal complete projective resolution of k and let n i u be a minimal complete injective resolution of we have that hom t n and hom k u have naturally isomorphic homology modules theorem but t is periodic of period s so hom t n is periodic of period so we get that h n hom k u h hom k u for all but as in bass we see that h n hom k u is a vector space over k whose dimension is precisely bn m n remark if m is eventually periodic then its betti sequence is bounded the converse is not true in general a counterexample was given by schulz in proposition eisenbud proved that the converse does hold over group rings of finite groups and that it also holds in the commutative noetherian local setting when the rings considered are complete intersections in fact it was shown that over a hypersurface that is a complete intersection ring of codimension one any minimal free resolution eventually becomes periodic so over a hypersurface ring both theorem and theorem hold remark our main results hold provided that k has an eventually periodic minimal projective resolution and so its betti numbers are enochs and estrada and iacob bounded by corollary if the betti numbers of k are bounded then r is a hypersurface so theorems and both hold if and only if r is a hypersurface matlis duality let r m k be a commutative local gorenstein ring and let m be a finitely generated then there exists a diagram t p m as above with both t and p homotopically minimal and so unique up to isomorphism b then for each n z we have pn and tn we show that m p t is a minimal complete injective resolution of the module m where m denotes the matlis dual homr m e k since e k is injective both p and t are exact complexes of injective modules let mj ker then is an injective preenvelope with hom e k e k so the injective envelope of is a direct summand of e k so it is e k tj for some tj then as in the proof of corollary there is an exact sequence rtj mj therefore rtj mj is a projective precover since pj mj is a projective cover it follows that is a direct summand of rtj so tj and so we have tj for all j and is an injective envelope for all t t since t is exact with each tj finitely generated free and with each ker tj gorenstein projective it follows that t is an exact complex of injective modules with ker gorenstein injective we have that each gj is also gorenstein flat in this case so t gj a for any injective module a for each j then a a hom gj e k hom t gj a e k for any injective module a it follows that is gorenstein injective thus t is a totally acyclic injective complex as above since is a minimal projective resolution it follows that is a minimal injective resolution similarly is a minimal injective resolution of by theorem the gorenstein injective module is reduced thus is in fact an injective cover similarly we have that is an injective cover for each and numbers j thus is a minimal left injective resolution so we have m p t a complete injective resolution of m with both p and t minimal 
b we have e k and e k for each it follows that for each n we have bn m m also m m for each n references bass on the ubiquity of gorenstein rings math bergh complexity and periodicity colloq christensen and jorgensen tate co homology via pinched complexes transactions of the ams eisenbud homological algebra on a complete intersection with an application to group representations transactions of the ams enochs and jenda gorenstein injective and projective modules mathematische zeitschrift enochs and estrada and iacob balance with unbounded complexes bull london math enochs and jenda relative homological algebra walter de gruyter de gruyter exposition in math enochs and jenda relative homological algebra walter de gruyter de gruyter exposition in math gasharov and peeva boundedness versus periodicity over commutative local rings transactions of the ams gulliksen a proof of the existence of minimal resolutions acta peeva exponential growth of betti numbers journal of pure and applied schultz boundedness and periodicity of modules over qf rings journal of
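For reference, the resolutions and invariants used above can be summarized in symbols as follows; the letters T, P, I, U, beta_n, b_n and mu_n are our shorthand for the objects constructed in the text, and the displayed statements only restate the periodicity and duality results, not their proofs.

```latex
% Minimal complete projective resolution of a finitely generated module M of finite
% Gorenstein projective dimension: P -> M a minimal projective resolution, T a totally
% acyclic complex of finite free modules, u_n : T_n -> P_n an isomorphism for n >> 0.
\[
  T \xrightarrow{\ u\ } P \longrightarrow M ,
  \qquad
  \beta_n(M) = \operatorname{rank} P_n \ (n \ge 0),
  \qquad
  b_n(M) = \operatorname{rank} T_n \ (n \in \mathbb{Z}).
\]
% Dually, a minimal complete injective resolution N -> I -> U yields Bass-type
% invariants \mu_n(\mathfrak{p}, N).  If the minimal projective resolution of the
% residue field k is eventually periodic with period s (as happens over a
% hypersurface), the main theorems give, for all n large enough,
\[
  b_{n+s}(M) = b_n(M)
  \qquad \text{and} \qquad
  \mu_{n+s}(\mathfrak{m}, N) = \mu_n(\mathfrak{m}, N),
\]
% and over a Gorenstein ring these invariants of M agree with those of its Matlis dual.
```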
0
the function and exponent of sparse regression codes with optimal encoding paper studies the performance of sparse regression codes for lossy compression with the distortion criterion in a sparse regression code codewords are linear combinations of subsets of columns of a design matrix it is shown that with encoding sparse regression codes achieve the shannon function for gaussian sources d as well as the optimal exponent this completes a previous result which showed that d and the optimal exponent were achievable for distortions below a certain threshold the proof of the result is based on the second moment method a popular technique to show that a random variable x is strictly positive with high probability in our context x is the number of codewords within target distortion d of the source sequence we first identify the reason behind the failure of the standard second moment method for certain distortions and illustrate the different failure modes via a stylized example we then use a refinement of the second moment method to show that d is achievable for all distortion values finally the refinement technique is applied to suen s correlation inequality to prove the achievability of the optimal gaussian exponent index compression sparse superposition codes function gaussian source error exponent second moment method large deviations i ntroduction d eveloping practical codes for lossy compression at rates approaching shannon s bound has long been an important goal in information theory a practical compression code requires a codebook with low storage complexity as well as encoding and decoding with low computational complexity sparse superposition codes or sparse regression codes sparcs are a recent class of codes introduced by barron and joseph originally for communcation over the awgn channel they were subsequently used for lossy compression with the distortion criterion in the codewords in a sparc are linear combinations of columns of a design matrix a the storage complexity of the code is proportional to the size of the matrix which is polynomial in the block length a computationally efficient encoder for compression with sparcs was proposed in and shown to achieve rates approaching the shannon function for gaussian sources this work was partially supported by a marie curie career integration grant grant agreement number and by nsf grant this paper was presented in part at the ieee international symposium on information theory venkataramanan is with the department of engineering university of cambridge cambridge uk tatikonda is with the department of statistics and data science yale university new haven ct usa log rate bits jun ramji venkataramanan senior member ieee and sekhar tatikonda senior member ieee fig the solid line shows the previous achievable rate d given in the function d is shown in dashed lines it coincides with d for where in this paper we study the compression performance of sparcs with the distortion criterion under optimal encoding we show that for any ergodic source with variance sparcs with optimal encoding achieve a given by d log note that d is the optimal ratedistortion function for an gaussian source with variance the performance of sparcs with optimal encoding was first studied in where it was shown that for any distortionlevel d rates greater than d d max log d are achievable with the optimal gaussian exponent the rate d in is equal to d when d d x but is strictly larger than r d when x where x see fig in this paper we complete the result of by proving that 
sparse regression codes achieve the gaussian function d for all distortions d we also show that these codes attain the optimal exponent for gaussian sources at all rates though encoding is not practically feasible indeed the main motivation for sparse regression codes is that they enable encoding and decoding characterizing the function and exponent under optimal encoding establishes a benchmark to compare the performance of various computationally efficient section m columns section m columns decoder this is a mapping h bm l rn on receiving bm l from the encoder the decoder produces reconstruction h since there are m columns in each of the l sections the total number of codewords is m l to obtain a compression rate of r we therefore need section l m columns a m l enr l t fig a is an n m l matrix and is a m l binary vector the positions of the in correspond to the gray columns of a which combine to form the codeword encoding schemes further the results of this paper and together show that sparcs retain the good covering properties of the gaussian random codebook while having a compact representation in terms of a matrix whose size is a polynomial in the block length let us specify some notation before proceeding uppercase letters are used to denote random variables and lowercase letters for their realizations letters are used to denote random vectors and matrices all vectors have length the source sequence is s sn and the reconstruction sequence is kxk denotes the is the normalized version of vector x and kxk n n denotes the gaussian distribution with mean and variance logarithms are with base e and rate is measured in nats unless otherwise mentioned the notation an bn means that log an log bn and is used to abbreviate the phrase with high probability we will use to denote generic positive constants whose exact value is not needed sparcs with optimal encoding a sparse regression code is defined in terms of a design matrix a of dimension n m l whose entries are n here n is the block length and m and l are integers whose values will be specified in terms of n and the rate as shown in fig one can think of the matrix a as composed of l sections with m columns each each codeword is a linear combination of l columns with one column from each section formally a codeword can be expressed as where is an m vector l with the following property there is exactly one for i m one for m i and so forth the values of are all set equal to where c is a constant that will be specified later denote the set of all s that satisfy this property by bm l encoder this is defined by a mapping g rn bm l given the source sequence s the encoder determines the that produces the codeword closest in euclidean distance g s argmin ks l for our constructions we choose m lb for some b so that implies nr l log l b thus l is log n and the number of columns m l in the dictionary a is log n a polynomial in overview of our approach to show that a rate r can be achieved at d we need to show that with high probability at least one of the enr choices for satisfies if satisfies we call it a solution denoting the number of solutions by x the goal is to show that x with high probability when r d note that x can be expressed as the sum of enr indicator random variables where the ith indicator is if i is a solution and zero otherwise for i enr analyzing the probability p x is challenging because these indicator random variables are dependent codewords and will be dependent if and share common nonzero terms to handle the dependence we use 
the second moment method second mom a technique commonly used to prove existence achievability results in random graphs and random constraint satisfaction problems in the setting of lossy compression the second mom was used in to obtain the function of ldgm codes for binary symmetric sources with hamming distortion for any random variable x the second mom bounds the probability of the event x from below p x ex e x therefore the second mom succeeds if we can show that ex x as n it was shown in that the second mom succeeds for r d where d is defined in in contrast for d r d it was found that ex x so the second mom fails from this result in it is not clear whether the gap from d is due to an inherent weakness of the sparse regression codebook or if it is just a limitation of the second mom as a proof technique in this paper we demonstrate that it is the latter and refine the second mom to prove that all rates greater than d are achievable the inequality follows from the inequality e xy ex ey by substituting y x our refinement of the second mom is inspired by the work of and on finding sharp thresholds for of random hypergraphs the idea is as follows the key ratio ex x can be expressed as ex x where x denotes the total number of solutions conditioned on the event that a given is a solution recall that is a solution if thus when the second mom fails the ratio goes to zero we have a situation where the expected number of solutions is much smaller than the expected number of solutions conditioned on the event that is a solution this happens because for any s there are atypical realizations of the design matrix that yield a very large number of solutions the total probability of these matrices is small enough that ex in not significantly affected by these realizations however conditioning on being a solution increases the probability that the realized design matrix is one that yields an unusually large number of solutions at low rates the conditional probability of the design matrix being atypical is large enough to make e x ex causing the second mom to the key to rectifying the second mom failure is to show that x ex with high probability although e x ex we then apply the second mom to count just the good solutions solutions for which x ex this succeeds letting us conclude that x with high probability error probability decays exponentially for all rates smaller than the channel capacity in contrast we use a refinement of the second moment method for the function and suen s correlation inequality to obtain the exponent beyond the exponent the dispersion is another quantity of interest in a lossy compression problem for a fixed probability the dispersion specifies how fast the rate can approach the function with growing block length it was shown that for discrete memoryless and gaussian sources the optimal dispersion was equal to the inverse of the second derivative of the exponent given that sparcs attain the optimal exponent it would be interesting to explore if they also achieve the optimal dispersion for gaussian sources with distortion the rest of the paper is organized as follows the main results specifying the function and the excessdistortion expoenent of sparcs are stated in section ii in section iii we set up the proof and show why the second mom fails for r as the proofs of the main theorems are technical we motivate the main ideas with a stylized example in section the main results are proved in section iv with the proof of the main technical lemma given in section related work as 
mentioned above the second moment method was used in to analyze the function of ldgm codes for binary symmetric sources with hamming distortion the idea of applying the second mom to a random variable that counts just the good solutions was recently used to obtain improved thresholds for problems such as random hypergraph of random graphs and random however the key step of showing that a given solution is good with high probability depends heavily on the geometry of the problem being considered this step requires identifying a specific property of the random object being considered sparc design matrix hypergraph or boolean formula that leads to a very large number of solutions in atypical realizations of the object for example in sparc compression the atypical realizations are design matrices with columns that are unusually with the source sequence to be compressed in random hypergraph the atypical realizations are hypergraphs with an edge structure that allows an unusually large number of vertices to take on either color it is interesting to contrast the analysis of sparc lossy compression with that of sparc awgn channel coding in the dependence structure of the sparc codewords makes the analysis challenging in both problems but the techniques required to analyze sparc channel coding are very different from those used here for the excess distortion analysis in the channel coding case the authors use a modified union bound together with a novel bounding technique for the probability of pairwise error events lemmas to establish that the this is similar to the inspection paradox in renewal processes ii m ain r esults the probability of excess distortion at d of a code cn with block length n and encoder and decoder mappings g h is pe cn d p h g s d for a sparc generated as described in section the probability measure in is with respect to the random source sequence s and the random design matrix a of sparc definition a rate r is achievable at distortion level d if there exists a sequence of sparcs cn such that pe cn d where for all n cn is a rate r code defined by an n ln mn design matrix whose parameter ln satisfies with a fixed b and mn lbn theorem let s be drawn from an ergodic source with mean and variance for d let d log fix r d and b bmin where bmin x x x x r x for x then there exists a sequence of rate r sparcs cn for which pe cn d where cn is defined by an n ln mn design matrix with mn lbn and ln determined by remark though the theorem is valid for all d it is most relevant for the case where is the solution to the equation x log x let fix any and a b max bmin d for theorem already guarantees that the optimal function can be achieved with a smaller value of b than that required by the theorem above there exists a sequence of rate r sparcs with parameter b that achieves the exponent log exponent of sparc consequently the supremum of exponents achievable by sparcs for gaussian sources sources is equal to the optimal one given by the exponent at d of a sequence of rate r codes cn is given by r d r lim sup log pe cn d n where pe cn d is defined in the optimal excessdistortion exponent for a pair r d is the supremum of the exponents over all sequences of codes with rate r at the optimal exponent for discrete memoryless sources was obtained by marton and the result was extended to memoryless gaussian sources by ihara and kubo fact for an gaussian source distributed as n and distortion criterion the optimal exponent at rate r and d is r d r log r d r d lim log p n log for thus p decays 
exponentially with n in comparison exp decays faster than exponentially with therefore from the exponent satisfies log pe cn d n h log p lim inf n exp log p a log lim inf where for r d the exponent in is the divergence between two gaussians distributed as n and n respectively the next theorem characterizes the exponent performance of sparcs theorem let s be drawn from an ergodic source with mean zero and variance let d r log and let b max bmin where bmin is defined in then there exists a sequence of rate r sparcs cn where cn is defined by an n ln mn design matrix with mn lbn and ln determined by whose probability of excess distortion at distortionlevel d can be bounded as follows for all sufficiently large pe cn d p exp proof from theorem we know that for any there exists a sequence of rate r sparcs cn for which exp pe cn d p p for sufficiently large n as long as the parameter b satisfies for s that is n s large deviation theorem yields where c are strictly positive universal constants corollary let s be drawn from an gaussian source with mean zero and variance fix rate r log and since can be chosen arbitrarily small the supremum of all achievable exponents is log which is optimal from fact we remark that the function bmin x is increasing in x therefore implies that larger values of the design parameter b are required to achieve exponents closer to the optimal value smaller values of in corollary iii i nadequacy of the d irect s econd m o m a first steps of the proof fix a rate r d and b greater than the minimum value specified by the theorem note that since r log let be any number such that code construction for each block length n pick l as specified by and m lb construct an n m l design matrix a with entries drawn n the codebook consists of all vectors such that bm l the entries of are all set equal to a value specified below encoding and decoding if the source sequence s is such that then the encoder declares an error if d then s can be trivially compressed to within distortion d using the codeword the addition of this extra codeword to the codebook affects the rate in a negligible way if d then s is compressed in two steps first quantize with an uniform scalar quantizer q with support in the interval d for input x d if d i d i n n for i n then the quantizer output is d i q x d n conveying the scalar quantization index to the decoder with an additional log n nats allows us to adjust the codebook ance according to the norm of the observed psource sequence the entries of are each set to q d so that each sparc codeword has variance q d define a version of s as s q note that q we use the sparc to compress the encoder finds argmin l the decoder receives and reconstructs note that for block length n the total number of bits transmitted by encoder is log log m yielding an overall rate of logn n error analysis for s such that d the overall distortion can be bounded as can be bounded as n for some positive constant the overall rate including that of the scalar quantizer is r logn n denoting the probability of excess distortion for this random code by pe n we have d pe n p lim p to bound the second term in without loss of generality we can assume that the source sequence this is because the codebook distribution is rotationally invariant due to the n design matrix a for any the entries of i n d we enumerate the codewords as i where i bm l for i enr define the indicator random variables if i d ui otherwise we can then write nr e x p e p ui for a fixed the ui s are dependent to see this consider 
codewords i j corresponding to the vectors i j bm l respectively recall that a vector in bm l is uniquely defined by the position of the value in each of its l sections if i and j overlap in r of their positions then the column sums forming codewords i and j will share r common terms and consequently ui and uj will be dependent for brevity we henceforth denote ui by just ui applying the second mom with nr x e x ui we have from p x ex ex a e x e where a is obtained by expressing e x as follows nr e enr x x e x e x ui e xui the scalar quantization step is only included to simplify the analysis in fact we could use the same codebook variance d for all s that satisfy d but this would make the forthcoming large deviations analysis quite cumbersome n n for some positive constants the last inequality holds because the of the scalar quantizer is and d let e be the event that the minimum of over bm l is greater than the encoder declares an error if e occurs if e does not occur the overall distortion in p e as the ergodicity of the source guarantees that max d enr x p ui e ex e the last equality in holds because ex penr p ui and due to the symmetry of the code construction as e x ex implies that e ex therefore to show that x we need the expected number of solutions is given by ex enr p enr p d e as n ex since and is n d applying lemma we obtain the bounds enr d ex enr d n note that f d d log d ex versus e to compute ex we derive a general lemma specifying the probability that a randomly chosen n y codeword is within distortion z of a source sequence s with x this lemma will be used in other parts of the proof as well lemma let s be a vector with x let be an n y random vector that is independent of then for x y z and sufficiently large n we have x y z p z x y z n where is a universal positive constant and for x y z the rate function f is xz a a if z x y ay ln f x y z otherwise and p a y y proof we have p z p n n sk z n x z n where the last equality is due to the rotational invariance of the distribution of has the same joint distribution as for any orthogonal rotation matrix o in particular we to be the matrix that rotates s to the vector x x and note that then using the strong version of s large deviation theorem due to bahadur and rao we have n x y z e x z x y z n n where the rate function i is given by n i x y z sup log x the expectation on the rhs of is computed with n y using standard calculations we obtain log ee x log substituting the expression in in and maximizing over yields i x y z f x y z where f is given by next consider e if i and j overlap in r of their positions the column sums forming codewords i and j will share r common terms therefore nr e e x p ui nr e x p ui p l p r a x l m r p where r is the event that the codewords corresponding to and share r common terms in a because for each codeword i there are a total of lr m codewords which share exactly r common terms with i for r from and we obtain e ex l x l p r m r enr p x l p a m p l l l b x l l l where a is obtained by substituting lr and enr m l the notation xl yl means that xl as l the equality b is from appendix a where it was also shown that r log min log l h l b where h log the inequality in is asymptotically tight the term in may be interpreted as follows conditioned on being a solution the expected number of solutions that share common terms with is ex recall that we require the left side of to tend to as n therefore we need for l l from we need h to be positive in order to guarantee that however when r it can be verified 
that h for where is the solution to h thus is positive for when log r consequently implies that e x as n e ex and the second mom fails as follows e p rn e p rn e en a en e en e en en en en where a is obtained by using bayes rule to compute p rn the second mom ratio in therefore equals a stylized example before describing how to rectify the second mom failure in the sparc setting we present a simple example to give intuition about the failure modes of the second mom the proofs in the next two sections do not rely on the discussion here consider a sequence of generic random structures a sequence of random graphs or sparc design matrices denoted by rn n suppose that for each n the realization of rn belongs to one of two categories a category structure which has which has en solutions or a category structure which has solutions in the case of sparc a solution is a codeword that is within the target distortion let the probabilities of rn being of each category be p rn p rn where p is a constant regardless of the realization we note that rn always has at least en solutions we now examine whether the second mom can guarantee the existence of a solution for this problem as n the number of solutions x can be expressed as a sum of indicator random variables n x ui where ui if configuration i is a solution and n is the total number of configurations in the sparc context a configuration is a codeword we assume that the configurations are symmetric as in the sparc so that each one has equal probability of being a solution p ui rn en p ui rn n n due to symmetry the second moment ratio can be expressed as ex e x e x ex ex en the conditional expectation in the numerator can be computed e x ex en ex ex en en we examine the behavior of the ratio above as n for different values of case p the dominant term in both the numerator and the denominator of is and we get e x as n ex and the second mom succeeds case p the dominant term in the numerator is en while the dominant term in the denominator is hence e x en o en ex case p the dominant term in the numerator is en while the dominant term in the denominator is en hence e x en n o enp ex e thus in both case and case the second mom fails because the expected number of solutions conditioned on a solution is exponentially larger than the unconditional expected value however there is an important distinction between the two cases which allows us to fix the failure of the second mom in case but not in case consider the conditional distribution of the number of solutions given from the calculation in we have p x en p rn en en en p x p rn en en en when p the first term in the denominator of the rhs dominates and the conditional distribution of x is n p x e e p x e e o o using this notation we have thus the conditional probability of a realization rn being category given is slightly smaller than the unconditional probability which is however conditioned on a realization rn is still extremely likely to have come from category have en solutions therefore when p conditioning on a solution does not change the nature of the typical or realization this makes it possible to fix the failure of the second mom in this case the idea is to define a new random variable x which counts the number of solutions coming from typical realizations only category structures the second mom is then applied to x to show that is strictly positive with high probability when p conditioning on a solution completely changes the distribution of x the dominant term in the denominator of the rhs in is 
en so the conditional distribution of x is p x en o p x o thus conditioned on a solution a typical realization of rn belongs to category has solutions on the other hand if we draw from the unconditional distribution of rn in a typical realization has en solutions in this case the second moment method can not be fixed by counting only the solutions from realizations of category because the total conditional probability of such realizations is very small this is the analog of the condensation phase that is found in problems such as random hypergraph coloring in this phase although solutions may exist even an enhanced second mom does not prove their existence fortunately there is no condensation phase in the sparc compression problem despite the failure of the direct second mom we prove lemma that conditioning on a solution does not significantly alter the total number of solutions for a very large fraction of design matrices analogous to case above we can apply the second mom to a new random variable that counts only the solutions coming from typical realizations of the design matrix this yields the desired result that solutions exist for all rates r d iv p roofs of m ain r esults a proof of theorem the code parameters encoding and decoding are as described in section we build on the proof of section given that bm l is a solution for l define to be the number of solutions l that share terms with the total number of solutions given that is a solution is x x l l l e a e x ex ex x e b ex l l l x l l l where a holds because the symmetry of the code construction allows us to condition on a generic bm l being a solution b follows from note that e and e x are expectations evaluated with the conditional distribution over the space of design matrices given that is a solution the key ingredient in the proof is the following lemma which shows that is much smaller than ex l l in particular ex even for for which e as n ex lemma let r log if bm l is a solution then for sufficiently large l p ex for l where b b min the function bmin is defined in proof the proof of the lemma is given in section the probability measure in lemma is the conditional distribution on the space of design matrices a given that is a solution definition for call a solution if x ex l l l since we have fixed whether a solution is or not is determined by the design matrix lemma guarantees that any solution will be if is a solution the design matrix is such that the number of solutions sharing any common terms with is less x the key to proving theorem is to apply the second mom only to solutions fix for i enr define the indicator random variables if i d and i is vi otherwise the number of solutions denoted by xg is given by xg venr we will apply the second mom to xg to show that p xg as n we have p xg exg exg e e xg where the second equality is obtained by writing e exg e xg similar to lemma a exg ex where is defined in b e xg ex proof due to the symmetry of the code construction we have exg e nr a p e nr p p ex p is is a solution in a follows from the definitions of vi in and ui in given that is a solution lemma shows that x ex l l l with probability at least as is according to definition if is satisfied thus exg in can be lower bounded as exg ex for part b first observe that the total number of solutions x is an upper bound for the number of solutions xg therefore e xg e given that is an solution the expected number of solutions can be expressed as e e e x l l l there are m l codewords that share no common terms with each of these 
codewords is independent of and thus independent of the event e e m l p d m l p d ex next note that conditioned on being an solution x ex l l l with certainty this follows from the definition of in using and in we conclude that e ex combining with completes the proof of lemma using lemma in we obtain exg p xg e xg b b min where the last equality is obtained by using the definition of in and hence the probability of the existence of at least one good solution goes to as l thus we have shown that for any d the quantity p e in tends to zero whenever r log and b bmin combining this with we conclude that that the probability that d n goes to one as n as can be chosen arbitrarily close to the proof of theorem is complete b proof of theorem the code construction is as described in section with the parameter b now chosen to satisfy recall the definition of an solution in definition we follow the of section and count the number of solutions for an appropriately defined as before we want an upper bound for the probability of the event xg where the number of solutions xg is defined in theorem is obtained using suen s correlation inequality to upper bound on the probability of the event xg suen s inequality yields a sharper upper bound than the second mom we use it to prove that the probability of xg decays in in comparison the second mom only guarantees a polynomial decay we begin with some definitions required for suen s inequality definition dependency graphs let vi be a family of random variables defined on a common probability space a dependency graph for vi is any graph with vertex set v i whose set of edges satisfies the following property if a and b are two disjoint subsets of i such that there are no edges with one vertex in a and the other in b then the families vi and vi are independent fact example suppose is a family of independent random variables and each vi i i is a function of the variables for some subset ai a then the graph with vertex set i and edge set ij ai aj is a dependency graph for ui in our setting we fix let vi be the indicator the random variable defined in note that vi is one if and only if i is an solution the set of codewords that share at least one common term with i are the ones that play a role in determining whether i is an solution or not hence the graph with vertex set v enr and edge set e given by ij i j and the codewords i j share at least one common term nr is a dependency graph for the family vi this follows from fact by observing that i each vi is a function of the columns of a that define i and all other codewords that share at least one common term with i and ii the columns of a are generated independently of one another for a given codeword i there are lr m other codewords that have exactly r terms in common with i for r l therefore each vertex in the dependency nr graph for the family vi is connected to x m m l m l r fact suen s inequality let vi bern pi i i be a finite family of bernoulli random variables having a dependency graph write i j if ij is an edge in define x x xx e vi vj max evk evi then p vi we have nr e x evi exg exp min we apply suen s inequality with the dependency graph nr specified above for vi to compute an upper bound for penr p xg where xg vi is the total number of solutions for note that the chosen here is smaller than the value of used for theorem this smaller value is required to prove the decay of the probability via suen s inequality we also need a stronger version of lemma lemma let r log if bm l is a solution then for 
sufficiently large l p ex for l l where b b min a ex p is is a solution where a follows from given that is a solution lemma shows that x ex with probability at least as is according to definition if is satisfied thus the rhs of can be lower bounded as follows ex p is is a solution ex using the expression from for the expected number of solutions ex we have en log d n where is a constant for b bmin implies that approaches with growing second term due to the symmetry of the code construction we have x max p vk enr proof the proof is nearly identical to that of lemma replaced given in section v with the terms and by l and respectively throughout the lemma thus we obtain the following condition on b which is the analog of x p vk enr l m p r m l m l p combining this together with the fact that l b l l l other vertices x first term m x evi m l p log r min max min log l l l l we obtain ml l l m l l m l bmin d l where the second equality is obtained by substituting m lb using a taylor series bound for the denominator of see the result is then obtained using arguments analogous to sec v for details yields the following lower bound for sufficiently large l and we now compute each of the three terms in the rhs of suen s inequality third term we have m in lemma to go to with growing using in we conclude that for any de r the probability of excess distortion can be bounded as l xx e vi vj pe n p l m x p vi p vj vi x a exg p vj hx i vj exg e x b exg e l in a holds because of the symmetry of the code construction the inequality b is obtained as follows the number of solutions that share common terms with is bounded above by the total number of solutions sharing common terms with the latter quantity can be expressed as the sum of the number of solutions sharing exactly common terms with for l conditioned on the event that is a solution the total number of solutions that share common terms with is bounded by ex therefore from we have x exg e l l l exg ex ex where we have used and the fact that xg x combining and we obtain where is a strictly positive constant applying suen s inequality using the lower bounds obtained in and in we obtain nr e x p vi exp min e n log d n log p e p exp l ex ex max d where is a positive constant recalling from that l logn n and r ln we see that for b nr e x vi exp where c is a constant note that the condition b bmin was also needed to obtain via suen s inequality in particular this condition on b is required for provided the parameter b satisfies b max max bmin d it can be verified from the definition in that bmin x is strictly increasing in x therefore the maximum on the rhs of is bounded by max bmin choosing b to be larger than this value will guarantee that holds this completes the proof of the theorem p roof of l emma we begin by listing three useful properties of the function f x y z defined in recall that the probability that an n y sequence is within distortion within distortion z of a sequence is x y z for fixed x y f is strictly decreasing in z x y for fixed y z f is strictly increasing in x z for fixed x z and x z f is convex in y and attains its minimum value of log xz at y x z these properties are straightforward to verify from the definition using elementary calculus for k l let denote the restriction of to the set k coincides with in the sections indicated by k and the remaining entries are all equal to zero for example if k the second and third sections of will each have one entry the other entries are all zeros definition given that is a solution for define as 
the event that l l l for every size subset k l where is the solution to the equation f d the intuition behind choosing according to is the following any subset of sections of the design matrix a defines a sparc of rate with each codeword consisting of n entries note that the entries of a single codeword are though the codewords are dependent due to the sparc structure the probability that a codeword from this rate code is within distortion z of the source sequence is z hence the expected number of codewords in the rate codebook within distortion z of is z as f z is a strictly decreasing function of z in says that is the smallest expected distortion for any rate code with codeword entries chosen n d for z the expected number of codewords within distortion z of is vanishingly small conditioned on the idea is that any sections of can not by themselves represent with distortion less than in other words in a typical realization of the design matrix all the sections contribute roughly equal amounts to finding a codeword within d of on the other hand if some sections of the sparc can represent with distortion less than the remaining sections have less work to this creates a proliferation of solutions that share these common sections with consequently the total number of solutions is much greater than ex for these atypical design matrices the first step in proving the lemma is to show that for any the event holds the second step is showing that when holds the expected number of solutions that share any common terms with is small compared to ex indeed using we can write p ex p ex p ex p p p ex p e ex where the last line follows from markov s inequality we will show that the probability on the left side of is small for any solution by showing that each of the two terms on the rhs of is small first a bound on lemma for log consequently for l l f d proof the last equality in holds because f x z x ln z define a function g log then g g r derivative is ln and the second g therefore g is strictly concave in and its minimum value attained at is this proves recalling the definition of in implies that f d f d note that d is not the function at rate as the codewords are not chosen with the optimal variance for rate as f is decreasing in its third argument the distortion we conclude that we now bound each term on the rhs of showing that the first term of is small implies that any sections by themselves will leave a residual distortion of at least showing that the second term is small implies that under this condition the expected number of solutions sharing any common terms with is small compared to ex bounding from the definition of the event we have p p is a solution where the union is over all subsets of l using a union bound becomes l p d c p p d where k is a generic subset of l say k recall from that for sufficiently large n the denominator in can be bounded from below as p d d n and f d d be expressed as log the numerator in can p d z y p d y dy where is the density of the random variable using the cdf at y to bound y in the rhs of we obtain the following upper bound for sufficiently large p d z p y z a p d y dy y n p d y dy z b y y d dy n z c d e dy n in a holds for sufficiently large n and is obtained using the strong version of s large deviation theorem note that is a linear combination of columns of a hence it is a gaussian random vector with n d entries that is independent of inequality b is similarly obtained has n d entries and is independent of both and finally c holds because the overall 
exponent the two exponents in are equal to bound we use the following lemma f d y f y d d is a decreasing function of y for y and using and in for sufficiently large n we have l p f d d l bounding e there are m codewords which share common terms with therefore l m p d d e where is a codeword that shares exactly common terms with if k is the set of common sections between and then c and p d d p c d d n x a d p c i b d n where b holds for sufficiently large in a is obtained as follows under the event the norm is at least and c is an n d vector independent of and a then follows from the rotational invariance of the distribution of c inequality b is obtained using the strong version of s large deviation theorem using in we obtain for sufficiently large n e l m d n l en d n overall bound substituting the bounds from and in for sufficiently large n we have for l p ex f d d d d since is chosen to satisfy f d d lemma for l we have h f d f d d i f d d if d if where is the solution of is a positive constant given by and d d d p log d proof see appendix i we observe that is strictly decreasing for this can be seen by using the taylor expansion of log x for x to write d ln log x d k since r log d d shows that is strictly positive and strictly decreasing in with d d d lim p d d d d p r log d substituting in we have for l l p ex l exp min taking logarithms and dividing both sides by l log l we obtain log p ex l log l l log log n min l log l l log l l log l log a log min l log l log l min b r where to obtain a we have used the bound l log min log l l log l l log and the relation for the right side of to be negative for sufficiently large l we need min b log min r log l this can be arranged by choosing b to be large enough since has to be satisfied for all l we need b h n r log o i min min log l l l l a l bmin d l we want a lower bound for where is the solution to f d we consider the cases d and d separately recall from lemma that case d in this case both the f terms in the definition of are strictly positive we can write where d expanding around using taylor s theorem we obtain g since g g here is a number in the interval d we bound g from below by obtaining separate lower bounds for g and lower bound for g using the definition of f in the second derivative of g u is g g g u max in a holds because is of constant order for all hence the maximum is attained at the constant is given by and bmin is defined in the statement of theorem when b satisfies and l is sufficiently large for l l the bound in becomes log p ex l log l min b bmin o log l log l r b bmin log b bmin log l log l l r l log l l therefore b b min p ex this completes the proof of lemma d u hp d d u d d hp d d d it can be verified that g u is a decreasing function and hence for d g g d d d d d d d lower bound for from and note that is the solution to f d a ppendix i p roof of l emma for l define the function r r as u f d u f u d d ln d using taylor s theorem for f in its third argument around the point p d we have f p p ln where p d for some d as is a quadratic in with positive coefficients for the and terms replacing the coefficient with an upper bound and solving the resulting quadratic will yield a lower bound for since the function x y z p p y y y y is decreasing in z the coefficient can be bounded as follows finally using the lower bounds for g and from and in we obtain d d d g p ln d d where can be computed to be p d d d hp d d d therefore we can obtain a lower bound for denoted by by solving the equation ln a we thus obtain a r ln we now 
show that can be bounded from below by by obtaining lower and upper bounds for from we have d d hp d d d d p d case in this case g is given by g f d f d d ln d ln d where we have used and the fact that f d d for the right hand side of the equation f d z is decreasing in z for z d therefore it is sufficient to consider d in order to obtain a lower bound for that holds for all d next we claim that the that solves the equation f d d lies in the interval indeed observe that the lhs of is increasing in while the rhs is decreasing in for since the lhs is strictly greater than the rhs at r ln the solution is strictly less than on the other hand for we have r d f d d ln d where the inequality is obtained by noting that is strictly increasing in and hence taking gives a lower bound analogously taking yields the upper bound a using the bounds of and in we obtain p ln d f d the lhs of is strictly less than the rhs therefore the that solves lies in to obtain a lower bound on the rhs of we expand f d d using taylor s theorem for the second argument f d d f d d f ln d d d f ln d d where d and lies in the interval d d using and the shorthand f f d can be written as acknowledgement d ln f d we thank the anonymous referee for comments which helped improve the paper r eferences or d ln f d solving the quadratic in we get d r ln d f d f using this in we get ln d h r d r ln d f d f the lhs is exactly the quantity we want to bound from below from the definition of f in the second partial derivative with respect to y can be computed f y y d y y y y d the rhs of is strictly decreasing in y we can therefore bound f as f d f f d d substituting these bounds in we conclude that for d ln d r ln d d d g barron and joseph least squares superposition codes of moderate dictionary size are reliable at rates up to capacity ieee trans inf theory vol pp feb joseph and barron fast sparse superposition codes have exponentially small error probability for r c ieee trans inf theory vol pp feb kontoyiannis rad and gitzenis sparse superposition codes for gaussian vector quantization in ieee inf theory workshop venkataramanan joseph and tatikonda lossy compression via sparse linear regression performance under encoding ieee trans inf thy vol pp june venkataramanan sarkar and tatikonda lossy compression via sparse linear regression computationally efficient encoding and decoding ieee trans inf theory vol pp june alon and spencer the probabilistic method john wiley sons wainwright maneva and martinian lossy source compression using generator matrix codes analysis and algorithms ieee trans inf theory vol no pp janson random graphs wiley and the condensation transition in random hypergraph in proc annual symp on discrete algorithms pp and vilenchik chasing the threshold in proc ieee annual symposium on foundations of computer science pp and panagiotou going after the threshold in proc annual acm symposium on theory of computing pp ingber and kochman the dispersion of lossy source coding in data compression conference pp march kostina and lossy compression in the finite blocklength regime ieee trans inf theory vol no pp marton error exponent for source coding with a fidelity criterion ieee trans inf theory vol pp mar ihara and kubo error exponent for coding of memoryless gaussian sources with a fidelity criterion ieice trans fundamentals vol den hollander large deviations vol amer mathematical society bahadur and rao on deviations of the sample mean the annals of mathematical statistics vol no
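The codeword-overlap counts that drive the dependency graph used with Suen's inequality above can be checked numerically. The short Python sketch below is illustrative and not taken from the paper: for a SPARC with L sections and M columns per section it counts how many codewords share exactly r terms (columns) with a fixed codeword, namely C(L, r)(M-1)^(L-r), and verifies the closed form M^L - (M-1)^L - 1 for the degree of each vertex in the dependency graph; the function names and the parameter values L = 8, M = 16 in the example are arbitrary choices of ours.

from math import comb

def sparc_neighbor_counts(L, M):
    """Number of codewords sharing exactly r common terms with a fixed SPARC
    codeword, for r = 0, ..., L.  A codeword picks one of M columns in each of
    L sections, so agreement in exactly r sections happens in
    C(L, r) * (M - 1)**(L - r) ways."""
    return [comb(L, r) * (M - 1) ** (L - r) for r in range(L + 1)]

def dependency_graph_degree(L, M):
    """Degree of a vertex in the dependency graph: codewords sharing at least
    one common term with a fixed codeword, excluding the codeword itself
    (the r = 0 and r = L terms are dropped)."""
    counts = sparc_neighbor_counts(L, M)
    return sum(counts[1:L])

if __name__ == "__main__":
    L, M = 8, 16
    counts = sparc_neighbor_counts(L, M)
    assert sum(counts) == M ** L                  # binomial identity over r
    deg = dependency_graph_degree(L, M)
    assert deg == M ** L - (M - 1) ** L - 1       # closed form for the degree
    print(f"L={L}, M={M}: each vertex has {deg} of the {M**L - 1} other codewords as neighbours")

The two assertions are the binomial-theorem identities implicit in the degree count quoted above; for realistic SPARC sizes the degree is a vanishing fraction of the codebook, which is what makes the correlation terms in Suen's inequality manageable.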
7
the conference paper assignment problem using order weighted averages to assign indivisible goods may jing wu nicholas renee and toby csiro and unsw sydney australia lianjingwu ibm watson research center new york usa csiro unsw and tu berlin berlin germany abstract motivated by the common academic problem of allocating papers to referees for conference reviewing we propose a novel mechanism for solving the assignment problem when we have a two sided matching problem with preferences from one side the over the other side the and both sides have capacity constraints the assignment problem is a fundamental problem in both computer science and economics with application in many areas including task and resource allocation we draw inspiration from multicriteria decision making and voting and use order weighted averages owas to propose a novel and flexible class of algorithms for the assignment problem we show an algorithm for finding an assignment in polynomial time in contrast to the of finding an egalitarian assignment inspired by this setting we observe an interesting connection between our model and the classic proportional election problem in social choice introduction assigning indivisible items to multiple agents is a fundamental problem in many fields including computer science economics and operations research algorithms for matching and assignment are used in a variety of application areas including allocating runways to airplanes residents to hospitals kidneys to patients students to schools assets to individuals in a divorce jobs to machines and tasks to cloud computing nodes understanding the properties of the underlying algorithms is an important aspect to ensuring that all participating agents are happy with their allocations and do not attempt to misrepresent their preferences a key area of study for computational social choice an area that is near to many academics hearts is the problem of allocating papers to referees for peer review the results of grant journal and conference reviewing can have significant impact on the careers of scientists ensuring that papers and proposals are reviewed by the most referees most is part of ensuring that items are treated properly and all participants support the outcome of the processes making sure these processes work for both the proposers and the reviewers is important and methods for improving peer review have been proposed and discussed in ai and broadly across the sciences there are a number of ways one can improve the quality of peer review first is to ensure that reviewers are not incentivized to misreport their reviews for personal gain along this line there has been significant interest recently in strategyproof mechanisms for peer review unfortunately the method that we discuss in this paper is not strategyproof another way is to ensure that reviewers are competent to provide judgements on the papers they are assigned the toronto paper matching system is designed to improve the process from this model a third alternative and the one we focus on in this study is ensuring that reviewers are happy with the papers they are asked to review this is fundamentally a question about the optimization objectives of the assignment functions used formally we study the conference paper assignment problem cpap which is a special of the resource allocation problem mara and propose a novel assignment the assignment in the cpap setting we have a market where on one side the have preferences over the other side the and both sides have possibly 
infinite upper and lower capacities a fundamental tension in assignment settings is the tradeoff between maximizing the social welfare also know as the utilitarian maximal assignment and the rawlsian fairness concept of maximizing the utility of the worst off agent known as the egalitarian maximal assignment these two ideas are incompatible optimization objectives and diverge in a computational sense as well computing the utilitarian assignment for additive utilities can be done in polynomial time while computing the egalitarian assignment is this perhaps could be the reason that implementers of large conference paper assignment software often opt for utilitarian assignments as is supposedly the case for easychair however it is also not clear if an egalitarian assignment is desirable for cpap contributions we establish a motivation for using owa vectors in the assignment setting and define a novel notion of allocation the assignment we give algorithm to compute an maximal assignment in polynomial time and we show that the owa objective generalizes the utilitarian objective we show that assignments satisfy a notion of pareto optimality the pairwise comparisons of the objects by the agents we implement an algorithm for assignments and perform experiments on real world conference paper assignment data preliminaries from here we will use the more general notation to describe our setting in assignment settings each agent provides their preference over the objects as a reflexive complete and transitive preference relation weak order over the set of objects i this is technically unsubstantiated as when the authors contacted easychair to understand the assignment process we were told we do not provide information on how paper assignment in easychair is implemented the information in garg may be incorrect or out of date none of the authors worked for easychair they also had no access to the easychair we do not assume that i is complete it is possible that some agents may have conflicts of interest or have no preference for a particular object this assumption is often called having unacceptable objects in the literature in many cpap settings there are a fixed number of equivalence classes into which agents are asked to place the objects we assume that the number of equivalence classes ranks of objects are given as input to the problem and agents tell us within which rank each objects belong agents also provide a decreasing utility value for each our main result can be extended to the case where the number of equivalence classes is not fixed formally the cpap problem is defined by n o u a set of n agents n an a set of m objects o om for each i n a reflexive and transitive preference relation weak order over the set of objects i divided into equivalence classes ranks and for each i n a utility vector ui is of length and assigns a decreasing utility ui k r for each k ui ui ui let ri j be the rank of object j for i and ui ri j denote the value of i for j side constraints and feasible assignments there are two practical constraints that we include in our model making our model more general than the standard mara or cpap problems studied in computer science upper and lower capacities on both the agents and objects agent capacity each agent i n has possibly all equal upper and lower bound on their capacity the number of objects they can be allocated cnmin i and cnmax i object capacity each object j o has a possibly all equal upper and lower o bound on the number of agents assigned to it co min j and cmax j 
respectively we can now define a feasible assignment a for an instance n o u for a given assignment a let a i denote the set of objects assigned to agent i in a let a j denote the set of agents assigned to object j and let denote the size number of elements of a set or vector a feasible assignment a must obey o n cnmin i i cnmax i j o co min j j cmax j we write the set of all feasible assignments for an instance as a n o u individual agent evaluation we first formalize how an individual agent evaluates their assigned objects each feasible assignment a a gives rise to a signature vector for each agent i n intuitively the signature vector is the number of objects at each rank assigned to i formally let i a a i a where i l a j a i j l for each l we assume that agents can give any utilities as input however often the utilities are restricted to be the same borda utilities in conference paper bidding or come from some fixed budget bidding fake currency as in course allocation at harvard we will omit the arguments when they are clear from context for indivisible discrete objects the lexicographic relation can be modeled by the additive utility relation by setting the agent utilities to high enough values formally if the utility for rank i j is u i u j m then the lexicographic and additive utility relations are the same no matter how many additional objects of rank j the agent receives one additional object of rank i is more preferred we now define the relations that a referee might consider between assignments a and lexicographic an agent i lexicographically prefers a to b if i a comes before i b in the lexicographic order that is there is an index l such that for all k l we have i k a i k b and i l a i l b i receives at least one more paper of a higher rank in a than in b the lexicographic relation over vectors has a long history in the assignment literature additive utility an agent i prefers assignment a to b if he has more additive utility for the objects assigned to him in a than in b formally and slightly abusing notation ui a i ui ri j i ui ri j or an alternative formulation using the dot product ui a ui i a ui i b overall assignment evaluation in the literature there are several optimization objectives defined over an assignment that an implementer may wish to consider we limit our discussion to the two classical notions below additional discussion of objectives including the imposition of various fairness criteria for the cpap setting can be found in garg et al and for the mara setting see bouveret and utilitarian social welfare maximal assignment often called the utilitarian assignment we want to maximize the total social welfare over all the agents an assignment is a utilitarian assignment if it satisfies arg max i ui ri j arg max ui i a egalitarian social welfare maximal assignment often called the egalitarian assignment we want to enforce the rawlsian notion of fairness by making sure that the worst off referee is as happy as possible maximize the utility of the least well off agent formally arg max min ui ri j arg max min ui i a i in the discrete mara and cpap setting where objects are not divisible the problem of finding an egalitarian assignment is while finding a utilitarian assignment can be done in polynomial time background and related work one and two sided matching and assignment problems have been studied in economics and computer science for over years matching and assignment have many applications including kidneys exchanges and school choice our problem is often called 
the resource allocation mara problem in computer science the papers to referees formulation of this problem has some additional side constraints common in the economics literature but not as common in computer science in the economics literature the problem is the most closely related analogue to our problem modeling matchings with capacities the conference paper assignment has been studied a number of times over the years in computer science as has defining and refining notions of fairness for the assignment vectors in allocation problems we build off the work of garg et al who extensively study the notion of fair paper assignments including leximin and assignments within the context of conference paper assignment garg et al show that for the setting we study finding an egalitarian optimal assignment and finding a leximin optimal assignment are both when there are three or more equivalence classes and polynomial time computable when there are only two they also provide an approximation algorithm for leximin optimal assignments we know that if the capacity constraints are hard values each reviewer must review x papers and each paper must receive exactly y reviews then the resulting version of capacitated assignment is answer set programming for cpap was studied by amendola et al they encode the cpap problem in asp and show that finding a solution that roughly correspond to the leximin optimal and egalitarian solutions can be done in reasonable time for large settings agents cpap also receives considerable attention in the recommender systems and machine learning communities often though this work takes the approach of attempting to infer a more refined utility or preference model in order to distinguish papers fairness and efficiency concerns are secondary a prime example of this is the toronto paper matching system designed by charlin and zemel this system attempts to increase the accuracy of the matching algorithms by having the papers express preferences over the reviewers themselves where these preferences are inferred from the contents of the papers we make use of order weighted averages owas often employed in decision making owas have recently received attention in computational social choice for voting and ranking finding a collective set of items for a group and voting with proportional representation the key difference between cpap and voting using owas in the comsoc literature is that cpap does not select a set of winners that all agents will share instead all agents are allocated a possibly disjoint set of objects assignments we now formally define owas and their use for defining assignment objectives we will discuss alternative formulations of which have been studied an order weighted average owa is a function defined for an integer k as a vector k of k numbers let x xk be a vector of k numbers and let be the rearrangement of x then we say k x in order to apply owas to our setting we need to define the weighted rank signature of an assignment let i a be defined as the sorted vector of utility that a referee gets from an assignment a formally i a sort j a i ui r j for example if a i included two objects with utility one of utility and one of utility we would have i a our inspiration for applying owas comes from a voting rule known as proportional approval voting pav in approval voting settings each agent can approve of as many candidates as they wish under the standard approval voting av method all approvals from each agent assign one point to the candidate for which they are 
cast however this can lead to a number of pathologies described by aziz et al and it intuitively does not seem fair once a candidate that you like has been selected to the winning set your next candidate selected to the winning set should seemingly count less hence in pav which is designed to be more fair a voter s first approval counts for a full point the second for the next for and on as a harmonically decreasing sequence transitioning this logic to the cpap setting we were motivated to find a way to distribute objects to agents that increases the number of agents who receive their top ranked objects this is the logic of pav once you get a candidate into the winning set you should count less until everyone else has a candidate in the winning set if we desire to directly get a rank maximal assignment completely ignoring the utilities then we know this is polynomial by a result from garg et al however if we wish to modulate between using the utilities and using only the ranks perhaps we can use owas we use the sum over all agents of as the optimization criteria for the assignment in order to cleanly define this we need to place some restrictions on our owa vectors firstly the length of needs to be at least as long as the maximum agent capacity arg cnmax i typically the literature on owas assumes that is normalized we do not enforce this convention as we wish to study the pav setting with this is formally a relaxation and we observe that whether or not the owas are normalized does not affect our computational results however we do require that our owa vector be and that each entry be for any i j i j we have i j a ssignment input given an assignment setting n o u with agent capacities cnmin i cnmax i for all i n and object capacities o co min j cmax j for all j o and a owa vector i with cnmax i question find a feasible assignment a such that a arg max i i a in our formulation the owa operator is applied to the vector of agent utilities and then we aggregate or sum these modified utilities to give the assignment objective hence the name we observe that this formulation strictly generalizes the utilitarian assignment objective if we set n we recover the utilitarian assignment one may also wish to consider applying the owa over the sorted vector of total agent utility for their allocation which one could call the version of our problem indeed this formulation of the problem has been considered before and proposed in the earliest writings on owas for decision making taking the formulation allows one to recover both the utilitarian assignment as well as the egalitarian assignment however because the formulation is a generalization of the egalitarian assignment it becomes in general we think of the vector as a kind of control knob given to the implementer of the market allowing them to apply a transform to the agent utilities this ability may be especially useful when agents are free to report their normalized utilities for ranks via bidding or other mechanisms in many settings the utility vector is controlled by the individual agents while the owa vector is under the control of the market implementers consider the following example example consider a setting with four agents n agents and four objects o for all agents let cnmin cnmax and for all objects let o co min cmax for the assignment let we get the following allocations utilitarian a a a a a a a a ui a owa a a a a a a a a ui a egalitarian a a a a a a a a ui a inspecting the results of example we observe that in the set of all utilitarian 
maximal assignments have and each being assigned to and in the set of all maximal assignments is assigned one of or while is assigned one of or while in the set of all egalitarian maximal assignments each of the agents receives one of either or along with one of or thus we observe the following observation the set of assignments returned by each of the three objective functions utilitarian egalitarian and owa can be disjoint there are instances where the set of assignments is the same as the set of egalitarian assignments but disjoint from the set of utilitarian assignments hence it is an interesting direction for future work to fully characterize assignments and discover owa vectors with nice properties pareto optimality an allocation s is more preferred by a given agent with respect to pairwise comparisons than allocation t if s is a result of replacing an item in t with a strictly more preferred item note that the pairwise comparison relation is transitive an allocation is pareto optimal with respect to pairwise comparisons if there exists no other allocation that each agent weakly prefers and at least one agent strictly prefers lemma consider an agent i and two allocations s and t of equal size then if s is at least as preferred as t by i with respect to pairwise comparison then s yields at least as much owa value as t for any owa vector no matter if it is increasing or decreasing proof note that s can be viewed as a transformation from t where each item j is replaced by some other item that is at least as preferred hence the value of the item either stays the same or increases in either case the corresponding owa multiplied with the value is the same since the owa transform is bilinear the total owa score of s is at least as much as that of t proposition the maximal assignment is pareto optimal with respect to pairwise comparison irrespective of the owa proof assume for contradiction that a maximal assignment a is not pareto optimal with respect to pairwise comparisons from lemma there exists another outcome that each agent weakly prefers and at least one agent strictly prefers but this means that in each agent gets at least as much owa score and at least one agent gets strictly more but this contradicts the fact that a is owa maximal an algorithm for assignments we give an algorithm for finding assignments using flow networks in this proof we use the most general formulation of our problem by allowing the values of the upper and lower capacities cnmin i cnmax i to vary for each agent and the o upper and lower object capacities co min j cmax j to vary for each object theorem an assignment can be found in polynomial time proof we reduce our problem to the problem of finding a minimum cost feasible flow in a graph with upper and lower capacities on the edges which is a polynomial time solvable problem in addition to being polynomial time solvable we know that the flow is integral as long as all edge capacities are integral even if we have real valued costs figures and provide a high level view of the flow network that we will construct n c cnmin cnmax n c max c s n c min c cnmin cnmax c cnmin n cnmax n o gadget gadget gadget gadget an om an o c co min cmax cmo in o cm ax o co min cmax t o c co min m cmax m fig main gadget for the reduction which enforces the agent and object capacity constraints in figure we first build a tripartite graph with two sets of nodes and one set of gadgets per agent the agent nodes one for each agent ai the agent gadgets one illustrated in figure for each 
agent ai and the object nodes one for each object o j there is an edge from the source node s to each of the agent nodes each with cost minimum flow capacity cnmin i and a maximum flow capacity cnmax i this set of edges and nodes enforces the constraint that each ai has capacity cnmin i cnmax i we also construct an edge from each object node to the sink each of these edges has a cost o a minimum capacity co min j and a maximum capacity cmax j this set of edges o o enforces the constraint that each o j has capacity cmin j cmax j we now turn to the agent gadget depicted in figure for arbitrary ai the leftmost node and the rightmost set of nodes in figure correspond to the agent nodes n and n o w w i u i o w ai ai ai ai om om o i m om ai o i u i o d w o i i o m om fig the per agent gadget note that all costs on edges are and all capacities are unless otherwise noted object nodes o in figure respectively in each agent gadget we create a tripartite subgraph with the agent node ai serving as the source and the set of object nodes o serving as the sinks we create three layers of nodes which we describe in turn from left to right first we create a set of decision nodes with labels where cnmax i d intuitively we will be multiplying the owa value by the utility for some object so we need to keep track of all the values that could result the arcs from ai to each of the nodes in this set has upper capacity minimum capacity and cost if we have the case that cnmax i d then we can set the maximum capacity of the edges to node s j j cnmax i to this enforces that each value in the owa vector can modify at most one utility value for each of the decision nodes constructed we create a set of nodes for each o j which we denote o j from each of the decision nodes we create an edge to each of the nodes created for this particular decision node om for each of these edges has maximum capacity and a cost equal to ui o j for rank and object o j o these costs are the negative cost that matching agent ai with object o j at weighted rank dk contributes to the owa objective finally we create one set of nodes one for each o j denoted ai o j from all the nodes we connect all nodes with a label of o j to the corresponding node all connect to ai with cost and maximum capacity we then connect the node to the corresponding object node in the main construction from o ai to with cost and maximum capacity this set of nodes and edges enforces that each agent can be assigned each object once we can extract an assignment from the minimum cost feasible flow by observing that paper o j is allocated to agent ai if and only if there is a unit of flow passing from the particular node ai o j to the object node o j we now argue for the correctness of our algorithm in two steps that all constraints for the assignment problem are enforced and that a minimum cost feasible flow in the constructed graph gives an assignment for we note that since the units of flow across the graph represent the assignment and we have explained how the capacity constraints on all edges enforce each of the particular constraints imposed by our definition of a feasible assignment there is a feasible flow iff the flow satisfies the constraints for observe that for each agent the nodes fill with flow in order from to as the owa vector is and the utilities are decreasing for each agent the edge costs monotonically increase from the edges associated with to the edges associated with thus for each agent the first unit of flow to this agent will use the least cost most 
negative edge must be associated with and similarly for through from the capacity constraints we know there is only one unit of flow that enters each decision node and there is only one unit of flow that can leave each node ai o j this means that each can modify only one o j and each o j selected must be unique for this agent as the decision nodes are filled in order and can only modify the value for a single object we know the total cost of the flow across the agent gadget for each ai is equal to i i hence the price of the min cost flow across all agents is equal to i i a thus the min cost flow in the graph is an assignment generalizations we observe two possible generalizations of the above construction which allow us to use this constructive proof for more general instances than the cpap first the proof above can be generalized to allow for to vary for each agent specifically observe that the decision nodes for each agent ai are independent from all other agents this means that for each agent or a class of agents we could use an owa vector ai this ability may be useful for instance when a group of agents reports the same extreme utility distribution and the organizer wishes to apply the same transform to these utilities the second generalization that we can make to the above construction is to allow each agent to be assigned to each object more than once while this ability does not make sense in the setting unless there are sub reviewers there could be other capacitated assignment settings where we may wish to assign the agents to objects multiple times if there are discrete jobs that need to be done a certain number of times but and a single agent can be assigned the same job multiple times in order to generalize the capacity constraint from for each agent i for each object j we introduce a capacity upper bound zi j which encodes the number of times that agent i can be assigned to object j taking zi j for all i and j gives us the original cpap setting in order to enforce this constraint within each agent gadget figure we add a capacity constraint equal to zi j from each edge ai o j to o j if we want a lower bound for the number of copies of o j assigned to ai we can encode this lower bound on this edge as well we can extract an assignment from the minimum cost feasible flow by observing that paper o j is allocated to agent ai zi j times if and only if there are units of flow passing from the particular node ai o j to the object node o j the argument for correctness follows exactly from the proof of theorem above corollary an assignment can be found in polynomial time even if each agent ai has a unique owa vector ai and each object o j can be assigned to each agent ai any number of times not just once experiments we now turn to the question of how good are assignments in practice we answer this question using real world data from three large international conferences from ref l ib org we focus discussion on which has agents and objects we implemented the algorithm given in section using networkx for python and lemon for however we still have a run time o v giving runtime which caused our computers to crash even with of memory this was quite disappointing as we thought the flow argument could be used to solve this problem on instances not to be deterred we still wanted to investigate the assignments we get from owa compare to the utilitarian and egalitarian assignments consequently we implemented the model as an mip in gurobi and it ran in under minute for all instances and settings 
using cores our mip is similar to the one given by skowron et al and the mara mip by bouveret et al however as we have capacity constraints and length owas our mip is more general than either to encode the problem we introduce a binary variable xa o indicating that agent a is assigned object o we introduce a real valued variable uowa a which is the utility for agent a finally we introduce ra o p for the owa matrix which notes that agent a is assigned object o at owa rank the mip is given below max ua o p ra o p o co min o xa o cmax o n a cn a x c a o max min ra o p ra o p ra o p xa o ra o p ra o ra o p ua o ra o ua o o a a o a p a o a p a p description object capacities agent capacities one object per owa rank objects have one rank assignment to owa link fcn ranks fill in increasing order agent utility must be decreasing constraints enforce the cardinality constraints on the agents objects and owa rank matrix constraint links the agent and object assignments to be positions in the owa rank matrix line enforces that the rank matrix fills from the first position to the cnmax position for each agent and finally enforces that the value of the assignment positions in the rank matrix must be decreasing we then maximize the sum over all agents of the owa objective value we found the utilitarian egalitarian and assignments for each of the real world datasets when each object must receive reviews and each agent must review objects in the data each agent sorts the papers into equivalence classes which we gave utility values we use the pav inspired decreasing harmonic owa vector to compute the assignment one of the reasons we wanted to use the assignment is to allow the market designer to enforce a more equitable distribution of papers with respect to the ranks hence our test statistic is the number of top ranked items that the average agent can expect to receive figure shows the agent counts and the cumulative distribution function cdf for the number of top ranked items the agents receive looking at the left side of the figure we see that agents receive top ranked papers under the assignment while under the utilitarian assignment only do under the utilitarian assignment agents receive more than top ranked papers consequently on average agents can expect to get top ranked papers in the assignment in the egalitarian assignment and in the utilitarian assignemnt however under the utilitarian assignment several agents receive an entire set of top ranked objects while the egalitarian assignment modulates this so that most agents only receive top ranked items in contrast the assignment is a balance between these with the most agents receiving top ranked items agents per num top ranked num of agents num top ranked cdf of num top ranked p num top ranked x utilitarian egalitarian owa utilitarian egalitarian owa num top ranked fig the count of agents receiving x top ranked papers top and the cumulative distribution function cdf bottom for the number of agents being assigned x top ranked objects for though of the agents receive between and top ranked items for the egalitarian and assignments cdf the pdf shows that the most agents receive the most top ranked items under the assignment conclusions we have proposed and provided algorithms for the novel notion of a assignment the assignment using decreasing owa vectors gives the central nizer a slider to move from utility maximizing towards a more rank maximal assignment computationally efficient package an important open question for future work is to find 
axiomatic characterizations for good owa vectors additionally the owa method and all methods for cpap that we surveyed treat objects as having positive utility it is generally the case that reviewers at a conference want to review fewer not more papers consequently it would be interesting to study cpap from the point of view of chores as they are called in the economics literature references pathak roth the new york city high school match american economic review pp ahuja magnanti network flows theory algorithms and applications prentice hall amendola dodaro leone ricca on the application of answer set programming to the conference paper assignment problem in proc of the international conference of the italian association for artificial intelligence ai ia pp aziz brill conitzer elkind freeman walsh justified representation in committee voting in proc of the aaai conference aziz gaspers gudmundsson mackenzie mattei walsh computational aspects of approval voting in proc of the aamas conference aziz lev mattei rosenschein walsh strategyproof peer selection mechanisms analyses and experiments in proc of the aaai conference bouveret chevaleyre lang fair allocation of indivisible goods in brandt conitzer endriss lang procaccia eds handbook of computational social choice chap pp cambridge university press bouveret characterizing conflicts in fair division of indivisible goods using a scale of criteria autonomous agents and systems brandt conitzer endriss lang procaccia eds handbook of computational social choice cambridge university press budish cantillon the assignment problem theory and evidence from course allocation at harvard the american economic review charlin zemel the toronto paper matching system an automated assignment system in proc of the icml workshop on peer reviewing and publishing models peer charlin zemel boutilier a framework for optimizing paper matching corr conry koren ramakrishnan recommender systems for the conference paper assignment problem in proc of the acm conference on recommender systems recsys pp demko hill equitable distribution of indivisible objects mathematical social sciences dickerson procaccia sandholm price of fairness in kidney exchange in proc of the aamas conference pp elkind ismaili extensions of the rule in proc of the adt conference pp elkind faliszewski skowron slinko properties of multiwinner voting rules in proc of the aamas conference pp fishburn lexicographic orders utilities and decision rules a survey management science garg kavitha kumar mehlhorn mestre assigning papers to referees algorithmica golden perny infinite order lorenz dominance for fair multiagent optimization in proc of the aamas conference pp goldsmith lang mattei perny voting with rank dependent scoring rules in proc of the aaai conference pp goldsmith sloan the ai onference paper assignment problem in proc of the aaai conferenceworkshop on preference handling for artificial intelligence mpref kilgour approval balloting for elections in handbook on approval voting chap springer klaus manlove rossi matching under preferences in brandt conitzer endriss lang procaccia eds handbook of computational social choice chap pp cambridge university press long wong peng ye on good and fair assignment in proc of the ieee international conference on data mining icdm pp manlove algorithmics of matching under preferences world scientific mattei walsh preflib a library for preferences http in proc of the adt conference pp merrifield saari telescope time without tears a distributed approach to peer 
review astronomy geophysics price flach computational support for academic peer review a perspective from artificial intelligence cacm rawls a theory of justice harvard university press roth sotomayor matching a study in gametheoretic modeling and analysis cambridge university press skowron faliszewski lang finding a collective set of items from proportional to group recommendation aij yager on ordered weighted averaging aggregation operators in multicriteria decisionmaking ieee transactions on systems man and cybernetics
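To make the MIP described in the experiments section concrete, the following is a minimal gurobipy sketch of an OWA-assignment model reconstructed from the prose. The function name owa_assignment, the data layout (util, owa, agent_cap, obj_cap), and the exact constraint set are our own reading of the formulation, not the authors' released code; treat it as a sketch under those assumptions.

import gurobipy as gp
from gurobipy import GRB

def owa_assignment(util, owa, agent_cap, obj_cap):
    """util[a][o]   : utility of agent a for object o
       owa          : non-increasing OWA vector, len(owa) >= max agent capacity
       agent_cap[a] : (min, max) number of objects for agent a
       obj_cap[o]   : (min, max) number of agents assigned to object o"""
    agents, objects = range(len(util)), range(len(util[0]))
    positions = range(len(owa))

    m = gp.Model("owa-assignment")
    x = m.addVars(agents, objects, vtype=GRB.BINARY, name="x")
    r = m.addVars(agents, objects, positions, vtype=GRB.BINARY, name="r")

    # capacity constraints on objects and agents
    for o in objects:
        m.addConstr(x.sum("*", o) >= obj_cap[o][0])
        m.addConstr(x.sum("*", o) <= obj_cap[o][1])
    for a in agents:
        m.addConstr(x.sum(a, "*") >= agent_cap[a][0])
        m.addConstr(x.sum(a, "*") <= agent_cap[a][1])

    for a in agents:
        # each OWA position holds at most one object, and an object assigned
        # to a occupies exactly one position (link between x and r)
        for p in positions:
            m.addConstr(r.sum(a, "*", p) <= 1)
        for o in objects:
            m.addConstr(r.sum(a, o, "*") == x[a, o])
        # positions fill from the first one onwards, and the utilities placed
        # at successive positions are non-increasing
        for p in positions[:-1]:
            m.addConstr(r.sum(a, "*", p) >= r.sum(a, "*", p + 1))
            m.addConstr(
                gp.quicksum(util[a][o] * r[a, o, p] for o in objects)
                >= gp.quicksum(util[a][o] * r[a, o, p + 1] for o in objects)
            )

    m.setObjective(
        gp.quicksum(owa[p] * util[a][o] * r[a, o, p]
                    for a in agents for o in objects for p in positions),
        GRB.MAXIMIZE,
    )
    m.optimize()
    return {(a, o) for a in agents for o in objects if x[a, o].X > 0.5}

With a non-increasing vector such as the PAV-style weights (1, 1/2, 1/3, ...) the ordering constraints force the p-th OWA position of each agent to carry its p-th best assigned object, so the objective equals the sum over agents of the OWA-weighted sorted utilities, which is the owa-utilitarian criterion defined above.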
2
dec estimating a monotone probability mass function with known flat regions dragi anevski and vladimir pastukhov centre for mathematical sciences lund university lund sweden abstract we propose a new estimator of a discrete monotone probability mass function with known flat regions we analyse its asymptotic properties and compare its performance to the grenander estimator and to the monotone rearrangement estimator introduction in this paper we introduce a new estimator of a monotone discrete distribution the problem has been studied before and in particular by who were the first to study the estimation problem and who also introduced two new estimators the problem of monotone probability mass function estimation is related to the problem of density estimation under shape constraints first studied and much earlier by grenander the literature for the continuous case problem is vaste to mention just a few results see for example in the discrete case problem some recent results are both in the discrete and continuous case problems one has derived in particular limit distribution results under the assumption of regions of constancy the true and underlying mass function however to our knowledge one has previously not used the assumption of regions of constancy in the estimation procedure in this paper we do use this information in the constructing of the estimator thus we present a maximum likelihood estimator mle under the assumption of regions of constancy of the probability mass function and derive some limit properties for new estimator pastuhov the paper is mainly motivated by the paper by jankowski and j wellner which was the first to study the problem of estimating a discrete monotone distribution to introduce the estimator suppose that p pi is a monotone decreasing probability mass p function with support with several known flat regions pi pi and pk wj where k sup i pi m is the number of flat regions of p w wm is the vector of the lengths the p numbers of points of the flat regions of the true mass function p so that m wj k for k and p m w otherwise note that if p is strictly decreasing at some point j i then w and if p is strictly decreasing on the whole support then m k and w for k and m otherwise suppose that we have observed xn random variables with probability mass function the empirical estimator of p is then given by i ni n ni n x xj i i and it is also the unrestricted maximum likelihood estimator mle argmax k y gini where k o n x k gk g gi p p and with ni xj i then for a given n ni the vector nk follows a multinomial distributions mult n p the empirical estimator is unbiased consistent and asymptotically normal see it however does not guaranty that the order restriction k is satisfied we next discuss two estimators that do satisfy the order restrictions first introduced in these are the order restricted mle and monotone rearrangement of the empirical estimator the monotone rearrangement of the empirical estimator n is defined as n rear where is the unrestricted mle in and rear v for a vector v vk is the vector the estimator n clearly satisfies the order restriction the mle under the order restriction n is defined as n argmax f k y fini where n fk f k x fi fk o gk it is equivalent to the isotonic regression of the unrestricted mle see defined by n argmin f k x i fi where the basic estimator i is the unrestricted mle in the estimator n is usually called the grenander estimator and is derived using the same algorithm as for the continuous case problem it is the vector of left 
derivatives of the leastp concave majorant lcm of the empirical distribution function fn x x xi r the estimators n and were introduced and studied in detail in the paper by jankowski and wellner in particular jankowski and wellner derived consistency of the estimators and analysed further asymptotic properties and performance estimators for distributions and different r g data sets they showed that n p and n p converge weakly to the processes y r and y g which are obtained by the following transform of a gaussian process on the space with mean zero and covariance matrix with the components pi pi pj for all periods of constancy r through s of p let y r r s rear y r s y g r s y r s g where y r s denotes the r through s elements of y cf theorem in in this paper we construct an estimator of a monotone probability mass function in the following way argmax f k x fini where n f k y fi wj fk o p and with ni xj i we note that the vector w wm constitutes the lengths of m flat regions of the true probability mass function we propose the following algorithm assume we are given a data set xn of observations from n random variables xn and the vector of the lengths of the flat regions of the true mass function wm we group the probabilities which are required to be equal at each flat region of p pi k into the single parameters j m note here that the true values are strictly decreasing and satisfy the following linear constraint wm next we find the order restricted mle of which is equivalent to the isotonic regression with weights wm argmin f w m x j wj where fm w m o n x m f wj fm and with j the unrestricted mle defined by argmax m y w j where gm w m n o x m g wj fm w cf lemma below for a proof of thepequivalence here the data are reduced to the vector with xl qj qj wj and where qj is an index of the first element in the flat region of having obtained the mle of we finally construct the mle of p by letting the probabilities in the flat region of p be equal to the corresponding values in this can be written in matrix form as where a is a k m matrix with elements all ones a qj qj j with j m qj qj is the first index of the flat region of p and wj is the length of the flat region our goal is to investigate the estimator and compare its performance with the monotone rearrangement estimator n defined in and the grenang der estimator defined in the paper is organised as follows in lemma in section we prove that the order restricted mle for the grouped parameters is given by the isotonic regression of the unrestricted mle of the grouped parameters next lemma shows consistency and asymptotic normality of the unrestricted mle for the grouped parameters after that in lemma we show that the order restricted mle for the grouped parameters is consistent and asymptotically normal finally in theorem we show consistency and derive the limit distribution for the new estimator in section we make a comparison with previous estimators in particular in lemma we show that has properly scaled asymptotically smaller risk both with as well as with hellinger loss compared to the grenander estimator the asymptotically smaller risk of compared to n follows from this result together with r the result by on the better risk performance of n with respect to the paper ends with a small simulation study illustrating the small sample r behaviour of in comparison with n and the new estimator seems to r g perform better then both and proof of characterization of estimator and asymptotic results in this section we prove the statements which 
have been made for the algorithm above and analyse the asymptotic properties of the estimator we begin with a lemma which will be used later in this section lemma assume xn and yn are sequences of random variables taking values in the metric space rk with k endowed with its borel sigma d d algebra if xn x and p xn yn then yn x proof to prove the statement of the lemma we use the portmanteau lemma in giving several equivalent characterisations of distributional convergence from the portmanteau lemma it follows that we have to prove e h yn e h x for all bounded lipschitz functions by the triangle inequality h yn e h x h xn e h x h yn e h xn where the first term h xn e h x by the portmanteau lemma next take an arbitrary then the second term in is bounded as h yn e h xn e yn h xn e yn h xn yn xn yn h xn yn xn here using the boundness of h for the first term in the right hand side of we have that e yn h xn yn xn sup h x e yn xn sup h x p yn xn where p yn xn for every since p xn yn the second term in the right hand side of can be written as e yn h xn yn xn p yn xn where is the lipschitz norm is the smallest number such that x h y x y furthermore p yn xn for every since p xn yn therefore taking the limsup of the the left hand side of equation we obtain lim h yn e h xn where is an arbitrary positive number thus h yn e h xn as n our goal is to obtain the asymptotic distribution of defined in the true probability mass function p satisfies the order restrictions in let us make a reparametrisation by grouping the probabilities which are required to be equal at each flat region of p pi k into the single parameters j m the reparametrisation transforms into fm w m o n x m wj fm f and the the estimation problem becomes argmax f f f mm f w pn where xl qj qj wj with qj an index of the first element in the flat region of lemma the solution to the ml problem defined in is given by the weighted isotonic regression problem argmin f w m x j wj where j is the unrestricted without order restrictions mle argmax g g g w where gm w m n o x m g wj proof the result is the consequence of the problem of maximising the product of several factors given relations of order and linear side condition cf pages in and pages in fact the results show that for a product of several factors the mle under the order restrictions coincides with the isotonic regression of the unrestricted ml estimates next we analyse the asymptotic behaviour of the unrestricted mle in lemma the unrestricted mle in is given by j wj n p xl qj qj wj where qj is an index of the first element in the flat region of it is consistent p and asymptotically normal d n n where is an m m matrix such that ij wii with the indicator function for i j proof the result of the lemma for a case of a finite support of p k and consequently m follows directly from the theorem in also see pages in next we consider a case of an infinite support k and obviously m let us introduce the notations zn n and z for a n and note that zn is a sequence of processes in endowed with its borel sigma algebra b first for any finite integer s the sequence of vectors zn t converges in distribution to the vector z n where ij wpii pi pj with i s and j this fact follows again from second we show that the sequence zn is tight in the metric this is shown similarly to as in in fact from lemma in it is enough to show that the two conditions sup e n x e t lim sup n are satisfied we note that for any n zn j n j n wj n where is bin n wj therefore e zn j wjj thus both conditions of lemma are satisfied third 
since the space is separable and complete from prokhorov s theorem it follows that zn is relatively compact which means that every sequence from zn contains a subsequence which converges weakly to some process z in addition if the limit processes have the same laws for every convergent subsequence then zn converges weakly to z next we show the equality of laws of the limit processes of the convergent subsequences first note that since is a separable space the borel equals the generated by open balls in then it is enough to show that the limit laws agree on finite intersections of open balls in since these constitute a to show this we note that the open balls in can be written as b z bm where bm am n m x am y l yj n n m by the finite support part of the lemma the vectors zn zn t m converge weakly to z m zt m for all finite m which implies that any m m subsequence of zn converges weakly to z m that means that with pn the law of an arbitrary but fixed subsequence of zn and p m the law of m z m pn a p m a for any p m set a we note that the limit law p m is the same for all subsequences therefore since am n is a continuity set for the gaussian limit law p m and by the continuity properties of a probability measure we obtain p b z lim p bm m lim lim p am n m lim lim p m am n m where p is the law of z thus we have shown that the limit laws p of the convergent subsequences of zn agree on the open balls b z and therefore also on the finite intersections of these open balls since the laws agree on the they are all equal to p they agree on the borel summarising the results from the previous lemmas we obtain the final limit result for the estimator lemma the estimator is consistent p and asymptotically normal d n n where ij wpii pi pj and is the indicator function for i j proof from lemma it follows that the basic estimator is consistent p from theorem in it follows that if the basic estimator is consistent then its isotonic regression is also consistent p since and both are consistent and since is an interior point of fm w gm w there is an open set fm w such that and p p as n furthermore since fm w and as long as fm w the equality holds we have that p p p and since the left hand side of this inequality goes to one as n we have shown that now let xn n and yn n then clearly p xn yn p as n applying lemma shows the statement of the lemma theorem the estimator is consistent p p and asymptotically normal d n p n where with ij wpii pi pj and a is a k m matrix whose elements are a qj qj j j m qj is the first index of the flat region of the true mass function p and wj stands for the regions length proof from lemma it follows that is consistent and asymptotically normal the estimator is given by the statements of the theorem now follow from the delta method see for example theorem in comparison of the estimators to compare the estimators we consider the metric p with k and the hellinger distance k x pi k h p p pi with k in it has been shown that the grenander estimator pg n has smaller risk than the rearrangement estimator pr for both l and h loss n the next lemma shows that the new estimator performs better than the grenander estimator pg n asymptotically in both the expected l and hellinger distance sense properly normalised lemma for the metric we have that lim e p lim e n p and for the hellinger distance with k lim e nh p lim e nh n p equalities hold if and only if the true probability mass function p is strictly monotone proof first from theorem and the continuous mapping theorem we have d p where v n 
second using the reduction of error property of isotonic regression theorem in for any n we have m x j wj m x j wj which is the same as p p where is constructed from in the same way as from since that for every m we have e p p m e p p m e p p m and lim sup e p p m lim sup e p p m from lemma for any n we have that e p m x w j pj pj wj and also e m x w j pj pj wj and using the delta method and the continuous mapping theorem it can d be shown that p which proves that the sequence p is asymptotically uniformly integrable see for example theorem in lim lim sup e p p m m which together with proves that lim lim sup e p p m m which shows the asymptotic uniform integrability of the sequence p third since the sequence p is asymptotically uniformly integrable and converges in distribution to v it also converges in expectation theorem in lim e p e m x wj wj furthermore for n proposed in we have lim e n p wj m x x q pm pwj p it is obvious that m pj q pj this finishes wj pj wj pj the proof of statement for the metric to prove the statement for hellinger distance let us assume that k is an arbitrary it is sufficient to note that nh p k x n i pi p i pi since then from the weak convergence and consistency of slutsky s theorem and the continuous mapping theorem it follows that k nh d p x pi furthermore asymptotic uniform integrability of nh p can be shown using the inequality nh p p and asymptotic integrability of p see therefore we also have convergence in expectation k m hx i x lim e nh p e wj pj pi wj finally shows that the hellinger distance of the estimator n converges in expectation m lim e nh n p wj xx pj lim e nh p q where we note the inequality from a comparison with it is clear that equality holds if and only if p is strictly monotone for a visualisation of the finite sample performance of the proposed estimator we make a small simulation study we choose the same probability mass functions as the ones chosen in in figure we present results of monte carlo simulations for samples for sample sizes n and n for the probability mass functions top p x center p x bottom p x where u k stands for the uniform discrete distribution on k the results shown are boxplots for the hellinger distance and metric with sample sizes n on the left and n on the right in fig the simulation study clearly illustrates that the newly proposed estimator has a better finite sample performance than both the grenander and the monotone rearrangement estimators in both and h distance sense acknowledgements vp s research is fully supported and da s research is partially supported by the swedish research council whose support is gratefully acknowledged h h h h h h h h h h h h h h h h h h h h h h h h figure the boxplots for norms and hellinger distances for the estimators the empirical estimator white the rearrangement estimator n grey grenander estimator dark grey and estimator shaded n n references aitchison and silvey estimation of parameters subject to restraints the annals of mathematical statistics balabdaoui durot koladjo on asymptotics of the discrete convex lse of a pmf tech barlow bartholomew bremner and brunk statistical inference under order restrictions john wiley sons bogachev i measure theory vol berlin carolan and dykstra asymptotic behavior of the grenander estimator at density flat regions the canadian journal of statistics durot huet koladjo robin estimation of a convex discrete distribution computational statistics and data analysis giguelay j estimation of a discrete probability under constraint of tech grenander u 
on the theory of mortality measurement, skand. jankowski and wellner, j. a., estimation of a discrete monotone distribution, electronic journal of statistics. prakasa, estimation of a unimodal density, sankhya series a. robertson, wright and dykstra, order restricted statistical inference, john wiley & sons, chichester. shiryaev, a., probability, springer, new york. silvey, statistical inference, penguin books, baltimore, md. van der vaart, asymptotic statistics, cambridge university press, cambridge.
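As a concrete illustration of the estimator analysed in the lemmas above — the order-restricted MLE obtained as the weighted isotonic regression of the unrestricted MLE on the flat regions — the following is a minimal numpy sketch. It assumes the flat regions (q_j, w_j) of a non-increasing true pmf are known, as in the reparametrisation above; the function names are illustrative and not taken from the paper.

```python
import numpy as np

def weighted_pava_decreasing(y, w):
    # Pool-adjacent-violators for the weighted least-squares problem
    #   argmin_f sum_j w_j (f_j - y_j)^2  subject to  f_1 >= f_2 >= ... >= f_m.
    blocks = []  # each block: [weighted mean, total weight, block length]
    for yj, wj in zip(y, w):
        blocks.append([float(yj), float(wj), 1])
        # merge while the non-increasing constraint is violated
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, n1 + n2])
    return np.concatenate([np.full(n, m) for m, _, n in blocks])

def restricted_pmf_mle(sample, regions):
    # Order-restricted MLE of a non-increasing pmf on {0, ..., k-1} when the flat
    # regions of the true pmf are known; `regions` is a list of (q_j, w_j) pairs
    # giving the first index and the length of each flat region.
    n = len(sample)
    k = sum(wj for _, wj in regions)
    counts = np.bincount(np.asarray(sample), minlength=k)
    w = np.array([wj for _, wj in regions], dtype=float)
    # unrestricted MLE of the common value on each flat region
    eta_hat = np.array([counts[q:q + wj].sum() / (n * wj) for q, wj in regions])
    # restricted MLE = weighted isotonic (non-increasing) regression of eta_hat
    eta_tilde = weighted_pava_decreasing(eta_hat, w)
    return np.repeat(eta_tilde, [wj for _, wj in regions])
```

Because pooling preserves the weighted sum, the returned vector still sums to one, so it is a valid pmf; when every flat region has length one this reduces to ordinary isotonic regression of the empirical pmf.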
10
depth prediction from sparse depth samples and a single image feb fangchang and sertac we consider the problem of dense depth prediction from a sparse set of depth measurements and a single rgb image since depth estimation from monocular images alone is inherently ambiguous and unreliable to attain a higher level of robustness and accuracy we introduce additional sparse depth samples which are either acquired with a depth sensor or computed via visual simultaneous localization and mapping slam algorithms we propose the use of a single deep regression network to learn directly from the raw data and explore the impact of number of depth samples on prediction accuracy our experiments show that compared to using only rgb images the addition of spatially random depth samples reduces the prediction error by on the indoor dataset it also boosts the percentage of reliable prediction from to on the kitti dataset we demonstrate two applications of the proposed algorithm a module in slam to convert sparse maps to dense maps and for lidars and video are publicly available i ntroduction depth sensing and estimation is of vital importance in a wide range of engineering applications such as robotics autonomous driving augmented reality ar and mapping however existing depth sensors including lidars depth sensors and stereo cameras all have their own limitations for instance the lidars are with up to cost per unit and yet provide only sparse measurements for distant objects depth sensors kinect are and with a short ranging distance finally stereo cameras require a large baseline and careful calibration for accurate triangulation which demands large amount of computation and usually fails at featureless regions because of these limitations there has always been a strong interest in depth estimation using a single camera which is small and ubiquitous in consumer electronic products however the accuracy and reliability of such methods is still far from being practical despite over a decade of research effort devoted to depth prediction including the recent improvements with deep learning approaches for instance the depth prediction methods produce an average error measured by the root mean squared error of over in indoor scenarios on the dataset such ma and karaman are with the laboratory for information decision systems massachusetts institute of technology cambridge ma usa fcma sertac https https a rgb b sparse depth c ground truth d prediction fig we develop a deep regression model to predict dense depth image from a single rgb image and a set of sparse depth samples our method significantly outperforms rgbbased and other algorithms methods perform even worse outdoors with at least meters of average error on and kitti datasets to address the potential fundamental limitations of rgbbased depth estimation we consider the utilization of sparse depth measurements along with rgb data to reconstruct depth in full resolution sparse depth measurements are readily available in many applications for instance lowresolution depth sensors a lidars provide such measurements sparse depth measurements can also be computed from the output of and odometry algorithms in this work we demonstrate the effectiveness of using sparse depth measurements in addition to the rgb images as part of the input to the system we use a single convolutional neural network to learn a deep regression model for depth image prediction our experimental results show that the addition of as few as depth samples reduces the root mean squared error by 
over on the dataset and boosts the percentage of reliable prediction from to on the more challenging kitti outdoor dataset in general our results show that the addition of a few sparse depth samples drastically improves depth reconstruction performance our quantitative results may help inform the development of a typical slam algorithm such as keeps track of hundreds of landmarks in each frame sensors for future robotic vehicles and consumer devices the main contribution of this paper is a deep regression model that takes both a sparse set of depth samples and rgb images as input and predicts a depth image the prediction accuracy of our method significantly outperforms methods including both and techniques furthermore we demonstrate in experiments that our method can be used as a module to sparse visual odometry slam algorithms to create an accurate dense point cloud in addition we show that our method can also be used in lidars to create much denser measurements ii r elated w ork depth prediction early works on depth estimation using rgb images usually relied on features and probabilistic graphical models for instance saxena et al estimated the absolute scales of different image patches and inferred the depth image using a markov random field model approaches were also exploited to estimate the depth of a query image by combining the depths of images with similar photometric content retrieved from a database recently deep learning has been successfully applied to the depth estimation problem eigen et al suggest a convolutional neural network cnn with one predicting the global coarse scale and the other refining local details eigen and fergus further incorporate other auxiliary prediction tasks into the same architecture liu et al combined a deep cnn and a continuous conditional random field and attained visually sharper transitions and local details laina et al developed a deep residual network based on the resnet and achieved higher accuracy than and unsupervised learning setups have also been explored for disparity image prediction for instance godard et al formulated disparity estimation as an image reconstruction problem where neural networks were trained to warp left images to match the right depth reconstruction from sparse samples another line of related work is depth reconstruction from sparse samples a common ground of many approaches in this area is the use of sparse representations for depth signals for instance hawe et al assumed that disparity maps were sparse on the wavelet basis and reconstructed a dense disparity image with a conjugate method liu et al combined wavelet and contourlet dictionaries for more accurate reconstruction our previous work on sparse depth sensing exploited the sparsity underlying the secondorder derivatives of depth images and outperformed both in reconstruction accuracy and speed sensor fusion a wide range of techniques attempted to improve depth prediction by fusing additional information from different sensor modalities for instance mancini et al proposed a cnn that took both rgb images and optical flow images as input to predict distance liao et al studied the use of a laser scanner mounted on a mobile ground robot to provide an additional reference depth signal as input and obtained higher accuracy than using rgb images alone compared to the approach by liao et al this work makes no assumption regarding the orientation or position of sensors nor the spatial distribution of input depth samples in the pixel space cadena et al developed a to learn 
from three input modalities including rgb depth and semantic labels in their experiments cadena et al used sparse depth on extracted fast corner features as part of the input to the system to produce a depth prediction the accuracy was comparable to using rgb alone in comparison our method predicts a depth image learns a better representation for rgb and sparse depth and attains a significantly higher accuracy iii m ethodology in this section we describe the architecture of the convolutional neural network we also discuss the depth sampling strategy the data augmentation techniques and the loss functions used for training cnn architecture we found in our experiments that many bottleneck architectures with an encoder and a decoder could result in good performance we chose the final structure based on for the sake of benchmarking because it achieved accuracy in depth prediction the network is tailed to our problem with input data of different modalities sizes and dimensions we use two different networks for kitti and this is because the kitti image is triple the size of and consequently the same architecture would require times of gpu memory exceeding the current hardware capacity the final structure is illustrated in figure the feature extraction encoding layers of the network highlighted in blue consist of a resnet followed by a convolution layer more specifically the is used for kitti and is used for the last average pooling layer and linear transformation layer of the original resnet have been removed the second component of the encoding structure the convolution layer has a kernel size of the decoding layers highlighted in yellow are composed of upsampling layers followed by a bilinear upsampling layer we use the upproj module proposed by laina et al as our upsampling layer but a deconvolution with larger kernel size can also achieve the same level of accuracy an empirical comparison of different upsampling layers is shown in section b depth sampling in this section we introduce the sampling strategy for creating the input sparse depth image from the ground truth during training the input sparse depth d is sampled randomly from the ground truth depth image on the fly in particular for any targeted number of depth samples m fig cnn architecture for and kitti datasets respectively cubes are feature maps with dimensions represented as features the encoding layers in blue consist of a resnet and a convolution the decoding layers in yellow are composed of upsampling layers upproj followed by a bilinear upsampling fixed during training we compute a bernoulli probability m n where n is the total number of valid depth pixels in then for any pixel i j i j with probability p d i j otherwise with this sampling strategy the actual number of nonzero depth pixels varies for each training sample around the expectation note that this sampling strategy is different from dropout which scales up the output by during training to compensate for deactivated neurons the purpose of our sampling strategy is to increase robustness of the network against different number of inputs and to create more training data a data augmentation technique it is worth exploring how injection of random noise and a different sampling strategy feature points would affect the performance of the network data augmentation we augment the training data in an online manner with random transformations including scale color images are scaled by a random number s and depths are divided by rotation color and depths are both rotated with a 
random degree r color jitter the brightness contrast and saturation of color images are each scaled by ki color normalization rgb is normalized through mean subtraction and division by standard deviation flips color and depths are both horizontally flipped with a chance nearest neighbor interpolation rather than the more common or interpolation is used in both scaling and rotation to avoid creating spurious sparse depth points we take the center crop from the augmented image so that the input size to the network is consistent loss function one common and default choice of loss function for regression problems is the mean squared error is sensitive to outliers in the training data since it penalizes more heavily on larger errors during our experiments we found that the loss function also yields visually undesirable boundaries instead of sharp transitions another common choice is the reversed huber denoted as berhu loss function defined as if c b e otherwise uses a parameter c computed as of the maximum absolute error over all pixels in a batch intuitively berhu acts as the mean absolute error when the error falls below c and behaves approximately as when the error exceeds in our experiments besides the aforementioned two loss functions we also tested and found that it produced slightly better results on the depth prediction problem the empirical comparison is shown in section as a result we use as our default choice throughout the paper for its simplicity and performance iv e xperiments we implement the network using torch our models are trained on the and kitti odometry datasets using a nvidia tesla gpu with memory the weights of the resnet in the encoding layers except for the first layer which has different number of input channels are initialized with models pretrained on the imagenet dataset we use a small batch size of and train for epochs the learning rate starts at and is reduced to every epochs a small weight decay of is applied for regularization a the dataset the dataset consists of rgb and depth images collected from different indoor scenes with a microsoft kinect we use the official split of data where scenes are used for training and the remaining for testing in particular for the sake of benchmarking the small labeled test dataset with images is used for evaluating the final performance as seen in previous work for training we sample spatially evenly from each raw video sequence from the training dataset generating roughly synchronized image pairs the depth values are projected onto the rgb image and with a filter using the official toolbox following the original frames of size are first downsampled to half and then producing a final size of b the kitti odometry dataset in this work we use the odometry dataset which includes both camera and lidar measurements the odometry dataset consists of sequences among them one half is used for training while the other half is for evaluation we use all images from the training sequences for training the neural network and a random subset of images from the test sequences for the final evaluation we use both left and right rgb cameras as unassociated shots the velodyne lidar measurements are projected onto the rgb images only the bottom crop is used since the lidar returns no measurement to the upper part of the images compared with even the ground truth is sparse for kitti typically with only projected measurements out of the image pixels error metrics we evaluate each method using the following metrics rmse root mean squared error rel 
mean absolute relative error percentage of predicted pixels where the relative error is within a threshold specifically o n card max card yi where yi and are respectively the ground truth and the prediction and card is the cardinality of a set a higher indicates better prediction r esults in this section we present all experimental results first we evaluate the performance of our proposed method with different loss functions and network components on the prediction accuracy in section second we compare the proposed method with methods on both the and the kitti datasets in section third in section we explore the impact of number of sparse depth samples on the performance finally in section and section we demonstrate two use cases of our proposed algorithm in creating dense maps and lidar a architecture evaluation in this section we present an empirical study on the impact of different loss functions and network components on the depth prediction accuracy the results are listed in table i problem rgb loss berhu rgbd encoder conv conv conv conv conv conv chandrop depthwise conv decoder rmse rel upconv upproj upproj upproj upproj table i evaluation of loss functions upsampling layers and the first convolution layer rgbd has an average sparse depth input of samples a comparison of loss functions is listed in row b comparison of upsampling layers is in row c comparison of the first convolution layers is in the bottom rows loss functions to compare the loss functions we use the same network architecture where the upsampling layers are simple deconvolution with a kernel denoted as berhu and loss functions are listed in the first three rows in table i for comparison as shown in the table both berhu and significantly outperform in addition produces slightly better results than berhu therefore we use as our default choice of loss function upsampling layers we perform an empirical evaluation of different upsampling layers including deconvolution with kernels of different sizes and as well as the upconv and upproj modules proposed by laina et al the results are listed from row to in table i we make several observations firstly deconvolution with a kernel outperforms the same component with only a kernel in every single metric secondly since both and upconv have a receptive field of meaning each output neuron is computed from a neighborhood of input neurons they have comparable performance thirdly with an even larger receptive field of the upproj module outperforms the others we choose to use upproj as a default choice first convolution layer since our rgbd input data comes from different sensing modalities its input channels r g b and depth have vastly different distributions and support we perform a simple analysis on the first convolution layer and explore three different options the first option is the regular spatial convolution conv the second option is depthwise separable convolution denoted as depthwise which consists of a spatial convolution performed independently on each input channel followed by a pointwise convolution across different channels with a window size of the third choice is channel dropout denoted as chandrop through which each input channel is preserved as is with some probability p and zeroed out with probability the bottom rows compare the results from the options the networks are trained using rgbd input with an average of sparse input samples depthwise and conv yield very similar results and both significantly outperform the chandrop layer since the difference is small for 
the sake of comparison consistency we will use the convolution layer for all experiments a b comparison with the in this section we compare with existing methods dataset we compare with approaches as well as the fusion approach that utilizes an additional laser scanner mounted on a ground robot the quantitative results are listed in table ii problem samples rgb sd rgbd method rmse rel roy et al eigen et al laina et al liao et al table ii comparison with on the dataset the values are those originally reported by the authors in their respective paper our first observation from row and row is that with the same network architecture we can achieve a slightly better result albeit higher rel by replacing the berhu loss function proposed in with a simple secondly by comparing problem group rgb row and problem group sd row we draw the conclusion that an extremely small set of sparse depth samples without color information already produces significantly better predictions than using rgb thirdly by comparing problem group sd and proble group rgbd row by row with the same number of samples it is clear that the color information does help improve the prediction accuracy in other words our proposed method is able to learn a suitable representation from both the rgb images and the sparse depth images finally we compare against bottom row our proposed method even using only samples outperforms with laser measurements this is because our samples are spatially uniform and thus provides more information than a line measurement a few examples of our predictions with different inputs are displayed in figure kitti dataset the kitti dataset is more challenging for depth prediction since the maximum distance is meters as opposed to only meters in the dataset a greater performance boost can be obtained from using our approach although the training and test data are not the same across different methods the scenes are similar in the sense that they all come from the same sensor setup on a car and the data were collected during driving we report the values from each work in table iii the results in the first rgb group demonstrate that rgbbased depth prediction methods fail in outdoor scenarios with a rmse of close to meters note that we b c d e fig predictions on from top to bottom a rgb images b prediction c sd prediction with and no rgb d rgbd prediction with sparse depth and rgb e ground truth depth problem samples method rgb mancini eigen et al rgbd liao et al rmse rel table iii comparison with on the kitti dataset the values are those reported in use sparsely labeled depth image projected from lidar instead of dense disparity maps computed from stereo cameras as in in other words we have a much smaller training dataset compared with an additional depth samples bring the rmse to meters a half of the rgb approach and boosts from only to our performance also compares favorably to other fusion techniques including and at the same time demands fewer samples on number of depth samples in this section we explore the relation between the prediction accuracy and the number of available depth samples we train a network for each different input size for optimal rmse m a color b sparse input depth c rgbd sparse depth rgb rgbd sparse depth rgb number of depth samples rel number of depth samples d ground truth fig example of prediction on kitti from top to bottom a rgb b sparse depth c rgbd dense prediction d ground truth depth projected from lidar performance we compare the performance for all three kinds of input data 
including rgb sd and rgbd the performance of depth prediction is independent of input sample size and is thus plotted as a horizontal line for benchmarking rmse m rgbd sparse depth rgb rel number of depth samples rgbd sparse depth rgb number of depth samples rgbd sparse depth rgb number of depth samples rgbd sparse depth rgb number of depth samples fig impact of number of depth sample on the prediction accuracy on the dataset left column lower is better right column higher is better on the dataset in figure the rgbd outperforms rgb with over depth samples and the performance gap quickly increases with the number of samples with a set of samples the rmse of rgbd decreases to around half of rgb the rel sees a larger rgbd sparse depth rgb number of depth samples rgbd sparse depth rgb number of depth samples fig impact of number of depth sample on the prediction accuracy on the kitti dataset left column lower is better right column higher is better improvement from to reduced by two thirds on one hand the rgbd approach consistently outperforms sd which indicates that the learned model is indeed able to extract information not only from the sparse samples alone but also from the colors on the other hand the performance gap between rgbd and sd shrinks as the sample size increases both approaches perform equally well when sample size goes up to which accounts for less than of the image pixels and is still a small number compared with the image size this observation indicates that the information extracted from the sparse sample set dominates the prediction when the sample size is sufficiently large and in this case the color cue becomes almost irrelevant the performance gain on the kitti dataset is almost identical to as shown in figure with samples the rmse of rgbd decreases from meters to a half meters this is the same percentage of improvement as on the dataset similarly the rel is reduced from to again the same percentage of improvement as the on both datasets the accuracy saturates as the number of depth samples increases additionally the prediction has blurry boundaries even with many depth samples see figure we believe both phenomena can be attributed to the fact that fine details are lost in bottleneck network architectures it remains further study if additional skip connections from encoders to decoders help improve performance application dense map from visual odometry features in this section we demonstrate a use case of our proposed method in sparse visual slam and visual inertial odometry vio the algorithms for slam and vio are usually sparse methods which represent the environment with sparse landmarks although sparse algorithms are robust and efficient the output map is in the form of sparse point clouds and is not useful for other applications motion planning a rgb b sparse landmarks fig application to lidar creating denser point cloud than the raw measurements from top to bottom rgb raw depth and predicted depth distant cars are almost invisible in the raw depth but are easily recognizable in the predicted depth application lidar c ground truth map d prediction fig application in sparse slam and visual inertial odometry vio to create dense point clouds from sparse landmarks a rgb b sparse landmarks c ground truth point cloud d prediction point cloud created by stitching rgbd predictions from each frame we present another demonstration of our method in superresolution of lidar measurements lidars have a low vertical angular resolution and thus generate a vertically sparse point 
cloud we use all measurements in the sparse depth image and rgb images as input to our network the average rel is as compared to when using only rgb an example is shown in figure cars are much more recognizable in the prediction than in the raw scans vi conclusion to demonstrate the effectiveness of our proposed methods we implement a simple visual odometry vo algorithm with data from one of the test scenes in the dataset for simplicity the absolute scale is derived from ground truth depth image of the first frame the landmarks produced by vo are onto the rgb image space to create a sparse depth image we use both rgb and sparse depth images as input for prediction only pixels within a trusted region which we define as the convex hull on the pixel space formed by the input sparse depth samples are preserved since they are well constrained and thus more reliable dense point clouds are then reconstructed from these reliable predictions and are stitched together using the trajectory estimation from vio we introduced a new depth prediction method for predicting dense depth images from both rgb images and sparse depth images which is well suited for sensor fusion and sparse slam we demonstrated that this method significantly outperforms depth prediction using only rgb images and other existing fusion techniques this method can be used as a module in sparse slam and visual inertial odometry algorithms as well as in superresolution of lidar measurements we believe that this new method opens up an important avenue for research into rgbd learning and the more general perception problems which might benefit substantially from sparse depth samples the results are displayed in figure the prediction map resembles closely to the ground truth map and is much denser than the sparse point cloud from vo the major difference between our prediction and the ground truth is that the prediction map has few points on the white wall where no feature is extracted or tracked by the vo as a result pixels corresponding to the white walls fall outside the trusted region and are thus removed this work was supported in part by the office of naval research onr through the onr yip program we also gratefully acknowledge the support of nvidia corporation with the donation of the used for this research acknowledgment r eferences liu shen and lin deep convolutional neural fields for depth estimation from a single image in proceedings of the ieee conference on computer vision and pattern recognition pp eigen and fergus predicting depth surface normals and semantic labels with a common convolutional architecture in proceedings of the ieee international conference on computer vision pp laina rupprecht et deeper depth prediction with fully convolutional residual networks in vision fourth international conference on ieee pp silberman hoiem et indoor segmentation and support inference from rgbd images computer pp saxena sun and ng learning scene structure from a single still image ieee transactions on pattern analysis and machine intelligence vol no pp geiger lenz and urtasun are we ready for autonomous driving the kitti vision benchmark suite in conference on computer vision and pattern recognition cvpr montiel and tardos orbslam a versatile and accurate monocular slam system ieee transactions on robotics vol no pp saxena chung and ng learning depth from single monocular images in advances in neural information processing systems pp karsch liu and kang depth extraction from video using sampling in european conference on computer 
vision springer pp konrad wang and ishwar image conversion by learning depth from examples in computer vision and pattern recognition workshops cvprw ieee computer society conference on ieee pp karsch liu and kang depthtransfer depth extraction from video using sampling pattern analysis and machine intelligence ieee transactions on no pp liu salzmann and x he depth estimation from a single image in proceedings of the ieee conference on computer vision and pattern recognition pp eigen puhrsch and fergus depth map prediction from a single image using a deep network in advances in neural information processing systems pp he zhang et deep residual learning for image recognition in proceedings of the ieee conference on computer vision and pattern recognition pp kuznietsov and leibe semisupervised deep learning for monocular depth map prediction arxiv preprint zhou brown et unsupervised learning of depth and from video arxiv preprint garg carneiro and reid unsupervised cnn for single view depth estimation geometry to the rescue in european conference on computer vision springer pp godard mac aodha and brostow unsupervised monocular depth estimation with consistency arxiv preprint hawe kleinsteuber and diepold dense disparity maps from sparse disparity measurements in computer vision iccv ieee international conference on ieee pp liu chan and nguyen depth reconstruction from sparse samples representation algorithm and sampling ieee transactions on image processing vol no pp ma carlone et sparse sensing for resourceconstrained depth reconstruction in intelligent robots and systems iros international conference on ieee pp sparse depth sensing for robots arxiv preprint mancini costante et fast robust monocular depth estimation for obstacle detection with fully convolutional networks in intelligent robots and systems iros international conference on ieee pp liao huang et parse geometry from a line monocular depth estimation with partial laser observation in robotics and automation icra ieee international conference on ieee pp cadena dick and reid as joint estimators for robotics scene in robotics science and systems srivastava hinton et dropout a simple way to prevent neural networks from journal of machine learning research vol no pp a owen a robust hybrid of lasso and ridge regression contemporary mathematics vol pp collobert kavukcuoglu and farabet a environment for machine learning in biglearn nips workshop russakovsky deng et imagenet large scale visual recognition challenge international journal of computer vision vol no pp roy and todorovic monocular depth estimation using neural regression forest in proceedings of the ieee conference on computer vision and pattern recognition pp
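To make the input-sampling strategy, the berHu loss, and the error metrics described above concrete, the following is a minimal numpy sketch. The function names are illustrative; the delta thresholds (1.25, 1.25^2, 1.25^3) and the 0.2 cutoff fraction in the berHu loss are the conventions commonly used in the depth-estimation literature and are exposed as parameters rather than asserted as the exact settings of this paper.

```python
import numpy as np

def sample_sparse_depth(depth, m, rng=None):
    # Bernoulli sampling of the sparse input depth from the ground truth: each valid
    # (non-zero) pixel is kept independently with probability p = m / n, where n is
    # the number of valid pixels, so the expected number of retained samples is m.
    rng = np.random.default_rng() if rng is None else rng
    valid = depth > 0
    n = int(valid.sum())
    p = min(1.0, m / max(n, 1))
    keep = valid & (rng.random(depth.shape) < p)
    return np.where(keep, depth, 0.0)

def berhu_loss(pred, gt, frac=0.2):
    # Reverse Huber (berHu) loss: absolute error below the cutoff c and a scaled
    # squared error above it; c is a fraction of the largest absolute error in the
    # batch (frac = 0.2 is an illustrative default, not taken from the text).
    e = np.abs(pred - gt)
    c = frac * e.max() + 1e-12          # small constant avoids division by zero
    return float(np.where(e <= c, e, (e ** 2 + c ** 2) / (2 * c)).mean())

def depth_metrics(pred, gt, thresholds=(1.25, 1.25 ** 2, 1.25 ** 3)):
    # RMSE, mean absolute relative error, and delta accuracies over valid pixels.
    valid = gt > 0
    y, yhat = gt[valid], pred[valid]
    rmse = float(np.sqrt(np.mean((yhat - y) ** 2)))
    rel = float(np.mean(np.abs(yhat - y) / y))
    ratio = np.maximum(yhat / y, y / yhat)
    deltas = [float(np.mean(ratio < t)) for t in thresholds]
    return rmse, rel, deltas
```

During training the sparse input is resampled on the fly for every example, which is what lets the sampling double as a data augmentation technique as noted above.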
2
entropy rate estimation for markov chains with large state space feb yanjun jiantao tsachy yihong tiancheng february abstract estimating the entropy based on data is one of the prototypical problems in distribution property testing and estimation for estimating the shannon entropy of a distribution on s elements with independent samples showed that the sample complexity is sublinear in s and showed that consistent estimation of shannon entropy is possible if and only if the sample size n far exceeds logs s in this paper we consider the problem of estimating the entropy rate of a stationary reversible markov chain with s states from a sample path of n observations we show that a as long as the markov chain mixes not too slowly the relaxation time is at most o s consistent estimation is achievable when n log b as long as the markov chain has some slight dependency the relaxation time is at least consistent estimation is impossible when n log s under both assumptions the optimal estimation accuracy is shown to be n log s in parison the empirical entropy rate requires at least s samples to be consistent even when the markov chain is memoryless in addition to synthetic experiments we also apply the estimators that achieve the optimal sample complexity to estimate the entropy rate of the english language in the penn treebank and the google one billion words corpora which provides a natural benchmark for language modeling and relates it directly to the widely used perplexity measure introduction consider a stationary stochastic process xt where each xt takes values in a finite alphabet x of size the shannon entropy rate or simply entropy rate of this process is defined as lim where h x n x h x n n px n xn ln xn n px n xn yanjun han jiantao jiao lee tsachy weissman are with the department of electrical engineering stanford university email jiantao yjhan czlee tsachy yihong wu is with the department of statistics and data science yale university email tiancheng yu is with the department of electronic engineering tsinghua university email is the shannon entropy or entropy of the random vector x n xn and px n xn p xn xn is the joint probability mass function since the entropy of a random variable depends only on its distribution we also refer to the entropy h p of a discrete distribution p ps defined as s x h p pi ln pi the shannon entropy rate is the fundamental limit of the expected logarithmic loss when predicting the next symbol given the all past symbols it is also the fundamental limit of data compressing for stationary stochastic processes in terms of the average number of bits required to represent each symbol estimating the entropy rate of a stochastic process is a fundamental problem in information theory statistics and machine learning and it has diverse for example there exists extensive literature on entropy rate estimation it is known from data compression theory that the normalized codelength of any universal code is a consistent estimator for the entropy rate as the number of samples approaches infinity this observation has inspired a large variety of entropy rate estimators see however most of this work has been in the asymptotic regime attention to analysis has only been more recent and to date almost only for data there has been little work on the performance of an entropy rate estimator for dependent is where the alphabet size is large making asymptotically large datasets infeasible and the stochastic process has memory an understanding of this regime is increasingly important in 
modern machine learning applications for example there have been substantial recent advances in probabilistic language models which are used in applications such as machine translation and search query completion the entropy rate of say the english language represents a fundamental limit on the efficacy of a language model measured by its perplexity so it is of interest to language model researchers to obtain an accurate estimate of the entropy rate which sheds light on how much room there is left for improvement however since the alphabet size here is the size of the entire english lexicon and google s one billion words corpus includes about two million unique it is unrealistic to assume the asymptotics especially when dealing with combinations of words bigrams trigrams etc it is therefore of significant practical importance to investigate the optimal entropy rate estimator with limited sample size in the context of analysis for samples paninski first showed that the shannon entropy can be consistently estimated with o s samples when the alphabet size s approaches infinity the seminal work of showed that when estimating the entropy rate of an source n logs s samples are necessary and sufficient for consistency the entropy estimators proposed in and refined in based on linear programming have not been shown to achieve the minimax estimation rate another estimator proposed by the same authors has been shown to achieve the minimax rate in the restrictive regime of lnss n sln s using the idea of best polynomial approximation the independent work of s log s and obtained estimators that achieve the minimax error n log s n for entropy estimation the intuition for the logs s sample complexity in the independent case can be interpreted as follows as opposed to estimating the entire distribution which has s this exceeds the estimated vocabulary of the english language partly because different forms of a word count as different words in language models and partly because of edge cases in tokenization the automatic splitting of text into words parameters and requires s samples estimating the scalar functional entropy can be done with a logarithmic factor reduction of samples for markov chains which are characterized by the transition matrix consisting of s s free parameters it is reasonable to expect an log s sample complexity indeed we will show that this is correct provided the mixing time is not too slow estimating the entropy rate of a markov chain falls in the general area of property testing and estimation with dependent data the prior work provided a analysis of estimation of entropy rate in markov chains and showed that it is necessary to assume certain assumptions on the mixing time for otherwise the entropy rate is impossible to estimate there has been some progress in related questions of estimating the mixing time from sample path and estimating the transition matrix the current paper makes contribution to this growing field in particular the main results of this paper are highlighted as follows we provide a tight analysis of the sample complexity of the empirical entropy rate for markov chains when the mixing time is not too large this refines results in and shows that when mixing is not too slow the sample complexity of the empirical entropy does not depend on the mixing time we obtain a characterization of the optimal sample complexity for estimating the entropy rate of a stationary reversible markov chain in terms of the sample size state space size and mixing time and partially 
resolve one of the open questions raised in in particular we show that when the mixing is neither too fast nor too slow the sample complexity does not depend on mixing time in this regime the performance of the optimal estimator with n samples is essentially that of the empirical entropy rate with n log n samples as opposed to the lower bound for estimating the mixing time in obtained by applying le cam s method to two markov chains which produce are statistically indistinguishable the minimax lower bound in the current paper is much more involved which in addition to a series of reductions by means of simulation relies on constructing two stationary reversible markov chains with random transition matrices so that the marginal distributions of the sample paths are statistically indistinguishable we construct estimators that are efficiently computable and achieve the minimax sample complexity the key step is to connect the entropy rate estimation problem to shannon entropy estimation on large alphabets with samples the analysis uses an alternative probabilistic descriptions of markov chains by billingsley and concentration inequalities for markov chains we compare the empirical performance of various estimators for entropy rate on a variety of synthetic data sets and demonstrate the superior performances of the informationtheoretically optimal estimators compared to the empirical entropy rate we apply the optimal estimators to estimate the entropy rate of the penn treebank ptb and the google one billion words datasets we show that even only with estimates using up to there may exist language models that achieve better perplexity than the current the rest of the paper is organized as follows after setting up preliminary definitions in section we present a summary of our main results in section we analyze the empirical entropy rate and prove the achievability theorems for both the empirical entropy rate and our entropy rate estimator in section the lower bound on the sample complexity for the empirical entropy rate is proven in section and the minimax lower bound is proven in section section provides empirical results on the performance of various entropy rate estimators on synthetic data and section applies the estimators to estimate the entropy rate of the penn treebank ptb and the google one billion words datasets the auxiliary lemmas used throughout the paper are collected in section and the proofs of the lemmas are presented in section preliminaries we denote by multi n p the multinomial distribution with n number of trials and event probability vector p consider a markov chain on a finite state space x s with transition kernel t we denote the entries of t as tij that is tij for i j x let ti denote the ith row of t which is the conditional law of given i throughout the paper we focus on markov chains since any markov chain can be converted to a one by extending the state space we say that a markov chain is stationary if the distribution of denoted by satisfies s x tij for all j x we say that a markov chain is reversible if it satisfies the detailed balance equations tij tji for all i j x if a markov chain is reversible the left spectrum of its transition matrix t contains s real eigenvalues which we denote as we define the spectral gap of a reversible markov chain as t the absolute spectral gap of t is defined as t max and it clearly follows that for any reversible markov chain t t the relaxation time of a reversible markov chain is defined as t t the relaxation time of a reversible 
markov chain approximately captures its mixing time which roughly speaking is the smallest n for which the marginal distribution of xn is close to the markov chain s stationary distribution we refer to for a survey in fact always and when the markov chain has no memory intuitively speaking the shorter the relaxation time the faster the markov chain mixes that is the shorter its memory or the sooner evolutions of the markov chain from different initial states begin to look similar we consider the following observation model we observe a sample path of a stationary finitestate markov chain xn since the limit in exists for any stationary process we assume that the markov chain starts at rather than in order to simplify some formulae the entropy rate of any stationary markov chain is always without any additional assumptions on mixing time irreducibility or aperiodicity furthermore for markov chains the shannon entropy rate in reduces to s x s x tij ln tij h h where is the stationary distribution of this markov chain we denote by s the set of all discrete distributions with alphabet size s the s probability simplex and by s the set of all markov chain transition matrices on a state space of size let rev s s be the set of transition matrices of all stationary reversible markov chains on a state space of size we define the following class of stationary markov reversible chains whose relaxation time is at most rev s t rev s t x i xj xj x i j n the goal is to characterize the sample complexity of entropy rate estimation as a function of s and the estimation accuracy throughout this paper sometimes we slightly abuse the notation by using s s rev s rev s to also denote the spaces of probability measures of the corresponding stochastic processes generated according to those parameter sets given a sample path x xn let denote the empirical distribution of states with xj i n and the subsequence of x containing elements following any occurrence of the state i as since the entropy rate of a markov chain can be written as s x h i a natural idea is to use to estimate and an appropriate shannon entropy estimator to estimate h i the empirical entropy rate is defined as s x x i where y computes the shannon entropy of the empirical distribution of its argument y ym in fact can also be interpreted as the maximum likelihood estimate of lemma the entropy rate estimator proposed in this paper differs from in that we replace the estimator with a minimax estimator for the shannon entropy from samples that is our entropy rate estimator is s x x i where is defined in where is any minimax shannon entropy estimator designed for data such as those found in the main property needs to satisfy is a concentration property which is presented in lemma main results as motivated by the results in the independent case one might expect that bias dominates when n is close to s for the empirical entropy and that the total error including bias and variance should not depend on the mixing properties when the mixing is not too slow this intuition is supported by the following theorem theorem suppose xn is a sample path from a stationary reversible markov chain with spectral gap if s n s and ln s n ln sn s there exists some constant c independent of n s such that the entropy rate estimator as s p c n ln s under the same conditions there exists some constant c independent of n s such that the empirical entropy rate in satisfies as s p c n remark theorem shows that when the number of samples is not too large n s and the mixing is not too 
slow ln s n ln sn s it suffices to take n lns s for the estimator to achieve a vanishing error and n s for the empirical entropy rate theorem improves over in the analysis of the empirical entropy rate in the sense that unlike the error term our dominating term o sn does not depend on the mixing time we emphasize that o the constraint that n being not be too large is not restrictive the theory of entropy estimation with data shows that the bias only dominates when n s for the optimal estimator and n s for the empirical entropy in theorem we are only characterizing the regime where the bias dominates the next result shows that the bias of the empirical entropy rate is unless n s even when the data are independent theorem suppose is the empirical entropy rate defined in and denotes the true entropy rate let n s if xn are mutually independent and uniformly distributed then e ln the following corollary is immediate corollary there exists a universal constant c such that when n cs the absolute value of the bias of is bounded away from zero even if the markov chain is memoryless the next theorem presents a minimax lower bound for entropy rate estimation which quantifies the limit that any estimation scheme can not beat the asymptotic results in this section are interpreted by parameterizing n ns and and s subject to the conditions of each theorem theorem for n ln s ln n lim inf inf s ln s sup q s s n we have n ln s p t rev s here are universal constants from theorem the following corollary which follows from theorem and presents the critical scaling that determines whether consistent estimation of the entropy rate is possible corollary if s s there exists an estimator which estimates the entropy rate with a uniformly vanishing error over markov chains rev s if and only if n ln s to conclude this section we summarize our result in terms of the sample complexity for estimating the entropy rate within a few bits classified according to the relaxation time this is the case and the sample complexity is lnss in this narrow regime the sample complexity is at most o lns s and no matching lower bound is known s s the sample complexity is lns s s the sample complexity is lns s and no matching upper bound is known in this case the chain mixes very slowly and it is likely that the variance will dominate upper bound analysis proof of theorem the performance of and in terms of shannon entropy estimation is collected in the following lemma lemma suppose and one observes n samples xn p then there exists an entropy estimator xn ln s such that for any t s p h p t n ln s exp where are universal constants and h p is the shannon entropy defined in moreover the empirical entropy xn ln s satisfies for any t p h p t s n exp consequently for any p s h p n ln s s p s h p n s and proof the part pertaining to the concentration of follows from the part pertaining to the empirical entropy follows from proposition eqn alternative probabilistic description of markov chains one key step in the analysis is to identify a event that ensures the accuracy of the estimator we refer to such an event as a good event to this end we adopt the following view on the generation of a markov chain the process can be viewed as having been generated in the following fashion consider an independent collection of random variables and win i s n such that i pwin j tij imagine the variables win set out in the following array wsn first is sampled if i then the first variable in the ith row of the array is sampled and the result assigned by definition 
to if j then the first variable in the jth row is sampled unless j i in which case the second variable is sampled in any case the result of the sampling is by definition the next variable sampled is the first one in row which has not yet been sampled this process thus continues it follows from the definition that the joint distribution of xn sampled from this model is p xk xk k n p mk xk k n where mk is the number of elements among that are equal to xk due to the independence assumptions we have p xk xk k n p p p mn an taj y after collecting xn from the model we assume that the last variable sampled from row i is wini it is clear that ni n s x ni n and the subsequence of the sample path defined in is simply x i wini analysis of and next we define two events that ensure the proposed entropy rate estimator and the empirical entropy rate is accurate respectively definition good event in estimation let and be some universal constants we take for every i i s define the event ei max ln n s ln n for every i s such that define the event hi as s wim hi m ln s for all m such that from lemma q ln n m q ln n s where are finally define the good event as the intersection of all the events above gopt s ei n i ln hi analogously we define the good event gemp for the empirical entropy rate in a similar fashion with replaced by s wim hi m s the following lemma shows that the good events defined in definition indeed occur with high probability lemma both gopt and gemp in definition occur with probability at least where s c n n now we present our main upper bound which implies theorem as a corollary theorem suppose xn comes from a stationary reversible markov chain with spectral gap then with probability at least the value in the entropy rate estimator in satisfies s s s ln s s ln s ln n s ln n s n ln s n n where are constants in definition and is introduced in we take similarly for the empirical entropy rate in we have n s n s ln s s ln s ln n n s s ln n s with probability at least the value in proof we write s x s x h i where hi h i x i wini write s x hi z s x z next we bound the two terms separately under the condition that the good event gopt in definition occurs q ln n note that the function n is an increasing function when thus we have whenever s ln n q s m ln s let m note that for each i s n o hi ni ln n s ln n which is decreasing in let i max n hi ni max ln n s n ln n ln n wini hi ni i ni ni i i q ln n o o wim hi m the key observation is that for each fixed m wim are as ti taking the intersection over i s we have n o hi ni i s gopt note that effectively we are taking a union over the value of ni instead of conditioning in fact conditioned on ni m wim are no longer as ti therefore on the event gopt we have s x hi x n i ln x hi x n i ln s ln s n i ln x s hi ln s n i ln n ln s s n s ln s s ln s ln n where the last step follows from and the fact that on the event gopt we have s x ln s s x max ln n s p s ln n s for any as for s ln s ln n s s ln n s combining and and using lemma completes the proof of the proof of follows entirely analogously with gopt replaced by gemp impossibility results lower bound on the empirical entropy rate we first prove theorem which quantifies the performance limit of the empirical entropy rate lemma in section shows that min p s ln n where s denotes the set of all markov chain transition matrices with state space x of size since ln ep n we know e q we specify the true distribution to be the product distribution p xi and it suffices to lower bound ep ln min ln n n p s n h i h p ep 
h h h ep h h ep h where is the empirical distribution of the counts xi i n and is the marginal distribution of it was shown in that for any h ep h ln n now choosing to be the uniform distribution we have ep ln min ln n n p s n ln s ln n ln ln n where we have used the fact that the uniform distribution on s elements has entropy ln s and it maximizes the entropy among all distribution supported on s elements minimax lower bound we use a sequence of reductions to prove the lower bound for markov chains specifically we introduce two auxiliary models namely the independent multinomial and independent poisson model and show that the sample complexity of the markov chain model is lower bounded by that of the independent multinomial model lemma which is further lower bounded by that of the independent poisson model lemma finally theorem follows from the lower bound for the independent poisson model in theorem to be precise we use the notation pmc pim pip to denote the probability measure corresponding to the three models respectively reduction from markov chain to independent multinomial definition independent multinomial model given a stationary reversible markov chain with transition matrix t tij rev s stationary distribution i s and absolute spectral gap fix an integer n under the independent multinomial model the statistician observes and the following arrays of independent random variables wsms constant and within the ith row the random variables n q ln n wimi where the number of observations in the ith row is mi max ln n o for some ti equivalently the observations can be summarized into the following sufficient statistic s s matrix c cij where each row is independently distributed multi mi ti hence the name of independent multinomial model the following lemma relates the independent multinomial model to the markov chain model lemma if there exists an estimator under the markov chain model with parameter n such that sup t rev s pmc then there exists another estimator under the independent multinomial model with parameter n such that sup t rev s where pim and is the constant in definition reduction from independent multinomial to independent poisson we introduce the independent poisson model below which is parametrized by an s s symmetric matrix r an integer n and a parameter definition independent poisson model given an s s symmetric matrix r rij with rij and a parameter under the independent poisson model we observe r and an s s matrix c cij with independent entries distributed as cij poi where r ri r ri s x rij r s x ri for each symmetric matrix r we can define a transition matrix t t r by normalizing the rows tij rij s x rij thanks to the symmetry of r t is the transition matrix of a reversible markov chain with a stationary distribution r indeed the detailed balance equation is satisfied rij s x rij rji s x rji where we used the fact that rij rji upon observing the poisson matrix c the functional to be estimated is the entropy rate of the normalized transition matrix t t r which can be simplified as follows rij p s t r ln rij x x ri rij ln r rij ps rij rij s x x ri ln ri rij ln r rij given and q define the following collection of symmetric matrices r s q r t r x i j rij q where t t r is the normalized transition matrix defined in mini here the parameters and q ensure there exist sufficiently many observations to simulate the independent multinomial model this is made precise in the next lemma lemma if there exists an estimator for the independent multinomial model with parameter n such 
that sup t rev s pim then there exists another estimator for the independent poisson model with parameter such that sup s q provided q ln n pip t r where is the constant in definition minimax lower bounds for the independent poisson model now our task is reduced to lower bounding the sample complexity of the independent poisson model the general strategy is the method of fuzzy hypotheses which is an extension of lecam s methods the following version is adapted from theorem see also lemma lemma let z be a random variable distributed according to for some let be a pair of probability measures not necessarily supported on let z be an arbitrary estimator of the functional f based on the observation z suppose there exist r such that f f then inf sup f r tv where fi is the marginal distributions of z induced by the prior for i and tv is the total variation distance between distributions and to apply this method for the independent poisson model the parameter is the s s symmetric matrix r the function to be estimated is t r the observation sufficient statistic for r is c cij cji i j i j s cii i s the goal is to construct two symmetric random matrices whose distributions serve as the priors such that a they are sufficiently concentrated near the desired parameter space r s q for properly chosen parameters q b the entropy rates have different values c the induced marginal laws of c are statistically inseparable to this end we need the following results cf proof of proposition lemma let x x ln x x let c d and be some absolute constants for any there exist random variables u u supported on such that e u e u d j e u j e u e u e u lemma lemma let and be random variables taking values in m if e e j l then tv e poi e poi l r where e poi v poi pv denotes the poisson mixture with respect to the distribution of a positive random variable v now we are ready to define the priors for the independent poisson model for simplicity we assume the cardinality of the state space is s and introduce a new state definition prior construction suppose n ln s set s n ln s s ln s d d where and d is the constant in lemma recall the random variables u u are introduced in lemma we use a construction that is akin to that studied in define s s symmetric random matrices u uij and where uij i j s be copies of u and i j s be copies of u respectively let b b a a u r a a where b let and be the laws of r and respectively the parameters q will be chosen later and we set in the independent poisson model as in lemma the construction of this pair of priors achieves the following three goals a statistical indistinguishablility note that the distributions of the first row and column of r and are identical hence the sufficient statistics are and c cij cji i j i j s cii i s denote its the marginal distribution as fi under the prior for i the following lemma shows that the distributions of the sufficient statistic are indistinguishable lemma for n ln s we have tv o as s b functional value separation under the two priors the corresponding entropy rates of the independent poisson model differ by a constant factor of nsln s here we explain the intuition in view of for x ln x we have where ri ps rij s s x x t r rij ri r i and r ps i rij similarly s s x x rij t r i we will show that both r and r are close to their common mean s s furthermore ri and also concentrate on their common mean thus in view of lemma we have t r t u e u n ln s the precise statement is summarized in the following lemma lemma assume that n lns s and ln n some r such that as s 
s s there exist universal constants and p t r n ln s o p t r n ln s o c concentration on parameter space although the random matrices r and may take values outside the desired space r s q we show that most of the mass is concentrated on this set with appropriately chosen parameters the following lemma which is the core argument of the lower bound makes this statement precise ln s lemma assume that n there exist universal constants c such that as s p r r s q o p r s q o where q s s n s and q n ln s fitting lemma lemma and lemma into the main lemma the following minimax lower bound holds for the independent poisson model theorem there exist universal constants such that if n q ln s ln n s ln s n lim inf inf where s q sup pip s q n ln s s ln s n ln s proof of theorem for the choice of and nsln s in lemma a combination of lemma and lemma gives p r r s q t r n ln s o as s so that o similarly o by lemma we have tv o now theorem follows from lemma directly proof of theorem now we combine the previous results to prove theorem firstly theorem shows that as s inf sup s q where s q n ln s ln s ensures q inf n ln s sup t rev s o moreover since a larger results in a smaller set of parameters for all models we may always assume that pip n ln s pim ln n q s s n for this choice of the assumption and thus lemma implies n ln s o o o o n finally an application of lemma gives inf sup t rev s pmc n ln s completing the proof of theorem experiments the entropy rate estimator we proposed in this paper that achieves the minimax rates can be viewed as a conditional approach in other words we apply a shannon entropy estimator for observations corresponding to each state and then average the estimates using the empirical frequency of the states more generally for any estimator of the shannon entropy from data the conditional approach follows the idea of s x x i where is defined in we list several choices of the empirical entropy estimator which simply evaluates the shannon entropy of the empirical distribution of the input sequence it was shown not to achieve the minimax rates in shannon entropy estimation and also not to achieve the optimal sample complexity in estimating the entropy rate in theorem and corollary the jvhw estimator which is based on best polynomial approximation and proved to be minimax in the independent work is based on similar ideas the vv estimator which is based on linear programming and proved to achieve the lnss phase transition for shannon entropy in the profile maximum likelihood estimator pml which is proved to achieve the lnss phase transition in however there does not exist an efficient algorithm to even approximately compute the pml with provably ganrantees there is another estimator the lz entropy rate estimator which does not lie in the category of conditional approaches the lz estimator estimates the entropy through compression it is well known that for a universal lossless compression scheme its codelength per symbol would approach the shannon entropy rate as length of the sample path grows to infinity specifically for the following random matching length defined by lni max l n i xi xj it is shown in that for stationary and ergodic markov chains lni ln n lim we use alphabet size s and vary the sample size n from to to demonstrate how the performance varies as the sample size increases we compare the performance of the estimators by measuring the root mean square error rmse in the following four different scenarios via monte carlo simulations uniform the eigenvalue of the transition 
matrix is uniformly distributed except for the largest one, and the transition matrix is generated using the method in ; here we use spectral gap . Zipf: the transition probability $T_{ij}$ . Geometric: the transition probability $T_{ij}$ . Memoryless: the transition matrix consists of identical rows. In all four cases the JVHW estimator outperforms the empirical entropy rate. The results of VV and LZ are not included due to their considerably longer running time: for example, in one configuration of s and n, when we try to estimate the entropy rate from a single trajectory of the Markov chain, the empirical entropy and the JVHW estimator were evaluated in less than seconds, while the evaluation of the LZ estimator and the conditional VV method did not terminate after a . The main reason for the slowness of the VV methods in the context of Markov chains is that for each context it needs to call the original VV entropy estimator ( times in total in the above experiment), each of which needs to solve a linear program.

[Figure: entropy rate estimation error (RMSE) versus number of samples for the empirical estimator and the conditional JVHW estimator; panels (a) uniform, (b) Zipf, (c) geometric, (d) memoryless. Caption: comparison of the performances of the empirical entropy rate and JVHW estimator in different parameter configurations.] For LZ we use the MATLAB implementation in https , for VV the MATLAB implementation in http ; we use cores of a server with CPU frequency .

Application: fundamental limits of language modeling. In this section we apply entropy rate estimators to estimate the fundamental limits of language modeling. There has been a lot of recent interest and progress in developing probabilistic models of natural languages for applications such as machine translation and search query completion, mostly using recurrent neural networks. A language model specifies the joint probability distribution of a sequence of words, $Q_{X^n}(x_1, \ldots, x_n)$. It is common to use a k-th order Markov assumption to train these models, using sequences of k words (known, sometimes with Latin prefixes, as unigrams, bigrams, etc.), with values of k of up to . To measure the efficacy of a model $Q_{X^n}$, researchers commonly use a metric called perplexity, which is the normalized inverse probability of a test set $x_1^n = (x_1, \ldots, x_n)$ under the model:
$$\mathrm{perplexity}_Q(x_1^n) = \left( \frac{1}{Q_{X^n}(x_1^n)} \right)^{1/n}.$$
The logarithm of perplexity, also known as the cross-entropy rate, can be seen as a logarithmic loss function:
$$\log_2 \mathrm{perplexity}_Q(x_1^n) = \frac{1}{n} \log_2 \frac{1}{Q_{X^n}(x_1^n)},$$
and in particular, if a language is a stationary and ergodic stochastic process with entropy rate $\bar{H}$ and $x_1^n$ is drawn from the language with true distribution $P_{X^n}$, then
$$\liminf_{n \to \infty} \frac{1}{n} \log_2 \frac{1}{Q_{X^n}(x_1^n)} = \liminf_{n \to \infty} \log_2 \mathrm{perplexity}_Q(x_1^n) \ge \bar{H},$$
with equality when $Q = P$. In this section all logarithms are with respect to base 2 and all entropies are measured in bits.
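To make these definitions concrete, the following is a minimal sketch (not code from the paper; the function names and the toy numbers are illustrative assumptions) of how the log-perplexity, i.e., the cross-entropy rate in bits per word, and the perplexity of a model are computed from its per-word conditional log-probabilities on a test sequence.

```python
import math

def log2_perplexity(word_log2_probs):
    # Cross-entropy rate in bits per word: the average of -log2 Q(x_t | x_1, ..., x_{t-1}).
    return -sum(word_log2_probs) / len(word_log2_probs)

def perplexity(word_log2_probs):
    # Perplexity is 2 raised to the cross-entropy rate.
    return 2.0 ** log2_perplexity(word_log2_probs)

# Toy check: a model that assigns conditional probability 1/8 to every word of a
# 5-word test sequence has cross-entropy 3 bits per word and perplexity 8.
scores = [math.log2(1.0 / 8.0)] * 5
print(log2_perplexity(scores), perplexity(scores))  # 3.0 8.0
```

By the bound above, as the test set grows this log-loss cannot drop below the entropy rate of the language, which is why an estimate of $\bar{H}$ serves as a benchmark for language models.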
The entropy rate of the English language is therefore of significant interest to language model researchers: since $2^{\bar{H}}$ is a tight lower bound on perplexity, this quantity indicates how close a given language model is to the optimum. Several researchers have presented estimates in bits per character; because language models are trained on words, these estimates are not directly relevant to the present task. In one of the earliest papers on this topic, Claude Shannon gave an estimate of bits per word. This latter figure has been comprehensively beaten by recent models; for example, achieved a perplexity corresponding to a rate of bits per word.

To produce an estimate of the entropy rate of English, we used two linguistic corpora: the Penn Treebank (PTB) and Google's One Billion Words benchmark. Results based on these corpora are particularly relevant because of their widespread use in training models. We used the conditional approach proposed in this paper with the JVHW estimator described above. The PTB corpus contains about n million words, of which s are unique; the corpus contains about n million words, of which s million are unique. Obviously the English language is not a Markov process. However, since for stationary stochastic processes (not necessarily Markov) the entropy rate is the limit
$$\bar{H} = \lim_{k \to \infty} H(X_{k+1} \mid X_1, \ldots, X_k)$$
(in the language modeling literature these are typically known as , but we use k to avoid conflict with the size of the dataset), we can estimate the entropy rate using estimates of the conditional entropy $H(X_{k+1} \mid X_1^k)$ for successively increasing k, that is, successively longer contexts. Equivalently, we can augment the state space to all length-k blocks and use the estimator, relaxing the Markov assumption to a k-th order Markov assumption; we did so for k .

[Figure: estimated conditional entropy versus memory length k for the two corpora, together with the best known model for PTB and for the larger corpus. Caption: estimates of conditional entropy based on linguistic corpora.]

It is worth noting that both of these corpora comprise individual sentences disconnected from each other, so the corpora in effect have a Markov order of about the length of a sentence. Our results are shown in the figure above. The estimated conditional entropy $H(X_{k+1} \mid X_1^k)$ provides us with a refined analysis of the intrinsic uncertainty in language prediction with a context length of only k. For k , using the JVHW estimator on the corpus, our estimate is bits per word; with current models trained on the corpus having a rate of about bits per word, this indicates that language models are still at least bits per word away from the fundamental limit (note that since $H(X_{k+1} \mid X_1^k)$ is decreasing in k, we have $\bar{H} \le H(X_{k+1} \mid X_1^k)$). Similarly, for the much smaller PTB corpus we estimate an entropy rate of bits per word, compared to models that achieve a rate of about bits per word, at least bits away from the fundamental limit.

Since the number of words in the English language (our alphabet size) is huge, in view of the log S result we showed in theory, a natural question is whether a corpus as vast as the corpus is enough to allow reliable estimates of conditional entropy. A quick answer is that our theory has so far focused on the worst-case analysis and, as demonstrated below, natural language data are much nicer, so the sample complexity for accurate estimation is much lower than what the minimax theory predicts. Specifically, we computed the conditional entropy estimates of the figure above, but this time restricting the sample to only a subset of the corpus; a plot of the resulting estimate as a function of sample size is shown in the figures below. Because sentences in the corpus are in randomized order, the subset of the corpus taken is randomly chosen. To interpret these results, first note that the number of distinct unigrams (words) in the corpus is about two million. We recall that in the i.i.d. case on the order of S/ln S samples are necessary even in the worst case, so a dataset of million words will be more than adequate to provide a reliable estimate of entropy for s million. Indeed, the plot for unigrams with the JVHW estimator in the figure below supports this: in this case the entropy estimates for all figure estimates of conditional entropy versus sample size for unigrams dotted lines are the estimate using
the entire corpus the final estimate note the axes table points at which the entropy estimates are within bit of the final estimate k sample size of corpus sample sizes greater than words is within bits of the entropy estimate using the entire corpus that is it takes just of the corpus to reach an estimate within bits of the true value we note also that the empirical entropy rate converges to the same value within two decimal places this is also shown in figure the dotted lines indicate the final entropy estimate of each estimator using the entire corpus of words results for similar experiments with bigrams and trigrams are shown in figure and table since the state space for bigrams and trigrams is much larger convergence is naturally slower but it nonetheless appears fast enough that our entropy estimate should be within on the order of bits of the true value with these observations we believe that the estimates based on the corpus should have enough samples to produce reasonably reliable entropy estimates as one further measure to approximate the variance of these entropy estimates we also ran bootstraps for each memory length k with a bootstrap size of the same size as the original dataset sampling with replacement for the corpus with bootstraps for k and with for k the range of estimates highest less lowest for each memory length never exceeded bit and the standard deviation of estimates was just is the error ranges implied by the bootstraps figure estimates of conditional entropy versus sample size for bigrams and trigrams dotted lines are the estimate using the entire corpus the final estimate table bootstrap estimates of error range k estimate ptb dev range estimate dev range are too small to show legibly on figure for the ptb corpus the range never exceeded bit further details of our bootstrap estimates are given in table auxiliary lemmas lemma for an arbitrary sequence xn x x s define the empirical distribution of the consecutive pairs as xi let be the marginal distribution of and the empirical frequency of state i as x xj i n denote the empirical conditional distribution as j pn xm j i p whenever let h be given in and h is the shannon entropy defined in then we have h h ln min p s n where in for a given transition matrix p p xt the following lemma gives tail bounds for poisson and binomial random variables lemma exercise if x poi or x b n then for any we have p x p x the following lemma is the hoeffding inequality lemma let xn be independent random variables such that xi takes its p value in ai bi almost surely for all i let sn xi we have for any t p e sn t exp pn bi ai proofs of main lemmas proof of lemma we being with a lemma on the concentration of the empirical distribution for reversible markov chains lemma consider a reversible stationary markov chain with spectral gap then for every i i s every constant the event ei happens with probability at most max where ln n s ln n proof of lemma recall the following bernstein inequality for reversible chains theorem for any stationary reversible markov chain with spectral gap t p n we have ln n q ln n if and only if exp ln n we split the proof of into two parts ln n invoking and setting t we have ln n exp ln n exp n ln n invoking and setting t s ln n q ln n we have exp n q ln n ln n exp n now we are ready to prove lemma we only consider gopt and the upper bound on p gemp follows from the same steps by the union bound it suffices to upper bound the probability of the complement of each event in the definition of the good event gopt cf 
definition for the first part of the definition the probability of bad events eic in are upper bounded by x p eic s n s where as in lemma since we have assumed that we have for the second part of the definition applying lemma the overall probability of bad events hic in are upper bounded by x s s x s x n p hic s x s s s x ln n ln n q ln n n ln n s ln n ln n ln n ln n where d when and ln n the second step follows from the fact that q ln n is increasing proof of lemma we simulate a markov chain sample path with transition matrix tij and stationary distribution from the independent multinomial model as described in section and define the estimator as follows output zero if the event ei does not happen where ei are events defined in definition otherwise we set wij s wij s note that this is a valid definition since ei implies ni mi for any i s as a result pim pim ei c pim ei pim ei it follows from lemma that pim ei c where now it suffices to upper bound pim ei the crucial observation is that the joint distribution of ni s wij s are identical in two models and thus pim ei pmc ei pim ei pmc ei by definition the estimator satisfies pmc ei a combination of the previous inequalities gives pim as desired proof of lemma we can simulate the independent multinomial model from the independent poisson model by conp ditioning on the row sum for each i conditioned on mi cij mi the random vector ci cis follows the multinomial distribution multi mi ti where t t r is the ris transition matrix obtained from normalizing in particular ti ps rij furthermore cs are conditionally independent thus to apply the estimator designed for the independent multinomial model with parameter n that fulfills the guarantee we need to guarantee that mi max ln n s ln n for all i with probability at least here is the constant in definition and ps rij r p p where r rij note that mi poi where j rij due to ln n the assumption that r by the assumption of we have max ln n s ln n then p mi max ln n s ln n p poi a exp b where a follows from lemma b follows from proof ln n ln n n this completes the proof of lemma the dependence diagram for all random variables is as follows u r r c where r r r r is the stationary distribution defined in obtained by normalizing the matrix recall that for i fi denotes the joint distribution on the sufficient statistic c under the prior our goal is to show that tv note that and c are dependent however the key observation is that by concentration the distribution of is close to a fixed distribution on the state space s where s s thus and c are approximately independent for clarity we denote c c by the triangle inequality of the total variation distance we have tv tv c pc tv pc qc tv c qc to upper bound the first term note that c r forms a markov chain hence by the convexity of total variation distance we have tv c pc epc tv epr tv e tv r e r s x e r s p s we start by showing that the row sums of r concentrate let ri rij a a it follows from the hoeffding inequality in lemma that p ps uij ri u i s exp s ln s s n where provided that u s lnns p p next consider the entrywise sum of write r rij b p uii note that e r b s s by then it follows from the hoeffding inequality in lemma that p r s provided that u su exp s ln s n s s ln s n s ln s n henceforth we set s s n u for i s and s hence with probability tending to one i su conditioning on this event for i s we have u u r r e r r s s for i r s r we have r s r e r s therefore in view of we have tv c pc s x u s s s o u o o s as s similarly we also have tv c qc o by it 
remains to show that tv pc qc o note that pc qc are products of poisson mixtures by the triangle inequality of total variation distance again we have tv e poi uij e poi tv pc qc tv pc qc x we upper bound the individual terms in for the total variation distance between poisson mixtures note that the random variables s uij and s uij match moments up to order d ln s and are both supported on that if s s ln s n s ln s it follows from lemma ln s ln s we have tv e poi uij s where we set d e poi u s ij ln s s d ln s and used the fact that d by tv pc qc s o s s as s establishing the desired lemma proof of lemma let log s of we have where c is the constant from lemma recall that x x log in view s s x x t r rij ri r i s s x rij ri b a b as r r i r s x x b a b as ri uii uij r z r z z where the last step follows from the symmetry of the matrix u for the first term note that b a s s log thus conditioned on we have u r e r where e r s put b a s with probability tending to one we have ln s s for the second term by definition for any i j uij is supported on is supported on lemma that s ln s n ln n s ln s s ln s n for any i j hence it follows from the hoeffding inequality in uij uii s e u x exp x p exp thus uij s ln s n ln n ln s ln snln s p s ln s n ln snln s as s provided that ln n u s ln s n s s put s e u using and the fact that ln snln s we have s ln s n u ln n s ln s cu ln s and for the third term condition on the event in we have ri ri c for some absolute constant put we have cu cu ln s finally combining as well as with probability tending to one t r s s n for some absolute constant c likewise with probability tending to one we have t where s e u s s n in view of lemma we have this completes the proof proof of lemma we only consider the random matrix r rij which is distributed according to the prior the case of is entirely analogous first we lower bound with high probability recall the definition of u in and since u n sln s we have s r and ri for all i s with probability tending to one furthermore s consequently min ri r n ln s as desired next we deal with the spectral gap recall t t r is the normalized version of r given in p p let d diag rs and diag where ri rij r si rij and rri then we have t d furthermore by the reversiblity of t t t rd is a symmetric matrix since t is a similarity transform of t they share the same spectrum let t t recall that t is an s s matrix in view of we have l e r b a a z r e r u e u crucially the choice of a b s in is such that so that e r is a symmetric positive semidefinite matrix thus we have from t d d note that is also a symmetric positive semidefinite matrix let by weyl s inequality eq for i s we have t kd kd ku e u ri here and below k stands for the spectral norm largest singular values so far everything hasqbeen determinimistic next we show that with high probability the rhs of is at most s s n note that u e u is a wigner matrix furthermore uij takes values in s ln s n where is an absolute constant it follows from the standard tail estimate of the spectral norm for the wigner ensemble see corollary that there exist universal constants c c such that p cs ln s ku e u n combining and the absolute spectral gap of t t r satisfies p t r c s s s n as s by union bound we have shown that p r r s q with q as chosen in lemma proof of lemma the representation follows from definition of conditional entropy it remains to show let denote the transition matrix corresponding to the empirical conditional distribution that is then for any transition matrix p pij n x s x s x i xm j 
ln ln n n n pij s x s ln n pij s x s x ln s x p pij d where in the last step d pkq i pi ln pqii stands for the kl divergence between probability vectors p and q then follows from the fact that the nonnegativity of the kl divergence references jayadev acharya hirakendu das alon orlitsky and ananda theertha suresh a unified maximum likelihood approach for estimating symmetric properties of discrete distributions in international conference on machine learning pages antos and ioannis kontoyiannis convergence properties of functional estimates for discrete distributions random structures algorithms charles bordenave pietro caputo and djalil chafai spectrum of large random reversible markov chains two examples alea latin american journal of probability and mathematical statistics patrick billingsley statistical methods in markov chains the annals of mathematical statistics pages peter brown vincent della pietra robert mercer stephen della pietra and jennifer lai an estimate of an upper bound for the entropy of english comput march nicolo and gabor lugosi prediction learning and games cambridge university press gabriela ciuperca and valerie girardin on the estimation of the entropy rate of finite markov chains in proceedings of the international symposium on applied stochastic models and data analysis thomas cover and roger king a convergent gambling estimate of the entropy of english ieee transactions on information theory haixiao cai sanjeev kulkarni and sergio universal entropy estimation via block sorting ieee trans inf theory thomas cover and joy thomas elements of information theory wiley new york second edition michelle effros karthik visweswariah sanjeev r kulkarni and sergio universal lossless source coding with the burrows wheeler transform ieee transactions on information theory moein falahatgar alon orlitsky venkatadheeraj pichapati and ananda theertha suresh learning markov distributions does estimation trump compression in information theory isit ieee international symposium on pages ieee yarin gal and zoubin ghahramani a theoretically grounded application of dropout in recurrent neural networks in lee sugiyama luxburg guyon and garnett editors advances in neural information processing systems pages curran associates yanjun han jiantao jiao tsachy weissman and yihong wu optimal rates of entropy estimation over lipschitz balls arxiv preprint nov daniel j hsu aryeh kontorovich and csaba mixing time estimation in reversible markov chains from a single sample path in advances in neural information processing systems pages wassily hoeffding probability inequalities for sums of bounded random variables journal of the american statistical association qian jiang construction of transition matrices of reversible markov chains sc major paper department of mathematics and statistics university of windsor daniel jurafsky and james martin speech and language processing edition upper saddle river nj usa jiantao jiao permuter lei zhao kim and weissman universal estimation of directed information information theory ieee transactions on jiantao jiao kartik venkat yanjun han and tsachy weissman minimax estimation of functionals of discrete distributions information theory ieee transactions on jiantao jiao kartik venkat yanjun han and tsachy weissman maximum likelihood estimation of functionals of discrete distributions ieee transactions on information theory oct rafal oriol vinyals mike schuster noam shazeer and yonghui wu exploring the limits of language modeling corr ioannis kontoyiannis paul h 
algoet yu m suhov and aj wyner nonparametric entropy estimation for stationary processes and random fields with applications to english text information theory ieee transactions on oleksii kuchaiev and boris ginsburg factorization tricks for lstm networks corr john c kieffer sample converses in source coding theory ieee transactions on information theory coco krumme alejandro llorente alex manuel cebrian and esteban moro pentland the predictability of consumer visitation patterns scientific reports sudeep kamath and sergio estimation of entropy rate and entropy rate for markov chains in information theory isit ieee international symposium on pages ieee j kevin lanctot ming li and yang estimating dna sequence entropy in symposium on discrete algorithms proceedings of the eleventh annual symposium on discrete algorithms volume pages david a levin and yuval peres estimating the spectral gap of a reversible markov chain from a short trajectory arxiv preprint ravi montenegro and prasad tetali mathematical aspects of mixing times in markov chains foundations and trends r in theoretical computer science michael mitzenmacher and eli upfal probability and computing randomized algorithms and probabilistic analysis cambridge university press stephen merity caiming xiong james bradbury and richard socher pointer sentinel mixture models corr liam paninski estimation of entropy and mutual information neural computation liam paninski estimating entropy on m bins given fewer than m samples information theory ieee transactions on daniel paulin concentration inequalities for markov chains by marton couplings and spectral methods electronic journal of probability claude shannon prediction and entropy of printed english the bell system technical journal jan paul c shields the ergodic theory of discrete sample paths graduate studies in mathematics american mathematics society noam shazeer azalia mirhoseini krzysztof maziarz andy davis quoc le geoffrey hinton and jeff dean outrageously large neural networks the layer corr chaoming song zehui qu nicholas blumm and limits of predictability in human mobility science terence tao topics in random matrix theory volume american mathematical society providence ri taro takaguchi mitsuhiro nakamura nobuo sato kazuo yano and naoki masuda predictability of conversation partners physical review x tsybakov introduction to nonparametric estimation gregory valiant and paul valiant estimating the unseen an log estimator for entropy and support size shown optimal via new clts in proceedings of the annual acm symposium on theory of computing pages acm gregory valiant and paul valiant the power of linear estimators in foundations of computer science focs ieee annual symposium on pages ieee paul valiant and gregory valiant estimating the unseen improved estimators for entropy and other properties in advances in neural information processing systems pages chunyan wang and bernardo a huberman how random are online social interactions scientific reports yihong wu and pengkun yang minimax rates of entropy estimation on large alphabets via best polynomial approximation ieee transactions on information theory aaron wyner and jacob ziv some asymptotic properties of the entropy of a stationary ergodic data source with applications to data compression ieee trans inf theory ziang xie sida wang jiwei li daniel aiming nie dan jurafsky and andrew ng data noising as smoothing in neural network language models corr jacob ziv and abraham lempel compression of individual sequences via variablerate 
coding information theory ieee transactions on barret zoph and quoc le neural architecture search with reinforcement learning corr julian zilly rupesh kumar srivastava jan and schmidhuber recurrent highway networks corr
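As a concrete illustration of the conditional (plug-in) approach to entropy rate estimation discussed in the experiments section above, the following is a minimal sketch; it is not the implementation used in the experiments, and the function names, the toy transition matrix, and the use of the empirical (plug-in) Shannon entropy as the per-state estimator are illustrative assumptions. Substituting a different Shannon entropy estimator (for example, the JVHW estimator) for `empirical_entropy` yields the corresponding conditional estimator.

```python
import numpy as np

def empirical_entropy(counts):
    """Plug-in (empirical) Shannon entropy, in nats, of a vector of counts."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    if total == 0:
        return 0.0
    p = counts[counts > 0] / total
    return float(-(p * np.log(p)).sum())

def conditional_entropy_rate(path, num_states, entropy_estimator=empirical_entropy):
    """Conditional-approach estimate of the entropy rate of a finite-state sample path.

    Counts the transitions i -> j along the path, applies `entropy_estimator` to each
    row of the transition-count matrix, and averages the per-state estimates weighted
    by the empirical frequency of each state among x_1, ..., x_{n-1}.
    """
    path = np.asarray(path)
    counts = np.zeros((num_states, num_states))
    for a, b in zip(path[:-1], path[1:]):
        counts[a, b] += 1
    weights = counts.sum(axis=1) / (len(path) - 1)   # empirical state frequencies
    row_entropies = np.array([entropy_estimator(row) for row in counts])
    return float((weights * row_entropies).sum())

# Toy check on a lazy random walk over 3 states; the true entropy rate is
# -0.8*ln(0.8) - 0.2*ln(0.1) ~= 0.639 nats, and the estimate should be close.
rng = np.random.default_rng(0)
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
x = [0]
for _ in range(10000):
    x.append(rng.choice(3, p=T[x[-1]]))
print(conditional_entropy_rate(x, 3))
```

With the empirical per-state entropy, this sketch coincides with the empirical (plug-in) entropy rate used as a baseline above.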
arxiv nov predicting rna secondary structures with arbitrary pseudoknots by maximizing the number of stacking pairs samuel abstract the paper investigates the computational problem of predicting rna secondary structures the general belief is that allowing pseudoknots makes the problem hard existing algorithms are heuristic algorithms with no performance guarantee and can only handle limited types of pseudoknots in this paper we initiate the study of predicting rna secondary structures with a maximum number of stacking pairs while allowing arbitrary pseudoknots we obtain two approximation algorithms with approximation ratios of and for planar and general secondary structures respectively for an rna sequence of n bases the approximation algorithm for planar secondary structures runs in o time while that for the general case runs in linear time furthermore we prove that allowing pseudoknots makes it to maximize the number of stacking pairs in a planar secondary structure this result is in contrast with the recent results on psuedoknots which are based on optimizing some general and complicated energy functions introduction ribonucleic acids rnas are molecules that are responsible for regulating many genetic and metabolic activities in cells an rna is and can be considered as a sequence of nucleotides also known as bases there are four basic nucleotides namely adenine a cytosine c guanine g and uracil u an rna folds into a structure by forming pairs of bases paired bases tend to stabilize the rna have negative free energy yet base pairing does not occur arbitrarily in particular and form stable pairs and are known as the base pairs other base pairings are less stable and often ignored an example of a folded rna is shown in figure note that this figure is just schematic in practice rnas are molecules department of computer science yale university new haven ct department of computer science northwestern university evanston il kao this research was supported in part by nsf grant department of computer science the university of hong kong hong kong twlam smyiu this research was supported in part by hong kong rgc grant department of computer science national university of singapore science drive singapore ksung c c c a a g c a a g a u u a a a u g a u g g a a u a u stacking pair u c g g hairpin loop u u g a a internal loop c c a g c g u bulge loop figure example of a folded rna the structure is related to the function of the rna yet existing experimental techniques for determining the structures of rnas are often very costly and time consuming see the secondary structure of an rna is the set of base pairings formed in its structure to determine the structure of a given rna sequence it is useful to determine the corresponding secondary structure as a result it is important to design efficient algorithms to predict the secondary structure with computers from a computational viewpoint the challenge of the rna secondary structure prediction problem arises from some special structures called pseudoknots which are defined as follows let s be an rna sequence sn a pseudoknot is composed of two interleaving base pairs si sj and sk such that i k j see figure for examples if we assume that the secondary structure of an rna contains no pseudoknots the secondary structure can be decomposed into a few types of loops stacking pairs hairpins bulges internal loops and multiple loops see tompa s lecture notes or waterman s book a stacking pair is a loop formed by two pairs of consecutive bases si sj and with i j see 
figure for an example by definition a stacking pair contains no unpaired bases and any other kinds of loops contain one or more unpaired bases since unpaired bases are destabilizing and have positive free energy stacking pairs are the only type of loops that have negative free energy and stabilize the secondary structure it is also natural to assume that the free energies of loops are independent then an optimal secondary structure can be computed using dynamic programming in o time however pseudoknots are known to exist in some rnas for predicting secondary structures with pseudoknots nussinov et al have studied the case where the energy function is minimized when the number of base pairs is maximized and have obtained an o algorithm for predicting secondary structures based on some special energy functions lyngso and pedersen have proven that determining the optimal secondary g g g a c a a c c u u u u g a a g c g c a c u g c g c c a c c a c a a a g g g a a g c a figure examples of pseudoknots structure possibly with pseudoknots is akutsu has shown that it is to determine an optimal planar secondary structure where a secondary structure is planar if the graph formed by the base pairings and the backbone connections of adjacent bases is planar see section for a more detailed definition rivas and eddy uemura et al and akutsu have also proposed algorithms that can handle limited types of pseudoknots note that the exact types of such pseudoknots are implicit in these algorithms and difficult to determine although it might be desirable to have a better classification of pseudoknots and better algorithms that can handle a wider class of pseudoknots this paper approaches the problem in a different general direction we initiate the study of predicting rna secondary structures that allow arbitrary pseudoknots while maximizing the number of stacking pairs such a simple energy function is meaningful as stacking pairs are the only loops that stabilize secondary structures we obtain two approximation algorithms with ratios of and for planar and general secondary structures respectively the planar approximation algorithm makes use of a geometric observation that allows us to visualize the planarity of stacking pairs on a rectangular grid interestingly such an observation does not hold if our aim is to maximize the number of base pairs this algorithm runs in o time the second approximation algorithm is more complicated and is based on a combination of multiple greedy strategies a straightforward analysis can not lead to the approximation ratio of we make use of amortization over different steps to obtain the desired ratio this algorithm runs in o n time to complement these two algorithms we also prove that allowing pseudoknots makes it to find the planar secondary structure with the largest number of stacking pairs the proof makes use of a reduction from a problem called tripartite matching this result indicates that the hardness of the rna secondary structure prediction problem may be inherent in the pseudoknot structures and may not be necessarily due to the complication of the energy functions this is in contrast to the other results discussed earlier the rest of this paper is organized into four sections section discusses some basic properties sections and present the approximation algorithms for planar and general secondary structures respectively section details the result section concludes the paper with open problems preliminaries let s sn be an rna sequence of n bases a secondary structure p of 
s is a set of pairs sip sjp where sir sjr for all r p and no two pairs share a base we denote q q consecutive stacking pairs si sj of p by si sj definition given a secondary structure p we define an undirected graph g p such that the bases of s are the nodes of g p and si sj is an edge of g p if j i or si sj is a base pair in definition a secondary structure p is planar if g p is a planar graph definition a secondary structure p is said to contain an interleaving block if p contains three stacking pairs si sj sj sj sj sj where i j j j lemma if a secondary structure p contains an interleaving block p is proof suppose p contains an interleaving block without loss of generality we assume that p contains the stacking pairs and figure a shows the subgraph of g p corresponding to these stacking pairs since this subgraph contains a homeomorphic copy of see figure b g p and p are a b figure interleaving block an approximation algorithm for planar secondary structures we present an algorithm which given an rna sequence s sn constructs a planar secondary structure of s to approximate one with the maximum number of stacking pairs with a ratio of at least this approximation algorithm is based on the subtle observation in lemma that if a secondary structure p is planar the subgraph of g p which contains only the stacking pairs of p can be embedded in a grid with a useful property this property enables us to consider only the secondary structure of s without pseudoknots in order to achieve approximation ratio definition given a secondary structure p we define a stacking pair embedding of p on a grid as follows represent the bases of s as n consecutive grid points on the same horizontal grid line l such that si and i n are connected directly by a horizontal grid edge if si sj is a stacking pair of p si and are connected to sj and respectively by a sequence of grid edges such that the two sequences must be either both above or both below figure shows a stacking pair embedding figure b of a given secondary structure figure a note that do not form a stacking pair with other base pair so is not connected to in the stacking pair embedding similarly is not connected to in the embedding b a figure an example of a stacking pair embedding definition a stacking pair embedding is said to be planar if it can be drawn in such a way that no lines cross or overlap with each other in the grid the embedding shown in figure b is planar lemma let p be a secondary structure of an rna sequence let e be a stacking pair embedding of if p is planar then e must be planar proof if p does not have a planar stacking pair embedding we claim that p contains an interleaving block let l be the horizontal grid line that contains the bases of s in since p does not have a planar stacking pair embedding we can assume that e has two stacking pairs intersect above l see figure a a d g b c e f h i figure stacking pair embedding if there is no other stacking pair underneath these two pairs we can flip one of the pairs below l as shown in figure b so there must be at least one stacking pair underneath these two pairs by checking all possible cases all cases are shown in figures c to i it can be shown that e can not be redrawn without crossing or overlapping lines only if it contains an interleaving block figures h and i so by lemma p is by lemma we can relate two secondary structures having the maximum number of stacking pairs with and without pseudoknots in the following lemma lemma given an rna sequence s let n be the maximum number of 
stacking pairs that can be formed by a planar secondary structure of s and let w be the maximum number of stacking pairs that can be formed by s without pseudoknots then w proof let p be a planar secondary structure of s with n stacking pairs since p is planar by lemma any stacking pair embedding of p is planar let e be a stacking pair embedding of p such that no lines cross each other in the grid let l be the horizontal grid line of e which contains all bases of let and be the number of stacking pairs which are drawn above and below l respectively without loss of generality assume that now we construct another planar secondary structure p from e by deleting all stacking pairs which are drawn below obviously p is a planar secondary structure of s without pseudoknots since as w w based on lemma we now present the dynamic programming algorithm m axsp which computes the maximium number of stacking pairs that can be formed by an rna sequence s sn without pseudoknots algorithm m axsp define v i j for j i as the maximum number of stacking pairs without pseudoknots that can be formed by si sj if si and sj form a pair let w i j j i be the maximum number of stacking pairs without pseudoknots that can be formed by si sj obviously w n gives the maximum number of stacking pairs that can be formed by s without pseudoknots basis for j i i i or i j n v i j w i j if si sj form a pair recurrence for j i v i j w i j max w i j w i j v i j max if si sj form a pair v i j if form a pair w i k w k j lemma given an rna sequence s of length n algorithm m axsp computes the maximum number of stacking pairs that can be formed by s without pseudoknots in o time and o space proof there are o entries v i j and w i j to be filled to fill an entry of v i j we check at most o n values to fill an entry of w i j o time suffices the total time complexity for filling all entries is o storing all entries requires o space although algorithm m axsp presented in the above only computes the number of stacking pairs it can be easily modified to compute the secondary structure thus we have the following theorem theorem the algorithm m axsp is an algorithm for the problem of constructing a secondary structure which maximizes the number of stacking pairs for an rna sequence an approximation algorithm for general secondary structures we present algorithm greedysp which given an rna sequence s sn constructs a secondary structure of s not necessarily planar with at least of the maximum possible number of stacking pairs the approximation algorithm uses a greedy approach figure shows the algorithm greedysp let s sn be the input rna sequence initially all sj are unmarked let e be the set of base pairs output by the algorithm initially e greedysp s i i repeatedly find the leftmost i consecutive stacking pairs sp find sp sq such that p is as small as possible formed by unmarked bases add sp to e and mark all these bases for k i downto repeatedly find any k consecutive stacking pairs sp formed by unmarked bases add sp to e and mark all these bases repeatedly find the leftmost stacking pair sp formed by unmarked bases add sp to e and mark all these bases figure a algorithm in the following we analyze the approximation ratio of this algorithm the algorithm greedysp s i will generate a sequence of sp s denoted by sph fact for any spj and spk j k the stacking pairs in spj do not share any base with those in spk for each spj sp sq we define two intervals of indexes ij and jj as p p t and q t q respectively in order to compare the number of stacking 
pairs formed with that in the optimal case we have the following definition definition let p be an optimal secondary structure of s with the maximum number of stacking pairs let f be the set of all stacking pairs of for each spj computed by greedysp s i and ij or jj let sk sw at least one of indexes k k w w is in note that s may not be disjoint lemma s xij xjj proof we prove this lemma by contradiction suppose that there exists a stacking pair sk sw in f but not in any of xij and xjj by definition none of the indexes k k w w is in any of ij and jj this contradicts with step of algorithm greedysp s i definition for each xij let j xij xik xjk and let j xjj xik xjk xij k j k j let be the number of stacking pairs represented by spj let and be the numbers of indexes in the intervals ij and jj respectively lemma let n be the number of stacking pairs computed by algorithm greedysp s i and n be the maximum number of stacking pairs that can be formed by if for all j we have j j then n n s s proof by definition k xik xjk k k k then by fact n p s j thus n r k xik xjk by lemma n r n lemma for each spj computed by greedysp s i we have j j proof there are three cases as follows case spj is computed by greedysp s i in step note that spj sp sq is the leftmost i consecutive stacking pairs p is the smallest possible by definition j j we further claim that j then j j i i as i we prove the claim by contradiction assume that j i that is for some integer t f has consecutive stacking pairs furthermore none of the bases are marked before spj is chosen otherwise suppose one such base says sa is marked when the algorithm chooses for j then an stacking pair adjacent to sa does not belong to j and they belong to or instead therefore is the leftmost i consecutive stacking pairs formed by unmarked bases before spj is chosen as spj is not the leftmost i consecutive stacking pairs this contradicts the selection criteria of spj the claim follows case spj is computed by greedysp s i in step let k let spj sp sq by definition j j k we claim that j j k then j j k k which is at least as k to show that j k by contradiction assume j k thus for some integer t there exist k consecutive stacking pairs similarly to case we can show that none of the bases are marked before spj is chosen thus greedysp s i should select some k or k consecutive stacking pairs instead of the chosen k consecutive stacking pairs reaching a contradiction similarly we can show j k case spj is computed by greedysp s i in step spj is the leftmost stacking pair when it is chosen let spj sp sq by the same approach as in case we can show j j we further claim j then j j to verify j we consider all possible cases with j while there are no two consecutive stacking pairs the only possible case is that for some integers r t both sp sr and sp st belong to j then spj can not be the leftmost stacking pair formed by unmarked bases contradicting the selection criteria of spj theorem let s be an rna sequence let n be the maximum number of stacking pairs that can be formed by any secondary structure of let n be the number of stacking pairs output by greedysp s i then n proof by lemmas and the result follows we remark that by setting i in greedysp s i we can already achieve the approximation ratio of the following theorem gives the time and space complexity of the algorithm theorem given an rna sequence s of length n and a constant k algorithm greedysp s k can be implemented in o n time and o n space proof recall that the bases of an rna sequence are chosen from the alphabet 
a u g c if k is a constant there are only constant number of different patterns of consecutive stacking pairs that we must consider for any j k there are only different strings that can be formed by the four characters a u g c so the locations of the occurrences of these possible strings in the rna sequence can be recorded in an array of linked lists indexed by the pattern of the string using o n time preprocessing there are at most linked lists for any fixed j and there are at most n entries in these linked lists in total there are at most kn entries in all linked lists for all possible values of j now we fix a constant j to locate all j consecutive stacking pairs we scan the rna sequence from left to right for each substring of j consecutive characters we look up the array to see whether we can form j consecutive stacking pairs by simple bookkeeping we can keep track which bases have been used already each entry in the linked lists will only be scanned at most once so the whole procedure takes only o n time since k is a constant we can repeat the whole procedure for k different values of j and the total time complexity is still o n time in this section we show that it is to find a planar secondary structure with the largest number of stacking pairs we consider the following decision problem given an rna sequence s and an integer h we wish to determine whether the largest possible number of stacking pairs in a planar secondary structure of s denoted sp s is at least below we show that this decision problem is by reducing the tripartite matching problem to it which is defined as follows given three node sets x y and z with the same cardinality n and an edge set e x y z of size m the tripartite matching problem is to determine whether e contains a perfect matching a set of n edges which touches every node of x y and z exactly once the remainder of this section is organized as follows section shows how we construct in polynomial time an rna sequence se and an integer h from a given instance x y z e of the tripartite matching problem where h depends on n and section shows that if e contains a perfect matching then sp se section is the part showing that if e does not contain a perfect matching then sp se combining these three sections we can conclude that it is to maximize the number of stacking pairs for planar rna secondary structures construction of the rna sequence se consider any instance x y z e of the tripartite matching problem we construct an rna sequence se and an integer h as follows let x xn y yn and z zn furthermore let e em where each edge ej xpj yqj zrj recall that an rna sequence contains characters chosen from the alphabet a u g c below we denote ai where i is any positive integer as the sequence of i a s furthermore means a sequence of one or more a s let d max m define the following four rna sequences for every positive integer k k is the sequence u d ak gu d and k is the sequence u ad gu k ad k is the sequence c agc and k is the sequence fragments note that the sequences k and k are each composed of two substrings in the form of u separated by a character each of these two substrings is called a fragment similarly the two substrings of the form c separated by ag in k and the two substrings of the form separated by the character a in k are also called fragments node encoding each node in the three node sets x y and z is associated with a unique sequence for i n let hxi i hyi i hzi i denote the sequences i n i respectively intuitively hxi i is the encoding of the node xi and 
similarly hyi i and hzi i are for the nodes yi and zi respectively furthermore define hxi i i hyi i n i and hzi i i the node set x is associated with two sequences x ig ghxn i and x hxn ig i let x xi ig ig hxn i and x xi hxn ig ig i where xi is any node in x similarly the node sets y and z are associated with sequences y y and z z respectively edge encoding for each edge ej where j m we define four delimiter sequences namely vj j wj m j vj j and wj m j assume that ej xpj yqj zrj then ej is encoded by the sequence sj defined as ag vj ag wj ag x g y g z g z zrj g y yqj g x xpj vj a wj let be a special sequence defined as ag ag ag z g y g x a in the following discussion each sj is referred to as a region finally we define se to be the sequence sm let and let h n note that se has o n m characters and can be constructed in o time in sections and we show that sp se h if and only if e contains a perfect matching correctness of the this section shows that if e has a perfect matching we can construct a planar secondary structure for se containing at least h stacking pairs therefore sp se first of all we establish several basic steps for constructing stacking pairs on se i or i itself can form d stacking pairs while i and i together can form stacking pairs i and i together can form stacking pairs for any i j i and j together can form stacking pairs lemma if e has a perfect matching then sp se proof let m ejn be a perfect matching without loss of generality we assume that jn define m to obtain a planar secondary structure for se with at least h stacking pairs we consider the regions one by one there are three cases case we consider any region sj such that ej m our goal is to show that stacking pairs can be formed within sj note that there are m n edges not in m thus we can obtain a total of m n stacking pairs in this case details are as follows assume that ej xpj yqj zrj stacking pairs can be formed between vj and vj and between wj and wj stacking pairs can be formed between hxi i and hxi i for all i pj and between hyi i and hyi i for all i qj and between hzi i and hzi i for all i rj hxpj i hyqj i and hzrj i can each form d stacking pairs the total number of stacking pairs that can be formed within sj is n d case we consider the edges ejn in m our goal is to show that each corresponding region accounts for stacking pairs thus we obtain a total of n stacking pairs in this case details are as follows unlike case each region sjk where k n may have some of its bases paired with that of stacking pairs can be formed between wjk in sjk and in stacking pairs can be formed between vjk in sjk and vjk in sjk stacking pairs can be paired between hxi i in sjk and hxi i in sjk for any i pjk and between hyi i in sjk and hyi i in sjk for any i qjk and between hzi i in sjk and hzi i in sjk for any i rjk stacking pairs can be paired between hxi i in sjk and hxi i in for any i pjk and between hyi i in sjk and hyi i in for any i qjk and between hzi i in and hzi i in for any i rjk the total number of stacking pairs charged to sjk is case we consider we can form stacking pairs between and and stacking pairs between and the number of such stacking pairs is combining the three cases the number of stacking pairs that can be formed on se is m n n which is exactly notice that no two stacking pairs formed cross each other thus sp se correctness of the part this section shows that if e has no perfect matching then sp se we first give the framework of the proof in section then some basic definitions and concepts are presented in 
section the proof of the part is given in section framework of the proof let opt be a secondary structure of se with the maximum number of stacking pairs let opt be the number of stacking pairs in opt that is opt sp se in this section we will establish an upper bound for opt recall that we only consider base pairs a u and c g pairs we define a conjugate of a substring in se as follows conjugates for every substring r sk of se the conjugate of r is where u a g and for example aa s conjugate is u u and u a s conjugate is u a to form a stacking pair two adjacent bases must be paired with another two adjacent bases so we concentrate on the possible patterns of adjacent bases in se in se any two adjacent characters are referred to as a by construction se has only ten different types of u u aa u a gg cc gc ag ga gu and a can only form a stacking pair with its conjugate if they actually form a stacking pair in op t they are said to be paired since the conjugates of ag ga gu and do not exist in se there is no stacking pair in se which involves these we only need to consider aa u u u a gg cc table shows the numbers of occurrences of these in sj j m and the total occurrences of these substrings in se substring t aa uu ua gg cc gc total number of sj j m d d d occurrences of t in se m d m d d m m table number of occurrences of different let aa denote the number of occurrences of in se we use the notation for other types of in se similarly the following fact gives a straightforward upper bound for opt fact opt min aa u u min gg cc u h n note that opt may not pair all with u u let be the number of that are not paired in opt again we use the notaion for other types of fact can be strengthened as follows fact opt min u u u cc u a a gc the upper bound given in fact forms the basis of our proof for showing that opt in the following sections we consider the possible structure of opt for each possible case we show that the lower bounds for some values such as and are sufficiently large so that opt can be shown to be less than in particular in one of the cases we must make use of the fact that e does not have a perfect matching in order to prove the lower bound for a and u we give some basic definitions and concepts in section the lower bounds and the proof are given in section definitions and concepts in this section we give some definitions and concepts which are useful in deriving lower bounds for values we first classify each region sj in se as either open or closed with respect to opt then extending the definitions of fragments and conjugates we introduce conjugate fragments and delimiter fragments finally we present a property of delimiter fragments in open regions open and closed regions with respect to opt a region sj in se is said to be an open region if some u u aa or u in sj are paired with some outside sj otherwise it is a closed region lemma if is a closed region then opt proof has more than u u if is a closed region these are not paired by opt thus by fact opt h n recall that se is a sequence composed of s s s and s each k respectively k consists of two substrings of the form u each of these substrings is called a fragment furthermore each k resp k consists of two substrings of the form c respectively each of these subtrings is also called a fragment conjugate fragments and delimiter fragments consider any fragment f in se another fragment f in se is called a conjugate fragment of f if f is the conjugate of f note that if f is a fragment of a certian k resp k then f appears only in some k 
respectively k and vice versa by construction if f is a fragment of some delimiter sequence vj or wj then f has a unique conjugate fragment in se which is located in vj or wj respectively however if f is a fragment of some sequence says hxi i then for every instance of hxi i in se f contains one conjugate fragment in hxi i a fragment f is said to be paired with its conjugate fragment f by opt if opt includes all the pairs of bases between f and f for j m the fragment f in vj or wj is called a delimiter fragment note that the delimiter fragment f should be of the form c for k the following lemma shows a property of delimiter fragments in open regions lemma if sj is an open region then both delimiter fragments of either vj or wj must not pair with their conjugate fragments in opt proof we prove the statement by contradiction suppose one fragment of vj and one fragment of wj are paired with their conjugate fragments let sx sy and be some particular stacking pairs in vj and wj respectively since sj is an open region we can identify a stacking pair where and are within and outside sj respectively note that these three stacking pairs form an interleaving block by lemma opt is not planar reaching a contradiction proof of the part by lemma it suffices to assume that is an open region before we give the proof of the part let us consider the following lemma lemma let be the number of delimiter fragments that are not paired with their conjugate fragments then gc proof by construction a must be next to the left end of a delimiter fragment f which is of the form c no other can exist if this is paired the leftmost of f must not be paired as there is no ggc pattern in se thus f must be one of the delimiter fragments that are not paired with their conjugate fragments based on this observation we classify the delimiter fragments into two groups gc s delimiter fragments whose at the left end are paired and gc s delimiter fragments whose at the left end are not paired for each delimiter fragment f c in group since the on the left of f is paired the leftmost of f must not be paired by opt for the remaining k we either find a which is not paired by opt or these k are paired to in some fragment f with k and thus some of f is not paired therefore each delimiter fragment in group introduces either i two unpaired or ii one unpaired and one unpaired hence the total number of unpaired cc and due to delimiter fragments in group gc for each delimiter fragment f c in group consider the in f with a similar argument we can show that each delimiter fragment in group introduces either i one unpaired or ii one unpaired hence the total number of unpaired cc and due to delimiter fragments in group gc in total we have gc now we state a lemma which shows the lower bounds for some values in terms of the number of open regions in opt lemma let be the number of open regions in opt if is an open region then u m max gc if n is an open region and e does not have a perfect matching then either a u m n d b or c a proof statement within each closed region sj where j m s u u can not paired in opt as there are m such closed regions m d u u are not paired in opt thus u m statement by lemma we can identify fragments in vj and wj of all open regions which are not paired with their conjugate fragments then by lemma we have gc thus max gc statement by a similar argument to the proof for statement within the m m n closed regions m n d u u are not paired in opt for the n open regions one of them must be let sjn be the remaining n open regions 
recall that ejn are the corresponding edges of these n open regions since these n edges can not form a perfect matching some node says xk is adjacent to these n edges more than once thus within sjn we have more hxk i than hxk i therefore at least two of the fragments in all hxk i are not paired with their conjugate fragments let f be one of such fragments note that f is of the form u d ak since f is not paired with its conjugate fragment one of the following three cases occurs in opt case an u u of f is not paired case an of f is not paired case all u u and f are paired in this case u d of f is paired with ad of a fragment f u k ad and ak of f is paired with some substring u k of some fragment f as f and f are not the same fragment the u of both f and f are not paired in summary we have either u or or a based on lemma we prove the part by a case analysis in the following lemma lemma if e does not have a prefect matching then opt proof recall that if is a closed region then opt now suppose that is an open region we show opt h in three cases n n and n case n by lemma u m by fact we can conclude that opt h n n d h n case n by lemma max gc by fact opt h n which is smaller than h because n case n by lemma either a u m n d or b or c a by fact opt h n max gc by lemma we have opt we conclude that if e does not have a prefect matching then opt equivalently if opt h then e has a prefect matching conclusions in this paper we have studied the problem of predicting rna secondary structures that allow arbitrary pseudoknots with a simple free energy function that is minimized when the number of stacking pairs is maximized we have proved that this problem is if the secondary structure is required to be planar we conjecture that the problem is also for the general case we have also given two approximation algorithms for this problem with approximation ratios of and for planar and general secondary structures respectively it would be of interest to improve these approximation ratios another direction is to study the problem using energy function that is minimized when the number of base pairs is maximized it is known that this problem can be solved in cubic time if the secondary structure can be however the computational complexity of the problem is still open if the secondary structure is required to be planar we conjecture that the problem becomes under this additional condition we would like to point out that the observation that have enabled us to visualize the planarity of stacking pairs on a rectangular grid does not hold in case of maximizing base pairs references akutsu dynamic programming algorithms for rna secondary structure prediction with pseudoknots discrete applied mathematics garey and johnson computers and intractability a guide to the theory of freeman new york ny zuker and pedersen internal loops in rna secondary structure prediction in proceedings of the annual international conference on computational molecular biology pages lyon france and pedersen rna pseudoknot prediction in energy based models journal of computational biology zuker and pedersen fast evaluation of internal loops in rna secondary structure prediction bioinformatics meidanis and setubal introduction to computational molecular biology international thomson publishing new york nussinov pieczenik griggs and kleitman algorithms for loop matchings siam journal on applied mathematics rivas and eddy a dynamic programming algorithm for rna structure prediction including pseudoknots journal of molecular biology tompa lecture 
notes on biological sequence analysis. Technical report, Department of Computer Science and Engineering, University of Washington, Seattle.
Uemura, Hasegawa, Kobayashi, and Yokomori. Tree adjoining grammars for RNA structure prediction. Theoretical Computer Science.
Waterman. Introduction to Computational Biology: Maps, Sequences and Genomes. Chapman & Hall, New York, NY.
Zuker. The use of dynamic programming algorithms in RNA secondary structure prediction. In Waterman, editor, Mathematical Methods for DNA Sequences. CRC Press, Boca Raton, FL.
Zuker and Sankoff. RNA secondary structures and their prediction. Bulletin of Mathematical Biology.
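As a concrete companion to the O(n)-time scan described earlier in the analysis of the approximation algorithm (preprocess the sequence into pattern-indexed position lists, then scan once for each fixed run length j), here is a minimal Python sketch. It is illustrative only and not the paper's implementation: a dictionary of position lists stands in for the array of linked lists, windows of length j+1 are used so that a matched pair of windows yields j consecutive stacking pairs, only the A-U and C-G base pairs considered in the paper are allowed, and the used-base bookkeeping is omitted.

```python
from collections import defaultdict

# Watson-Crick pairs A-U and C-G: the only base pairs the paper considers.
PAIR = {"A": "U", "U": "A", "C": "G", "G": "C"}

def complement_window(w):
    """A window w can be stacked against a second window exactly when the
    second window, read left to right, is the reversed complement of w."""
    return "".join(PAIR[c] for c in reversed(w))

def candidate_runs(s, j):
    """Return pairs (i, l) of start positions such that pairing s[i+t] with
    s[l+j-t] for t = 0, ..., j gives j+1 nested base pairs, i.e. j consecutive
    stacking pairs.  For fixed j the index has at most 4**(j+1) keys and is
    built in a single O(n) pass over the sequence."""
    occ = defaultdict(list)              # window pattern -> list of start positions
    for i in range(len(s) - j):          # all windows of length j+1
        occ[s[i:i + j + 1]].append(i)
    runs = []
    for i in range(len(s) - j):          # single left-to-right scan
        for l in occ.get(complement_window(s[i:i + j + 1]), ()):
            if l > i + j:                # keep the two windows disjoint
                runs.append((i, l))
    return runs

if __name__ == "__main__":
    # Toy example: the GGG/CCC and AAA/UUU windows each give 2 consecutive stacking pairs.
    print(candidate_runs("GGGAAAUUUCCC", 2))
```

The sketch only enumerates candidate window pairs; the bookkeeping described in the text (marking bases that have already been used, so that each index entry is scanned at most once) is what keeps the complete procedure linear, and selecting a non-crossing subset of the resulting stacking pairs is still required for a planar structure.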
5
estimating linear and quadratic forms via indirect observations apr anatoli juditsky arkadi nemirovski abstract in this paper we further develop the approach originating in to statistical estimation via convex focus is on estimating a linear or quadratic form of an unknown signal known to belong to a given convex compact set via noisy indirect observations of the signal classical theoretical results on the subject deal with precisely stated statistical models and aim at designing statistical inferences and quantifying their performance in a closed analytic form in contrast to this traditional highly instructive descriptive framework the approach we promote here can be qualified as operational the estimation routines and their risks are not available in a closed form but are yielded by an efficient computation all we know in advance is that under favorable circumstances the risk of the resulting estimate whether high or low is provably under the circumstances as a compensation for the lack of explanatory power this approach is applicable to a much wider family of observation schemes than those where closed form descriptive analysis is possible we discuss applications of this approach to classical problems of estimating linear forms of parameters of distribution and quadratic forms of partameters of gaussian and discrete distributions the performance of the constructed estimates is illustrated by computation experiments in which we compare the risks of the constructed estimates with numerical lower bounds for corresponding minimax risks for randomly sampled estimation problems introduction this paper can be considered as a to the paper dealing with hypothesis testing for simple families families of distributions specified in terms of upper bounds on their momentgenerating functions in what follows we work with simple families of distributions but our focus is on estimation of linear or quadratic forms of the unknown signal partly parameterizing the distribution in question to give an impression of our approach and results let us consider the subgaussian case where one is given a random observation drawn from a distribution p on rd t eh h ht rd with parameters rd affinely parameterized by signal x rm the goal is given observation stemming from unknown signal x known to belong to a given convex compact set x rm to recover the value at x of a given linear form g rm the estimate gb we build is affine function of observation the coefficients of the function same as an upper bound on the of the estimate on x stem from an optimal solution to an explicit convex optimization ljk grenoble alpes avenue centrale domaine universitaire de france georgia institute of technology atlanta georgia usa nemirovs the first author was supported by the labex and the pgmo grant research of the second author was supported by nsf grants and for the time being given of an estimate on x is defined as the over x x width of interval yielded by the estimate problem and thus can be specified in a computationally efficient fashion moreover under mild structural assumptions on the affine mapping x the resulting estimate is provably nearoptimal in the minimax sense see section for details the latter statement is an extension of the fundamental result of donoho on of affine recovery of a linear form of signal in gaussian observation scheme this paper contributes to a long line of research on estimating linear see and references therein and quadratic among others functionals of parameters of probability distributions via 
observations drawn from these distributions in the majority of cited papers the objective is to provide closed analytical form lower risk bounds for problems at hand and upper risk bounds for the proposed estimates in good cases matching the lower bounds this paradigm can be referred to as descriptive it relies upon analytical risk analysis and estimate design and possesses strong explanation power it however imposes severe restrictions on the structure of the statistical model restrictions making the estimation problem amenable to complete analytical treatment there exists another operational line of research initiated by donoho in the spirit of the operational approach is perfectly well illustrated by the main result of stating that when recovering the linear form of unknown signal x known to belong to a given convex compact set x via indirect gaussian observation ax n i the worstcase over x x risk of an affine in estimate yielded by optimal solution to an explicit convex optimization problem is within the factor of the minimax optimal risk subsequent operational literature is of similar spirit both the recommended estimate and its risk are given by an efficient computation typically stem from solutions to explicit convex optimization problems in addition in good situations we know in advance that the resulting risk whether large or small is nearly minimax optimal the explanation power of operational results is almost nonexisting as a compensation the scope of operational results is usually much wider than the one of analytical results for example the just cited result of donoho imposes no restrictions on a and x except for convexity and compactness of x in contrast all known to us analytical results on the same problem subject a x to severe structural restrictions in terms of the outlined descriptive operational dichotomy our paper is operational for instance in the problem of estimating linear functional of signal x affinely parameterising the parameters of distribution we started with we allow for quite general affine mapping x and for general enough signal set x the only restrictions on x being convexity and compactness technically the approach we use in this paper combines the machinery developed in and the techniques for the risk of an affine estimate developed in on the other hand this approach can also be viewed as extension of theoretical results on cramer tests supplied by in conjunction with techniques of which exploits the most attractive in our opinion feature of this line of research potential applicability to a wide variety of observation schemes and convex signal sets x the rest of the paper is organized as follows in section we following describe the families of distributions we are working with we present the estimate construction and study its general properties in section then in section we discuss applications to estimating linear forms of subgaussian distributions in section we apply the proposed construction to estimating quadratic forms of parameters of gaussian and discrete distributions to illustrate the performance of the proposed approach we describe results of some preliminary numerical experiments in which we compare the bounds on the risk of estimates supplied by our machinery with numerically computed lower bounds on the minimax risk to streamline the presentation all proofs are collected in the appendix to handle the case of estimates quadratic in observation we treat them as affine functions of quadratic lifting t of the actual observation notation in 
what follows rn and sn stand for the spaces of real vectors and real symmetric n n matrices respectively both spaces are equipped with the standard inner products xt y tr xy relation a b a b means that a b are symmetric matrices of the same size such that is positive semidefinite positive definite we denote s sn s and int s sn s we use matlab notation xk means vertical concatenation of matrices xk of the same width and xk means horizontal concatenation of matrices xk of the same height in particular for reals xk xk is a column vector with entries xk for probability distributions pk pk is the product distribution on the direct product of the corresponding probability spaces when pk we denote pk by p k or p k given positive integer d rd we denote by sg the family of all with parameters probability distributions that is the family of all borel probability distributions p on rd such that rd ln exp f t f t f t we use shorthand notation sg to express the fact that the probability distribution of random vector belongs to the family sg simple families of probability distributions let f int f be a closed convex set in rm symmetric the origin m be a closed convex set in some rn h f m r be a continuous function convex in h f and concave in following we refer to f m satisfying the above restrictions as to regular data regular data f m define the family s s f m of borel probability distributions p on such that m f ln t exp h p r h we say that distributions satisfying are simple given regular data f m we refer to s f m as to simple family of distributions associated with the data f m standard examples of simple families are supplied by good observation schemes as defined in and include the families of gaussian poisson and discrete distributions for other instructive examples and an algorithmic calculus of simple families the reader is referred to we present here three examples of simple families which we use in the sequel distributions let f rd m be a closed convex subset of the set gd rd and let h h ht in this case s f m contains all distributions p on rm with parameters from m m sg s f m in particular s f m contains all gaussian distributions n with quadratically lifted gaussian observations let v be a nonempty convex compact subset of this set gives rise to the family pv of distributions of quadratic liftings t of random vectors n with rd and our goal now is to build regular data such that the associated simple family of distributions contains pv to this end we select and such that for all v one has and ik where k k is the spectral norm under these restrictions the smaller are and the better observe that for all v we have i hence k t k k and we lose nothing when assuming from now on that the required regular data are given by the following proposition in the just described situation let z z h sd h and let f rd v z we set h h z h h h z h h h z ln det i k h t tr z h h tr h h h t h h h then i f form a regular data and for every rd v it holds for all h h f n t t ln eh h h t ii besides this function h h z is coercive in the convex argument whenever z hi hi f and k hi hi k as i we have hi hi z for proof see appendix quadratically lifted discrete observations consider a random variable rd taking values ei i d where ei are standard basic orths in rd we identify the probability distribution of such variable with a point from p the probabilistic simplex where prob ei let now k with drawn independently across k from and let k k k x k k i j we are about to point our regular data such that the associated simple 
family of distributions contains the distributions of the quadratic lifts k of random vectors k proposition let f sd x z sd zij j zij i j and let z d be a set of all positive semidefinite matrices from denote m x h z ln zij exp hij sd r i so that is on sd we set h z m z m then for m m k ln k exp tr k h in other words the simple family s f z d contains distributions of all random variables k with for proof see appendix estimating linear forms situation and goal consider the situation as follows given are euclidean spaces ef em ex along with regular data f ef m em f m r a nonempty set x contained in a convex compact set x ex an affine mapping x a x ex em such that a x m this is nothing more than a convenient way of thinking of a discrete random variable taking values in a set a vector g ex and a constant c specifying the linear form g x hg xi c ex r a tolerance let p be the family of all borel probability distributions on ef given a random observation p where p p is associated with unknown signal x known to belong to x association meaning that hf f ln e p f a x ef we want to recover the quantity g x given we call an estimate a borel function gb ef r if for all pairs x x p p satisfying it holds g g x if is the infimum of those for which estimate gb is then clearly gb is we refer to as the of the estimate gb the data g x and a f m b g x a f m min x p x p prob g g x r hf ln e p f a x f when g x a f m are clear from the context we shorten b g x a f m to b g in the setting of this section we are about to build in a computationally efficient fashion an affine estimate gb along with such that the estimate is the construction let us set f f f ef f f so that f is a nonempty convex set in ef and let f sup f a x g x f r f sup a x g x f r so that are convex functions on f recall that is and continuous on f m while a x is a compact subset of m these functions give rise to convex functions b ef r given by b f inf f ln f f b f inf f ln f f and to convex optimization problem n b opt min f f h io b f b f with our approach a presumably good estimate of g x and its risk are given by an optimal or nearly so solution to the latter problem the corresponding result is as follows from now on hu vi denotes the inner product of vectors u v belonging to a euclidean space what is this space it always will be clear from the context proposition in the situation of section let satisfy the relation then b f inf f ln f f max inf f f a x g x ln b f inf f ln f f max inf f a x g x ln b are convex furthermore a feasible solution to the system and the functions of convex constraints b f b f in variables f induces estimate gb b g x a f m of g x x x with at most relation and thus the risk bound clearly holds true when is a candidate solution to problem and h i b b b as a result by properly selecting we can make an upper bound on the of estimate arbitrarily close to opt and equal to opt when optimization problem is solvable for proof see appendix estimation from repeated observations assume that in the situation described in section we have access to k observations sampled independently of each other from a probability distribution p and are allowed to build our estimate based on these k observations rather than on a single observation we can immediately reduce this new situation to the previous one simply by redefining the data specifically given f ef m em f m r x x ex a g x hg xi c see section and a positive integer k let us replace f ef with f k z f efk ef ef and replace z k k p k f m r with f k fk k fi f m it is immediately 
seen that the updated data satisfy all requirements imposed on the data in section furthermore for all f k fk f k whenever a borel probability distribution p on ef and x x satisfy the distribution p k of sample k drawn from p and x are linked by the relation z x hf k k i k k ln e p ln ehfi i p f k a x efk ef i applying to our new data the construction from section we arrive at repeated observations version of proposition note that the resulting convex are symmetric permutations of the components fk of f k implying that we lose nothing when restricting ourselves with collections f k with equal to each other components it is convenient to denote the common value of these components f with these observations proposition becomes the statements as follows we use the assumptions and the notation from the previous section proposition in the situation described in section let satisfy the relation and let a b ef r positive integer k be given then functions b f inf f k ln f f inf f f a x g x k ln b f inf f k ln f f inf f a x g x k ln are convex and real valued furthermore let be a feasible solution to the system of convex constraints b f b f in variables f then setting gb k xk k we get an estimate of g x x x via independent observations p i k with at most meaning that whenever a borel probability distribution p is associated with x x in the sense of one has k k g k g x relation clearly holds true when is a candidate solution to the convex optimization problem n h io b b f b f opt min f f and b h i b b as a result properly selecting we can make an upper bound on the of estimate gb arbitrarily close to opt and equal to opt when optimization problem is solvable from now on if otherwise is not explicitly stated we deal with observations to get back to case it suffices to set k application estimating linear form of parameters of distributions situation we are about to apply construction form section in the situation where our observation is subgaussian with parameters affinely parameterized by signal x and our goal is to recover a linear function of x specifically consider the situation described in section with the data as follows f ef rd m em rd h m ht ht m h rd rd r so that s f m is the family of all distributions on rd x x ex rnx is a nonempty convex compact set and a x m x where a is matrix and m x is affinely depending on x symmetric d d matrix such that m x is when x x g x is an affine function on ex same as in section our goal is to recover the value of a given linear function g y g t y c at unknown signal x x via observation k with drawn independently across i from a distribution p which is associated with x which now means is with parameters ax a m x we refer to gaussian case as to the special case of the just described problem where the distribution p associated with signal x is exactly n ax a m x in the case in question m so that takes place and the left hand sides in the constraints are b f inf f t ax a f t m x f k ln g x o t ln f m x f f t ax a g x b f inf t ax a f t m x f k ln g x o ln f t m x f f t ax a g x thus system reads o ln f t m x f f t ax g x n o f max ln f t m x f f t ax g x at f max we arrive at the following version of proposition proposition in the situation described above given let be a feasible solution to the convex optimization problem h io n b f b f b opt min f f where o ln f t m x f f t ax g x at f o b f max ln f t m x f f t ay g y at b f max let us set i h b b b then the of the affine estimate k x gb f k k taken the data listed in the beginning of this section is at 
most t it is immediately seen that optimization problem is solvable provided that ker m x and an optimal solution to the problem taken along with i h b b yields the affine estimate k x t k with the data listed in the beginning of this section at most opt consistency we can easily answer the natural question when the proposed estimation scheme is consistent meaning that for every it allows to achieve arbitrarily small provided that k is large enough specifically if we denote g x g t x c from proposition it is immediately seen that a sufficient condition for consistency is the existence of rd such that ax g t x for all x x x or equivalently that g is orthogonal to the intersection of the kernel of a with the linear span of x x indeed under this assumption for every fixed we clearly have b implying that opt with b and opt given by the condition in question is necessary for consistency as well since when the condition is violated we have for properly selected x with g g making low risk recovery of g x x x impossible already in the case of zero noise observations those where the observation stemming from signal x x is identically equal to ax a direct product case further simplifications are possible in the direct product case where in addition to what was assumed in the beginning of section ex eu ev and x u v with convex compact sets u eu rnu and v ev rnv a x u v au a m v u v rd sd with m v for v v g x u v g t u c depends solely on u and it is immediately seen that in the direct product case problem reads t t t opt min a f g f g max ln f m v f f note that in the gaussian case with m x depending on x the above condition is in general not necessary for consistency since a nontrivial information on x and thus on g x can in principle be extracted from the covariance matrix m x which can be estimated from observations where h max ut t assuming ker m v the problem is solvable and its optimal solution gives rise to the affine estimate x t k g at g at c k i with opt in addition to the assumption that we are in the direct product case assume for the sake of simplicity that m v whenever v v in this case reads n o opt min max f v at f g f g ln f t m v f f whence taking into account that f v clearly is convex in f and concave in v while v is a convex compact set by theorem we get also opt max opt v min at f g f g ln f t m v f f now consider the problem of recovering g t u from observation i k independently of each other sampled from n au a m v where unknown u is known to belong to u and v v is known let v be the minimax of the recovery n o v inf n m v k k g k g t u g b where inf is taken over all borel functions gb rkd invoking proposition it is immediately seen that whenever one has qn v p opt v ln where qn s is the of the standard normal distribution since the family of all with parameters m v u u v v distributions on rd contains all gaussian distributions n au a m v induced by u v u v we arrive at the following conclusion proposition in the just described situation the minimax optimal riskopt g k inf b g b of recovering g t u from with parameters m v u v u random observations is within a moderate factor of the upper bound opt on the taken the same data of the affine estimate yielded by an optimal solution to namely p ln opt riskopt qn ln with the factor qn as it is worth mentioning that in a more general setting of good observation schemes described in the numerical illustration in this section we consider the problem of estimating a linear form of signal x known to belong to a given convex compact 
subset x via indirect observations ax affected by relative specifically our observation is sg ax m x where n x x x x rn xj j j n m x xj here a and j n are given matrices in other words we are in the situation where small signal results in low observation noise the linear form to be recovered from observation is g x g t x the entities g a and reals degree of smoothness noise intensity are parameters of the estimation problem we intend to process parameters g a are generated as follows g is selected at random and then normalized to have max g t x we consider the case of n d deficient observations the d nonzero singular values of a were set to i d where condition number is a parameter the orthonormal systems u and v of the first d left and respectively right singular vectors of a were drawn at random from rotationally invariant distributions positive semidefinite d d matrices are orthogonal projectors on randomly selected subspaces in rd of dimension in all experiments we deal with case k note that x possesses point whence m x m whenever x x as a result subgaussian distributions with matrix parameter m x x x can be thought also to have matrix parameter m one of the goals of the present experiment is to compare the risk of the affine estimate in the above model to its performance in the envelope model sg ax m where the fact that small signals result in observations is ignored we present in figure the results of the experiment in which for a given set of parameters d n and we generate random estimation problems collections g a j d for each problem we compute of two affine in estimates of g t x as yielded by optimal solution to the first for the problem described above the left boxplot in each group and the second for the aforementioned direct product envelope of the problem where the mapping c x m the right boxplot note the noise amplification x m x is replaced with x m effect the risk is about times the level of the observation noise and significant variability of risk across the experiments seemingly both these phenomena are due to deficient observation model n d combined with random interplay between the directions of coordinate axes in rm along these directions x becomes more and more thin and the orientation of the kernel of opt of the affine estimate constructed following the rules in section satisfies the bound opt ln riskopt ln where riskopt is the corresponding minimax risk figure empirical distribution of the of affine estimation over estimation problems d n for and in each group distribution of risks for the problem with sg ax m x on the left for the problem with sg ax m on the right quadratic lifting and estimating quadratic forms in this section we apply the approach in section to the situation where given an sample k rd with distribution px of depending on an unknown signal x x our goal is to estimate a quadratic functional q x xt qx ct x of the signal we consider two situations the gaussian case where px is a gaussian distribution with parameters affinely depending on x and discrete case where px is a discrete distribution corresponding to the probabilistic vector ax a being a given stochastic matrix our estimation strategy is to apply the techniques developed in section to quadratic liftings of actual observations in the gaussian case so that the resulting estimates are affine functions of s we first focus on implementing this program in the gaussian case estimating quadratic forms gaussian case in this section we focus on the problem as follows given are a nonempty bounded set 
u rm and a nonempty convex compact set v rk an affine mapping v m v rk sd which maps v onto convex compact subset v of an affine mapping u a u rm rd where a is a given d m matrix a functional of interest f u v u t q u q t v rm rk r where q and q are known m m symmetric matrix and vector respectively a tolerance we observe an sample k rd with gaussian distribution pu v of depending on an unknown signal u v known to belong to u v pu v n a u m v our goal is to estimate f u v from observation k the b g of a candidate estimate gb a borel function on rkd is defined as the smallest such that g k f u v u v u v k v k construction our course of actions is as follows we specify convex compact subset z such that u u u t z z z matrix sd and real such that and v and ik cf section we set x u v v u u t and x v u u t u u v v so that x x v z ex rk we select and set d d d h sd h f r ef r s m v bzb t em sd b a where being the m canonic basis vector of when adding to the above entities function as defined in we conclude by proposition that m f and form a regular data such that for all u v u v and h h f ln v exp h h h t i h h m v b u u t b t where the inner product on ef is defined as h h h g g i ht g tr hg so that h h h t i ht t observe that a x v u u t m v bzb t is an affine mapping which maps x into z z m and g x ex r g x tr qz q t v u t q u q t v is a linear functional on ex as a result of the above steps we get at our disposal entities ex em ef f m x x a g and participating in the setup described in section and it is immediately seen that these entities meet all the requirements imposed by this setup the bottom line is that the estimation problem stated in the beginning of this section reduces to the problem considered in section the result when applying to the resulting data proposition which is legitimate since in clearly satisfies we arrive at the result as follows proposition in the just described situation let us set t g v z b h h max inf h m v bzb v z b h h max v z inf t h m v bzb k ln g v z k ln d d b so that the functions h h r s r are convex furthermore whenever form a feasible solution to the system of convex constraints b h h b h h in variables h h rd sd r r setting gb k k t h k we get an estimate of the functional of interest f u v u t q u q t v via k independent observations n a u m v i k with not exceeding u v u v k n a u m v k u v gb k in particular setting for h h rd sd h i b h h b h h h i b h h b h h we obtain an estimate with not exceeding for proof see section remark in the situation described in the beginning of this section let a set w u be given and assume we are interested in recovering functional of interest at points u v w only when reducing the domain of interest to w we hopefully can reduce the of recovery assuming that we can point out a convex compact set w v z such that u v w v u u t it can be straightforwardly verified that in this case the conclusion of proposition remains valid when the set v z in is replaced with w and the set u v in is replaced with w this modification enlarges the feasible set of and thus reduces the attainable risk bound discussion when estimating quadratic forms from observations k with we applied literally the construction of section thus restricting ourselves with estimates affine in quadratic liftings of s as an alternative to such basic approach let us consider estimates which are affine in the full quadratic lifting k k k t of k thus extending the family of candidate estimates what is affine in is affine in but not vice versa unless k note that 
this alternative is covered by our approach all we need is to replace the original components d m v a of the setup of this section with their extensions kd m v diag m v m v z k v m v diag m v m v v v a a and set k to it is easily seen that such modification can only reduce the risk of the resulting estimates the price being the increase in design dimension and thus in computational complexity of the optimization problems yielding the estimates to illustrate the difference between two approaches consider the situation to be revisited in section where we are interested to recover the energy ut u of a signal u rm from observation u n where is unknown diagonal matrix with diagonal entries from the range and a priori information about u is that r for some known assume that cf section m ln where is a given reliability tolerance and that under these assumptions one can easily verify that in the case the of both the estimate t and of the estimate yielded by the proposed approach p are up to absolute constant factors the same as the optimal namely o r r m ln now let us look at the case k where we observe two independent copies and of observation here the of the naive estimate and of the estimate obtained by applying our basic approach with k are just by absolute constant factors better than in the case both these risks still are o in contrast to this an intelligent estimate has risk p o r m ln whenever r which is much smaller than r when m ln and p r ln it is easily seen that with the outlined alternative implementation our approach p also results in estimate with correct o r m ln consistency we are about to present a simple sufficient condition for the estimator suggested by proposition to be consistent in the sense of section specifically assume that v is a singleton such that m which allows to satisfy with m and same as allows to assume that f u v u t q u g x v z tr qz the first m columns of the d m matrix a are linearly independent the consistency of our estimation procedure is given by the following simple statement proposition in the just described situation and under assumptions given consider the estimate k x t gbk k k where h i b b bk and bk goes to as k are given by then the of g for proof see section numerical illustration direct observations the problem our first illustration is deliberately selected to be extremely simple given direct noisy observation of unknown signal u rm known to belong to a given set u we want to recover the energy ut u of u what we are interested in is the quadratic in estimate with as small on u as possible here is a given design parameter note that we are in the situation where the dimension d of the observation is equal to the dimension m of the signal underlying observation the details of our setup are as follows u is the spherical layer u u rm ut u where r r r r are given as a result the main ingredient of constructions in section the convex compact subset z of z containing all matrices u u t u u see can be specified as z z tr z n with matrix known to be diagonal with diagonal entries satisfying i d m with known and in terms the setup of section we are in the case where v v rm vi i m and m v diag vm the functional of interest is f u v ut u is given by with q im and q processing the problem it is easily seen that in the situation in question the construction in section boils down to the following we lose nothing when restricting ourselves with estimates of the form gb t with properly selected scalars and and are supplied by the convex optimization problem 
with just variables n h i o b b b min where b ln m max i hh max t ln b ln m max i i hh max t ln with quantity specifically the of a feasible solution to augmented by the i hb b b yields estimate with on u not exceeding the energy estimation problem where n im with known to belong to a given range is well studied in the literature available results investigate analytically the interplay between the dimension m of signal the range of noise intensity and the parameters r r and offer provably optimal up to absolute constant factors estimates for example consider the case with r and and assume for the sake of definiteness that m otherwise already the trivial identically zero estimate is near optimal and that we are in high dimensional regime m ln it p is well known that in this case the optimal up to absolute constant factor is m ln and is achieved again up to absolute constant factor at the estimate x b t it is easily seen that under the circumstances similar risk bound holds true for the estimate yielded by the optimal solution to a nice property of the proposed approach is that automatically takes care of the parameters and results in estimates with seemingly performance as is witnessed by the numerical results we present below numerical results in the experiments we are reporting on we compute for different sets of parameters m r r and in all experiments the attainable by the proposed estimators in the gaussian case the optimal values of the problem along with suboptimality ratios of such risks to the lower bounds on the best possible under circumstances to compute these lower bounds we use the following construction consider the problem of estimating u u u r r given observation n u with same as in section the optimal riskopt for this problem is defined as the infimum of the over all estimates now let us select somehow the r r and and let and be two distributions of observations as follows is the distribution of random vector where and are independent is uniformly distributed over the sphere and n im it is immediately seen that if there is no test which can decide on the hypotheses and via observation with total risk defined as the sum over our two hypotheses of probabilities to reject the hypothesis when it is true the quantity is a lower bound on the optimal riskopt in other words denoting by the density of we have z min riskopt rd now the densities are spherically symmetric whence denoting by the univariate density of the energy t of observation we have z z min min s s ds rd and we conclude that z min s s ds riskopt on a closest inspection is the convolution of two univariate densities representable by explicit formulas implying that given we can check numerically whether the premise in indeed takes place whenever this is the case the quantity is a lower bound on riskopt in our experiments we used a simple search strategy not described here aimed at crude maximizing this bound in and used the resulting lower bounds on riskopt to compute the suboptimality ratios in figures we present some typical simulation results illustrating dependence of risks on problem dimension m figure on ratio figure and on parameter figure different curves in each plot correspond to different values of the parameter r varying in other parameters being fixed we believe that quite moderate values of the optimality ratios presented in the figures these results are typical for a much larger series of experiments we have conducted attest a rather good performance of the proposed apparatus figure estimation risks as 
functions of problem dimension m and r different curves other parameters and left plot estimation risks right plot suboptimality ratios numerical illustration indirect observations the problem the estimation problem we address in this section is as follows our observations are p u where p is a given d m matrix with m d observations u rm is a signal known to belong to a given compact set u n is the observation noise is positive semidefinite d d matrix known to belong to a given convex compact set v the reader should not be surprised by the singular numerical spectrum of optimality ratios our lower bounding scheme was restricted to identify actual optimality ratios among the candidate values i figure estimation risks as functions of the ratio and r different curves other parameters m and left plot estimation risks right plot suboptimality ratios figure estimation risks as functions of and r different curves other parameters m and left plot estimation risks right plot suboptimality ratios our goal is to estimate the energy f u m of the signal given a single observation in our experiment the data is specified as follows we assume that u rm is a discretization of a smooth function x t of continuous argument t ui x mi i m and use in the role of u ellipsoid u rm with s selected u a natural version of the ball x x r to make x x t dt d m matrix p is of the form u dv t where u and v are randomly selected d d and m m orthogonal matrices and the d diagonal entries in diagonal d m matrix d are of the form i d the condition number of p is a design parameter the set v of allowed values of the covariance matrices is the set of all diagonal matrices with diagonal entries varying in with the noise intensity being a design parameter processing the problem our estimating problem clearly is covered by the setups considered in section in terms of these setups we specify as id v as v and m v as the identity mapping of sd onto itself the mapping u a u becomes the mapping u p u while the set z which should be a convex compact subset of the set z containing all matrices of the form u u t u u becomes the set z z tr zdiag s t s m as suggested by proposition linear in lifted observation t estimates of f u stem from the optimal solution to the convex optimization problem b h h b h h opt min h h m b given by as applied with k the resulting estimate is with b b t and the of the estimate is by opt problem is a saddle point problem and as such is beyond the immediate scope of the standard convex programming software toolboxes primarily aimed at solving convex minimization problems however applying conic duality one can easily eliminate in the inner maxima over v z ro arrive at the reformulation which can be solved numerically by cvx and this is how was processed in our experiments numerical results to quantify the performance of the proposed approach we present along with the upper risk bounds simple lower bounds on the best achievable under the circumstances the origin of these lower bounds is as follows let w u with t w kp and let where qn is the standard normal quantile qn p p then for w max w we have w w u and kp w p the latter due to the origin of implies that there is no test which decides on the hypotheses u w and u via observation p u n id with risk as an immediate consequence the quantity w w is a lower bound on the on u of a whatever estimate of we can now try to maximize the resulting lower risk bound over u thus arriving at the lower bound lwbnd max w on a closest inspection the latter problem is not a convex 
one which does not prevent us from building its suboptimal solution note that in our experiments even with fixed design parameters d m we still deal with families of estimation problems differing from each other by their sensing matrices p orientation of the system of right singular vectors of p with respect to the axes of u is random so that these matrices varies essentially from simulation to simulation which affects significantly the attainable estimation risks we display in figure typical results of our experiments we see that the theoretical upper bounds on the of our estimates while varying significantly with the parameters of the experiment all the time stay within a moderate factor from the lower risk bounds a b figure empirical distribution of the over random estimation problems a upper risk bound opt as in b corresponding suboptimality ratios estimation of quadratic functionals of a discrete distribution in this section we consider the situation as follows we are given an d m sensing matrix p a which is stochastic with columns belonging to the probabilistic simplex v rd v i vi and a nonempty closed subset u of along with a observation k with i k drawn independently across i from the discrete distribution where is an unknown probabilistic vector signal known to belong to u we always assume that k we treat a discrete distribution on set as a distribution on the d vertices ed of so that possible values of are basic orths ed in rd with ej our goal is to recover from observation k the value at of a given quadratic form f u ut qu t u construction observe that for u we have u uut where is the vector in rm this observation allows to rewrite f u as a homogeneous quadratic form f u ut q q t our goal is to construct an estimate gb k of f u specifically estimate of the form gb k tr k where k is the quadratic lifting of observation k cf k k k x k k i j k and h sm and r are the parameters of the estimate to this end we set x u uut with x uut u u and specify a convex compact subset x of the intersection of the symmetric matrix simplex sm see and the cone sm of positive m d semidefinite matrices such that x x ex s we put f ef s and m thus ax at m em sd by proposition f m and as defined in form a regular data such that setting m for all u u and h sd it holds k i t at ln e exp hh h auu m u h i h sd r h z m ln z exp m ij ij i j where hh wi tr hw is the frobenius inner product on sd observe that for x ex x a x axat is an affine mapping from x into m and setting g x xi ex r we get a linear functional on ex such that we ensure that g uut uut i f u the relation z m being obvious proposition combines with proposition to yield the following result proposition in the situation in question given let m m k and let h max axat tr sd r h max axat tr sd r b h b h inf h ln max inf axat tr ln max inf axat tr ln m inf h ln max inf axat tr ln max inf axat tr ln m m m b are real valued and convex on sm and every candidate solution to the convex the functions optimization problem h io n b h b h b opt min h h induces the estimate k tr k h b h b h b of the functional of interest via observation k with on u not exceeding u u k u k numerical illustration to illustrate the above construction consider the following problem we observe independent across k k realizations of discrete random variable taking values the distribution p of is linearly parameterized by signal u which itself is a probability distribution on discrete square m x pi ap rs urs i p here ai rs are known coefficients such that i ai rs for all r s now given 
two sets i and j consider the events i i and j j our objective is to quantify the deviation of these events the probability distribution on being u from independence specifically to estimate via observations the quantity x fij x urs urs urs r s r s r s which is a quadratic function of u in the experiments we report below this estimation was carried out via a straightforward implementation of the construction presented earlier in this section our setup was as follows we use d sensing matrix a corresponding to the observations is generated according to a d with d d matrix d being our control parameter d was selected at random by normalizing columns of a matrix with independent entries drawn from the uniform distribution on we set x x sd xrs s m x x xrs s which is the simplest convex outer approximation of the set uut u we use i j m d we present in figure the results of experiments for taking values in other things being equal the smaller the larger is the condition number cond a of the sensing matrix and thus the larger is the upper bound on the risk of our estimate the optimal value of note that the variation of fij over x is exactly so the maximal risk is it is worthy to note that simple if compared to much more involved results of bounds in proposition for laplace functional of u distribution result in fairly good approximations of the risk of our estimate cf the boxplots of empirical distributions of the estimation error in the right plot of figure we identify the m m discrete square with d which allows to treat a probability distribution u on as a vector from a b figure estimation of independence a upper risk bound value opt in of linear estimate as a function of condition number cond a data for k and b risk of linear estimation as function of k along with boxplots of empirical error distributions for simulations cond a references bickel and ritov estimating integrated squared density derivatives sharp best order of convergence estimates the indian journal of statistics series a pages vitesses maximales de des erreurs et tests optimaux zeitschrift wahrscheinlichkeitstheorie und verwandte gebiete sur un de minimax et son application aux tests probab math approximation dans les espaces et de l estimation zeitschrift wahrscheinlichkeitstheorie und verwandte gebiete model selection via testing an alternative to penalized maximum likelihood estimators in annales de l institut henri poincare b probability and statistics volume pages elsevier and massart estimation of integral functionals of a density the annals of statistics pages butucea and comte adaptive estimation of linear functionals in the convolution model and applications bernoulli butucea and meziani quadratic functional estimation in inverse problems statistical methodology cao nemirovski xie guigues and juditsky change detection via affine and quadratic detectors electronic journal of statistics donoho statistical estimation and optimal recovery the annals of statistics donoho and liu geometrizing rates of convergence ii the annals of statistics pages donoho and liu geometrizing rates of convergence iii the annals of statistics pages donoho liu and macgibbon minimax risk over hyperrectangles and implications the annals of statistics pages donoho and nussbaum minimax quadratic estimation of a quadratic functional journal of complexity efromovich and low on optimal adaptive estimation of a quadratic functional the annals of statistics efromovich and low adaptive estimates of linear functionals probability theory and related fields j 
fan on the estimation of quadratic functionals the annals of statistics pages gayraud and tribouley wavelet methods to estimate an integrated quadratic functional adaptivity and asymptotic law statistics probability letters goldenshluger juditsky and nemirovski hypothesis testing by convex optimization electronic journal of statistics grant and boyd the cvx users guide release http hasminskii and i ibragimov some estimation problems for stochastic differential equations in stochastic differential systems filtering and control pages springer and exponential inequalities with constants for of order two in stochastic inequalities and applications pages springer huang and j fan nonparametric estimation of quadratic regression functionals bernoulli i ibragimov and khas minskii estimation of linear functionals in gaussian noise theory of probability its applications i ibragimov nemirovskii and khas minskii some problems on nonparametric estimation in gaussian white noise theory of probability its applications juditsky and nemirovski nonparametric estimation by convex programming the annals of statistics juditsky and nemirovski hypothesis testing via affine detectors electronic journal of statistics sharp adaptive estimation of quadratic functionals probability theory and related fields klemela and a tsybakov sharp adaptive estimation of linear functionals annals of statistics pages laurent estimation of integral functionals of a density and its derivatives bernoulli laurent adaptive estimation of a quadratic functional of a density by model selection esaim probability and statistics laurent and massart adaptive estimation of a quadratic functional by model selection annals of statistics pages lepski some new ideas in nonparametric estimation arxiv preprint lepski and willer estimation in the convolution structure density model part i oracle inequalities arxiv preprint lepski and spokoiny optimal pointwise adaptive methods in nonparametric estimation the annals of statistics pages levit conditional estimation of linear functionals a problemy peredachi informatsii proofs from now on we use the notation z u uut proof of proposition proposition is nothing but proposition i to make the paper we reproduce the proof below we start with proving item i of proposition for any h rd and h sd such that i we have h h ln exp ht t n ln i exp ht t h ln det i ht h t i h h h ln det i t ht h i t h h t i h h observe that for h we have i h h so that implies that for all rd v and h h f h h t t h h ln det i h h h h h ht z p h h ln det i tr p h h z ln det i h h z we need the following lemma let be a d d symmetric positive definite matrix let and let v be a closed convex subset of such that v ik o cf let also ho h sd h then for all h h v ln det i h where h ln det i tr h k here k k is the spectral and k kf the frobenius norm of a matrix in addition h is continuous function on ho v which is convex in h h o and concave in fact affine in v proof for h ho and v fixed we have k k t k k k d h with d h for h ho we have used the fact that implies that kabkf kakkbkf a similar computation yields kf kf d h k noting besides this setting f x ln det x int r and equipping sd with the frobenius inner product we have x so that with and we have for properly selected and f i f i f i i f i h i f i hi h i i we conclude that f i f i tr ki i kf denoting by the eigenvalues of and noting that k max k k d h we get d h therefore the eigenvalues of i i satisfy d h whence ki i kf kf d h noting that kf max kf kf d h see we conclude that ki i kf d h d 
h and when substituting into we get f i f i tr d h d h furthermore because by the matrix d i satisfies kdk i d i dt dt dt z z consequently kf dt kf dt kf kf kf d h this combines with and the relation tr tr tr h to yield f i f i tr h d h and we arrive at it remains to prove that h is and continuous on ho the only component of this claim which is not completely evident is convexity of the function in h ho to see that it is indeed the case note that ln det is concave on the interior of the semidefinite cone function f u v is convex and nondecreasing in u v in the convex domain u v u v and the function k is obtained from f by convex substitution of variables h kf k mapping ho into combining and the origin of see we conclude that for all rd v and h h f rd ln exp ht t h h z to complete the proof of i all we need is to verify the claim that f is regular data which boils down to checking that f r is continuous and let us verify and continuity recalling that h f v r indeed is and continuous the verification in question reduces to checking that h h z is and continuous on rd z continuity and concavity in z being evident all we need to prove is that whenever z z the function h h h h z is convex in h h f rd by the schur complement lemma we have h h t h h ht g h h g g p h h h h g h h h implying that g is convex now since z due to z z we have h h h h h h h h h h g p h h tr zg and because g is convex so is the epigraph of as claimed item i of proposition is proved it remains to verify item ii of proposition stating that is coercive in h let v z z and hi hi rd with k hi hi k as i and let us prove that hi hi z looking at the expression for hi hi z it is immediately seen that all terms in this expression except for the terms coming from hi hi z remain bounded as i grows so that all we need to verify is that hi hi z as i observe that the sequence hi i is bounded due to hi implying that khi as i denoting by e the last basic orth of and taking into satisfy h for some positive due account that the matrices i d d hi to hi observe that t hi hi t hi hi hi hi hi hti hi hi ee ri t hi z z khi pi where and kri kf c khi as a result hi hi z tr zpi tr z khi eet ri khi tr zeet kri kf c khi kzkf z and the concluding quantity tends to as i due to khi i proof of proposition continuity and of and are obvious let us verify relations let us fix and let and k let us denote sk the set of all permutations of k and let k m x k sk m by the symmetry argument we clearly have m x x k n x x k where n is the number of permutations sk such that a particular pair i j i j k is met among the pairs k m comparing the total number of in the left and the right hand sides of the latter equality we get card sk m n k k which combines with the equality itself to imply that k k x k m x x k card sk m x k card sk let be the identity permutation of due to we have x k exp tr k k exp tr k card sk by the inequality y h k exp tr k sk because k are equally distributed k exp tr k m y by definition of k exp tr k m im h since are exp tr the distribution of the random variable clearly is pz so that ln exp tr ln ew exp tr w d x ln em hij z i the latter relation combines with to imply proof of proposition let us first verify the identities and the function f x f a x g x ln f x r is and continuous and x is compact hence by theorem b f inf f ln f f inf f f x inf f f x inf f f a x g x ln b is as required in as we know f is a continuous function on f so that convex on ef provided that the function is now let x and let be a subgradient of f f a taken at f for f ef and 
all such that f f we have f f a g ln a f g ln f i g we have used therefore f is below bounded on the set f f in addition b is and this set is nonempty since f contains a neighbourhood of the origin thus b convex on ef verification of and of the fact that f is a convex function on ef is completely similar now given a feasible solution to let us select somehow taking into account b we can find and such that the definition of f and ln f and ln implying that the collection is a feasible solution to we need the following statement lemma given let be a feasible solution to the system of convex constraints a b f f ln f f f ln f in variables f then the of the estimate gb is at most proof let satisfy the premise of lemma and let x x p satisfy we have g x g x b g g x e p f a x thus ln b g g x a x g x by definition of and due to x x by ln and we conclude that b g g x similarly b g g x e p e x a x e g x whence ln b g g x a x g x by definition of and due to x x by ln so that b g g x when invoking lemma we get g g x for all x x p p satisfying since can be selected arbitrarily close to gb indeed is a estimate proof of proposition under the premise of the proposition let us fix u u v v so that x v z u u u t denoting by p pu v the distribution of t with n a u m v and invoking we see that for just defined x p relation takes place applying proposition we conclude that k n a u m v k g k g x it remains to note that by construction it holds g x v z u q t qz u q t q u u t q t u t q u f u v the in particular part of proposition is immediate with and given by h h clearly satisfy proof of proposition by the columns of d m matrix b see are linearly independent so that we can find m d matrix c such that cb let us define rd sd from the relation c t qc o t where for d d matrix s s o is the matrix obtained from s by replacing the entry with zero let us fix setting h i bk bk and invoking proposition all we need to prove is that in the case of one has h i k bk b lim sup to this end note that in our current situation and simplify to h h z ln det i h h h h h b z p h h k b h h inf max z tr qz k ln h k b h h inf max z tr qz k ln h h t z b ht tr h t hence h i k bk b inf max tr tr ln inf max ln det i ln tr q p p inf max ln det i ln tr q tr b t b b tr b t t t z t t t tr b b by we have b t b b t c t qc j b where the only nonzero entry if any in the d d matrix j is due to the structure of b see we conclude that the only nonzero element if any in b t jb is and that t b cb t q cb q recall that cb now whenever z z one has whence tr b t tr q b tr q tr j implying that the quantity t in is zero provided consequently becomes h i b k b k inf max ln det i ln t tr b t h tr b t t b now for appropriately selected independent of k real c we have for c ln det i and tr t t b t t b tr b b for all z recall that z is bounded consequently given we can find large enough to ensure that and which combines with to imply that h i k bk b ln and follows
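The quadratic functional f_IJ introduced in the numerical illustration above measures how far a distribution u on the m x m discrete square is from making the event "row index in I" independent of the event "column index in J". As a minimal sketch, the NumPy snippet below evaluates that functional and a naive plug-in estimate computed from samples drawn directly from u. The plug-in baseline deliberately ignores the sensing matrix A and the lifted convex program that this section actually constructs, and the grid size, index sets, sample count, and all names are illustrative assumptions rather than values from the reported experiments.

```python
import numpy as np

# Independence-deviation functional from the numerical illustration above:
# f_IJ(u) = sum_{r in I, s in J} u_rs - (sum_{r in I, s} u_rs) * (sum_{r, s in J} u_rs),
# for a probability matrix u on the m x m discrete square.
def f_IJ(u, I, J):
    return u[np.ix_(I, J)].sum() - u[I, :].sum() * u[:, J].sum()

# Naive plug-in estimate from K draws taken directly from u (i.e. pretending the
# sensing matrix is the identity).  This is NOT the optimization-based estimate
# constructed in this section; it only illustrates the quantity being estimated.
def plugin_estimate(samples, m, I, J):
    u_hat = np.zeros((m, m))
    for r, s in samples:
        u_hat[r, s] += 1.0 / len(samples)
    return f_IJ(u_hat, I, J)

# Toy usage; the grid size, index sets, and sample count are assumed values.
rng = np.random.default_rng(0)
m, K = 8, 1000
u = rng.random((m, m)); u /= u.sum()
cells = rng.choice(m * m, size=K, p=u.ravel())
samples = [(c // m, c % m) for c in cells]
I, J = [0, 1, 2], [4, 5]
print(f_IJ(u, I, J), plugin_estimate(samples, m, I, J))
```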
10
apr stochastic variance reduction for nonconvex optimization sashank reddi sjakkamr carnegie mellon university ahmed hefny ahefny carnegie mellon university suvrit sra suvrit massachusetts institute of technology bapoczos carnegie mellon university alex smola alex carnegie mellon university original circulated date february abstract we study nonconvex problems and analyze stochastic variance reduced gradient svrg methods for them svrg and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent sgd but their theoretical analysis almost exclusively assumes convexity in contrast we prove rates of convergence to stationary points of svrg for nonconvex optimization and show that it is provably faster than sgd and gradient descent we also analyze a subclass of nonconvex problems on which svrg attains linear convergence to the global optimum we extend our analysis to variants of svrg showing theoretical linear speedup due to in parallel settings introduction we study nonconvex problems of the form n min f x fi x n where neither f nor the individual fi i n are necessarily convex just lipschitz smooth lipschitz continuous gradients we use fn to denote all functions of the form we optimize such functions in the incremental oracle ifo framework agarwal bottou defined below definition for f fn an ifo takes an index i n and a point x rd and returns the pair fi x x algorithm sgd gradientdescent svrg msvrg nonconvex o convex o gradient dominated o o o o n o min o log o n o min o n log fixed step size table table comparing the ifo complexity of different algorithms discussed in the paper the complexity is measured in terms of the number of oracle calls required to achieve an solution see definition here by fixed step size we mean that the step size of the algorithm is fixed and does not dependent on or alternatively t the total number of iterations the complexity of gradient dominated functions refers to the number of ifo calls required to obtain solution for a dominated function see section for the definition for sgd we are not aware of any specific results for gradient dominated functions also f f and k where is the initial point and is an optimal solution to are assumed to be constant for a clean comparison the results marked in red are the contributions of this paper ifo based complexity analysis was introduced to study lower bounds for problems algorithms that use ifos are favored in applications as they require only a small amount information at each iteration two fundamental models in machine learning that profit from ifo algorithms are i empirical risk minimization which typically uses convex models and ii deep learning which uses nonconvex ones the prototypical ifo algorithm stochastic gradient descent sgd has witnessed tremendous progress in the recent years by now a variety of accelerated parallel and faster converging versions are known among these of particular importance are variance reduced vr stochastic methods schmidt et johnson zhang defazio et which have delivered exciting progress such as linear convergence rates for strongly convex functions as opposed to sublinear rates of ordinary sgd robbins monro nemirovski et similar but not same benefits of vr methods can also be seen in smooth convex functions the svrg algorithm of johnson zhang is particularly attractive here because of its low storage requirement in comparison to the algorithms in schmidt et defazio et despite the meteoric rise of vr methods their analysis 
for general nonconvex problems is largely missing johnson zhang remark on convergence of svrg when f fn is locally strongly convex and provide compelling experimental results fig in johnson zhang however problems encountered in practice are typically not even locally convex let alone strongly convex the current analysis of svrg does not extend to nonconvex functions as it relies heavily on convexity for controlling the variance given the dominance of stochastic gradient methods in optimizing deep neural nets and other large nonconvex models theoretical investigation of faster nonconvex stochastic methods is much needed convex vr methods are known to enjoy the faster convergence rate of gradientdescent but with a much weaker dependence on n without compromising the rate like sgd however it is not clear if these benefits carry beyond convex problems prompting the central question of this paper for nonconvex functions in fn can one achieve convergence rates faster than both sgd and gradientdescent using an ifo if so then how does the rate depend on n and on the number of iterations performed by the algorithm perhaps surprisingly we provide an affirmative answer to this question by showing that a careful selection of parameters in svrg leads to faster convergence than both sgd and gradientdescent we use incremental gradient and stochastic gradient interchangeably though we are only interested in problems to our knowledge ours is the first work to improve convergence rates of sgd and gradientdescent for nonconvex optimization main contributions we summarize our main contributions below and also list the key results in table we analyze nonconvex stochastic variance reduced gradient svrg and prove that it has faster rates of convergence than gradientdescent and ordinary sgd we show that svrg is faster than gradientdescent by a factor of see table we provide new theoretical insights into the interplay between iteration complexity and convergence of nonconvex svrg see corollary for an interesting nonconvex subclass of fn called gradient dominated functions polyak nesterov polyak we propose a variant of svrg that attains a global linear rate of convergence we improve upon many prior results for this subclass of functions see section to the best of our knowledge ours is the first work that shows a stochastic method with linear convergence for gradient dominated functions we analyze nonconvex svrg and show that it provably benefits from specifically we show theoretical linear speedups in parallel settings for large sizes by using a of size b we show that nonconvex svrg is faster by a factor of b theorem we are not aware of any prior work on stochastic methods that shows linear speedup in parallel settings for nonconvex optimization our analysis yields as a byproduct a direct convergence analysis for svrg for smooth convex functions section we examine a variant of svrg called msvrg that has faster rates than both gradientdescent and sgd related work convex bertsekas surveys several incremental gradient methods for convex problems a key reference for stochastic convex optimization for min ez f x z is nemirovski et faster rates of convergence are attained for problems in fn by vr methods see defazio et johnson zhang schmidt et et zhang defazio et asynchronous vr frameworks are developed in reddi et agarwal bottou lan zhou study for convex problems shalevshwartz prove linear convergence of stochastic dual coordinate ascent when the individual fi i n are nonconvex but f is strongly convex they do not study 
the general nonconvex case moreover even in their special setting our results improve upon theirs for the high condition number regime nonconvex sgd dates at least to the seminal work robbins monro and since then it has been developed in several directions poljak tsypkin ljung bottou kushner clark in the nonsmooth setting sra considers proximal splitting methods and analyzes asymptotic convergence with nonvanishing gradient errors hong studies a distributed nonconvex incremental admm algorithm these works however only prove expected convergence to stationary points and often lack analysis of rates the first nonasymptotic convergence rate analysis for sgd is in ghadimi lan who show that sgd ensures in o iterations a similar rate for parallel and distributed sgd was shown recently in lian et gradientdescent is known to ensure in o iterations nesterov chap the first analysis of nonconvex svrg seems to be due to shamir who considers the special problem of computing a few leading eigenvectors for pca see also the follow up work shamir finally we note another interesting example stochastic optimization of locally functions hazan et wherein actually a o convergence in function value is shown background problem setup we say f is if there is a constant l such that x y k lkx yk x y rd throughout we assume that the functions fi in are so that x y k lkx yk for all i n such an assumption is very common in the analysis of methods here the lipschitz constant l is assumed to be independent of a function f is called convex if there is such that f x f y y x yi kx y rd the quantity is called the condition number of f whenever f is and convex we say f is convex when f is convex we also recall the class of gradient dominated functions polyak nesterov polyak where a function f is called dominated if for any x rd f x f x where is a global minimizer of f note that such a function f need not be convex it is also easy to show that a convex function is dominated we analyze convergence rates for the above classes of functions following nesterov ghadimi lan we use x to judge when is iterate x approximately stationary contrast this with sgd for convex f where one uses f x f or kx as a convergence criterion unfortunately such criteria can not be used for nonconvex functions due to the hardness of the problem while the quantities x and f x f or kx are not comparable in general see ghadimi lan they are typically assumed to be of similar magnitude throughout our analysis we do not assume n to be constant and report dependence on it in our results for our analysis we need the following definition definition a point x is called if x a stochastic iterative algorithm is said to achieve in t iterations if e xt where the expectation is over the stochasticity of the algorithm we introduce one more definition useful in the analysis of sgd methods for bounding the variance definition we say f fn has a gradient if x k for all i n and x rd nonconvex sgd convergence rate stochastic gradient descent sgd is one of the simplest algorithms for solving algorithm lists its pseudocode by using a uniformly randomly chosen with replacement index it from n sgd algorithm sgd input rd sequence for t to t do uniformly randomly pick it from n xt x end for uses an unbiased estimate of the gradient at each iteration under appropriate conditions ghadimi lan establish convergence rate of sgd to a stationary point of f their results include the following theorem q theorem suppose f has gradient let t where c f x and x is an optimal solution to then the 
iterates of algorithm satisfy r f f l t min e x k t for completeness we present a proof in the appendix note that our choice of step size requires knowing the total number of iterations t in advance a more practical approach is to use a t or a bound on ifo calls made by algorithm follows as a corollary of theorem corollary suppose function f has gradient then the ifo complexity of algorithm to obtain an solution is o as seen in theorem sgd has a convergence rate of o t this rate is not improvable in general even when the function is convex nemirovski yudin this barrier is due to the variance introduced by the stochasticity of the gradients and it is not clear if better rates can be obtained sgd even for convex f fn nonconvex svrg we now turn our focus to variance reduced methods we use svrg johnson zhang an algorithm recently shown to be very effective for reducing variance in convex problems as a result it has gained considerable interest in both machine learning and optimization communities we seek to understand its benefits for nonconvex optimization for reference algorithm presents svrg s pseudocode observe that algorithm operates in epochs at the end of epoch s a full gradient is calculated at the point requiring n calls to the ifo within its inner loop svrg performs m stochastic updates the total number of ifo calls for each epoch is thus m n for m the algorithm reduces to the classic gradientdescent algorithm suppose m is chosen to be o n typically used in practice then the total ifo calls per epoch is n to enable a fair comparison with sgd we assume that the total number of inner iterations across all epochs in algorithm is t also note a simple but important implementation detail as written algorithm requires storing all the iterates t m this storage can be avoided by keeping a running average with respect to t the probability distribution pi m algorithm attains linear convergence for strongly convex f johnson zhang for nonstrongly convex functions rates faster than sgd can be shown by using an indirect perturbation xiao zhang we first state an intermediate result for the iterates of nonconvex svrg to ease exposition we define t l for some parameters and to be defined shortly our first main result is the following theorem that provides convergence rate of algorithm theorem let f fn let cm and ct such that for t m define the quantity mint further let pi for i m and pm and let t be a multiple of then for the output xa of algorithm we have e xa where is an optimal solution to f f t algorithm svrg t m pi m input rd epoch length m step sizes s dt discrete probability distribution pi m for s to s do xsmp s g n for t to m do uniformly randomly pick it from n g t xt end forp m pi xi end for output iterate xa chosen uniformly random from t furthermore we can also show that nonconvex svrg exhibits expected descent in objective after every epoch the condition that t is a multiple of m is solely for convenience and can be removed by slight modification of the theorem statement note that the value above can depend on to obtain an explicit dependence we simplify it using specific choices for and as formalized below theorem suppose f fn let and m c and t is some multiple of then there exists universal constants such that we have the following in theorem and f f where is an optimal solution to the problem in and xa is the output of algorithm e xa by rewriting the above result in terms ifo calls we get the following general corollary for nonconvex svrg corollary suppose f fn then the ifo complexity 
of algorithm with parameters from theorem for achieving an solution is o n if ifo calls o n if corollary shows the interplay between step size and the ifo complexity we observe that the number of ifo calls is minimized in corollary when this gives rise to the following key results of the paper corollary suppose f fn let m c and t is some multiple of then there exists universal constants such that we have the in theorem and following e xa f f t where is an optimal solution to the problem in and xa is the output of algorithm corollary if f fn then the ifo complexity of algorithm with parameters in corollary to obtain an solution is o n note the rate of o in the above results as opposed to slower o t rate of sgd theorem for a more comprehensive comparison of the rates refer to section algorithm k t m pi m m input rd k epoch length m step sizes discrete probability distribution pi for k to k do xk svrg t m pi m end for output xk gradient dominated functions before ending our discussion on convergence of nonconvex svrg we prove a linear convergence rate for the class of dominated functions for ease of exposition assume that a property analogous to the high condition number regime for strongly convex functions typical in machine learning note that gradient dominated functions can be nonconvex theorem suppose f is dominated where then the iterates of algorithm with t e m c for all t m and pm and pi for all i m satisfy e xk here and are the constants used in corollary in fact for dominated functions we can prove a stronger result of global linear convergence theorem if f is dominated then with t e m c for t m and pm and pi for all i m the iterates of algorithm satisfy e f xk f f f here are as in corollary is an optimal solution an immediate consequence is the following corollary if f is dominated the ifo complexity of algorithm with parameters from theorem to compute an solution is o n log note that gradientdescent can also achieve linear convergence rate for gradient dominated functions polyak however gradientdescent requires o n log ifo calls to obtain an solution as opposed to o n log for svrg similar but not the same gains can be seen for svrg for strongly convex functions johnson zhang also notice that we did not assume anything except smoothness on the individual functions fi in the above results in particular the following corollary is also an immediate consequence corollary if f is convex and the functions fi are possibly nonconvex then the number of ifo calls made by algorithm with parameters from theorem to compute an solution is o n log recall that here denotes the condition number for a convex function corollary follows from corollary upon noting that convex function is dominated theorem generalizes the linear convergence result in johnson zhang since it allows nonconvex fi observe that corollary also applies when fi is strongly convex for all i n though in this case a more refined result can be proved johnson zhang finally we note that our result also improves on a recent result on sdca in the setting of corollary when the condition number is reasonably large a case that typically arises in machine learning more precisely for empirical loss minimization show that sdca requires o n log iterations when the fi s are possibly nonconvex but their sum f is strongly convex in comparison we show that algorithm requires o n log iterations which is an improvement over sdca when convex case in the previous section we showed nonconvex svrg converges to a stationary point at the rate o a natural 
question is whether this rate can be improved if we assume convexity we provide an affirmative answer for convex functions this yields a direct analysis not based on strongly convex perturbations for svrg while we state our results in terms of stationarity gap x for the ease of comparison our analysis also provides rates with respect to the optimality gap f x f see the proof of theorem in the appendix theorem if fi is convex for all i n pi for i m and pm then for algorithm we have f f t e xa where is optimal for and xa is the output of algorithm we now state corollaries of this theorem that explicitly show the dependence on n in the convergence rates corollary if m n and n in theorem then we have the following bound e xa l n f f t where is optimal for and xa is the output of algorithm the above result uses a step size that depends on for the convex case we can also use step sizes independent of the following corollary states the associated result corollary if m n and in theorem then we have the following bound e xa l n f f t where is optimal for and xa is the output of algorithm we can rewrite these corollaries in terms of ifo complexity to get the following corollaries corollary if fi is convex for all i n then the ifo complexity of algorithm with parameters from corollary to compute an solution is o n corollary if fi is convex for all i n then the ifo complexity of algorithm with parameters from corollary to compute solution is o these results follow from corollary and corollary and noting that for m o n the total ifo calls made by algorithm is o n it is instructive to quantitatively compare corollary and with a step size independent of n the convergence rate of svrg has a dependence that is in the order of n corollary but this dependence can be reduced to n by either carefully selecting a step size that diminishes with n corollary or by using a good initial point obtained by say running o n iterations of sgd we emphasize that the convergence rate for convex case can be improved significantly by slightly modifying the algorithm either by adding an appropriate strongly convex perturbation xiao zhang or by using a choice of m that changes with epoch zhu yuan however it is not clear if these strategies provide any theoretical gains for the general nonconvex case nonconvex svrg in this section we study the version of algorithm is a popular strategy especially in multicore and distributed settings as it greatly helps one exploit parallelism and reduce the communication costs the pseudocode for nonconvex svrg algorithm is provided in the supplement due to lack of space the key difference between the svrg and algorithm lies in lines to to use we replace line with sampling with replacement a it n of size b lines to are replaced with the following updates x g t t it t xt when b this reduces to algorithm is typically used to reduce the variance of the stochastic gradient and increase the parallelism lemma in section g of the appendix shows the reduction in the variance of stochastic gradients with size b using this lemma one can derive the equivalents of lemma theorem and theorem however for the sake of brevity we directly state the following main result for svrg theorem let n denote the following quantity n min l where cm ct l l for t m suppose m c and t is some multiple of then for the version of algorithm with size b there exists universal constants such b that we have the following n ln and e xa f f bt where is optimal for it is important this result with sgd for a batch size of b sgd obtains 
a rate of o dekel et obtainable by a simple modification of theorem specifically sgd has a b dependence on the batch size in contrast theorem shows that svrg has a much better dependence of on the batch size hence compared to sgd svrg allows more efficient more formally in terms of ifo queries we have the following result corollary if f fn then the ifo complexity of the version of algorithm with parameters from theorem and size b to obtain an solution is o n corollary shows an interesting property of svrg first note that b ifo calls are required for calculating the gradient on a of size b hence svrg does not gain on ifo complexity by using however if the b gradients are calculated in parallel then this leads to a theoretical linear speedup in multicore and distributed settings in contrast sgd does not yield an efficient strategy as it requires o ifo calls for achieving an solution li et thus the performance of sgd degrades with comparison of the convergence rates in this section we give a comprehensive comparison of results obtained in this paper in particular we compare key aspects of the convergence rates for sgd gradientdescent and svrg the comparison is based on ifo complexity to achieve an solution dependence on n the number of ifo calls of svrg and gradientdescent depend explicitly on in contrast the number of oracle calls of sgd is independent of n theorem however this comes at the expense of worse dependence on the number of ifo calls in gradientdescent is proportional to but for svrg this dependence reduces to for convex corollary and for nonconvex corollary problems whether this difference in dependence on n is due to nonconvexity or just an artifact of our analysis is an interesting open problem dependence on the dependence on or alternatively t follows from the convergence rates of the algorithms sgd is seen to depend as o on regardless of convexity or nonconvexity in contrast for both convex and nonconvex settings svrg and gradientdescent converge as o furthermore for gradient dominated functions svrg and gradientdescent have global linear convergence this speedup in convergence over sgd is especially significant when medium to high accuracy solutions are required is small assumptions used in analysis it is important to understand the assumptions used in deriving the convergence rates all algorithms assume lipschitz continuous gradients however sgd requires two additional subtle but important assumptions gradients and advance knowledge of t since its step sizes depend on t on the other hand both svrg and gradientdescent do not require these assumptions and thus are more flexible step size learning rates it is valuable to compare the step sizes used by the algorithms the step sizes of sgd shrink as the number of iterations t undesirable property on the other hand the step sizes of svrg and gradientdescent are independent of t hence both these algorithms can be executed with a fixed step size however svrg uses step sizes that depend on n see corollary and corollary a step size independent of n can be used for svrg for convex f albeit at cost of worse dependence on n corollary gradientdescent does not have this issue as its step size is independent of both n and t dependence on initial point and svrg is more sensitive to the initial point in comparison to sgd this can be seen by comparing corollary of svrg to theorem of sgd hence it is important to use a good initial point for svrg similarly a good can be beneficial to svrg moreover not only provides parallelism but also good 
theoretical guarantees see theorem in contrast the performance gain in sgd with is not very pronounced see section best of two worlds we have seen in the previous section that svrg combines the benefits of both gradientdescent and sgd we now show that these benefits of svrg can be made more pronounced by an appropriate step size under additional assumptions in this case the ifo complexity of svrg is lower than those of sgd and gradientdescent this variant of svrg msvrg chooses a step size based on the total number of iterations t or alternatively for our discussion below we assume that t theorem let f fn have gradients let max t is the f universal constant from corollary m c and c further let t be a multiple of m pm and pi for i then the output xa of algorithm satisfies e xa r n f f l f f o min t t where is a universal constant is the universal constant from corollary and is an optimal solution to corollary if f fn has gradients the ifo complexity of algorithm with parameters from theorem to achieve an solution is o min sgd svrg x t k grad n sgd svrg grad n grad n sgd svrg grad n sgd svrg grad n sgd svrg training loss sgd svrg x t k training loss training loss test error grad n figure neural network results for mnist and datasets the top row represents the results for dataset the bottom left and middle figures represent the results for mnist dataset the bottom right figure represents the result for an almost identical reasoning can be applied when f is convex to get the bounds specified in table hence we omit the details and directly state the following result corollary suppose fi is convex for i n and f has gradients then the ifo complexity of algorithm with step size max l t n m n and pi for i m and pm to achieve an solution is o min msvrg has a convergence rate faster than those of both sgd and svrg though this benefit is not without cost msvrg in contrast to svrg uses the additional assumption of gradients furthermore its step size is not fixed since it depends on the number of iterations t while it is often difficult in practice to compute the step size of msvrg theorem it is typical to try multiple step sizes and choose the one with the best results experiments we present our empirical results in this section for our experiments we study the problem of multiclass classification using neural networks this is a typical nonconvex problem encountered in machine learning experimental setup we train neural networks with one hidden layer of nodes and softmax output nodes we use for training we use and datasets for our experiments these datasets are standard in the neural networks literature the regularization is for and mnist and for the features in the datasets are normalized to the interval all the datasets come with a predefined split into training and test datasets we compare sgd the algorithm for training neural networks against nonconvex svrg the step size or learning rate is critical for sgd we set the learning rate of sgd using the popular schedule where and are chosen so that sgd gives the best http https performance on the training loss in our experiments we also use this results in a fixed step size for sgd for svrg we use a fixed step size as suggested by our analysis again the step size is chosen so that svrg gives the best performance on the training loss initialization initialization is critical to training of neural networks we use the p normalized initialization in glorot bengio where parameters are chosen uniformly p from ni no ni no where ni and no are the number of input 
and output layers of the neural network respectively for svrg we use n iterations of sgd for and minst and iterations of sgd for before running algorithm such initialization is standard for variance reduced schemes even for convex problems johnson zhang schmidt et as noted earlier in section svrg is more sensitive than sgd to the initial point so such an initialization is typically helpful we use of size in our experiments sgd with is common in training neural networks note that training is especially beneficial for svrg as shown by our analysis in section along the lines of theoretical analysis provided by theorem we use an epoch size m in our experiments results we report objective function training loss test error classification error on the test set and xt convergence criterion throughout our analysis for the datasets for all the algorithms we compare these criteria against the number of effective passes through the data ifo calls divided by this includes the cost of calculating the full gradient at the end of each epoch of svrg due to the sgd initialization in svrg and the svrg plots start from value of for and mnist and for figure shows the results for our experiment it can be seen that the xt for svrg is lower compared to sgd suggesting faster convergence to a stationary point furthermore the training loss is also lower compared to sgd in all the datasets notably the test error for is lower for svrg indicating better generalization we did not notice substantial difference in test error for mnist and see section h in the appendix overall these results on a network with one hidden layer are promising it will be interesting to study svrg for deep neural networks in the future discussion in this paper we examined a vr scheme for nonconvex optimization we showed that by employing vr in stochastic methods one can perform better than both sgd and gradientdescent in the context of nonconvex optimization when the function f in is gradient dominated we proposed a variant of svrg that has linear convergence to the global minimum our analysis shows that svrg has a number of interesting properties that include convergence with fixed step size descent property after every epoch a property that need not hold for sgd we also showed that svrg in contrast to sgd enjoys efficient attaining speedups linear in the size of the minibatches in parallel settings our analysis also reveals that the initial point and use of are important to svrg before concluding the paper we would like to discuss the implications of our work and few caveats one should exercise some caution while interpreting the results in the paper all our theoretical results are based on the stationarity gap in general this does not necessarily translate to optimality gap or low training loss and test error one criticism against vr schemes in nonconvex optimization is the general wisdom that variance in the stochastic gradients of sgd can actually help it escape local minimum and saddle points in fact ge et al add additional noise to the stochastic gradient in order to escape saddle points however one can reap the benefit of vr schemes even in such scenarios for example one can envision an algorithm which uses sgd as an exploration tool to obtain a good initial point and then uses a vr algorithm as an exploitation tool to quickly converge to a good local minimum in either case we believe variance reduction can be used as an important tool alongside other tools like momentum adaptive learning rates for faster and better nonconvex 
optimization references agarwal alekh and bottou leon a lower bound for the optimization of finite sums bertsekas dimitri incremental gradient subgradient and proximal methods for convex optimization a survey in sra nowozin wright ed optimization for machine learning mit press bottou stochastic gradient learning in neural networks proceedings of defazio aaron bach francis and simon saga a fast incremental gradient method with support for convex composite objectives in nips pp defazio aaron j caetano s and domke justin finito a faster permutable incremental gradient method for big data problems dekel ofer ran shamir ohad and xiao lin optimal distributed online prediction using the journal of machine learning research january issn ge rong huang furong jin chi and yuan yang escaping from saddle points online stochastic gradient for tensor decomposition in proceedings of the conference on learning theory colt pp ghadimi saeed and lan guanghui stochastic and methods for nonconvex stochastic programming siam journal on optimization doi glorot xavier and bengio yoshua understanding the difficulty of training deep feedforward neural networks in in proceedings of the international conference on artificial intelligence and statistics hazan elad levy kfir and shai beyond convexity stochastic optimization in advances in neural information processing systems pp hong mingyi a distributed asynchronous and incremental algorithm for nonconvex optimization an admm based approach arxiv preprint johnson rie and zhang tong accelerating stochastic gradient descent using predictive variance reduction in nips pp jakub and peter gradient descent methods jakub liu jie peter and martin gradient descent in the proximal setting kushner harold joseph and clark dean stochastic approximation methods for constrained and unconstrained systems volume springer science business media lan guanghui and zhou yi an optimal randomized incremental gradient method li mu zhang tong chen yuqiang and smola alexander j efficient training for stochastic optimization in proceedings of the acm sigkdd international conference on knowledge discovery and data mining kdd pp acm lian xiangru huang yijun li yuncheng and liu ji asynchronous parallel stochastic gradient for nonconvex optimization in nips ljung lennart analysis of recursive stochastic algorithms automatic control ieee transactions on nemirovski juditsky lan and shapiro a robust stochastic approximation approach to stochastic programming siam journal on optimization nemirovski arkadi and yudin problem complexity and method efficiency in optimization john wiley and sons nesterov yurii introductory lectures on convex optimization a basic course springer nesterov yurii and polyak boris cubic regularization of newton method and its global performance mathematical programming poljak bt and tsypkin ya pseudogradient adaptation and training algorithms automation and remote control polyak gradient methods for the minimisation of functionals ussr computational mathematics and mathematical physics january reddi sashank hefny ahmed sra suvrit poczos barnabas and smola alex j on variance reduction in stochastic gradient descent and its asynchronous variants in nips pp robbins and monro a stochastic approximation method annals of mathematical statistics schmidt mark roux nicolas le and bach francis minimizing finite sums with the stochastic average gradient shai sdca without duality corr shai and zhang tong stochastic dual coordinate ascent methods for regularized loss the journal of machine 
learning research shamir ohad a stochastic pca and svd algorithm with an exponential convergence rate shamir ohad fast stochastic algorithms for svd and pca convergence properties and convexity sra suvrit scalable nonconvex inexact proximal splitting in nips pp xiao lin and zhang tong a proximal stochastic gradient method with progressive variance reduction siam journal on optimization zhu zeyuan allen and yuan yang univr a universal variance reduction framework for proximal stochastic gradient method corr appendix a nonconvex sgd convergence rate proof of theorem q and theorem suppose f has gradient let t where c f x is an optimal solution to then the iterates of algorithm satisfy q l min e xt f x t proof we include the proof here for completeness please refer to ghadimi lan for a more general result the iterates of algorithm satisfy the following bound e f e f xt xt xt l kx xt t t e f x e x k e f xt e xt t e x k the first inequality follows from lipschitz continuity of the second inequality follows from the update in algorithm and since eit xt xt unbiasedness of the stochastic gradient the last step uses our assumption on gradient boundedness rearranging equation we obtain e xt t e f x f summing equation from t to t and using that is constant we obtain min e xt t t xt e kf xt t t e f x f x t f x f x t f f c lc the first step holds because the minimum is less than the average the second and third steps t are obtained from equation and the fact that f x f x respectively the final inequality follows upon using t by setting r f f in the above inequality we get the desired result b nonconvex svrg in this section we provide the proofs of the results for nonconvex svrg we first start with few useful lemmas and then proceed towards the main results lemma for ct suppose we have ct let and be chosen such that in equation the iterate in algorithm t satisfy the bound e t where e f ct for s s t t proof since f is we have e f i t e f xt xt l t using the svrg update in algorithm and its unbiasedness the right hand side above is further upper bounded by e f t t k kvt consider now the lyapunov function e f ct t t for bounding it we will require the following s e t k e xt e k t xt i t xt e t e i t t e t i h e t t t the second equality follows from the unbiasedness of the update of svrg the last inequality follows from a simple application of and young s inequality plugging equation and equation into we obtain the following bound k kvt s kxt k e f t t e h i s k e t kx k t t t xt k e f xt t e e t to further bound this quantity we use lemma to bound e so that upon substituting it in equation we see that e f t l e t t t e t l e t the second inequality follows from the definition of ct and thus concluding the proof proof of theorem theorem let f fn let cm and ct such that for t m define the quantity mint further let pi for i m and pm and let t be a multiple of then for the output xa of algorithm we have e xa f f t where is an optimal solution to proof since for t m using lemma and telescoping the sum we obtain e t rm this inequality in turn implies that e t e f f since cm pm and pi for i m e f where we used that rm m e f s s and that e f since as pm and pi for i m now sum over all epochs to obtain xx f f e t t t the above inequality used the fact that using the above inequality and the definition of xa in algorithm we obtain the desired result proof of theorem theorem suppose f fn let and m c and t is some multiple of then there exists universal constants such that we have the following in theorem and e xa f f 
where is an optimal solution to the problem in and xa is the output of algorithm l m proof for our analysis we will require an upper bound on we observe that where this is obtained using the relation ct and the fact that cm using the specified values of and we have n n the above inequality follows since and n using the above bound on we get l m l m l c l e wherein the second inequality follows upon noting that l is increasing for l and l l e here e is the euler s number now we can lower bound as l min t l where is a constant independent of the first inequality holds since ct decreases with the second inequality holds since a is upper bounded by a constant independent of n as e follows from equation b l and c e follows from equation by choosing independent of n appropriately one can ensure that for some universal constant for example choosing we have with substituting the above lower bound in equation we obtain the desired result proof of corollary corollary suppose f fn then the ifo complexity of algorithm with parameters from theorem for achieving an solution is o n if ifo calls o n if proof this result follows from theorem and the fact that m suppose then m o n however n ifo calls are invested in calculating the average gradient at the end of each epoch in other words computation of average gradient requires n ifo calls for every m iterations of the algorithm using this relationship we get o n in this case on the other hand when the total number of ifo calls made by algorithm in each epoch is n since m hence the oracle for calculating the average gradient per epoch is of lower order leading to o n ifo calls c proof of theorem theorem suppose f is dominated where then the iterates of algorithm with t e m c for all t m and pm and pi for all i m satisfy e xk here and are the constants used in corollary proof corollary shows that the iterates of algorithm satisfy e xk e f f t substituting the specified value of t in the above inequality we have e xk e f f e the second inequality follows from dominance of the function f proof of theorem theorem if f is dominated then with t e m c for t m and pm and pi for all i m the iterates of algorithm satisfy e f xk f f f here are as in corollary is an optimal solution proof the proof mimics that of theorem now we have the following condition on the iterates of algorithm e xk e f f however f is dominated so e xk e f xk f which combined with equation concludes the proof d convex svrg convergence rate proof of theorem theorem if fi is convex for all i n pi for i m and pm then for algorithm we have e xa f f t where is optimal for and xa is the output of algorithm proof consider the following sequence of inequalities e x k e kxt e e t i t e e t f f t e e f f t t e f f e e f f t t e f f e f f t the second inequality uses unbiasedness of the svrg update and convexity of f the third inequality follows from lemma defining the lyapunov function p s e kxsm e f f and summing the above inequality over t we get x e f f p s p t algorithm svrg input rd epoch length m step sizes s dt discrete probability distribution pi m size b for s to s do xsmp s g n for t to m do choose a p uniformly random with replacement it n of size b g t it xt b t xt end forp m pi xi end for output iterate xa chosen uniformly random from t this due is to the fact that p e f m x k e f e m x k x e f f t the above equality uses the fact that pm and pi for i summing over all epochs and telescoping we then obtain e f xa f p the inequality also uses the definition of xa given in alg on this inequality 
we use lemma which yields e xa f xa f f f t it is easy to see that we can obtain convergence rates for e f xa f from the above reasoning this leads to a direct analysis of svrg for convex functions e minibatch nonconvex svrg proof of theorem the proofs essentially follow along the lines of lemma theorem and theorem with the added complexity of we first prove few intermediate results before proceeding to the proof of theorem lemma suppose we have rt e f ct t t ct t b b for s s and t m and the parameters and are chosen such that l then the iterates in the version of algorithm algorithm with t size b satisfy the bound e t rt l proof using essentially the same argument as the proof of lemma until equation we have e f t t t e t e t we use lemma in order to bound e in the above inequality substituting it in equat tion we see that e f t l e t h i tb e tb t l e rt t the second inequality follows from the definition of ct and rt thus concluding the proof our intermediate key result is the following theorem that provides convergence rate of svrg theorem let n denote the following quantity n min l suppose and for all t m cm ct tb tb for t m and n further let pm and pi for i then for the output xa of version of algorithm with size b we have e xa f f t where is an optimal solution to proof since for t m using lemma and telescoping the sum we obtain e t rm this inequality in turn implies that e t e f f where we used that rm e f since cm pm and pi for i m m e f s s and that e f since as pm and pi for i m now sum over all epochs and using the fact that we get the desired result we now present the proof of theorem using the above results theorem let n denote the following quantity n min l where cm ct l l for t m suppose m c and t is some multiple of then for the version of algorithm with size b there exists universal constants such b that we have the following n ln and e xa f f bt where is optimal for proof of theorem we first observe that using the specified values of and we obtain b b b b n n n the above inequality follows since and n for our analysis we will require the following bound on l m bl m l e wherein the first equality holds due to the relation ct tb tb and the inequality follows upon again noting that l is increasing for l and l now we can lower bound n as n min l t l where is a constant independent of the first inequality holds since ct decreases with the second one holds since a is upper bounded by a constant independent of n as due to equation b l as b and c e again due to equation and the fact b by choosing an appropriately small constant independent of n one can ensure that n for some universal constant for example choosing we have n with substituting the above lower bound in theorem we get the desired result f msvrg convergence rate proof of theorem theorem let f fn have gradients let q max t is the f universal constant from corollary m c and c further let t be a multiple of m pm and pi for i then the output xa of algorithm satisfies e xa r n f f l f f o min t t where is a universal constant is the universal constant from corollary and is an optimal solution to proof first we observe that the step size is chosen to be max t where r f f suppose we obtain the convergence rate in corollary now lets consider the case where t in this case we have the following bound e e t e t e t the first inequality follows from lemma with r the second inequality follows from a gradient property of f and b the fact that for a random variable e e e the rest of the proof is along exactly the lines as in 
theorem this provides a convergence rate similar to theorem more specifically using step size t we get r f f l e kf xa k t the only thing that remains to be proved is that with size choice of max t the minimum of two bounds hold consider the case t in this case we have the following q l f x t p f f f ln t max where is the constant in corollary this inequality holds since t rearranging the above inequality we have r f f l f f t t in this case the left hand side of the above inequality is precisely the bound obtained by using step size t see equation similarly when t the inequality holds in the other direction using these two observations we have the desired result g key lemmatta lemma for the intermediate iterates computed by algorithm we have the following e e t t proof the proof simply follows from the proof of lemma with it it we now present a result to bound the variance of svrg lemma let be computed by the version of algorithm algorithm with t size b then e t t b e kxt proof for the ease of exposition we use the following notation x t it we use the definition of to get t e e t e t t e t x e e t t b it the first inequality follows from lemma with r and the fact that e t from the above inequality we get e t e t t b e t t b the first inequality follows from the fact that the indices it are drawn uniformly randomly and independently from n and noting that for a random variable e e e the last inequality follows from of fit h experiments figure shows the remaining plots for mnist and datasets as seen in the plots there is no significant difference in the test error of svrg and sgd for these datasets sgd svrg grad n sgd svrg test error sgd svrg x t k test error grad n grad n figure neural network results for mnist and the leftmost result is for mnist the remaining two plots are of i other lemmas we need lemma for our results in the convex case lemma johnson zhang let g rd r be convex with continuous gradient then x y g x g y y x yi for all x y rd proof consider h x g x g y y x yi for arbitrary y rd observe that is also continuous note that h x since h y and y or alternatively since h defines a bregman divergence from which it follows that min h x x min h x x h x x k x k rewriting in terms of g we obtain the required result lemma bounds the variance of svrg for the convex case please refer to johnson zhang for more details lemma johnson zhang suppose fi is convex for all i n for the updates in algorithm we have the following inequality e f f f f t proof the proof follows upon observing the following s e e t k t t f f f f t the first inequality follows from and young inequality the second one from e e e and the third one from lemma lemma for random variables zr we have e zr re kzr
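For concreteness, the mini-batch SVRG procedure analyzed in the preceding appendices can be sketched in code. This is a minimal, self-contained illustration on a toy least-squares objective, not the authors' implementation; the names (LeastSquares, grad_fi, run_svrg) and the choice of returning the last iterate are assumptions introduced here for brevity. The epoch structure mirrors the algorithm described above: a full gradient is recomputed at each snapshot point, and every inner step uses the variance-reduced direction formed from the mini-batch gradients at the current iterate and at the snapshot.

#include <cstdlib>
#include <vector>

// Toy problem: f(x) = (1/n) sum_i 0.5*(a_i^T x - b_i)^2, so grad f_i(x) = (a_i^T x - b_i) a_i.
struct LeastSquares {
  std::vector<std::vector<double>> A;  // n rows of dimension d
  std::vector<double> b;               // n targets
  std::vector<double> grad_fi(int i, const std::vector<double>& x) const {
    double r = -b[i];
    for (size_t j = 0; j < x.size(); ++j) r += A[i][j] * x[j];
    std::vector<double> g(x.size());
    for (size_t j = 0; j < x.size(); ++j) g[j] = r * A[i][j];
    return g;
  }
};

// Mini-batch SVRG: S epochs of length m, step size eta, mini-batch size bsz.
// The next epoch's snapshot is taken to be the last inner iterate (one common choice).
std::vector<double> run_svrg(const LeastSquares& P, std::vector<double> x,
                             int S, int m, double eta, int bsz) {
  const int n = static_cast<int>(P.b.size());
  const size_t d = x.size();
  for (int s = 0; s < S; ++s) {
    std::vector<double> snap = x;           // snapshot point for this epoch
    std::vector<double> gfull(d, 0.0);      // full gradient at the snapshot
    for (int i = 0; i < n; ++i) {
      std::vector<double> gi = P.grad_fi(i, snap);
      for (size_t j = 0; j < d; ++j) gfull[j] += gi[j] / n;
    }
    for (int t = 0; t < m; ++t) {
      std::vector<double> v = gfull;        // variance-reduced direction
      for (int k = 0; k < bsz; ++k) {
        int i = std::rand() % n;            // sample i_t uniformly with replacement
        std::vector<double> gx = P.grad_fi(i, x);
        std::vector<double> gs = P.grad_fi(i, snap);
        for (size_t j = 0; j < d; ++j) v[j] += (gx[j] - gs[j]) / bsz;
      }
      for (size_t j = 0; j < d; ++j) x[j] -= eta * v[j];
    }
  }
  return x;
}

Note that in the analysis above the output x_a is an iterate chosen at random (or according to the distribution p_i), not the final iterate; the sketch returns the last iterate only to keep the code short.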
may automating embedded analysis capabilities and managing software complexity in multiphysics simulation part ii application to partial differential equations roger pawlowski eric phipps andrew steven owen christopher siefert and matthew staten sandia national april keywords generic programming templating operator overloading automatic differentiation partial differential equations finite element analysis optimization uncertainty quantification abstract a generic programming approach was presented in a previous paper that separates the development effort of programming a physical model from that of computing additional quantities such as derivatives needed for embedded analysis algorithms in this paper we describe the implementation details for using the generic programming approach for simulation and analysis of partial differential equations pdes we detail several of the hurdles that we have encountered and some of the software infrastructure developed to overcome them we end with a demonstration where we present shape optimization and uncertainty quantification results for a pde application introduction computational science has the potential to provide much more than numerical solutions to a set of equations the set of analysis opportunities beyond simulation include parameter studies stability analysis optimization and uncertainty quantification these capabilities demand more from the application code than required for a single simulation typically in the form of extra derivative information in addition computational design and analysis will often entail modification of the governing equations such as refinement of a model or a hierarchy of fidelities in our previous paper we described the generic programming tbgp approach that paper provides the conceptual framework upon which this paper builds and thus is a prerequisite for the work described here we showed how assembly and templatebased automatic differentiation technology can work together to deliver a flexible assembly engine corresponding author sandia national laboratories numerical analysis and applications department po box albuquerque new mexico usa tel fax agsalin sandia national laboratories is a laboratory managed and operated by sandia corporation a wholly owned subsidiary of lockheed martin corporation for the department of energy s national nuclear security administration under contract where model equations can be rapidly composed from basic building blocks and where only the residual needs to be explicitly programmed the approach is based on templating of the scalar operations within a simulation and instantiation of this template code on various data types to effect the code transformations needed for embedded analysis through operator overloading often application of operator overloading in this manner is assumed to introduce significant runtime overhead into the simulation however we have demonstrated that careful implementation of the overloaded operators using techniques such as expression templates can completely eliminate this overhead this results in a single templated code base that must be developed tested and that when combined with appropriate seeding and extracting of these speciallydesigned overloaded data types see and section for definitions of these terms allows all manner of additional quantities to be generated with no additional software development time in this paper we extend the description of this approach to the simulation and analysis of partial differential equations pdes as discussed in 
the previous paper a number of projects have implemented embedded analysis capabilities that leverage a domain specific language specifically for finite elements the fenics and sundance projects have demonstrated this capability with respect to derivative evlauation pdes provide additional challenges with regards to data structures and scalability to large systems in this paper we deal specifically with a galerkin finite element approach though the approach will follow directly to other assemblies and by analogy to assemblies in section we discuss where the approach begins and ends and how it relates to the global and local data structures in section we present many details of the approach for finite element assembly in particular the seed compute and extract phases section addresses some more advanced issues that we have dealt with in our codes that use this approach specifically this includes the infrastructure for exposing model parameters as needed for continuation bifurcation optimization and uncertainty quantification approaches for dealing with a templated code stack and approaches for dealing with code that can not be templated finally in section we demonstrate the whole process on an example pde application the sliding electromagnetic contact problem we show results for shape optimization and embedded uncertainty quantification critical to the main message of this paper is the fact that the infrastructure for computing the extra quantities needed for these analysis capabilities has been implemented independently from the work of implementing the pde model this infrastructure includes the seed and extract phases for the approach it also includes all of the solver libraries that have been implemented in the trilinos framework such as the linear nonlinear transient optimization and uq solvers once in place application codes for new pdes can be readily generated born with analytic derivatives and embedded analysis capabilities the novel interface exposed to computational scientists by allowing for templated data types to be passed through the equation assembly has tremendous potential while this interface has been exploited for derivatives operation counting and polynomial propagation we expect that developers will find innovative ways to exploit this interface beyond what we currently imagine we note that transforming a legacy implementation to use templates in this manner does involve significant effort and thus we would consider this approach most appropriate for new development efforts however the transformations necessary are just type specifications in function and variable declarations approach for finite element codes in the first paper in this series we explained the generic programming approach and included an illustrative demonstration on how it can be applied to an ode problem in this section we present the basic details on how this approach is used in the context of pde applications some of our implementation details are restricted to discretization strategies with assembly kernels such as finite element fem and control volume finite element cvfem methods some details of the approach would need to be adapted for discretizations such as finite difference methods or integral equations or for methods extending the approach from odes to pdes gives rise to many issues the core design principle is still the same that the evaluation of the equations is separated into three phases seed compute and extract the seed and extract phases need to be specialized for each template type 
where extra information in the data types such as derivative information must be initialized and retrieved the compute phase where the equations are implemented can be written on a fully generic fashion there are also issues with regard to data structures sparse matrices parallelism the use of discretization libraries and the potential dependency on libraries for property data these issues will be addressed in the following sections assembly a primary issue that arises when using the generic programming tbgp approach for pdes is the sparsity of the derivative dependencies the automatic differentiation approach to computing the jacobian matrix using the sacado package requires all relevant variables to be a sacado forward automatic differentiation data type which includes a dense array of partial derivatives with respect to the independent variables in the problem as problem sizes can easily extend into to the millions and beyond yet nonzero entries per row stay bounded at o it is not feasible to adopt the same approach a second issue is the requirement for the ability to run the codes on parallel architectures adding layers within the ad infrastructure would also be challenging these two issues are circumvented by invoking the generic programming at a local level for fem methods this is the single element the entire pde assembly phase is performed by summing contributions over individual elements within each element it is typically not a bad assumption that the local jacobian often referred to as the element stiffness matrix is dense so for jacobian matrices the ad is performed at the element level where the array of partial derivatives is sized to be the number of degrees of freedom in an element the dense contributions to each row of the matrix is subsequently scattered to the global sparse matrix structure similarly other quantities that can be computed with the generic programming approach can also be calculated element by element and summed into a global data the choice of implementing the generic programming at a local level also nullifies the second issue to do with distributed memory parallelism in a typical distributed memory implementation information from neighboring elements often called ghost overlap or halo data is the templating infrastructure and loop falls below the messagepassing layer in our implementation no communication is performed within the templated code note that most ad tools including sacado compute the residual along with the jacobian allowing these quantities to be computed simultaneously in general evaluation of the nth derivative also involves simultaneous evaluation of derivatives of order up to n as well table embedded analysis algorithms require a variety of quantities to be computed in the pde assembly this table shows a list of linear algebra quantities that can be computed as well as the required inputs in this table x is the solution vector is the time derivative of x v is one or more vectors chosen by the analysis algorithm p is one or more system parameters for is one or more random variables f is the residual vector of the discretized pde system and f is the stochastic expansion of the residual vector evaluation residual steady jacobian transient jacobian directional derivative sensitivity dx stochastic galerkin residual stochastic galerkin jacobian input vector s x x x x v x other input p p p p p x v x x p output vector f output matrix df dx df dx df dp df dx v f f df dx we note that the local approach is not the only solution to these problems 
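Before turning to these global alternatives (discussed next), the element-local idea described above can be made concrete with a small, self-contained sketch. The forward-mode scalar type below (Fad) is a deliberately simplified stand-in for a Sacado-style overloaded type, and the element residual, connectivity arrays, and printed scatter are hypothetical; the point is only to show a dense derivative array sized to the element degrees of freedom being seeded to the identity, propagated through an element residual written on the generic scalar type, and extracted into global sparse-matrix entries.

#include <array>
#include <cstdio>

// Simplified forward-AD scalar with a dense derivative array of length N
// (a stand-in for a Sacado-like type; only the operators used below are provided).
template <int N>
struct Fad {
  double val = 0.0;
  std::array<double, N> dx{};  // partials w.r.t. the N element unknowns
};
template <int N> Fad<N> operator+(Fad<N> a, const Fad<N>& b) {
  a.val += b.val; for (int i = 0; i < N; ++i) a.dx[i] += b.dx[i]; return a;
}
template <int N> Fad<N> operator*(double c, Fad<N> a) {
  a.val *= c; for (int i = 0; i < N; ++i) a.dx[i] *= c; return a;
}

int main() {
  constexpr int kDof = 2;              // unknowns per element (two-node 1D element)
  const double x_global[4] = {1.0, 2.0, 3.0, 4.0};
  const int connectivity[3][kDof] = {{0, 1}, {1, 2}, {2, 3}};
  const double K[kDof][kDof] = {{1.0, -1.0}, {-1.0, 1.0}};  // element "stiffness" kernel

  for (int elem = 0; elem < 3; ++elem) {
    // Gather + seed: copy global values into local storage and seed dx with the identity.
    Fad<kDof> u[kDof];
    for (int a = 0; a < kDof; ++a) {
      u[a].val = x_global[connectivity[elem][a]];
      u[a].dx[a] = 1.0;
    }
    // Compute: the element residual is written once on the generic scalar type.
    Fad<kDof> r[kDof];
    for (int a = 0; a < kDof; ++a)
      for (int b = 0; b < kDof; ++b) r[a] = r[a] + K[a][b] * u[b];
    // Extract + scatter: the dense element Jacobian becomes global (row, col, value) entries.
    for (int a = 0; a < kDof; ++a)
      for (int b = 0; b < kDof; ++b)
        std::printf("J(%d,%d) += %g\n",
                    connectivity[elem][a], connectivity[elem][b], r[a].dx[b]);
  }
  return 0;
}

Because the derivative array is only as long as the element's degrees of freedom, the per-element Jacobian stays dense and cheap even though the assembled global matrix is sparse.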
through the use of sparse derivative arrays or compression techniques see for an overview of both of these approaches and references to the relevant literature automatic differentiation can be applied directly at the global level furthermore message passing libraries for distributed memory parallelism can be augmented to support communication of derivative quantities however due to the extra level of indirection introduced the use of sparse derivative arrays can significantly degrade performance moreover compression techniques require first computing the derivative sparsity pattern and then solving an optimization problem to compress the sparse derivative into a nearly dense one in practice only approximate solutions to this optimization problem can be attained however the solution to this problem is in fact known a priori it is precisely equivalent to the local approach assuming the element derivative is dense thus we have found the local approach to be significantly simpler than a global one particularly so as pde discretization software tools that support templated data types have been developed such as the intrepid package in trilinos data structures the purpose of the pde assembly engine is to fill linear algebra objects primarily vectors and sparse matrices these are the data structures used by the solvers and analysis algorithms for instance a newton based solver will need a residual vector and a jacobian matrix a algorithm will need a directional derivative an explicit time integration algorithm will need the forcing vector f x polynomial chaos propagation creates a vector of vectors of polynomial coefficients a sensitivity solve computes multiple vectors or a single multivector or dense column matrix of derivatives with respect to a handful of design parameters the input to the pde assembly is also vectors the solution vector x a vector of design parameters p and coefficient vectors for polynomial expansions of parameters it is critical to note that the tbgp machinery is not applied to the linear algebra structures used by the solvers and analysis algorithms none of the or expression templating infrastructure comes into play at this level the tbgp is applied locally on a single element in the assembly process used to fill the linear algebra data structures for example in our implementations the vectors and matrices are objects from the epetra or tpetra libraries these are convenient because of their support for parallelism and their compatibility with all the solvers in trilinos both linear solvers and analysis algorithms however none of the subsequent implementation of the tbgp code is dependent on this choice inside the pde assembly for finite element codes it is natural to have storage layout all of the computations of the discretized pde equations operate on arrays mdarrays of data which can be accesses with local indexing local nodes local quadrature points local equation number the current mdarray domain model is specified and implemented in the shards package in trilinos while the tbgp computations occur locally within an element the assembly of element contributions to the linear algebra objects is done on local blocks of elements called worksets a workset is a homogeneous set of elements that share the same local bookkeeping and material information while all the computations within each element in a workset are independent the ability to loop over a workset amortizes the overhead of function calls and gives flexibility to obtain speedups through vectorization cache 
utilization threading and based parallelism by restricting a workset of elements to be homogeneous we can avoid excessive conditional if tests or indirect addressing within the workset loops the number of elements in a workset ne can be chosen based on a number of criteria including runtime performance optimization or memory limitations the other dimensions of the mdarrays can include number of local nodes nn number of quadrature points nq number of local equations or unknowns neq and number of spatial dimensions nd for instance a nodal basis function mdarray has dimensions ne nn nq while the gradient of the solution vector evaluated at quadrature points is dimensioned ne nq neq nd all of the mdarrays are templated on the scalar data type called scalart in our code examples depending on what specific scalar type they are instantiated with they will not only hold the value but can also hold other information such as the derivatives in the case of jacobian evaluations sensitivities or polynomial chaos coefficients at this point we hope the reader has an understanding of the generic programming approach from the previous paper with the paradigm this current section has motivated the application of the tbgp approach at the local or element level and has defined the distinction between global linear algebra objects matrices and vectors that span the mesh and typically are of double data type and mdarrays data structures of quantities that are templated on the scalar type with this foundation the main concept in this paper can be now presented in the following section template based element assembly the generic programming approach requires a seed phase where the scalar data types are initialized appropriately as described above in section there are different data structures that need to exist in the solution phase from the assembly phase notably a gather routine is needed to pull in global information such as the solution vector from the nonlinear solver to the local element data structures such as the solution values at local nodes in an element in our design we perform the gather and seed operations in the same routine when we pull global data into a local data storage we not only copy it into local storage but also seed the scalar data types as needed the seeding is dependent on the scalar data type so the gather operation must be template specialized code for example for a jacobian evaluation the partial derivative array associated with the solution vector is seeded with the identity matrix the inverse is also true at the end of the pde assembly there are contributions to the global residual vector and depending on the scalar type information for the jacobian polynomial chaos expansion or other evaluation types contained in the data structure as well these quantities need to be extracted from these data structures as well as scattered back from the local to global data storage containers we combine the scatter and extract operations into a single step which again require code for the jacobian example the derivative array associated with the residual entries are rows of the element stiffness matrix the compute phase operates solely on local mdarray data structures with data templated on the scalar type this phase can be written entirely on the generic template type we have attempted to capture this concept schematically in figure the phase must take global data and depending on the template type seed the local arrays appropriately the compute phase broken into five distinct evaluations in 
this cartoon the blue boxes performs the element level finite element calculations for the specific pdes and is written just on the generic template type the uniqueness of the gather coordinates box will be addressed later in section the phase takes the results of the assembly and loads the data into the appropriate global quantities as dictated by the specific evaluation type the execution of the phases is initiated by the application code and handled by the phalanx package by traversing the evaluation kernels in the directed acyclic graph more details on the three phases for a typical finite element assembly will be described in the subsequent sections finite element assembly process under tbgp framing the finite element assembly process in terms of the generic programming concept is best explained by example here we apply the galerkin finite element method to a generic scalar multidimensional conservation equation see for example q s where u is the unknown being solved for the degree of freedom and its time derivative the flux q and source term s are functions of u time and position while the exact form of the flux is not important we comment that if the flux is strongly convecting then additonal terms such as supg may be required to damp oscillations to simplify the analysis we ignore such terms here this is valid for systems where convection is not dominant such as low reynolds number flows or heat conduction in a solid equation is put into variational form which after integration by parts and ignoring boundary contributions for the sake of simplicity yields the residual equations z i i ru q s figure a schematic of the template based generic programming model for pdes the gather seed evaluator takes global data and copies it into local storage seeding any embedded data types in template specialized codes the scatter extract phase does the inverse the compute phase can be written on the generic template type operating on local data in is the domain over which the problem is solved and are the finite element basis functions the unknown its time derivative and its spatial derivative are computed using nu x ui nu x n u x i u where ui are the unknown coefficients of the discretization of u xj is the coordinate direction and nu is the number of basis functions the integrations in are performed using numerical quadrature nq ne x x i q s wq where ne is the total number of elements in the domain nq is the number of quadrature points in an element for the integration order is the determinant of the jacobian of transformation from the physical space to the element reference space and wq the quadrature weights with the finite element assembly algorithm defined by the process can be redefined in terms of the operations the assembly algorithm in loops over the elements in the domain and sums the partial contributions to form the residual equations the complete set of residual equations constitute the global residual f reformulating in terms of the workset concept the assembly process for evaluating a residual is defined as f x nw x fk nw x skt k gk x here nw is the number of worksets and fk is the partial residual associated with the finite element contributions for the elements in workset gk is the gather operation that maps the global solution vector x to the local solution vector for workset as mentioned above in the software implementation the gather routine also performs the seeding of scalar types skt is the scatter operation that maps the local element residual for elements of workset k into 
the global residual contribution fk skt k as noted above in the software implementation the extraction process occurs during the scatter k are the element residual contributions to that come from the elements in workset k as a function of the local workset solution vector gk x k nq ne x x q s wq ne is the number of elements in workset the important point to note is that while all of the code written above was used for evaluating a residual the bulk of the code can now be reused for other evaluation types such as jacobians parameter sensitivities stochastic residuals etc this is accomplished merely by writing an additional specialization for the gather gk and scatter skt operations only all of the code for the residual evaluation k is written once for a generic template argument for the scalar type and is reused for each evaluation type in the following sections we now show examples and further explain each of the assembly steps seed gather phase template specialization in this first phase the approach is to do the gather operation gk pulling quantities from a global vector and the seed phase initializing the template type for the desired embedded operation in the same block of code in this example which is an adaptation of working code the phalanx evaluator called gathersolution is where this operation occurs and within the evaluatefields method in particular as described in the previous paper the trilinos package phalanx is used to build the governing equations where separate pieces of the computation are broken into phalanx evaluator objects the field to be evaluated is called local x the solution vector at the local nodes of each element it depends on global x which is the solution vector in the vector data layout in figure the gathersolution class is specialized to the residual evaluation type this routine simply copies the values from one data structure to another with the use of a bookkeeping function connectivitymap in figure the code for the gathersolution class specialized to the jacobian evaluation type is shown in addition to the gather operation to load the value of x into the void gathersolution evaluationtype evaluatefields note is a mdarray of dimension numberofelements numberoflocalnodes of data type double for int elem numberofelements for int node numberoflocalnodes elem node connectivitymap elem node figure seed and gather code for residual evaluation the connectivitymap function is the degree of freedom or connectivity map that gets the global id from the element number and local node number the seed phase is trivial just a copy of the value local data structure there is also a seed phase to initialize the partial derivatives with the identity matrix here the independent variables are defined by initializing the partial derivative array of the sacado automatic differentiation data type two nested loops over the local nodes are dxi used to set dx to when i j and to otherwise j as the number of output quantities to be produced by the finite element assembly increases such as those defined by the rows in table so does the number of template specialized implementations of the gathersolution object need to be written the syntax here is dependent on the implementation in the sacado package in trilinos but the concept of seeding automatic differentiation calculations is general compute phase generic template the compute phase that computes the local contributions k for a pde application operates on data that exists in the local data structures this code is written entirely on 
the generic evaluation template type evalt one must just write the code needed to evaluate the residual equation but using the scalart data type corresponding to the evaluation type evalt instead of raw double data type as shown in section of the overloaded data type together with specializations in the seed and extract phases enable the same code to compute all manner of quantities such as the outputs in table in this section we give two examples of the evaluatefields method of a phalanx evaluator class the first is shown in figure which calculates a source term in a heat equation s where s is the source term from equation and are parameters in this model and u is the solution field the code presupposes that the field u has been computed in another evaluator and is an mdarray over elements and quadrature points the factors and in this example are scalar values that do not vary over the domain we will discuss later in section how to expose and as parameters for design or analysis note again that this code is templated on the generic evalt evaluation type and only one implementation is needed for general pde codes it is common and efficient to use discretization libraries to perform common operations for a finite element code this includes basis function calculations calculating the transformation between reference and physical elements and supplying quadrature schemes void gathersolution evaluationtype evaluatefields note is a mdarray of dimension numberofelements numberoflocalnodes of data type sacado with allocated space for numberoflocalnodes partial derivatives for int elem numberofelements for int node numberoflocalnodes elem node connectivitymap elem node loop over all nodes again and seed for int numberoflocalnodes if node elem node else elem node figure seed and gather code for jacobian evaluation the gather operation is the same as the residual calculation the seed phase involves local x which is now an automatic differentiation data type the method local accesses the value and local i accessing the ith partial derivative the example here assumes one equation and one unknown per local node to the extent that these operations occur within the loop they must support templating for the tbgp approach to work in trilinos the intrepid discretization library serves all these roles and was written to be templated on the generic scalart data type figure shows an example evaluator method for the final assembly of the residual equation for a heat balance this code uses the integrate method of the intrepid finite element library to accumulate the summations over quadrature points of each of the four terms the integrals are from the variational formulation of the pdes where the terms are matched with the galerkin basis functions adapted from equation but using common notation for heat transfer z z z i i q where are the basis functions the three terms on the right hand side correspond to the diffusive source and accumulation terms the source term s in this equation can be a function of the solution a function of the position or a simple constant the dependencies must be defined in the source term evaluator however these dependencies do not need to be described in the heat equation residual this same piece of code will accurately propagate any derivatives that were seeded in the gather phase and accumulated in the source term evaluator we should also note that it is possible to write code for the compute phase if for instance one would like to the jacobian fill for efficiency or to leave out 
terms for a preconditioner once could simply write a function heatequationresidual evaluationtype void sourceterm evalt evaluatefields note this evaluator depends on properties alpha and beta and scalar quantity u to compute the scalar source s for int elem numberofelements for int qp numberofquadpoints source elem qp alpha beta u elem qp u elem qp figure example of an evaluation kernel from the compute phase note that the code is templated on the generic evalt evaluation type this code will propagate the auxiliary information contained in evalt data type for any of the embedded capabilities no template specialization is needed void heatequationresidual evalt evaluatefields note this evaluator depends on several precomputed fields flux source tdot time derivative wbf basis function with quadrature and transformation weights and wgradbf gradient of basis functions with weights the result is the tresidual field the element contribution to the heat equation residual typedef intrepid fst fst scalart tresidual flux wgradbf fst scalart tresidual source wbf fst scalart tresidual tdot wbf figure evaluator for the final assembly of the heat equations the terms correspond to those in eq in order the variable tresidual is being accumulated in each step all variables are mdarrays depending on the template parameter evalt and the corresponding data type scalart this same block of code is used for accumulating the residual jacobian or any of the output quantities listed in table void scatterresidual evaluationtype evaluatefields note is a mdarray of dimension numberofelements numberoflocalnodes of data type double for int elem numberofelements for int node numberoflocalnodes connectivitymap elem node elem node figure extract and scatter code for residual evaluation the connectivitymap function is also called the degree of freedom map and gets the global id from the element number and local node number the extract phase is trivial for this evaluation type just a copy of the value where the generic template type evalt is replaced by the template specialized evaluation type of jacobian extract scatter phase template specialization this section closely mimics section but with the transpose operations here each local element s contributions to the finite element residual is scattered into the global data structure at the same time if additional information is stored in the templated data types it is extracted and scattered into the global linear algebra objects in this example the evaluator object called scatterresidual is where this operation occurs and within the evaluatefields method in particular the culmination of all the compute steps above have resulted in the computation of the field local f which represents the local element s contribution k to the global residual vector but may also contain additional information in the scalart data type in figure the scatterresidual class is specialized to the residual evaluation type this routine simply copies the values from the element data structure to the global data structure with the use of the same bookkeeping function connectivitymap that was used in section in figure the code for the scatterresidual class specialized to the jacobian evaluation type is shown here the local dense stiffness matrix is extracted from the sacado automatic dfi differentiation data type two nested loops over the local nodes are used to extract dx and load j them into the global sparse matrix a set of template specialized implementations of the scatterresidual object need to be 
written to match those in the gathersolution class just two are shown here these implementations are specific to the interface to the global data structures being used which here are encapsulated in the connectivitymap and addsparsematrixentry methods a central point to this paper and the concept of generic programming is that the implementations in the seed gather and the extract scatter sections can be written agnostic to the physics being solved while the work of correctly programming these two phases for all evaluation types is not at all trivial the development effort is completely orthogonal to the work of adding terms to pdes once a code is set up with implementations for a new evaluation type it is there for any pdes assembled in the compute phase we note that while the examples shown here are trivial the design of the assembly engine is void scatterresidual evaluationtype evaluatefields note is a mdarray of dimension numberofelements numberoflocalnodes of data type sacado with allocated space for numberoflocalnodes partial derivatives for int elem numberofelements for int node numberoflocalnodes int elem node for int numberoflocalnodes int elem double val elem node addsparsematrixentry row col val figure extract and scatter code for jacobian evaluation the extract phase involves local f which is now an automatic differentiation data type the method dx i on this data type accesses the ith partial derivative a fictitious method called addsparsematrixentry int row int col int df value shows how this jacobian information is scattered into the global and sparse dx storage very general and allows for complex multiphsyics problems in particular unknowns are not all bound to the same basis mixed basis problems have been demonstrated with the phalanx assembly engine the design of each evaluator in the graph is completely controlled by the user thus allowing for any algorithm local to the workset to be implemented we further note that for all of the evaluation routines above the loops were explicitly written in the evaluator in general this is not ideal since it is likely to be repeated across multiple evaluators and can introduce additional points for error these loops could be eliminated using utility functions or expression templates this will be a future area of research furthermore optimizing the ordering of the nested loops and the corresponding data layouts of the field data and embedded scalar types are also areas of future research finally we note that during the design of the phalanx package great care was given to designing the library so that users with little template experiience could easily add new physics we feel this has been extremely successful in that there exists over distinct physics applications using the tbgp packages in trilinos the drawback however is that the initial setup of the tbgp process requires a programmer with a strong background with templates in contrast codes such as sundance and fenics automate the entire assembly process for users lowering the barrier for adoption we feel that the extra work in setting up the tbgp machinery is worth the effort as we have very quickly extended the embedded analysis support to new types such as the stochastic galerkin methods extensions to tbgp for finite element code design the basic implementation details for generic programming approach for finite element code were described in the previous section as we have implemented this approach in application codes we have run across many issues and implemented solutions to 
them some of the most important of these are described in the following sections infrastructure for exposing one of the main selling points of the generic programming approach is the ability to perform design and analysis involving system parameters continuation sensitivity analysis optimization and uq all require that model parameters be manipulated by the analysis algorithms these parameters are model specific and commonly include a value of a boundary condition a dimensionless group such as the reynolds number a model parameter such as an arrhenius rate coefficient or a shape parameter such as the radius of some cylindrical part in this section we briefly describe our infrastructure for exposing parameters infrastructure for dealing with parameters should try to meet the following design requirements a simple interface for model developers to expose new parameters integration into the templatebase approach so derivatives with respect to parameters are captured and seamless exposure of the parameters to design algorithms such as optimization and uq the approach that has been successful in our codes has been to use the parameterlibrary class in the sacado package of trilinos this utility stores the available parameters by string name and value and can handle the multiple data types needed by the approach the developer can register parameters in the parameter library identified by strings by simply calling the register method during the construction phase to expose the and parameters in the above example by labels alpha and beta the constructor for the sourceterm evaluator simply needs to add the lines alpha this beta this assuming that a parameterlibrary object is in scope at the end of the problem construction the parameter library can be queried for a list of registered parameters and will include these two in addition to those registered elsewhere the analysis algorithms can then manipulate the values of these parameters in the parameterlibrary there is a choice of using a push or a pull paradigm when the value is changed in the parameter library is it immediately pushed to the evaluator where it will eventually be used or is it up to the model to pull the parameter values from the parameter library when needed we have chosen the push approach since with this choice there is no performance penalty for exposing numerous parameters as potential design variables parameters are only pushed to one location so parameters that are used in multiple evaluators must have a root evaluator where they are registered and other evaluators must have a dependency on that one any evaluator class that registers a parameter must inherit from an abstract parameteraccessor class which has a single method called scalart getvalue std name any parameter that gets registered with the parameterlibrary needs to send a pointer to a parameteraccessor class so the parameter library can push new values of the parameter when manipulated by an scalart sourceterm evalt std name if alpha return alpha elseif beta return beta figure example implementation of getvalue method which provides a hook for analysis algorithms to manipulate design parameters analysis algorithm in the example above this is handled by the this argument in the registration call for this example the getvalue method can be simply implemented as shown in figure assuming parameters alpha and beta are member data of generic template type scalart sensitivities of the residual equation with respect to parameters are calculated with automatic differentiation 
when evaluated with the tangent evaluation type like the jacobian evaluation type the associated scalar data type is a sacado type however the length of the derivative array is the number of parameters and the seed and extract phases require different specializations shape optimization a second scalar type quantities in the pde assembly that might have nonzero partial derivatives with respect to an independent variable whether it be a parameter or part of the solution vector must be a have a templated data type in this way derivatives can be propagated using the object overloading approach constants such as can be hardwired to the realtype data type to avoid the expense of propagating partial derivatives that we know are zero for the bulk of our calculations the coordinates of the nodes in our finite element mesh are fixed all the quantities that are solely a function of the coordinates such as the basis function gradients and the mapping from an element to the reference element can be set to realtype however when we began to do shape optimization the coordinates of the node could now have nonzero derivatives with respect to the shape parameter in the tangent sensitivity evaluation to simply make all quantities that are dependent on the coordinates to be the scalart type would trigger an excessive amount of computations particularly for the jacobian calculations where the chain rule would be propagating zeroes through a large part of the finite element assembly the solution was to create a second generic data type meshscalart for all the quantities that have derivatives with respect to the coordinates but have no dependency on the solution vector the traits class defined in the previous paper is extended to include meshscalart as well as scalart as follows struct usertraits public phx scalar types typedef double realtype typedef sacado fadtype evaluation types with default scalar type struct residual typedef realtype scalart typedef realtype meshscalart struct jacobian typedef fadtype scalart typedef realtype meshscalart struct tangent typedef fadtype scalart typedef fadtype meshscalart if in the future we decide to do a moving mesh problem such as where the coordinate vector for the heat equation does depend on the current displacement field as calculated in the elasticity equation then the jacobian evaluation could be switched in this traits class to have typedef fadtype meshscalart automatically the code would calculate the accurate jacobian for the fully coupled moving mesh formulation while it can be complicated to pick the correct data type for all quantities with this approach the sacado implementation of these data types has a useful feature the code will not compile if you attempt to assign a derivative data type to a real type the casting away of derivative information must be done explicitly and can not be done by accident this is illustrated in the following code fragment as annotated by the comments realtype fadtype f r all derivatives set to zero r f will report an error template infrastructure generic programming places a number of additional requirements on the code base building and manipulating the objects can be intrusive additionally the compile times can be excessive as more and more template types are added to the infrastructure here we address these issues extensible infrastructure the infrastructure for a assembly process must be designed for extensibility the addition of new evaluation types scalar types should be minimally invasive to the code to support this 
requirement a template manager class has been developed to automate the construction and manipulation of a templated class given a list of template types a template manager instantiates a particular class for each template type using a user supplied factory for the class the instantiated objects are stored in a std inside the manager to be stored as a vector the class being instantiated must inherit from a base class once the objects are instantiated the template manager provides functionality similar to a std it can return iterators to the base class objects it allows for random access based on a template type using templated accessor methods returning an object of either the base or derived class the list of types a template manager must build is fixed at compile time through the use of template metaprogramming techniques implemented in the boost mpl library and sacado these types are defined in a traits class this object is discussed in detail in we note that the template manager described here is a simplified version of the tuple manipulation tools supplied by the boost fusion library in the future we plan to transition our code to using the fusion library compile time efficiency for each class templated on an evaluation type the compiler must build the object code for each of the template types this can result in extremely long compile times even for very minor code changes for that reason explicit template instantiation is highly recommended for all classes that are templated on an evaluation type in our experience not all compilers can support explicit instantiation therefore both the inclusion model and explicit template instantiation are supported in our objects see chapter of for more details the downside to such a system is that for each class that implements explicit instantiation the declaration and definition must be split into separate header files and a third file must also be added to the code base incorporating code in some situations code may provide based implementations of some analysis evaluations for example it is straightforward to differentiate some fortran codes with source transformation tools such as adifor to provide analytic evaluations for first and higher derivatives however clearly the resulting derivative code does not use the procedure or the sacado operator overloading library similarly some libraries have derivatives in either case some mechanism is necessary to translate the derivative evaluation governed by sacado into one that is provided by the library providing such a translation is a relatively straightforward procedure using the template specialization techniques already discussed briefly a phalanx evaluator should be written that wraps the code into the phalanx evaluation hierarchy this evaluator can then be specialized for each evaluation type that the library provides a mechanism for evaluating this specialization extracts the requisite information from the corresponding scalar type derivative values and copies them into whatever data structure is specified by the library for evaluating those quantities in some situations the layout of the data in the given scalar type matches the layout required by the library in which case a copy is not necessary sacado provides a forward ad data type with a layout that matches the layout required by adifor in which case a pointer to the derivative values is all that needs to be extracted however this is not always the case so in some situations a copy is necessary such an approach will work for all 
evaluation types that the library provides some mechanism for evaluating however clearly situations can arise where the library provides no mechanism for certain evaluation types in this case the specializations must be written in such a way as to generate the required information for example if the library does not provide derivatives these can be approximated through a scheme in a jacobian evaluation for example the jacobian specialization for the evaluator for this library would make several calls to the library for each perturbation of the input data for the library combine these derivatives with those from the inputs dependent fields for the evaluator using the chain rule and place them in the derivative arrays corresponding to the outputs evaluated fields of the evaluator similarly polynomial chaos expansions of code can be computed through spectral projection mesh morphing and importing coordinate derivatives the shape optimization capability that will be demonstrated in section requires sensitivities of the residual equation with respect to shape parameters our implementation uses an external library for moving the mesh coordinates as a function of shape parameters mesh morphing this is an active research area and a paper has just been prepared detailing six different approaches on a variety of applications briefly the desired capability is for the application code to be able to manipulate shape parameters such as a length or curvature of part of a solid model and for the mesh morphing utility to provide a mesh that conforms to that geometry to avoid changes in data structures and discontinuities in an objective function calculation it is desirable for the mesh topology or connectivity to stay fixed the algorithm must find a balance between maintaining good mesh quality and preserving the grading of the original mesh such as anisotropy in the mesh designed to capture a boundary layer large shape changes that require remeshing are beyond the scope of this work and would need to be accommodated by remeshing and restarting the optimization run a variety of mesh morphing algorithms have been developed and investigated at one end of the spectrum is the smoothing approach where the surface nodes are moved to accommodate the new shape parameters and the resulting mesh is smoothed until the elements regain acceptable quality at the other end of the spectrum is the femwarp algorithm where a finite element projection is used to warp the mesh requiring a global linear solve to determine the new node locations in this paper we have used a weighted residual method where the new node coordinates are based on how boundary nodes in their neighborhood have moved we chose to always morph the mesh from the original meshed configuration to the chosen configuration even if an intermediate mesh was already computed at nearby shape parameters so that the new mesh was uniquely defined by the shape parameters since the mesh morphing algorithm is not a local calculation on each element but operates across the entire mesh at one time the derivatives can not be calculated within a phalanx evaluator using the approach nor using the methods described in section our approach for calculating sensitivities of the residual vector f with respect to shape parameters p is to use the chain rule where x is the coordinate vector outside of the section of templated code we the mesh sensitivities with a finite difference algorithm around the mesh morphing algorithm this is fed into the residual calculation as global data 
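Written out explicitly, the chain rule referred to above is (using $x_m$ for the mesh coordinate vector, a notation introduced here only to avoid clashing with the solution vector $x$):

$$\frac{\partial f}{\partial p} \;=\; \frac{\partial f}{\partial x_m}\,\frac{d x_m}{d p},$$

where $dx_m/dp$ is the mesh sensitivity obtained by finite differencing around the mesh morphing algorithm, and the product $\tfrac{\partial f}{\partial x_m}\,\tfrac{d x_m}{d p}$ is the directional derivative of the residual that the templated assembly then computes by seeding the coordinate vector, as described next.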
as part of the typical assembly there is a gathercoordinates evaluator that takes the coordinate vector from the mesh database and gathers it into the local mdarray data structure for shape sensitivities the coordinate vector is a sacado data type and the gather operation not only imports the values of the coordinates but also seeds the derivative components from the vectors this is shown schematically in figure where the gathercoordinates box is shown to have a template specialized version for shape optimization in addition to a generic implementation for all other evaluation types when the rest of the calculation proceeds the directional derivative of in the direction of is computed the same implementation for extracting described in section works for this case as when p is a set of model parameters as demonstration sliding electromagnetic contact the generic programming approach is demonstrated here on a prototype pde application the sliding electromagnetic contact problem the geometry of this problem is shown in figure where for the nominal design this geometry is simply extruded into the third dimension a slider light blue is situated between two conductors green and yellow with some given shapes of the contact pads thin red and blue regions when a potential difference over the device is prescribed an electrical current flows through system in the general direction of the dashed red line the electrical current generates a magnetic field that in turn propels the slider forward in addition the current generates heat figure a front view of the geometry for the sliding electromagnetic contact application the imposed gradient in potential causes electric current joule heating and a magnetic field that propels the slider which is not currently modeled the design problem to be investigated is to find the shape of the slider with a given volume that minimizes the maximum temperature achieved inside of it governing equations and objective function in this demonstration we simplify the system by decoupling the magnetics and solve a model where the slider velocity v is given the model is then reduced to two coupled pdes the first is a potential equation for the electric potential the second is a heat balance that accounts for conduction convection and joule heating source term that depends on the current v the electrical conductivity varies as a function of the local temperature field based on a simplified version of knoepfel s model t t where can take different values in the slider the conductor and in the pads this dependency results in a coupled pair of equations by choosing the frame of reference that stays with the slider we impose a fixed convective velocity in the beams and v in the slider the dirichlet boundary conditions for the potential and temperature are shown in figure with all others being natural boundary conditions since these equations and geometry are symmetric about the mid plane a vertical line in this figure we only solve for half of the geometry and impose along this axis this pde model was implemented using phalanx evaluators by specifying the dependencies with the evaluators such as t the evaluation tree is automatically constructed the full graph for this problem is shown in figure as with the ode example in the previous paper only the gather and scatter functions need to be written with template specialization for the seed and extract phases all the intermediate compute quantities can be written once on a generic evaluation type as an example of how to interpret this 
graph the oval marked tq computes the temperature field at the quadrature points using the basis functions and tn the temperature field at the nodes implementing equation tq is subsequently used to compute and in another evaluator which implements equation the objective function g which is to be minimized by the design problem is simply g t kt the design parameters p modify the shape of the slider its shape is fixed to match the rectangular pads at each end and its volume is fixed to be that of the rectangular box between them in between the shape is allowed to vary parabolically for a optimization problem the matched parabolic profiles of the top and bottom of the slider are free to vary by a single maximum deflection parameter because of the symmetry of the problem this defines the parabola for optimization problem these two parabolas are allowed to vary independently and a third the bulge of the slider out of the plane of the figure is adjusted so that the volume constraint is met optimization the optimization problem is solved using a optimization algorithm from the dakota framework dakota can be built as part of trilinos using the build system and adaptors in the trikota package the goal is to minimize the objective function g p from equation as a function of the shape parameters in addition the problem is constrained so that the discretized pdes are figure the dependency graph for the phalanx evaluators that build the equation set is shown each box represents a separate class the quantities at the bottom must be computed before those above derivatives are automatically propagated by the chain rule using the sacado automatic differentiation data types here x is the coordinate vector t the temperature and the potential and subscript n and q indicating to node and quadrature point data satisfied f x where f represents the finite element residuals for the equation set specified in equations and the solution vector x is the combined vector of the discretized potential and temperature fields the shape parameters do not appear explicitly in the objective function or even in the governing equations but instead effect the geometry of the problem they appear in the discretization and can be written as f x x p where x is the vector of coordinates of nodes in the mesh in addition to the objective function g p the algorithm depends on the reduced gradient of the objective function with respect to the parameters the formula for this term can be expanded as dg t dx dp dp each of these terms is computed in a different way starting at the end the xp dx dp is computed with finite differences around the mesh morphing algorithm as described in section the sensitivity of the residual vector with respect to the shape parameters is the directional tive in the direction xp this is computed using automatic differentiation using the infrastructure described in section where the sacado automatic derivative data type for the coordinate vector x is seeded with the derivative vectors xp the result is a fp the jacobian matrix is computed with automatic differentiation all the sacado data types are allocated with derivative arrays of length which is the number of independent variables in a hexahedral element with trilinear basis functions and two degrees of freedom per node the local element solution vector x is seeded with when i j the action of the inverse of the j jacobian on fp is performed with a preconditioned iterative linear solver using the belos and ifpack packages in trilinos the gradient of the 
objective function is computed by hand since g is the max operator on the half of x corresponding to the temperature unknown the of the max operator with respect to changes in parameter can in general be an issue however no problems were encountered in this application since the location of the maximum did nor move significantly over itrations finally the term was identically zero because the parameters did not appear explicitly in the objective function first a optimization problem was run to find the parabolic deflection of the top and bottom of the slider that minimized the maximum temperature in addition to the gradientbased optimization algorithm described above a continuation run was performed using the loca package in trilinos the results are shown in figure the continuation run shows the smooth response surface for a wide range if deflections both positive and negative the optimization iteration rapidly converges to the minimum the optimum occurs for a small positive value of the deflection parameter corresponding to a shape that is slightly arched upwards but rather near the nominal shape of a rectangular box figure results for a continuation run with loca and a minimization run using dakota for a shape optimization problem a optimization run was also performed where the top and bottom parabolas were freed to vary independently and the bulge of the slider was adjusted to conserve the volume of the mesh figure shows the initial configuration with a rather arched bottom surface and nearly flat upper surfaces and a moderate bulge the optimal shape is shown in figure as with the case the optimal shape was found to be close to a rectangular box the temperature contouring of the two figures which share a color map shows a noticeable reduction in the temperature at the optimal shape figure initial mesh configuration and temperature profiles for the optimization problem the deflections of the top and bottom surface of the slider are varied independently and the deflection bulge out of the plane is chosen to constrain the volume to that of a rectangular brick uq results in addition to optimization runs embedded uncertainty quantification was performed on the same model the scalart data type for the stochastic galerkin residual evaluation hold polynomial coefficients for the expansion of all quantities with a spectral basis by nesting the stochastic galerkin and automatic differentiation data types a jacobian for the stochastic galerkin expansions can also be calculated the seed and extract phases of this computation as well as the subsequent nonlinear solve of the stochastic fem system required significant development however with the generic programming approach this work is completely orthogonal to the implementation of the pdes so there was no additional coding needed to perform embedded uq for this application over that needed for the residuals for the governing equations as a demonstration we chose the electrical conductivity in the pad as the uncertain variable the pad region is the thin rectangular region at the edge of the slider with fixed shape in this run was chosen to be a uniform distribution within of the nominal value of figure final configuration and color map for the optimization problem the maximum temperature was significantly decreased here the p variables are legendre polynomials the computation was run with polynomial basis a newton iteration was performed on the nonlinear system from the discretized domain fem in space stochastic dimension with a spectral basis the 
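As a small illustration of the Legendre basis used for the uncertain conductivity, the sketch below expands a scalar response of one uniformly distributed parameter in Legendre polynomials and recovers the mean and standard deviation from the coefficients. It uses non-intrusive projection with Gauss–Legendre quadrature, whereas the application solves the coupled stochastic Galerkin system with Stokhos; only the basis and the moment post-processing are the same, and the response function here is a made-up stand-in, not the actual temperature response.

```python
import numpy as np
from numpy.polynomial import legendre as L

def pce_legendre(response, order, nquad=None):
    """Project response(xi), xi ~ Uniform(-1, 1), onto Legendre polynomials."""
    nquad = nquad or 2 * (order + 1)
    xi, w = L.leggauss(nquad)                      # quadrature on [-1, 1]
    u = response(xi)
    # c_k = (2k+1)/2 * integral of u(xi) * P_k(xi) over [-1, 1]
    coeffs = np.array([(2 * k + 1) / 2.0 *
                       np.sum(w * u * L.legval(xi, np.eye(order + 1)[k]))
                       for k in range(order + 1)])
    mean = coeffs[0]
    # E[P_k(xi)^2] = 1/(2k+1) for xi uniform on [-1, 1]
    var = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, order + 1) + 1))
    return coeffs, mean, var

# xi in [-1, 1] parametrises e.g. sigma = sigma_nominal * (1 + 0.1 * xi);
# the response below is only a toy surrogate for illustration.
response = lambda xi: 1.0 / (1.0 + 0.1 * xi)
coeffs, mean, var = pce_legendre(response, order=4)
print(mean, np.sqrt(var))
```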
resulting probability distribution on the maximum temperature unknown was computed to be tm ax figure shows the mean temperature profile for this distribution figure show the variation of the temperature with respect to this parameter note the reduced range of the color bar the results show that the variation of the electrical conductivity in the pad region has a large effect on the temperature in the middle of the slider which has a large dependence on the total current and not in the pad region itself which is strongly controlled by the convective cooling from the beam due to the moving frame of reference conclusions in this paper we have related our experience in using the generic programming tbgp approach for pdes in a finite element code we have used this approach at a local element level where the dependencies for jacobian evaluations are small and dense by combining the gather phase of a finite element calculation with the seed phase of the tbgp approach and the scatter with extract the infrastructure for tbgp is well contained once this infrastructure is in place transformational analysis capabilities such as optimization and embedded uq are immediately available for any new pde modes that are implemented figure the results of an embedded uncertainty quantification using stokhos the electrical conductivity in the thin pad region is given as a distribution in this figure the temperature profile for the mean solution is shown we have also presented some of the implementation details in our approach this includes infrastructure for dealing with parameters for dealing with a templated code stack and dealing with code we demonstrated this approach on an example sliding electromagnetic contact problem which is a pair of coupled nonlinear equations we performed optimization algorithms with embedded gradients and also embedded stochastic finite element calculations as this paper is to appear in a special issue along with other trilinos capabilities we would like to mention explicitly which trilinos packages underlined were used in these calculations this paper centered on the use of phalanx assembly engine sacado for automatic differentiation and stokhos for embedded uq for linear algebra we used the epetra data structures ifpack preconditioners and belos iterative solver the linear algebra was accessed through the stratimikos linear algebra strategy layer using the thyra abstraction layer the stk mesh package was used for the parallel mesh database and the stk io packages together with ioss exodus and seacas were used for io and partitioning of the mesh the intrepid package was used for the finite element discretization operating on multidimensional arrays from the shards package the utility packages teuchos was used for parameter list specification and memory management the piro package managed the solver and analysis algorithms and makes heavy use of the epetraext model evaluator abstraction piro in turn calls the nox nonlinear solver the loca library of continuation algorithms the trikota interface to the dakota optimization algorithms and stokhos for presenting the stochastic galerkin system as a single nonlinear problem these results also relied on several products outside of trilinos including the cubit mesh generator and associated mesh morphing software the dakota framework paraview visualization package and figure the variation in the temperature field is shown for the same embedded uncertainty quantification calculation as the previous figure netcdf mesh library acknowledgements 
this work was funded by the us department of energy through the nnsa advanced scientific computing and office of science advanced scientific computing research programs references abrahams and gurtovoy template metaprogramming concepts tools and techniques from boost and beyond bischof carle khademi and mauer adifor automatic differentiation of fortran programs ieee computational science engineering bochev edwards kirby peterson and ridzal solving pdes with intrepid scientific programming in brooks and hughes streamline formulations for convection dominated flows with particular emphasis on the incompressible equations comp meth appl mech and dawes and abrahams http donea and huerta finite element methhods for flow problems wiley edwards http ghanem and spanos polynomial chaos in stochastic finite elements journal of applied mechanics jan ghanem and spanos stochastic finite elements a spectral approach springerverlag new york isbn griewank evaluating derivatives principles and techniques of algorithmic differentiation number in frontiers in appl math siam philadelphia pa isbn heroux bartlett howle hoekstra hu kolda lehoucq long pawlowski phipps salinger thornquist tuminaro willenbring a williams and stanley an overview of the trilinos package acm trans math http hughes and brooks a multidimensional upwind scheme with no diffusion in hughes ed finite element methods for convection dominated flows amd vol asme new york knoepfel pulsed high magnetic fields physical effects and generation methods concerning pulsed fields up to the megaoersted level publishing company amsterdam logg automating the finite element method arch comput methods logg mardal wells et al automated solution of differential equations by the finite element method springer isbn long kirby and van bloemen waanders unified embedded parallel finite element computations via frechet differentiation siam sci of a unified implementation of the finite spectral element methods in and in applied parallel computing state of the art in scientific computing volume of lecture notes in computer science pages springer pawlowski http pawlowski phipps and salinger automating embedded analysis capabilities and managing software complexity in multiphysics simulation part i generic programming scientific programming in press phipps and pawlowski efficient expression templates for operator automatic differentiation in forth hovland utke and walther editors recent advances in algorithmic differentiation springer phipps http prud homme a domain specific embedded language in for automatic differentiation projection integration and variational formulations scientific programming reagan najm ghanem and knio uncertainty quantification in simulations through spectral projection combustion and flame jan doi salinger burroughs pawlowski phipps and romero bifurcation tracking algorithms and software for large scale applications int j bifurcat chaos shontz and vavasis analysis of and workarounds for element sal for a finite algorithm for warping triangular and tetrahedral meshes bit numerical mathematics staten owen shontz salinger and coffey a comparison of mesh morphing methods for shape optimization proceedings of the international meshing rountable submitted vandevoorde and josuttis templates the complete guide veldhuizen expression templates report wiener the homogeneous chaos am j math jan xiu and karniadakis the polynomial chaos for stochastic differential equations siam j sci comput jan
faster algorithms for svp and cvp in the norm divesh aggarwal priyanka mukhopadhyay jan january abstract blomer and naewe modified the randomized sieving algorithm of ajtai kumar and sivakumar to solve the shortest vector problem svp the algorithm starts with n n randomly chosen vectors in the lattice and employs a sieving procedure to iteratively obtain shorter vectors in the lattice the running time of the sieving procedure is quadratic in we study this problem for the special but important case of the norm we give a new sieving procedure that runs in time linear in n thereby significantly improving the running time of the algorithm for svp in the norm as in we also extend this algorithm to obtain significantly faster algorithms for approximate versions of the shortest vector problem and the closest vector problem cvp in the norm we also show that the heuristic sieving algorithms of nguyen and vidick and wang can also be analyzed in the norm the main technical contribution in this part is to calculate the expected volume of intersection of a unit ball centred at origin and another ball of a different radius centred at a uniformly random point on the boundary of the unit ball this might be of independent interest introduction a lattice l is the set of all integer combinations of linearly independent vectors bn rd n x zi bi zi z l l bn we call n the rank of the lattice and d the dimension of the lattice the matrix b bn is called a basis of l and we write l b for the lattice generated by b a lattice is said to be if n in this work we will only consider lattices unless otherwise stated the two most important computational problems on lattices are the shortest vector problem svp and the closest vector problem cvp given a basis for a lattice l rd svp asks us to compute a vector in l of minimal length and cvp asks us to compute a lattice vector at centre for quantum technologies and school of computing national university of singapore singapore dcsdiva centre for quantum technologies national university of singapore singapore a minimum distance to a target vector typically the is defined in terms of the norm for some p such that kxkp for p and max the most popular of these and the most well studied is the euclidean norm which corresponds to p starting with the seminal work of algorithms for solving these problems either exactly or approximately have been studied intensely such algorithms have found applications in factoring polynomials over rationals integer programming cryptanalysis checking the solvability by radicals and solving problems more recently many powerful cryptographic primitives have been constructed whose security is based on the hardness of these or related lattice problems ducas et al have proposed a signature scheme based on the modulesis problem in the the security of their cryptosystem the authors choose parameters under the assumption that svp in the norm for an appropriate dimension is infeasible due to lack of sufficient work on the complexity analysis of svp in the norm they choose parameters based on the best known algorithms for svp in the norm which are variants of the algorithm from the rationale for this is that svp in norm is likely harder than in the norm our results in this paper show that this assumption by ducas et al is correct and perhaps too generous in particular we show that the time and space complexity of the version of is at least n which is significantly larger than the best known algorithms for svp in the norm the closest vector problem in the norm is 
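To make the definitions concrete, here is a toy brute-force search for a shortest nonzero lattice vector in the l-infinity norm over a small box of integer coefficients. It is exponential in the rank and entirely unrelated to the sieving algorithms developed in this paper; it only illustrates what SVP asks for.

```python
import itertools
import numpy as np

# Lattice vectors are integer combinations of the basis columns; SVP in the
# l_infinity norm asks for a nonzero one of minimal maximum coordinate.
# Enumerating a small coefficient box is for illustration only.

def shortest_vector_linf(B, coeff_bound=3):
    n = B.shape[1]
    best, best_norm = None, np.inf
    for z in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        if any(z):
            v = B @ np.array(z)
            norm = np.max(np.abs(v))          # the l_infinity norm
            if norm < best_norm:
                best, best_norm = v, norm
    return best, best_norm

B = np.array([[5, 4],
              [0, 3]])                        # basis vectors are the columns
print(shortest_vector_linf(B))                # -> (array([-1,  3]), 3)
```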
particularly important since it is equivalent to the integer programming problem the focus of this work is to study the complexity of the closest vector problem and the shortest vector problem in the norm given the importance of these problems their complexity is quite well studied prior work algorithms in the euclidean norm the fastest known algorithms for solving these problems run in time where n is the rank of the lattice and c is some constant the first algorithm to solve svp in time exponential in the dimension of the lattice was given by ajtai kumar and sivakumar who devised a method based on randomized sieving whereby exponentially many randomly generated lattice vectors are iteratively combined to create shorter and shorter vectors eventually resulting in the shortest vector in the lattice a sequence of works have given improvements of their sieving technique thereby improving the constant in the exponent the current fastest provable algorithm for exact svp runs in time n and the fastest algorithm that gives a constant approximation runs in time n there are heuristic algorithms that run in time the cvp is considered a harder problem than svp since there is a simple dimension and preserving reduction from svp to cvp based on a technique due to kannan ajtai kumar and sivakumar gave a sieving based algorithm that gives a approximation of cvp in time o n later exact exponential time algorithms for cvp were discovered the current fastest algorithm for cvp runs in time n and is due to algorithms in other norms generalized the aks algorithm to give exact algorithms for svp that run in time n they in fact gave exact algorithm for a more general problem called the subspace avoidance problem sap in particular showed that several lattice problems in particular the approximate svp and the approximate cvp are easily reducible to approximate sap thus give approximation algorithms for cvp for all norms that run in time o n for the special case when p eisenbrand et al gave a n log n algorithm for cvp hardness results the first np hardness result for cvp in all norms and svp in the norm was given by van emde boas subsequent results have shown np hardness of approximating cvp up to a factor of log log n for all norms also hardness of svp with similar approximating factor have been obtained under plausible but stronger complexity assumptions very recently showed that for almost all p cvp in the norm can not be solved in time under the strong exponential time hypothesis a similar hardness result has also been obtained for svp in the norm our contribution provable algorithms we modify the sieving algorithm by for svp and approximate cvp for the norm that results in substantial improvement over prior results before describing our idea we give a brief description of the sieving procedure of the algorithm starts by randomly generating a set s of n n vector pairs such that the vector difference for each such pair is a lattice vector of length at most n it then runs a sieving procedure usually a polynomial number of times in the ith iteration the algorithm maintains and updates a list of centre pairs c which is initialized to be the empty set the second vector in each centre pair is usually referred to as centre then for each vector y such that e y s the algorithm checks whether there is some c such that ec c c and it is at a distance of at most say from y if there exists such a centre pair then the vector pair e y is replaced in s by e y c ec otherwise it is deleted from s and added to this results in 
lattice vectors which are shorter than those at the beginning of a sieving iteration where is the number of lattice vectors at the end of i sieving iterations thus continuing in this manner we eventually obtain the shortest vector a crucial step in this algorithm is to find a vector c that is close to y this problem is called the nearest neighbor search nns problem and has been well studied especially in the context of heuristic algorithms for svp a trivial bound on the running time for this is but the aforementioned heuristic algorithms have spent considerable effort trying to improve this bound under reasonable heuristic assumptions since they require heuristic assumptions such improved algorithms for the nns have not been used to improve the provable algorithms for svp we make a simple but powerful observation that for the special case of the norm if we partition the ambient space r n into n then it is easy to see that each such partition will contain at most one centre thus to find a centre at distance from a given vector y we only need to find the partition in which y belongs and then check whether this partition contains a centre this can be easily done by checking the interval in which each of y belongs this drastically improves the running time for the sieving procedure in the svp algorithm from to this idea can also be used to obtain significantly faster approximation algorithms for both svp and cvp it must be noted here that the prior provable algorithms using aks sieve lacked an explicit value of the constant in the exponent for both space and time complexity and they used a quadratic sieve our modified sieving procedure is linear in the size of the input list and thus yields a better compared to the prior algorithms in order to get the best possible running time we optimize several steps specialized to the case of norm in the analysis of the algorithms see theorems and for explicit running times and a detailed description just to emphasise that our results are nearly the best possible using these techniques notice that for a large enough constant we obtain a running time and space close to for svp to put things in context the best algorithm for a constant approx svp in the norm runs in time and space their algorithm crucially uses the fact that is the best known upper bound for the kissing number of the lattice which is the number of shortest vectors in the lattice in norm however for the norm the kissing number is for zn so if we would analyze the algorithm from for the norm without our improvement we would obtain a space complexity but time complexity heuristic algorithms in each sieving step of the algorithm from the length of the lattice vectors reduce by a constant factor it seems like if we continue to reduce the length of the lattice vectors until we get vectors of length where is the length of the shortest vector we should obtain the shortest vector during the sieving procedure however there is a risk that all vectors output by this sieving procedure are copies of the zero vector and this is the reason that the aks algorithm needs to start with much more vectors in order to provably argue that we obtain the shortest vector nguyen and vidick observed that this view is perhaps too pessimistic in practice and that the randomness in the initial set of vectors should ensure that the basic sieving procedure should output the shortest vector for most if not all lattices the main ingredient to analyze the space and time complexity of their algorithm is to compute the expected 
number of centres necessary so that any point in s is at a distance of at most from one of the centres note that in this heuristic setting unlike the aks algorithm s stores lattice vectors instead of vector pairs this number is roughly the reciprocal of the fraction of the ball b of radius centred at the origin covered by a ball of radius centred at a uniformly random point in b here is the maximum length of a lattice vector in s after i sieving iterations in this work we show that the heuristic algorithm of can also be analyzed for the norm under similar assumptions the main technical contribution in order to analyze the time and space complexity of this algorithm is to compute the expected fraction of an ball b of radius centered at the origin covered by an ball of radius centered at a uniformly random point in b in order to improve the running time of the nv sieve a modified sieve was introduced by wang et al here they first partition the lattice into sets of vectors of larger norm and then within each set they carry out a sieving procedure similar to we have analyzed this in the norm and obtain algorithms significantly faster than the provable algorithms in particular our sieve algorithm runs in time we would like to mention here that our result does not contradict the near lower bound for svp obtained by under the strong exponential time hypothesis the reason for this is that the lattice obtained in the reduction in is not a lattice and has a dimension significantly larger than the rank n of the lattice organization of the paper in section we give some basic definitions and results used in this paper in section we introduce our sieving procedure and apply it to provably solve exact svp in section we describe approximate algorithms for svp and cvp using our sieving technique some results which are used in the analysis have been given in section but the reader can look at these when referenced in section we talk about heuristic sieving algorithms for svp preliminaries notations we write ln for natural logarithm and log for logarithm to the base the dimension may vary and will be specified we use bold lower case letters vn for vectors and bold upper case letters for matrices we may drop the dimension in the superscript whenever it is clear from the context sometimes we represent a matrix as a vector of column vectors mn where each mi ispan vector the ith of v is denoted by vi or v i given a vector x xi mi with xi q the representation size of x with respect to m is the maximum of n and the binary lengths of the numerators and denominators of the coefficients xi for any set of vectors s sn and a norm let ksk ksi denotes volume of a if it is a geometric body and cardinality if it is a set norm n p definition norm the norm of a vector v rn is defined by kvkp i for p and max i n for p fact for x rn kxkp nkxkp for p and kxkp kxkp for p definition ball a ball is the set of all points within a fixed distance or radius defined by a metric from a fixed point or centre more precisely we define the closed ball centered at x rn p with radius r as bn x r y rn ky xkp r p p the boundary of bn x r is the set bd bn x r y rn ky xkp r we may drop the first argument when the ball is centered at the origin and drop both the arguments for unit ball centered at origin p p p let bn x bn x bn x y rn ky xkp we drop the first argument if the spherical shell or corona is centered at origin p p fact x c r cn x r for all c the algorithm of dyer frieze and kannan selects almost uniformly a point in any convex body in 
polynomial time if a membership oracle is given for the sake of simplicity we will ignore the implementation detail and assume that we are able to uniformly select a point p in bn x r in polynomial time definition a lattice l is a discrete additive subgroup of rd each lattice has a basis b bn where bi rd and l l b n nx x i bi x i z for o for algorithmic purposes we can assume that l qd we call n the rank of l and d as the dimension if d n the lattice is said to be though our results can be generalized to arbitrary lattices in the rest of the paper we only consider full rank lattices definition for any lattice basis b we define the fundamental parallelepiped as p b bx x n if y p b then kykp nkbkp as can be easily seen by triangle inequality for any z rn there exists a unique y p b such that z y l b this vector is denoted by y z mod b and it can be computed in polynomial time given b and z definition for i n the ith successive minimum is defined as the smallest real number r such that l contains i linearly independent vectors of length at most r p l inf r dim span l bn p r i thus the first successive minimum of a lattice is the length of the shortest vector in the lattice p l min kvkp v l we consider the following lattice problems in all the problems defined below c is some arbitrary approximation factor usually specified as subscript which can be a constant or a function of any parameter of the lattice usually rank for exact versions of the problems c we drop the subscript p definition shortest vector problem svpc given a lattice l find a vector v such that kvkp ckukp for any other u l p definition closest vector problem cvpc given a lattice l with rank n and a target vector t rn find v l such that kv tkp ckw tkp for all other w p lemma the lll algorithm can be used to solve in polynomial time p proof let l is a lattice and l is the length of the shortest vector it has been shown in that the lll algorithm can be used to obtain an estimate f l l l of the length of the shortest vector satisfying l using fact we get p l l p l l l and p f p l l l l l n for p for p hence the result follows p the following result shows that in order to solve it is sufficient to consider the case p when l this is done by appropriately scaling the lattice lemma lemma in for all norms if there is an algorithm a that for all p p lattices l with l solves in time t t n b then there is an algorithm p that solves for all lattices in time o nt b volume estimates in the infinity norm in this section we prove some results about volume of intersection of balls which will be used in our analysis later the reader may skip this section and look at it when referenced j lemma let l is a lattice and r r then r proof note that for any integers in the region in contains at most one lattice the values of ij for any j n such that this region intersects with bn r are the result follows in the following lemma we derive the expected volume of intersection of bn assuming the centre r is uniformly distributed in bn lemma let vr r bn where r bn then h i h b vr and hence b u r with bn n u n vr bn i proof vr is a hyperrectangle or an and therefore its volume is the product of its edges let ei is the event when since r bn so due to symmetry pr ei for all i thus er vr n x pr ei er vr er vr since r bn so let zi is the variable denoting the length of the hyperrectangle in the direction of the ith then n y eri zi lim lim er vr n y eri zi lim now vr bn r bn y ky and y max ri yi min ri if then and thus now let us consider eri zi note this expression 
is same for all i n eri zi pr ri eri zi pr ri eri zi pr ri eri zi r h i eri ri h i eri ri ri h i eri ri r i r i i r so qn lim eri zi ih r h lim h ih ln lim q similarly eri zi thus lim er vr and the theorem follows next we deduce a similar result except that now we consider the volume of intersection of a big ball of radius with the unit ball when the big ball is centred at a uniformly distributed point on the corona bn lemma let vr r bn lim u bn where r bn then h h i v r and hence lim u bn h v i r bn h i n proof the proof is similar to lemma and we use similar notations here r bn so and vr y max ri yi min ri n y lim er vr lim eri zi n y lim eri zi if then and thus eri zi pr ri eri zi pr ri eri zi pr ri eri zi so lim n y eri zi ih h lim h ih ln lim h ln ih lim h we can also bound the above expression as lim n y eri zi h ln ih lim h h q thus we can conclude that eri zi h q similarly eri zi h and the lemma follows hence er vr the following result gives a bound on the size of intersection of two balls of a given radius in the norm lemma let v vn rn and let a be such that let d bn a bn v a then n y proof it is easy to see that the intersection of two balls in the norm hyperrectangles is also a hyperrectangle for all i the length of the side of this hyperrectangle is the result follows a faster algorithm for svp in this section we present an algorithm for svp that uses the framework of aks algorithm but uses a different sieving procedure that yields a faster running time using lemma we can obtain an estimate of l such that l l thus if we try polynomially many different values of for i then for one of them we have l l for the rest of this section we assume that we know a guess of the length of the shortest vector in l which is correct upto a factor aks algorithm initially samples uniformly a lot of perturbation vectors e bn d where d r and for each such perturbation vector maintains a vector y close to the lattice y is such that y e l thus initially we have a set s of many such pairs e y bn d bn for some n the desired situation is that after a polynomial number of such sieving iterations we are left with a set of vector pairs such that l bn o l finally we take differences of the lattice vectors corresponding to the remaining vector pairs and output the one with the smallest norm it was shown in that with overwhelming probability this is the shortest vector in the lattice one of the main and usually the most expensive step in this algorithm is the sieving procedure where given a list of vector pairs e y bn d bn r in each iteration it outputs a list of vector pairs bn d bn where r in each sieving iteration a number of vector pairs usually exponential in n are identified as centre pairs the second element of each such centre pair is referred to as centre by a map each of the remaining vector pair is associated to a centre pair such that after certain operations like subtraction on the vectors we get a pair with vector difference yielding a lattice vector of norm less than if we start an iteration with say n vector pairs and identify number of centre pairs then the output consists of n vector pairs in the original aks algorithm and most of its variants the running time of this sieving procedure which is the dominant part of the total running time of the algorithm is roughly quadratic in the number of sampled vectors to reduce the running time in norm we use a different sieving approach below we give a brief description of the sieving procedure algorithm the details can be found in algorithm and its 
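The volume computations above rest on the fact that the intersection of two l-infinity balls is an axis-aligned box, so its volume is the product of the per-coordinate overlap lengths. The following Monte Carlo sketch estimates the expected fraction of the unit ball covered by a ball of radius gamma centred at a uniformly random point of the unit ball; it is only a numerical sanity check of the lemmas, not part of any algorithm, and the sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_covered_fraction(n, gamma, samples=20000):
    """E_r[ vol(B_inf(r, gamma) & B_inf(0, 1)) ] / vol(B_inf(0, 1)),
    with r drawn uniformly from B_inf(0, 1)."""
    r = rng.uniform(-1.0, 1.0, size=(samples, n))
    lo = np.maximum(r - gamma, -1.0)          # per-coordinate overlap interval
    hi = np.minimum(r + gamma, 1.0)
    overlap = np.clip(hi - lo, 0.0, None)
    return np.mean(np.prod(overlap / 2.0, axis=1))   # fraction of volume 2^n

for n in (2, 5, 10):
    print(n, expected_covered_fraction(n, gamma=0.5))
```

Because the coordinates of the random centre are independent, the expected covered fraction factors into a per-coordinate expectation raised to the power n, which is why the fraction decays exponentially in the dimension and why the number of centres needed in a sieve grows correspondingly.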
two algorithm and algorithm k j we partition the interval r into intervals of length the intervals are r note that thejlast may be n smaller than the rest the ball r n can thus be partitioned into regions such that no two vectors in a region are at a distance greater than in the norm a list c of pairs is maintained where the first entry of each pair is an or array and the second one initialized as emptyset is for storing a centre pair we can think of this tuple as an index and call it the the intuition is the following we want to associate a vector pair e y to a centre pair ec c such that ky note that if ci for each i n then this condition is satisfied since yi r for each y so we partitioned r into intervals of length given y we map it to its iy in linear time this iy is such that iy i for i n and indicates the interval in which yi belong we can access c iy say in constant time for each e y in the list s if there exists a ec c c iy iy ic implying ky then we add e y c ec to the output list s else we add vector pair e y to c iy as a centre pair finally we return s lemma let r and r r the number of centre pairs in algorithm always algorithm an exact algorithm for svp input i a basis b bn of a lattice l ii iii iv l v n n output a shortest vector of l s for i to n do ei yi sample b using algorithm s s ei yi end r n maxi kbi l nr for j to k s sieve s r using algorithm r end compute the vector in yi ei yj ej ei yi ej yj s with the smallest norm return do algorithm sample input i a basis b bn of a lattice l ii d r output a pair e y bn d p b such that y e l e bn d y e mod p b return e y algorithm a faster sieve for norm input i set s ei yi i i bn bn r such that i yi ei l ii a triplet r output a set s i i i i bn bn such that i i i l r max e y j k s s s c in for e y s do if then s s e y else for i n do find the integer j such that j i i j end if ec c i then s s e y c ec else s c i c i e y end end end return s yi j j satisfies n where cc log proof for each of the n we partitioned the range r into j k intervals of length there can be such intervals in each j and thus in c the number of the index set is of cardinality hence the theorem follows claim the following two invariants are maintained in algorithm e y s y e e y s proof the first invariant is maintained at the beginning of the sieving iterations in algorithm due to the choice of y at step of algorithm since each centre pair ec c once belonged to s so c ec thus at step of the sieving procedure algorithm we have e y c ec the second p invariant is maintained at step of algorithm because y p b and hence kbi n maxi kbi we claim that this invariant is also maintained in each iteration of the sieving procedure consider a pair e y s and let iy is its let ec c is its associated centre pair by algorithm we have iy ic ci for i n so ky and hence ky c ec ky kec the claim follows by of variable r at step in algorithm in the following lemma we bound the length of the remaining lattice vectors after all the sieving iterations are over lemma at the end of k iterations in algorithm the length of lattice vectors ky n r k nr proof let rk is the value of r after k iterations where nr then rk k r k x i h n nr thus after k iterations rk and hence after k iterations ky k rk i h n nr n n using lemma and assuming we get an upper bound on number of j c n vectors of length at most b where cb log the above lemma along with the invariants imply that at the beginning of step in algorithm we have short lattice vectors vectors with norm bounded by we want to start with 
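The following Python sketch renders the core of the l-infinity sieve just described: cells of side gamma*R indexed by integer tuples, a dictionary holding at most one centre pair per cell, and a single linear pass that reduces every remaining pair against the centre of its cell. It deliberately omits the bookkeeping of the provable algorithm (the exact thresholds, the handling of the last shorter interval, and the tossing argument used in the analysis), so it should be read as an illustration of the data structure rather than a faithful implementation.

```python
import numpy as np

def cell_index(y, R, gamma):
    """Integer index of the cell of side gamma*R containing y; costs O(n)."""
    return tuple(np.floor((y + R) / (gamma * R)).astype(int))

def sieve_pass(pairs, R, gamma):
    """pairs: list of (e, y) with y - e in the lattice and ||y||_inf <= R.
    Returns reduced pairs and the dictionary of centre pairs."""
    centres = {}                      # cell index -> (e_c, c)
    out = []
    for e, y in pairs:
        idx = cell_index(y, R, gamma)
        if idx in centres:
            e_c, c = centres[idx]
            # same cell  =>  ||y - c||_inf <= gamma * R
            out.append((e, y - c + e_c))
        else:
            centres[idx] = (e, y)
    return out, centres

# Toy demonstration on the integer lattice Z^3, with zero perturbations:
# after one pass the lattice vectors (y - e) of the survivors have
# l_infinity norm at most gamma * R.
rng = np.random.default_rng(1)
R, gamma = 100.0, 0.5
pairs = [(np.zeros(3), v.astype(float))
         for v in rng.integers(-100, 101, size=(500, 3))]
out, centres = sieve_pass(pairs, R, gamma)
print(len(out), len(centres), max(np.max(np.abs(y - e)) for e, y in out))
```

The point of the dictionary is that locating the one possible centre for a given vector costs a single index computation and lookup, which is what makes each pass linear rather than quadratic in the number of pairs.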
sufficient number of vector pairs so that we do not end up with all zero vectors at the end of the sieving iterations for this we work with the following conceptual modification proposed by regev let u l such that l where l bn bn and bn bn u define a bijection on bn that maps to to and bn to itself u if e e e u if e e else for the analysis of the algorithm we assume that for each perturbation vector e chosen by our algorithm we replace e by e with probability and it remains unchanged with probability we call this procedure tossing the vector further we assume that this replacement of the perturbation vectors happens at the step where for the first time this has any effect on the algorithm in particular at step in algorithm after we have identified a centre pair ec c we apply on ec with probability then at the beginning of step in algorithm we apply to e for all pairs e y the distribution of y remains unchanged by this procedure because y e mod p b and y e a somewhat more detailed explanation of this can be found in the following result of lemma theorem in the modification outlined above does not change the output distribution of the actual procedure note that since this is just a conceptual modification intended for ease in analysis we should not be concerned with the actual running time of this modified procedure even the fact that we need a shortest vector to begin the mapping does not matter the following lemma will help us estimate the number of vector pairs to sample at the beginning of the algorithm lemma lemma in let n n and q denote the probability that a random point in bn is contained in if n points xn are chosen uniformly at random there are at least qn in bn then with probability larger than qn points xi xn with the property xi from lemma we have where cs log q n thus with probability at least iterations such that ei qn we have at least n n pairs ei yi before the sieving lemma if n n then with probability at least algorithm outputs a shortest vector in l with respect to norm proof of the n vector pairs e y sampled at step of algorithm we consider those such that e we have already seen there are at least qn such pairs with probability at least we remove vector pairs in each of the k sieve iterations so at step of algorithm qn we have n n pairs e y to process by lemma each of them is contained within a ball of radius which can have at most n lattice vectors so there exists at least one lattice vector w for which the perturbation is in and it appears twice in s at the beginning of step with probability it remains w or with the same probability it becomes either w u or w u thus after taking difference at step with probability at least we find the shortest vector theorem let and let given a full rank lattice l qn there is a randomized algorithm for svp with success probability at least space complexity at most n and running time at most n where j cspace cs max c cb j and ctime max cspace where cc log cs log and cb log proof if we start with n pairs as stated in lemma then the space complexity is at most n with cspace cs max cc cb n in each j kiteration of the sieving algorithm it takes time to initialize and index c where for each vector pair e y s it takes time at most n to calculate its iy so the time taken to process each vector pair is at most n thus total time taken per iteration of algorithm is at most n n which is at most n and there are at most poly n such iterations if n n then the time complexity for the computation of the pairwise differences is at most n n so the overall 
time complexity is at most n where ctime max cspace improvement using the birthday paradox we can get a better running time and space complexity if we use the birthday paradox to decrease the number of sampled vectors but get at least two vector pairs corresponding to the same lattice vector after the sieving iterations for this we have to ensure that the vectors are independent and identically distributed before step of algorithm so we incorporate the following modification cb assume we start with n n sampled pairs after the initial sampling for each of the k sieving iterations we fix pairs to be used as centre pairs in the following way let r n kyi we maintain k lists ck of pairs where each list is similar to what has been already described before for the ith list we partition the range ri where ri r into intervals of length for each e y s we first calculate to check in which list it can potentially belong say c then we map it to its iy as has already been described before we add e y to c iy if it was empty before else we subtract vectors as in step of algorithm now using an analysis similar to we get the following improvement in the running time theorem let and let given a full rank lattice l qn there is a randomized algorithm for svp with success probability at least space complexity at most cb ctime n where c n and running most time space cs max c c and j max cspace cb where cc log cs log and cb log in particular for and the algorithm runs in time sponding space requirement of at most n n with a faster approximation algorithms algorithm for approximate svp notice that algorithm at the end of the sieving procedure obtains lattice vectors of length at most o so as long as we can ensure that one of the vectors obtained at the end of the sieving procedure is we obtain a o of the shortest vector consider a new algorithm a that is identical to algorithm except that step is replaced by the following find a vector in yi ei ei yi s we now show that if we start with sufficiently many vectors we must obtain a vector lemma if n then with probability at least algorithm a outputs a vector in l of length at most o with respect to norm proof of the n vector pairs e y sampled at step of algorithm a we consider those such that e we have already seen there are at least qn such pairs we remove vector pairs in each of the k sieve iterations so at step of algorithm we have n pairs e y to process with probability e and hence w y e is replaced by either w u or w u thus the probability that this vector is the zero vector is at most we thus obtain the following theorem let and given a full rank lattice l qn there is a randomized with success probability at least algorithm that for o approximates j space and time complexity cs n where cc log and cs log in particular for o the algorithm runs in time algorithm for approximate cvp given a lattice l and a target vector t let d denote the distance of the closest vector in l to just as in section we assume that we know the value of d within a factor of we can get rid of this assumption by using babai s algorithm to guess the value of d within a factor of and then run our algorithm for polynomially many values of d each within a factor of the previous one for define the following n lattice l v v l t let l be the lattice vector closest to then u t l kt for some k z we sample n vector pairs e y bn p using algorithm where bn t is a basis for next we run a number of iterations of the o further sieving algorithm to get a number of vector pairs such that r details can be 
found in algorithm note that in the algorithm n is the vector obtained by restricting v to the first n with respect to the computational basis from lemma we have seen that after iterations where kbi i h thus after the sieving iterations the set s consists of vector pairs r n such that the corresponding lattice vector v has c d o in order to ensure that our sieving algorithm doesn t return vectors from l kt for some k such that we choose our parameters as follows o then every vector has d and so either v t or v z t for some lattice vector z we need to argue that we must have at least some vectors in l t after the sieving iterations to do so we again use the tossing argument from section let l be the lattice vector closest to then let u let bn and bn bn u from lemma we have that the probability q that a random perturbation vector is in is bounded as n where cs log q thus as long as max algorithm approximate algorithm for cvp input i a basis b bn of a lattice l ii target vector t iii approximation factor iv v such that max c where c is a small constant vi vii n n output a closest vector to t in l d t s while d n maxi kbi do s s bn t l m span v v l for i to n do ei yi sample using algorithm s s ei yi end r n maxi kbi do while r s sieve s r using algorithm r end s y e e y s compute w s such that n min n s and t t w d d end let be any vector in t such that n min n w t n if then return t else return t end we have at least n n pairs ei yi before the sieving iterations such that ei thus using the same argument as in section we obtain the following theorem let and for any let max given a full rank lattice with l qn there is a randomized algorithm that for o approximates cvp j c n success probability at least space and time complexity s c where cc log in particular for o and the algorithm runs in time and cs log heuristic algorithm for svp nguyen and vidick introduced a heuristic variant of the aks sieving algorithm we have used it to solve svp the basic framework is similar to aks except that here we do not work with perturbation vectors we start with a set s of uniformly sampled lattice vectors of norm n l these are iteratively fed into a sieving procedure algorithm which when provided with a list of lattice vectors of norm say r will return a list of lattice vectors of norm at most in each iteration of the sieve a number of vectors are identified as centres if a vector is within distance from a centre we subtract it from the centre and add the resultant to the output list the iterations continue till the list s of vectors currently under consideration is empty the size of s can decrease either due to elimination of zero vectors at steps and of algorithm or due to removal of centres in algorithm after a linear number of iterations we expect to be left with a list of very short vectors and then we output the one with the minimum norm in order to have the shortest vector or a proper approximation of it with a good probability we have to ensure that we do not end up with a list of all indicating an end of the sieving iterations too soon say after number of iterations we make the following assumption about the distribution of vectors at any stage of the algorithm heuristic at any stage of the algorithm the vectors in in bn r x rn r r are uniformly distributed now after each sieving iteration we get a zero vector if there is a collision of a vector with a centre vector with the above assumption we can have following estimate about the expected number of collisions lemma let p vectors are randomly chosen 
with replacement from a set of cardinality n then the expected number of different vectors picked is n n p so the expected number of vectors lost through collisions is p n n p this number is negligible for p n since the expected number of lattice points inside a ball of radius is o rn the effect of collisions remain negligible till it can be shown that it is sufficient to take n which gives so collisions are expected to become significant only when we already have a good estimate of and even then collisions will imply we had a good proportion of lattice vectors in the previous iteration and algorithm svp algorithm in norm using lattice sieve input i a basis b bn of a lattice l ii sieve factor iii number output a short vector of s for j to n do s s sampling b using klein s algorithm end remove all zero vectors from s s while s do s s latticesieve s using algorithm remove all zero vectors from s end compute such that return algorithm lattice sieve input i a subset s bn r l ii sieve factor output a subset s bn r c for v s do if then s s v else if c such that kv then s s v c else c c v end end end return s thus with good probability we expect to get the shortest vector or a constant approximation of it at step of algorithm here we would like to make some comments about the initial sampling of lattice vectors at step of algorithm due to our assumption heuristic we have to ensure that the lattice points are uniformly distributed in the spherical shell or corona bn r at this stage too as in we can use klein s randomized variant of babai s nearest plane algorithm intuitively what we have to ensure is that the sampled points should not be biased towards a single direction gentry et al gave a detailed analysis of klein s algorithm and proved the following theorem p let b bn be any basis of an lattice l and s ln there exists a randomized polynomial time algorithm whose output distribution is statistical distance of the restriction to l of a gaussian centered at and with variance with density proportional to v exp using fact exp v exp for p exp v exp for p and assuming kbkp n p l we can conclude that the above algorithm can be used to uniformly sample lattice points of norm at most n p l at step of algorithm for all p we will now analyze the complexity of algorithm for this the crucial part is to assess the number of centres or which is done in the following lemma n where kc lemma let and nc kc s bn r such that n and its points are picked independently at random with uniform distribution from bn r if nc n then for any c s with uniformly distributed points and cardinality at least nc we have the following for any v s with high probability c such that kv proof assuming heuristic holds during every iteration of the sieve the expected fraction of bn r that is not covered by nc balls of radius centered at randomly chosen points of n from lemma bn r is nc where kc now nc log n log n thus the expected fraction of the corona covered by nc balls is at least so the expected number of uncovered points is less than since this number is an integer it is with probability at least theorem the expected space complexity and running time of algorithm is at most n n and n respectively n n where k is as defined proof let nc expected number of centers in each iteration poly n kc c in lemma thus each time the lattice sieve is invoked in steps of algorithm we expect size of n provided it satisfies heuristic s to decrease by aproximately kc we can use the lll algorithm lemma to obtain an estimate of l with approxima tion 
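The collision estimate just stated is the standard with-replacement count: drawing p vectors uniformly from a set of size N leaves N(1 - (1 - 1/N)^p) distinct ones in expectation. A quick simulation (illustrative only, with arbitrary parameters) confirms it:

```python
import numpy as np

rng = np.random.default_rng(2)
N, p, trials = 10_000, 3_000, 200
distinct = np.mean([len(set(rng.integers(0, N, size=p))) for _ in range(trials)])
print(distinct, N * (1 - (1 - 1 / N) ** p))   # the two numbers agree closely
```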
factor so we can start with vectors of norm n l in each iteration of lattice n n sieve the norm of the vectors decrease by a factor if we start with n kc vectors then after a linear number of iterations we expect to be left with some short vectors since the running time of the lattice sieve is quadratic the expected running time of the n algorithm is at most heuristic sieving algorithm for svp in order to improve the running time which is mostly dictated by the number of centres wang et al introduced a sieving procedure that improves upon the nv sieve for large here in the first level we identify a set of centres and to each c we associate vectors within a distance r from it now within each such r radius big ball we have another set of vectors which we call the centre from each we subtract those vectors which are in bn r and add the resultant to the output list we have analysed this sieve algorithm in the norm and also found similar improvement in the running time to analyze the complexity of algorithm with the sieving procedure algorithm we need to count the number of centres in first level which is given in lemma for each c we count the number of centres h i n where k lemma let and kc c s bn r r such that n and its points are picked independently at random with uniform distribution from bn r r if n then for any s with uniformly distributed points and cardinality at least we have the following s with high probability such that kv proof the proof is similar to lemma now we cover bn bn r where r bn with smaller balls bn q where q bn bn r let vq q bn we can apply lemma to conclude that h i lim b vq u n algorithm heuristic sieve input i a subset s bn r l ii sieve factors output a subset s bn r r s for v s do if r then s s v else if such that kv r then if such that kv r then s s v else v end else v v end end end return s and lim hence the fraction of bn bn u bn r covered by bn h v i q bn q is h in using similar arguments of lemma or lemma we can estimate the number of centres at second level in the following lemma we bound the number of centres within each r radius big ball centred at some point c say h i lemma let and kc c n where kc c t s bn r r bn c r where c bn r r such that n and its points are picked independently at random with uniform distribution if n then for any s with uniformly distributed points and cardinality at least we have the following s with high probability such that kv finally we can analyze the complexity of the above algorithm theorem the space complexity of algorithm using sieve algorithm is n poly n where and are as defined in lemma and respectively also the time complexity is at most n the optimal value is attained when yielding a time complexity of at most n the space complexity is at most n k proof the expected number of centres in any iteration of algorithm is poly n kc c we can use the lll algorithm lemma to obtain approximation of l thus we can initially sample n poly n vectors of norm n l assuming the heuristic holds in each iteration of the sieve the norm of the vectors decrease by a factor and also the expected size of s decreases by so after a polynomial number of sieve iterations we expect to be left with some vectors of norm o l n so space complexity is at most n now in each sieve iteration each vector is compared with at most centres thus the expected running time is at most t poly n n the optimal value plugging in the expressions we get t poly n is attained when n yielding at this value t n acknowledgement references sanjeev arora babai jacques stern and 
8
on the table of marks of a direct product of finite groups feb brendan masterson and pfeiffer abstract we present a method for computing the table of marks of a direct product of finite groups in contrast to the character table of a direct product of two finite groups its table of marks is not simply the kronecker product of the tables of marks of the two groups based on a decomposition of the inclusion order on the subgroup lattice of a direct product as a relation product of three smaller partial orders we describe the table of marks of the direct product essentially as a matrix product of three class incidence matrices each of these matrices is in turn described as a sparse block diagonal matrix as an application we use a variant of this matrix product to construct a ghost ring and a mark homomorphism for the rational double burnside algebra of the symmetric group introduction the table of marks of a finite group g was first introduced by william burnside in his book theory of groups of finite order this table characterizes the actions of g on transitive which are in bijection to the conjugacy classes of subgroups of thus the table of marks provides a complete classification of the permutation representations of a finite group g up to equivalence the burnside ring b g of g is the grothendieck ring of the category of finite the table of marks of g arises as the matrix of the mark homomorphism from b g to the free zr where r is the number of conjugacy classes of subgroups of like the character table the table of marks is an important invariant of the group by a classical theorem of dress g is solvable if and only if the prime ideal spectrum of b g is connected if b g has no nontrivial idempotents a property that can easily be derived from the table of marks of the table of marks of a finite group g can be determined by counting inclusions between conjugacy classes of subgroups of g for this the subgroup lattice of g needs to be known as the cost of complete knowledge of the subgroups of g increases drastically with the order of g or rather the number of prime factors of that order this approach is limited to small groups alternative methods for the computation of a table of marks have been developed which avoid excessive computations with the subgroup lattice of this includes a method for computing the table of marks of g from the tables of marks of its maximal subgroups and a method for computing the table of marks of a cyclic extension of g from the table of marks of g the purpose of this article is to develop tools for the computation of the table of marks of a direct product of finite groups and the obvious idea here is to relate the subgroup lattice of to the subgroup lattice of and and to compute the table of marks of using this relationship many properties of can be derived from the properties of and with little or no effort at all conjugacy classes date february mathematics subject classification key words and phrases burnside ring table of marks subgroup lattice double burnside ring ghost ring mark homomorphism brendan masterson and pfeiffer of elements of for example are simply pairs of conjugacy classes of and and the character table of is simply the kronecker product of the character tables of and however the relationship between the table of marks of and the tables of marks of and is much more intricate a flavour of the complexity to be expected is already given by a classical result known as goursat s lemma lemma according to which the subgroups of a direct product of finite 
groups and correspond to isomorphisms between sections of and this article presents the first general and systematic study of the subgroup lattice of a direct product of finite groups beyond goursat s lemma only very special cases of such subgroup lattices have been considered so far by schmidt and zacher in view of goursat s lemma it seems appropriate to first develop some theory for sections in finite groups here a section of a finite group g is a pair p k of subgroups p k of g such that k is a normal subgroup of we study sections by first defining a partial order on the set of sections of g as componentwise inclusion of subgroups p k p k if p p and k now if p k p k the canonical homomorphism p decomposes as a product of three maps an epimorhism an isomorphism and a monomorphism we show that this induces a decomposition of the partial order as a product of three partial orders which we denote by and for reasons that will become clear in section thus and this decomposition of the partial order is compatible with the conjugation action of g on the set of its sections the description of subgroups of in terms of sections of and allows us to transfer the decomposition of the partial orders on the sections to the set of subgroups of we will show in section that for subgroups l m of there exist unique intermediary subgroups l and m such that l l m m where the partial orders and on the set of subgroups of are defined in terms of the corresponding relations on the sections of and this gives a decomposition of the partial order on subgroups into three partial orders which is compatible with the conjugation action of in section we will show as one of our main results that this yields a corresponding decomposition of the table of marks of g as a matrix product of three class incidence matrices individually each of these class incidence matrices has a block diagonal structure which is significantly easier to compute than the subgroup lattice of the rest of this paper is arranged as follows in section we collect some useful known results in section we study the sections of a finite group g and discuss properties of the lattice of sections partially ordered componentwise we show how a decomposition of this partial order as a relation product of three partial orders leads to a corresponding decomposition of the class incidence matrix of the sections of g as a matrix product this section concludes with a brief discussion of an interesting variant of the partial order on sections and its class incidence matrix section considers isomorphisms from sections of g to a particular group u as subgroups of g u we determine the structure of the set of all such isomorphisms as a g aut u in section we study subgroups of as pairs of such isomorphisms one from a section of into u and one from this allows us to determine the structure of the set of all such subgroups as a aut u we also derive a decomposition of the subgroup inclusion order of as a relation product of three partial orders from the corresponding on the table of marks of a direct product of finite groups decomposition of the partial orders of sections from section in section we develop methods for computing the individual class incidence matrices for each of the partial orders on subgroups and use these matrices to compute the table of marks of essentially as their product finally in section we present an application of the theory the double burnside ring b g g of a finite group g is defined as the grothendieck ring of transitive g g and where addition is 
defined as disjoint union and multiplication is tensor product the double burnside ring is currently at the centre of much research and is an important invariant of the group g see here we study the particular case of g and use our partial orders to construct an explicit ghost ring and mark homomorphism for qb g g in the sense of boltje and danz acknowledgement much of the work in this article is based on the first author s phd thesis see this research was supported by the college of science national university of ireland galway preliminaries notation we denote the symmetric group of degree n by sn the alternating group of degree n by an and a cyclic group of order n simply by we use various forms of composition in this paper group homomorphisms act from the right and are composed accordingly the product of and is defined by for a where gi is a group i the relation product of relations r x y and s y z is the relation s r x z x y r and y z s for some y y x z where x y z are sets in section the product l m of subgroups l and m will be defined as mop lop op where rop y x x y r denotes the opposite of subgroups as relations the following classical result describes subgroups of a direct product as isomorphisms between section quotients here a section of a finite group g is a pair p k of subgroups of g so that k e lemma goursat s lemma let be groups there is a bijective correspondence between the subgroups of the direct product and the isomorphisms of the form where pi ki is a section of gi i proof let l and let pi gi be the projection of l onto gi i then l is a binary relation from to writing for l it is easy to see that is a partition of into cosets of the normal subgroup of similarly the sets are cosets of a normal subgroup of the relation l thus is difunctional it establishes a bijection between the section quotients and which in fact is a group homomorphism conversely any isomorphism between sections pi ki of gi i yields a relation which in fact is a subgroup of if a subgroup l corresponds to an isomorphism then we write pi l for pi and ki l for ki i we call the sections pi ki the goursat sections of l and the isomorphism type of pi the goursat type of finally l is called the graph of and conversely is the goursat isomorphism of the next lemma illustrated in fig can be derived from lemma see brendan masterson and pfeiffer figure butterfly lemma lemma butterfly lemma let and be sections of set ki for i and then and the canonical map is an isomorphism i we refer to the section as the butterfly meet of and let be finite groups the product l m of subgroups l and m is defined as l m l and m for some then l m is in fact a subgroup thanks to we obtain the goursat isomorphism of by composing goursat isomorphisms and as follows suppose that l is the graph of the isomorphism and that m is the graph of with both and being sections of let subgroups and isomorphisms i be as in the butterfly lemma let be the isomorphism obtained by restricting to defined by for p moreover let be the of to defined by for p then the graph of is a subgroup of although not necessarily of l the graph of is a subgroup of lemma with the above notation l m is the graph of the composite isomorphism we use the subgroup product and its goursat isomorphism in the proof of theorem bisets and biset products the action of a direct product on a set x is sometimes more conveniently described as the two groups gi acting on the same set x one from the left and one from the right on the table of marks of a direct product of finite groups 
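To make this translation explicit (a standard identification, recorded here as a worked example ahead of the formal definition below): a left $(G_1 \times G_2)$-set $X$ carries a left $G_1$-action and a right $G_2$-action via
$$ g_1 \cdot x := (g_1, 1)\, x \quad\text{and}\quad x \cdot g_2 := (1, g_2^{-1})\, x , $$
and these two actions commute because $(g_1, 1)$ and $(1, g_2^{-1})$ commute in $G_1 \times G_2$; conversely, a pair of commuting left and right actions reassembles into a single action of the direct product. In particular, any group $U$ is a $(U,U)$-biset under left and right multiplication.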
definition let and be groups then a x is a left and a right such that the actions commute x gi gi x x under suitable conditions bisets can be composed as follows definition let and be groups if x is a and y a biset the tensor product of x and y is the x y x y of on the set x y under the action given by x y g the tensor product of bisets will be used in section to describe certain sets of subgroups of it also provides the multiplication in the double burnside ring of a group g which is the subject of section action on pairs we will also need to deal with one group acting on two sets the following parametrization of the orbits of a group acting on a set of pairs is lemma let g be a finite group acting on finite sets x and y and suppose that z x y is a set of pairs then a x y g x gy y g where zy x x x y z for y y the of pairs in z are thus represented by pairs x y where the y represent the orbits of g on y and for a fixed y the x represent the orbits of the stabilizer of y on the set zy of all x x that are to y proof note that a z x y g y g is a disjoint union of intersections z x y g whence is the corresponding disjoint union of orbit spaces z x y g by lemma for each y y the map x gy x y g is a bijection between and y g hence for every y y there is a bijection between and z x y g class incidence matrices let x be a finite partially ordered set poset with incidence matrix if y x a axy x where axy else this incidence matrix a is lower triangular if the order of rows and columns of a extends the partial order suppose further that is an equivalence relation on x then partitions x into classes x x t for a transversal t x we say that the partial order is compatible with the equivalence relation if for all classes x y the number axy x x y x brendan masterson and pfeiffer does not depend on the choice of the representatives x y x if axy axy for y y in that case we define the class incidence matrix of the partial order to be the matrix a axy x whose rows and columns are labelled by the chosen transversal t matrix multiplication relates the matrices a and a in the following way lemma define a row summing matrix r rxy and a column picking matrix c cxy with entries if x y if x y rxy cxy else else then i r c i the identity matrix on t ii r a a r iii a r a c p proof i for each x z t rxy cyz rxz ii for each x t z x the x of both matrices is equal to axy where y t represents the class z x iii follows from ii and i remark examples of compatible posets are provided by group actions suppose that a finite group g acts on a poset x in such a way that x y for all x y x and all a then x is called a the partial order is compatible with the partition of x into since x x y x x x x for all x y x we write r g and c g for r and c if the equivalence is given by a remark more generally any square matrix a with rows and columns indexed by a set x with an equivalence relation after choosing a transversal of the equivalence classes yields a product r a c we say that the matrix a is compatible with the equivalence if this product does not depend on the choice of transversal if the equivalence on x is induced by the action of a group g then the matrix a axy x is compatible if axy for all g such matrices are the subject of proposition and theorem the burnside ring and the table of marks the burnside ring b g of a finite group g is the grothendieck ring of the category of finite that is the free abelian group with basis consisting of the isomorphism classes x of transitive x with disjoint union as addition and the cartesian 
product as multiplication multiplication of transitive is described by mackey s formula lemma x ad b d the rational burnside algebra qb g q b g is isomorphic to a direct sum of r copies of q one for each conjugacy class of subgroups of g with products of basis elements determined by the above formula on the table of marks of a direct product of finite groups the mark of a subgroup h of g on a x is its number of fixed points x x x for all h h obviously whenever and are conjugate subgroups of the map b g zr assigns to x b g the vector zr where hr is a transversal of the conjugacy classes of subgroups of in this context the ring zr with componentwise addition and multiplication is called the ghost ring of we have x y x y x y x y where the latter product is componentwise multiplication in zr thus is a homomorphism of rings called the mark homomorphism of the table of marks m g of g is the r with rows i r the mark vectors of all transitive up to isomorphism regarding as a linear map from qb g to qr the table of marks is the matrix of relative to the natural basis of qb g and the standard basis of qr as k h hg k g g for subgroups h k g the table of marks provides a compact description of the subgroup lattice of in fact m g d a where d is the diagonal matrix with entries hi hi and a is the class incidence matrix of the group g acting on its lattice of subgroups by conjugation example let g then g has conjugacy classes of subgroups and m g sections let g be a finite group we denote by sg the set of subgroups of g and by sg h g h g the set of conjugacy classes of subgroups of a section of g is a pair p k of subgroups of g where k e we call p the top group and k the bottom group of the section p k we refer to the quotient group as the quotient of the section p k the isomorphism type of a section is the isomorphism type of its quotient and the size of a section is the size its quotient we denote the set of sections of g by qg p k k e p g the group g acts on the set of pairs qg by conjugation in sections and we classify the orbits of this action and describe the automorphisms induced by the stabilizer of a section on its quotient the partial order on sg induces a partial order on the pairs in qg in section we show that this partial order is in fact a lattice and how it can be decomposed as a product of three smaller partial order relations in section we determine the class incidence matrix of qg and show that the decomposition of the partial order on qg corresponds to a decomposition of the class incidence matrix of qg as a matrix product of three class incidence matrices in section we use the smaller partial orders to define a new partial order on qg that is consistent with the notion of size of a section brendan masterson and pfeiffer conjugacy classes of sections a finite group g naturally acts on its sections through componentwise conjugation via p k g pg kg where p k qg and g we write p k g for the conjugacy class of a section p k in g and denote the set of all conjugacy classes of sections of g by qg p k g p k qg the conjugacy classes of sections can be parametrized in different ways in terms of simpler actions as follows proposition let g and sg be as above ep i for p g let sep g k sg k e p then sg is an ng p and a qg p k g k sep g p p ke ii for k g let ske g p sg k e p then sg is an ng k and a qg p k g k ske g k k proof i note that qg sg sg is a set of pairs as the stabilizer of k sg is ng k the result follows with lemma ii follows in a similar way we write u g for a finite group u which is 
isomorphic to a subquotient of we denote by qg u the set of sections of g with isomorphism type u and by u qg u p k qg g its classes naturally qg a qg u each of the above three partitions of qg will be used in the sequel section automizers the automizer of a subgroup h in g is the quotient group of the section ng h cg h the automizer of h is isomorphic to the subgroup of aut h induced by the conjugation action of analogously we define the automizer of a section as a section whose quotient is isomorphic to the subgroup of automorphisms induced by conjugation by definition let p k qg and set n ng k using the natural homomorphism n n n nk we let p p and n n we define the section normalizer of p k to be the inverse image ng p k nn p the section centralizer to be cg p k cn p and the section automizer to be the section ag p k ng p k cg p k moreover we denote by autg p k the subgroup of aut of automorphisms induced by conjugation by g see fig on the table of marks of a direct product of finite groups aut ng p k nn p autg p k p cg p k p cn p inn cn p p p cg p k p cg p k p cn p k figure the section p k and its automorphisms the following properties of these groups are obvious lemma let p k be a section in qg then i ng p k ng p ng k ii cg p k is the set of all g ng p k which induce the identity automorphism on iii inn g autg p k aut the sections lattice subgroup inclusion induces a partial order on the set qg of sections of g which inherits the lattice property from the subgroup lattice as follows definition qg is a poset with partial order defined componentwise p k p k if p p and k k for sections p k and p k of for subgroups a b g we write a b ha bi for the join of a and b in the subgroup lattice of g and hhaiib for the normal closure of a in b proposition the poset qg is a lattice with componentwise meet and join given by for sections and of proof clearly is a normal subgroup of and the section is the unique greatest lower bound of the sections and in qg it is also easy to see that the least section p k of g with p and k has p and k iip theorem let p k p k be sections of a finite group then i p k p is the largest section between p k and p k with top group p ii p k k is the smallest section between p k and p k with bottom group k iii the map p k p pk p p is an isomorphism between the section quotients of p k p and p k k brendan masterson and pfeiffer p p k figure p k p k proof if p k p k then there is a canonical homomorphism from p to given by k p kp for p p according to the homomorphism theorem can be decomposed into a surjective bijective and an injective part that is where p p p p p p p are uniquely determined see fig motivated by the above result we define the following three partial orders on qg definition let p k p k then we write i p k p k if p p if the sections have the same top groups ii p k p k if k k if the sections have the same bottom groups iii p k p k if the map pk pk p p is an isomorphism we can now reformulate theorem in terms of these three relations corollary the partial order on qg is a product of three relations let a denote the incidence matrix for the partial order then the stronger property a a a a also holds proof by theorem for p k p k there exists unique intermediate sections s s qg such that p k s s p k remark note that by the correspondence theorem there is a bijective correspondence between the subgroups of and the sections p k of g with p k p k similarly there is a bijective correspondence between the normal subgroups and hence the factor groups of and the sections p 
k of g with p k p k class incidence matrices we denote the class incidence matrix of the qg by a note that the set qg of sections of g is also a with respect to any of the partial orders from definition with respective class incidence matrices a a and a theorem with this notation a a a a on the table of marks of a direct product of finite groups proof set r r g and c c g from lemma iii we have that a r a by corollary a a a a lemma ii then gives a r a a a c a r a a c a a r a c a a a each of the classes incidence matrices a a and a is a direct sum of smaller class incidence matrices as the following results show theorem for p g denote the class incidence matrix of the ng p sep g by ap then m a ap p proof let p k qg by proposition the classes containing a section with top group p are represented by sections p k where k runs over a transversal of the ng p of sep g in order to count the of p k above p k in g the it now suffices to note that p k p k for some g g if and only if k k g for some g ng p example let g then a g g g g g g g g theorem for k g denote the class incidence matrix of the ng k ske g by ak then m a ak k proof similar to the proof of theorem example let g then a g g g g lemma we have a g m g g g au where for u g au is the class incidence matrix of the qg u brendan masterson and pfeiffer proof p k p k implies p example let g then a g g g g g g g g the class incidence matrix a of the qg is the product of this matrix and the class incidence matrices in examples and according to theorem a g g g g g g g g the sections lattice revisited the partial order on qg is not compatible with section size as p k p k implies it turns out that by effectively replacing the partial order by its opposite p one obtains from a new partial order which is compatible with section size proposition define a relation on qg by p k p k if p p and k p k for sections p k and p k of then qg is a proof the relation is clearly reflexive and antisymmetric on qg and compatible with the action of hence it only remains to be shown that this relation is transitive let p k p k and p k be sections of g such that p k p k and p k p k in order to show that p k p k we need p p which is clear and k p k intersecting both sides of k p k with p gives k p k p k as desired example let us denote the three subgroups of order of the klein g by and then g g and g g g g as the sections g g have no unique infimum the poset qg is not a lattice proposition let p k and p k be sections of a finite group g such that p k p k then there are uniquely determined sections of g p k p k such that i p k ii p k p iii proof by definition p k p k implies p p and k p k where k e p and k e then by the second isomorphism theorem k p is a normal subgroup of p p k is a subgroup of p such that k e p k and p k is isomorphic to p k p on the table of marks of a direct product of finite groups p p k k figure p k p k hence p k k and p k p have the desired properties see fig corollary the partial order on qg is a product of three relations p moreover a a a a p and a a a a p example let g then a g g g g g g g g in contrast to the class incidence matrix a in example the matrix a is lower triangular when rows and columns are sorted by section size moreover p k g for all sections p k of remark whenever p k p k there is a canonical isomorphism p p let be the goursat isomorphism of a subgroup of and suppose that p k the canonical isomorphism determines a unique restriction of to a goursat isomorphism p similarly for each section p k there is a unique of to a goursat 
isomorphism p as the butterfly meet p k of sections and of a group g satisfies p k ki i by lemma the product of subgroups with goursat isomorphisms and is the composition of the restriction of and the of to the butterfly meet of and morphisms let u be a finite group a of g is an isomorphism u between a section p k of g and the group u the set mg u u p k qg u of all of g forms a g aut u in section we describe the set mg u of of as an out u the identification of mg u brendan masterson and pfeiffer with certain subgroups of g u in section induces a partial order on mg u in section we compute the class incidence matrix of this partial order classes of each u of g induces an isomorphism between the automorphism groups aut and aut u we define the automizer of as an isomorphism between the quotient of the automizer ag p k of the section p k and the corresponding subgroup of aut u definition given a p k u denote ng p k and cg p k and let aut u be the image of autg in aut u the automizer of the is the ag that for n maps the coset to the automorphism of u corresponding to conjugation by n on moreover denote by u out u the group of outer automorphisms of u induced via noting that inn u a a the group g acts on mg u via a where p is the conjugation map induced by a we denote by g a g the of the and by mg u g mg u the set of of for a section p k qg u denote by mp k g u the set of with domain under the action for mg u and aut u the set mg u decomposes into regular aut u mp k g u one for each section p k qg u as the action of aut u commutes with that of g it induces an aut u g g on the set mg u of this action can be used to classify the of as follows proposition let u i as aut u mg u is the disjoint union of transitive aut u p k mp k g u g mg u one for each of sections p k g qg u ii let u be a of then mp k g u g where is a transversal of the right cosets of in aut u note that by an abuse of notation mp k g u is the set of full of the up k morphisms in mp k u although m u is not a in general g g proof let x mg u and let y qg u then x can be identified with the ginvariant subset z of x y consisting of those pairs p k where has domain by lemma is the disjoint union of aut u mp k g u one for each p k g of sections of on the table of marks of a direct product of finite groups now let u be a the stabilizer of the section p k in g is its normalizer ng p k the automizer ag transforms this action into the subgroup of aut u as aut u acts regularly on mp k g u the on this set correspond to the cosets of in aut u and g mp k g u g as inn u for all mg u the aut u on mg u can be regarded as an out u thus for each section p k of g the set mp k g u is isomorphic to out u as out u example let g and u then qg u and aut u makes two orbits on the of the form u as comparing morphisms by goursat s lemma lemma a u corresponds to the subgroup l p pk p p g u we call l the graph of the partial order on the subgroups of g u induces a natural partial order on mg u as follows if and are with graphs l and l then we define l this partial order on mg u is closely related to the order on qg u proposition let u and p u be of then p k p k and where p is the homomorphism defined by pk pk for p p proof let l p pk p p be the graph of and let l p pk p p be that of assume first that l this clearly implies p p and k moreover for any p p if p pk l l then pk pk pk as p pk is the unique element in l with first component hence now is an isomorphism whence p k p k conversely if p k p k and then clearly p pk p pk p pk l for all p p whence l more 
generally for finite groups u u g suppose that sections p k qg u and p k qg u are such that p k p k with canonical homomorphism p if u and p u are isomorphisms then the composition obviously is a homomorphism from u to u see fig p u figure u u brendan masterson and pfeiffer in case u u the previous lemma says that if and only if idu if u u then and are incomparable however there are the following connections to the partial orders on qg lemma let u be a then induces i an order preserving bijection between the sections p k of g with p k p k and the subgroups of u ii an order preserving bijection between the sections p k of g with p k p p k and the normal subgroups of u proof this is an immediate consequence of remark on the correspondence theorem the partial order of morphism classes the partial order on mg u is compatible in the sense of section with the conjugation action of g and hence yields a class incidence matrix ag u a u where for mg u a a g this matrix is a submatrix of the class incidence matrix of the subgroup lattice of g u corresponding to the classes of subgroups which occur as graphs of proposition suppose that mg u have graphs l l g u then a l a u l a u g u proof the result follows if we can show that the g u of l is not larger than its for this let u u then p u l for some p p and hence l p u but then l u l p as for mg u and aut u we have the matrix ag u is compatible in the sense of section with the action of out u on mg u in fact this relates it to the class incidence matrix au of qg u as follows proposition with the row summing and column picking matrices corresponding to the out u on mg u we have au r out u ag u c out u proof by proposition i the union of the classes g aut u is the set of all of the form a u for some a this set contains for each conjugate a with a p exactly one above p u by proposition example let g and u then mg u consists of three classes one with p k and two with p k permuted by out u we have g au au on the table of marks of a direct product of finite groups subgroups of a direct product from now on let and be finite groups in this section we describe the subgroups and the conjugacy classes of subgroups of the direct product in terms of properties of the groups and by goursat s lemma the subgroups of correspond to isomorphisms between sections of and any such isomorphism arises as composition of two for a suitable finite group u this motivates the study of subgroups of as pairs of pairs of morphisms let u be a finite group we call l a u i and we denote of if u is its goursat type if pi by u the set of all of given morphisms pi u in mgi u i composition yields an isomorphism with whose graph is a l hence there is a map u u u defined by in fact the u is the tensor product of the aut u u and the opposite of the aut u u proposition u u u u op proof for any l there exist mgi u i such that moreover for mgi u we have if and only if in aut u it will be convenient to express the order of a in terms of lemma let l be a of then comparing subgroups let u u be finite groups we now describe and analyze the partial order of subgroups of in terms of pairs of morphisms proposition let pi u mgi u and u mgi u i be morphisms let with corresponding subgroups l l of then l l if and only if i pi ki as sections of gi i and ii where and pi is the homomorphism defined by p ki p for p i u u u u figure l l brendan masterson and pfeiffer proof write l and l then l l if and only if pi ki i and for pi we have but if pi then pi ki pi so if and only if see fig corollary with the 
notation of proposition l l if and only if i pi ki as sections of gi i ii the partial orders on sections introduced in definition give rise to relations on the subgroups of as follows definition let l and l be subgroups of and suppose that l we write i l l if pi ki i if both sections of l and l have the same top groups ii l l if pi ki i if both sections of l and l have the same bottom groups iii l l if pi ki i if the canonical homomorphisms pi are isomorphisms all three relations are obviously partial orders moreover they decompose the partial order on the subgroups of in analogy to corollary theorem let l and l be such that l define a map by whenever pi are such that and a map by whenever pi are such that then i and are isomorphisms with corresponding graphs and ii and are the unique subgroups of with l proof denote by pi the canonical homomorphism i then as in the proof of theorem is the product of an epimorphism ker an isomorphism ker im and a monomorphism im pi by corollary it follows that im im and ker ker thus restricts to an isomorphism from im to im and induces an isomorphism from ker to ker and the following diagram commutes p p ki by proposition im ki and ker i i corollary the partial order on is a product of three relations moreover if a r denotes the incidence matrix of the relation r the stronger property a a a a also holds proof like corollary this follows from the uniqueness of the intermediate subgroups in theorem on the table of marks of a direct product of finite groups im ker im ker figure lemma let pi u mgi u i and l then i the set l l l is in an order preserving bijective correspondence with the subgroups of u ii the set l l l is in an order preserving bijective correspondence with the quotients of u iii the set l l l is in an order preserving bijective correspondence with iv the set l l l is in an order preserving bijective correspondence with proof this follows from lemma on the correspondences induced by a together with proposition and theorem classes of subgroups the conjugacy classes of of can be described as aut u of pairs of classes of theorem let u gi i i u is the disjoint union of sets u u u op one for each pair of section classes pi ki gi qgi u ii let pi u be of gi i then u u u op d where is a transversal of the cosets in aut u proof i as u u u u the classes of of are aut u on the direct product u u by proposition i this direct product is the disjoint union of aut u invariant direct products u u one for each choice of gi of sections pi ki gi qgi u i ii let aut u i note first that the image of under is a class of and that each is of this form we show that the classes in u u correspond to the brendan masterson and pfeiffer cosets in aut u for this let aut u i and assume that by proposition ii this is the case if and only if if and lie in the same coset example let g for each u we have out u therefore by theorem there exists exactly one conjugacy class of subgroups for each pair of classes of isomorphic sections a transversal of the conjugacy classes of subgroups of g g can be labelled by pairs of sections as follows g g g g g g g g here a subgroup li in row and column has a goursat isomorphism of the form the normalizer of a subgroup of described as a quotient of two can be described as the quotient of the automizers of the two theorem let u gi and let mgi u for i then op proof for i suppose that pi u and let agi pi ki then agi aut u is the automizer of let then on the one hand consists of those elements which induce automorphisms i agi aut u such that ai on 
the other hand by the lemma op where for i is the restriction of the isomorphism agi to the preimage of in hence as a subgroup of the product op consists of those elements with it follows that op as desired as an immediate consequence we can determine the normalizer index of a subgroup of in terms of corollary let l for pi u i then l u where ni ngi ki and pi pi i proof by lemma with the notation from the preceding proof l thus l on the table of marks of a direct product of finite groups but ki pi i by definition moreover u and u u table of marks we are now in a position to assemble the table of marks of from a collection of smaller class incidence matrices theorem let and be finite groups then the table of marks of is m d a a a where d is the diagonal matrix with entries l for l running over a transversal of the conjugacy classes of subgroups of proof the proof is similar to that of theorem in combination with corollary in the remainder of this section we determine the block diagonal structure of each of the matrices a a and a of the class incidence matrix of the is a block diagonal matrix with one block for each pair of conjugacy classes ki of subgroups of gi i theorem for ki gi i denote by the class incidence matrix of acting on the subposet of consisting of those subgroups l with bottom groups ki l ki i then m a ki proof let x and y we identify x with z x y where z x y x x y then lemma yields a partition of the conjugacy classes of subgroups of indexed by ki sgi i the stabilizer of y y is and zy x x x x y let l be such that ki l ki i in order to count the conjugates of a subgroup l with bottom groups ki l ki i above l in the it suffices to note that l l g for some g if and only if l l g for some g finally by the definition of there are no incidences between subgroups with different ki giving the block diagonal structure example let then a is the block sum of the matrices in the table below with rows and columns labelled by the conjugacy classes of subgroups k of within the row label of a subgroup of the form is just for brevity the column labels are identical and have been omitted k brendan masterson and pfeiffer the class incidence matrix of the is a block diagonal matrix with one block for each group u gi i up to isomorphism definition for a finite group g and finite and let ai be a square matrix with rows and columns labelled by xi i the action of g on permutes the rows and columns of the kronecker product if the matrices and are compatible with the then so is their kronecker product and we define r g c g where the row summing and column picking matrices r g and c g have been constructed as in lemma with respect to the on i for u gi consider the class incidence matrices ai ag u of the gi mgi u i from section by proposition these matrices and hence are compatible with the action of out u on their rows and columns theorem we have a m ag u u au i where for u gi ag u is the class incidence matrix of the gi mgi u i proof let l be a subgroup of with goursat type u and select and such that by lemma iv the subgroups l of with l l correspond to pairs of sections pi ki i for each such section pi ki set i where pi pi is the canonical isomorphism by proposition pi u is the unique pi ki u with the number of conjugates lx of a subgroup l of in mg i with lx l is thus equal to the number of pairs u u with such that is a conjugate of l in pi ki u then by theorem the set of all pairs of uif l for mg i morphisms mapping to a conjugate of l under is the out u of in u u the number of gi of above is given 
by i the entry a of the class incidence matrix ag u by proposition the aut u pi ki mgi u is isomorphic to aut u hence lx l x x a a where is a transversal of the right cosets of in aut u as can also be used to represent the right cosets of in out u the same number appears as the l l of the matrix ag u u au example let then a is the block sum of the following matrices ag u u au as out u here acts trivially the matrices are simply the kronecker squares of the matrices ag u in example the column labels are on the table of marks of a direct product of finite groups identical to the row labels and have been omitted u ag u u au example continuing example for g and u we have g ag ag u u u au illustrating the effect of a out u the class incidence matrix of the is a block diagonal matrix with one block for each pair of conjugacy classes pi of subgroups of gi i theorem for pi gi i denote by the class incidence matrix of acting on the sub poset of consisting of those subgroups l with pi l pi i then m a pi proof similar to the proof of theorem with x y z x y x x y x y example again we let then a is the block sum of the matrices in the table below with rows and columns labelled by the conjugacy classes of subgroups p of similar to example within the row label of a subgroup of the form is just for brevity the column labels are identical and have been omitted p brendan masterson and pfeiffer example combining the matrices from examples and according to theorem yields the table of marks m of with rows and columns sorted by section size as in example the double burnside algebra of as an application of the ideas from previous sections we now construct a mark homomorphism for the rational double burnside algebra of g the double burnside ring let g h and k be finite groups the grothendieck group of the category of finite g h is denoted by b g h if g h are identified with g then the abelian group b g h is identified with the burnside group b g h and hence the transitive bisets g where l runs through a transversal of the conjugacy classes of subgroups of g h form a of b g h there is a map from b g h b h k to b g k given by x y x y x y multiplication of transitive bisets is described by the following proposition let l and m let x h be a transversal of the l m cosets in then x g h h k g k l x m with this multiplication in particular b g g is a ring the double burnside ring of the rational double burnside algebra qb g g q b g g is known to be semisimple if and only if g is cyclic proposition little more is known about the structure of qb g g in general on the table of marks of a direct product of finite groups a mark homomorphism for the double burnside ring of for the ordinary burnside ring b g the table of marks of g is the matrix of the mark isomorphism qb g qr between the rational burnside algebra and its ghost algebra it is an open question whether there exist equivalent constructions of ghost algebras and mark homomorphims for the double burnside ring boltje and danz have investigated the role of the table of marks of the direct product g g in this context here we use the decomposition of the table of marks of g g from theorem and the idea of transposing the part from section in order to build a satisfying ghost algebra for the group g for this purpose we first set up a labelling of the natural basis of qb g g as follows set i let li i i be the conjugacy class representatives from example then the rational burnside algebra qb g g has a consisting of elements bi g i i and multiplication defined by by theorem 
the table of marks m of g g is a matrix product m a a a of a diagonal matrix with entries li li i i and three class incidence matrices for our purpose we now modify this product and set a a a p where diag diag are diagonal matrices the resulting matrix is m the matrix m mij is obviously invertible hence there are unique elements cj qb g g j i such that x bi mij cj forming a new of qb g g brendan masterson and pfeiffer theorem let g then the linear map qb g g defined by x xi ci x where ci qb g g are defined as above and xi q i i is an injective homomorphism of algebras proof this claim is validated by an explicit calculation whose details we omit the general strategy is as follows for i i let ci be the matrix of ci in the right regular representation of qb g g computed with the help of the mackey formula in proposition let be the equivalence relation on i corresponding to the kernel of the map that sends the conjugacy class of a subgroup l p to the conjugacy class of the section p then partitions i as it turns out that all transposed matrices cti are compatible with the equivalence in the sense of section hence after choosing a transversal of and using the corresponding row summing and column picking matrices r and c the map defined by ci c t ci r t i i is independent of the choice of transversal in fact by lemma ci ck c t ci ck r t ci c t ck r t ci ck for i k i showing that is a homomorphism injectivity follows from a dimension count it might be worth pointing out that the equivalence and hence the notion of compatibility and the map depend on the basis used for the matrices of the right regular representation in the case g the natural basis bi of qb g g also yields compatible matrices but the corresponding map is not injective a base change under the table of marks of g g gives matrices which are not compatible changing basis under the matrix product a a a p yields compatible matrices and an injective homomorphism like m does our matrices ci have the added benefit of being normalized and extremely sparse exposing other representation theoretic properties of the algebra qb g g such as the following corollary let g and denote by j the jacobsen radical of the rational burnside algebra qb g g i with ci as above ci i is a basis of j q q q ii qb g g the map qb g g can be regarded as a mark homomorphism for the double burnside ring of g it assigns to each g g a square matrix of rational marks for example for g g on the table of marks of a direct product of finite groups we have and the image of g is the identity matrix while the case g provides only a small example and the above construction involves some ad hoc measures we expect that for many if not all finite groups g a mark homomorphism for the rational double burnside algebra qb g g can be constructed in a similar way this will be the subject of future research references robert boltje and susanne danz a ghost ring for the double burnside ring and an application to fusion systems adv math no mr a ghost algebra of the double burnside algebra in characteristic zero j pure and appl algebra no mr serge bouc biset functors for finite groups lecture notes in mathematics vol springerverlag berlin mr serge bouc radu stancu and jacques simple biset functors and double burnside ring j pure appl algebra no mr william burnside theory of groups of finite order cambridge university press cambridge mr andreas dress a characterisation of solvable groups math z no mr goursat sur les substitutions orthogonales et les divisions de l espace ann sci norm sup 
bertram huppert endliche gruppen i die grundlehren der mathematischen wissenschaften band york mr joachim lambek goursat s theorem and the zassenhaus lemma canad j math mr klaus lux and herbert pahlings representations of groups a computational approach cambridge university press cambridge mr brendan masterson on the table of marks of a direct product of finite groups thesis national university of ireland galway liam naughton and pfeiffer computing the table of marks of a cyclic extension math comp no mr pfeiffer the subgroups of or how to compute the table of marks of a finite group experiment math no mr ragnarsson and radu stancu saturated fusion systems as idempotents in the double burnside ring geom topol no mr roland schmidt direkter produkte von gruppen arch math basel no mr giovanni zacher on the lattice of subgroups of the cartesian square of a simple group rend sem mat univ padova mr brendan masterson and pfeiffer department of design engineering and mathematics middlesex university london the boroughs london united kingdom address school of mathematics statistics and applied mathematics national university of ireland galway university road galway ireland address
4
apr merging joint distributions via causal model classes with low vc dimension dominik janzing max planck institute for intelligent systems germany april abstract if x y z denote sets of random variables two different data sources may contain samples from px y and py z respectively we argue that causal inference can help inferring properties of the unobserved joint distributions px y z or px z the properties may be conditional independences as in integrative causal inference or also quantitative statements about dependences more generally we define a learning scenario where the input is a subset of variables and the label is some statistical property of that subset sets of jointly observed variables define the training points while unobserved sets are possible test points to solve this learning task we infer as an intermediate step a causal model from the observations that then entails properties of unobserved sets accordingly we can define the vc dimension of a class of causal models and derive generalization bounds for the predictions here causal inference becomes more modest and better accessible to empirical tests than usual rather than trying to find a causal hypothesis that is true which is a problematic term when it is unclear how to define interventions a causal hypothesis is useful whenever it correctly predicts statistical properties of unobserved joint distributions within such a pragmatic application of causal inference some popular heuristic approaches become justified in retrospect it is for instance allowed to infer dags from partial correlations instead of conditional independences if the dags are only used to predict partial correlations i hypothesize that our pragmatic view on causality may even cover the usual meaning in terms of interventions and sketch why predicting the impact of interventions can sometimes also be phrased as a task of the above type introduction the difficulty of inferring causal relations from purely observational data lies in the fact that the observations drawn from a joint distribution px with x xn are supposed to imply statements about how the system behaves under interventions pearl spirtes et more specificly one may be interested in the new joint distribution obtained by setting a subset x of the variables to some specific values which induces a different joint distribution if the task of causal inference is phrased this way it actually lies outside the typical domain of statistics it thus requires assumptions that link statistics to causaliy to render the task feasible under certain limitations for instance one can infer the causal directed acyclic graph dag up to its markov equivalence class from the observed conditional statistical independences spirtes et pearl moreover on can also distinguish dags in the same markov equivalence class when certain model assumptions such as linear models with noise kano and shimizu or additive noise hoyer et are made relevance of causal information without reference to interventions the goal of causal inference need not necessarily consist in predicting the impact of interventions instead causal information could help for transferring knowledge accross data sets with different distributions et the underlying idea is a modularity assumption peters et according to which only some conditional distribiutions in a causal bayesian network may change and others remain fixed among many other tasks for which causal information could help we should particularly emphasize integrative causal inference tsamardinos et 
which is the work that is closest to the present paper tsamardinos et al use causal inference to combine knowledge from different data sets the idea reads as follows given some data sets dk containing observations from different but overlapping sets sk xn of variables then causal inference algorithms are applied independently to sk afterwards a joint causal model is constructed that entails independences of some other subsets of variables of which no joint observations are available by slightly abusing terminology we will refer to sets of variables that have not been observed together as unobserved sets of variables but keep in mind that although they have not been observed jointly they usually have been observed individually as part of some other observed set to explain the idea more explicitly we sketch example from tsamardinos et al which combines knowledge from just two data sets contains the variables x y w for which one observes x w and no further conditional or unconditional independences the data set contains the variables x w z where one observes x q as the only independence then one constructs the set of all maximal ancestral graphs mags on the set x y z w that is consistent with the observed pattern of independences as a result the mag implies that x y given any other subset of variables although x and y have never been observed together from a perspective the inference procedure thus reads statistical properties of observed subsets causal models consistent with those statistical properties of unobserved subsets in contrast to tsamardinos et al the term statistical properties need not necessarily refer to conditional independences on the one hand there is meanwhile a broad variety of new approaches that infer causal directions from statistical properties other than conditional independences kano and shimizu sun et al hoyer et al mags define a class of graphical causal models that is closed under marginalization and conditioning on subsets of variables richardson and spirtes zhang and daniusis et al janzing et al mooij et al peters et al mooij et al on the other hand the causal model inferred from the observations may entail statistical properties other than conditional independences subject to the model assumptions on which the inference procedures rely regardless of what kind of statistical properties are meant the scheme in describes a sense in which a causal model that can be tested within the usual scenario this way a causal model entails statements that can be empirically tested without referring to an interventional scenario consequently we drop the ambitious demand of finding the true causal model and replace it with the more modest goal of finding causal models that properly predict unseen joint distributions after reinterpreting causal inference this way it also becomes directly accessible to statistical learning theory assume we have a found a causal model that is consistent with the statistical properties of a large number of observed subsets we can hope that it also correctly predicts properties of unobserved subsets provided that the causal model has been taken from a sufficiently small class to avoid overfitting this radical empirical point of view can be developed even further rather than asking whether some statistical property like statistical independence is true we only ask whether the test at hand rejects or accepts hence we can replace the term statistical properties in the scheme with test results this point of view may also justify several common pragmatic 
solutions of the following issues linear causal models for relations our perspective justifies to apply multivariate gaussian causal models to data sets that are clearly assume a hypothetical causal graph is inferred from the conditional independence pattern obtained via partial correlation tests which is correct only for multivariate gaussians as done by common causal inference software tetrad even if one knows that the graph only represents partial correlations correctly but not conditional independences it may predict well partial correlations of unseen variable sets this way the linear causal model can be helpful when the goal is only to predict linear statistics this is good news particularly because general conditional independence tests remain a difficult issue see for instance zhang et al for a recent proposal tuning of confidence levels there is also another heuristic solution of a difficult question in causal inference that can be justified inferring causal dags based on causal markov condition and causal faithfulness spirtes et relies on setting the confidence levels for accepting conditional dependence in practice one will usually adjust the level such that enough independences are accepted and enough are rejected for the sample size at hand otherwise inference is impossible this is problematic however from the perspective of the common justification of causal faithfulness if one rejects causal hypotheses with accidental conditional independences because they occur with measure zero meek it becomes questionable to set the confidence level high enough just because one wants ot get some independences here we argue as follows instead assume we are given any arbitrary confidence level as threshold for the conditional independence tests further assume we have found a dag g asking whether two variables are in fact statistically independent does not make sense for an empirical sample unless the sample is thought to be part of an infinite sample which is ridiculous in our finite world for a detailed discussion of how causal conclusions of several causal inference algorithms may repeatedly change after increasing the sample size see kelly and from a sufficiently small model class that is consistent with all the outcomes of the conditional independence tests on a large number of subsets sk it is then justified to assume that g will correctly predict the outcomes of this test for unobserved variable sets sk methodological justification of causal faithfulness in our learning scenarios dags are used to predict for some choice of variables xjk whether xjk without faithfulness the dag can only entail independence but never entail dependence rather than stating that unfaithful distributions are unlikely we need faithfulness simply to obtain a definite prediction in the first place the paper is structured as follows section explains why causal models sometimes entail strong statements regarding the composition of data sets this motivates to use causal inference as an intermediate step when the actual task is to predict properties of unobserved joint distributions section formalizes our scenario as a standard prediction task where the input is a subset or an ordered tuple of variables for which we want to test some statistical property the output is a statistical property of that subset or tuple this way each observed variable set defines a training point for inferring the causal model while the unobserved variable sets are the test instances accordingly classes of causal models define function 
classes as described in section whose richness can be measured via vc dimension by straightforward application of vc learning theory section derives error bounds for the predicted statistical properties and discusses how they can be used as guidance for constructing causal hypotheses from classes of hypotheses in section we argue that our use of causal models is linked to the usual interpretation of causality in terms of interventions which raises philosophical questions of whether the empirical content of causality reduces to providing rules on how to merge probability distributions why causal models are particularly helpful it is not obvious why inferring properties of unobserved joint distributions from observed ones should take the detour via causal models visualized in one could also define a class of statistical models that is a class of joint distributions without any causal interpretation that is sufficiently small to yield definite predictions for the desired properties the below example however suggests that causal models typically entail particularly strong predictions regarding properties of the joint distribution this is among other reasons because causal models on subsets of variables sometimes imply a simple joint causal model to make this point consider the following toy example example merging two pairs to a chain assume we are given variables x y z where we observed px y and py z the extension to px y z is heavily underdetermined now assume that we have the additional causal information that x causes y and y causes z see figure left in the sense that both pairs are causally sufficient in other words neither x and y nor y and z have a common cause this information can be the result of some bivariate causal inference algorithm that is able to exclude confounding given that there is for instance an additive noise model from y to z kano and shimizu hoyer et al figure simplest example where causal information allows to glue two distributions to a unique joint distribution a confounder is unlikely because it would typically destroy the independence of the additive noise term entire causal structure we can then infer the entire causal structure to be the causal chain x y z for the following reasons first we show that x y z is a causally sufficient set of variables a common cause of x and z would be a common cause of y and z too the pairs x y and y z both have no common causes by assumption one checks easily that no dag with arrows leaves all pairs unconfounded checking all dags on x y z with arrows that have a path from x to y and from y to z we end up with the causal chain in figure middle as the only option resulting joint distribution this implies that x and z are conditionally independent given y and therefore the joint distribution reads p(x, y, z) = p(z | y) p(x, y) a small numerical sketch of this gluing step is given after the reference list note that our presentation of example neglected a subtle issue there are several different notions of what it means that x causes y in a causally sufficient way we have above used the purely graphical criterion asking whether there is some variable z having directed paths to x and y an alternative option for defining that x influences y in a causally sufficient way would be to demand that the interventional distribution p(y | do(x)) differs from p(y) this condition is called interventional sufficiency in peters et al a condition that is testable by interventions on x without referring to a larger background dag in which x and y are embedded this condition however is weaker than the graphical one and not sufficient for the above argument this is because one could add the link x z to the chain x y z and still observe that do y pz as
detailed by example in peters et al therefore we stick to the graphical criterion of causal sufficiency and justify this by the fact that for generic parameter values it coincides with interventional sufficiency which would actually be the more reasonable criterion causal marginal problem probabilistic marginal problem given marginal distributions psk on sets of variables the problem of existence and uniqueness of the joint distribution consistent with the marginals is usually referred to as marginal problem vorob ev kellerer here we will call it the probabilistic marginal problem motivated by this terminology we informally inroduce the causal marginal problem as follows given distributions psk together with causal models mk is there a unique joint distribution with causal model m consistent with the marginal model the definition is informal because we have not specified our notion of causal model neither did we specify marginalization of causal models for dags marginalization requires the more general graphical model class mags richardson and spirtes already mentioned above while marginalization of sructural equations require structural equations with dependent noise terms rubenstein et without formalizing this claim example suggests that the causal marginal problem may have a unique solution even when the probabilistic marginal problem doesn t janzing the procedure for constructing the joint distribution in example can be described by the following special case of the scheme in statistical properties of observed subsets causal model for observed subsets joint causal model statistical properties of unobserved subsets whether or not the joint causal model is inferred by first inferring marginal causal models for whether it is directly inferred from statistical properties of marginal disributions will be irrelevant in our further discussion in example the detour over marginal causal models has been particularly simple the formal setting below we will usually refer to some given set of variables s xjk whose subsets are considered whenever this can not cause any confusion we will not carefully distinguish between the set s and the vector x xjk and also use the term joint distribution ps although the order of variables certainly matters statistical properties statistical properties are the crucial concept of this work on the one hand they are used to infer causal structure on the other hand causal structure is used to predict them definition statistical property a statistical property q with range y is given by a function q yk y where yk denotes the joint distribution of k variables under consideration and y some output space often we will consider binary or properties that is y y or y r respectively by slightly abusing terminology the term statistical property will sometimes refer to the value in y that is the output of q or to the function q itself this will hopefully cause no confusion here q may be defined for fixed size k or for general moreover we will consider properties that depend on the ordering of the variables yk those that do not depend on it or those that are invariant under some permutations k variables this will be clear from the context we will be refer to k tuples for which part of the order matters as partly ordered tuples to given an impression about the variety of statistical properties we conclude the section with a list of examples we start with an example for a binary property that does not refer to an ordering example statistical independence q yk for yj jointly 
independent otherwise the following binary property allows for some permutations of variables example conditional independence for q yk otherwise yk to emphasize that our causal models are not only used to predict conditional independences but also other statistical properties we also mention linear additive noise models kano and shimizu example existence of linear additive noise models q yk if and only if there is a matrix a with entries aij that is lower triangular after permutation of basis vectors such that x yi aij yj nj j i where nk are jointly independent noise variables if no such additive linear model exists we set q yk lower triangularity means that there is a dag such that a has entries aij whenever there is an arrow from j to i here the entire order of variables matters then is a linear structural equation whenever the noise variables nj are linear additive noise models allow for the unique identification of the causal dag kano and shimizu if one assumes that the true generating process has been linear then q yk holds for those orderings of variables that are compatible with the true dag this way we have a statistical propery that is directly linked to the causal structure subject to a strong assumption of course the following simple binary property will also play a role later example sign of correlations whether a pair of random variables is positively or negatively correlated defines a simple binary property in a scenario where all variables are correlated if cov q if cov finally we mention a statistical property that is not binary but example covariances and correlations for k variables yk let y be the set of positive matrices then define q yk yk where yn denotes the joint covariance matrix of yn for k one can also get a property by focusing on the term one may then define a map q q cov or alternatively if one prefers correlations define q corr statistical and causal models the idea of this paper is that causal models are used to predict statistical properties but a priori the models need not be causal one can use bayesian networks for instance to encode conditional statistical independences with or without interpreting the arrows as formalizing causal influence for the formalism introduced in this section it does not matter whether one interprets the models as causal or not example however suggested that model classes that come with a causal semantics are particularly intuitive regarding the statistical properties they predict we now introduce our notion of models definition models for a statistical property given a set s xn of variables and some statistical property q a model m for q is a class of joint distributions xn that coincide regarding the output of q that is q yn q yn yn yn m where yk accordingly the property qm predicted by the model m is given by a function yk qm yk q yn for all xn in m where yk runs over all allowed input partly ordered tuples of q formally the partly ordered tuples are equivalence classes in s k where equivalence corresponds to irrelevant reorderings of the tuple to avoid cumbersome formalism we will just refer to them as the allowed inputs later such a model will be for instance a dag g and the property q formalizes all conditional independences that hold for the respective markov equivalence class to understand the above terminology note that q receives a distribution as input and the output of q tells us the respective property of the distribution whether independence holds in contrast qm receives a set of nodes variables of the dag as 
inputs and tells us the property entailed by m the goal will be to find a model m for which qm and q coincide for the majority of observed tuples of variables our most prominent example reads example dag as model for conditional independences let g be a dag with nodes s xn and q be the set of conditional independences as in example then let qg be the function on from s defined by qg yk if and only if the markov condition implies yk and qg yk otherwise note that qg does not mean that the markov condition implies dependence it only says that it does not imply independence however if we think of g as a causal dag the common assumption of causal faithfulness spirtes et states that all dependences that are allowed by the markov condition occur in reality adopting this assumption we will therefore interpret qg as a function that predicts dependence or independence instead of making no prediction otherwise we also mention a particularly simple class of dags that will appear as an interesting example later example dags consisting of a single colliderfree path let g be the set of dags that consist of a single colliderfree path n where the directions of the arrows are such that there is no variable with two arrowheads colliderfree paths have the important property that any dependence between two nodes is screened off by any variable that lies between the two nodes that is xj xk xl whenever xl lies between xj and xk if one assumes in addition that the joint distribution is gaussian then the partial correlation between xj and xk given xl vanishes then one can show that the correlation coefficient of any two nodes is given by the product of pairwise correlations along the path k corr xj xk y k corr i j y ri j this follows easily by induction because corr x z corr x y corr y z for any three variables x y z with x z therefore such a dag together with all the correlations between adjacent nodes predicts all pairwise correlations we therefore specify our model by m r that is the ordering of nodes and correlations of adjacent nodes the following example shows that a dag can entail also properties that are more sophisticated than just conditional independences and correlations example dags and linear additive noise let g be a dag with nodes s xn and q be the linear additive noise property in example let qg be the function on from s defined by qg yk if and only if the following two conditions hold yk is a causally sufficient subset from s in g and that is no two different yi yj have a common ancestor in g the ordering yk is consistent with g that is yj is not ancestor of yi in g for any i j example predicts from the graphical structure whether the joint distribution of some subset of variables admits a linear additive noise model the idea is the following assuming that the entire joint distribution of all n variables has been generated by a linear additive noise model kano and shimizu any yk also admits a linear additive noise model provided that and hold this is because marginalizations of linear additive noise models remain linear additive noise models whenever one does not marginalize over common hence conditions and are clearly sufficient for generic parameter values of the underlying linear model the two conditions are also necessary because linear models render causal directions uniquely identifiable and also admit the detection of hidden common causes hoyer et testing properties on data so far we have introduced statistical properties as mathematical properties of distributions in applications however 
we want to predict the outcome of a test on empirical data the task is no longer to predict whether some set of variables is really conditionally independent we just want to predict whether the statistical test at hand accepts independence whether or not the test is appropriate for the respective mathematical property q is not relevant for the generalization bounds derived later if one infers dags for instance by partial correlations and uses these dags only to infer partial correlations it does not matter that relations actually prohibit to replace conditional independences with partial correlations the reader may get confused by these remarks because now there seems to be no requirement on the tests at all if it is not supposed to be a good test for the mathematical property q this is a difficult question one can say however that for a test that is entirely unrelated to some property q we have no guidance what outcomes of our test a causal hypothesis should predict the fact that partial correlations despite all their limitations approximate conditional independence does provide some justification for expecting vanishing partial correlations in many cases where there is in the causal dag we first specify the information provided by a data set definition data set each data set dj is an lj kj matrix of observations where lj denotes the sample size and kj the number of variables further the dataset contains a kj tuple of values from n specifying the kj variables ykj xn the samples refer to to check whether the variables under consideration in fact satisfy the property predicted by the model we need some statistical test in the case of binary properties or an estimator in the case of or other properties let us say that we are given some test estimator for a property q formally defined as follows definition statistical test estimator for q a test respective estimator for properties for the statistical property q with range y is a map qt d qt d y where d is a data set that involves the observed instances of yn where yk is a partly ordered tuple that defines an allowed input of qt d is thought to indicate the outcome of the test or the estimated value respectively phrasing the task as standard prediction problem our learning problem now reads given the data sets dl with the sl of variables find a model m such that qm sj qt dj for all data sets j l or less note that the class of additive noise models hoyer et is not closed under marginalization demanding for most of the data sets however more importantly we would like to choose m such that qm sj qt will also hold for a future data set the problem of constructing a causal model now becomes a standard learning problem where the training as well as the test examples are data sets note that also et al phrased a causal inference problem as standard learning problem there the task was to classify two variables as cause and effect after getting a large number of pairs as training examples here however the data sets refer to observations from different subsets of variables that are actually assumed to follow a joint distribution over the union of all variables occuring in any of the data sets having phrased our problem as a standard prediction scenario whose inputs are subsets of variables we now introduce the usual notion of empirical error on the training data accordingly definition empirical error let q be a statistical property qt a statistical test and d dk a collection of data sets referring to the variable tuples sk then the empirical training 
error of model m is defined by k l m dj qm sdj k finding a model m for which the training error is small does not guarantee however that the error will also be small for future test data if m has been chosen from a too rich class of models the small training error may be a result of overfitting fortunately we have phrased our learning problem in a way that the richness of a class of causal models can be quantified by standard concepts from statistical learning theory this will be discussed in the following section capacity of classes of causal models we have formally phrased our problem as a prediction problem where the task is to predict the outcome in y of qt for some test t applied to an unobserved variable set we now assume that we are given a class of models m defining statistical properties qm m that are supposed to predict the outcomes of qt binary properties given some binary statistical property we can straightforwardly apply the notion of vcdimension vapnik to classes m and define definition vc dimension of a model class for binary properties let s xn a set of variables and q be a binary property let m be a class of models for q that is each m m defines a map qm yk qm yk then the vc dimension of m is the largest number h such that there are h allowed inputs sh for qm such that the restriction of all m m to sh runs over all possible binary functions since our model classes are thought to be given by causal hypotheses the following class is our most important example although we will later further restrict the class to get stronger generalization bounds lemma vc dimension of conditional independences entailed by dags let g be the set of dags with nodes xn for every g g we define qg as in example then the vc dimension h of qg satisfies h n n n n o proof the number nn of dags on n labeled nodes can easily be upper bounded by the number of orderings times the number of choices to draw an edge or not this yields nn n using stirling s formula we obtain n n nn e and thus nn nn since the vc dimension of a class can not be larger than the binary logarithm of the number of elements it contains easily follows note that the number of possible conditional independence tests of the form already grows faster than the vc dimension namely with the third power therefore the class of dags is sufficiently restrictive since it is not able to explain all possible patterns of conditional in dependences even when one conditions on one variable only nevertheless the set of all dags may be too large for the number of data sets at hand we therefore mention the following more restrictive class given by polytrees that is dags whose skeleton is a tree hence they contain no undirected cycles lemma vc dimension of cond independences entailed by polytrees let g be the set of polyntrees with nodes xn for every g g we define qg as in example then the vc dimension h of qg satisfies h n proof according to cayley s formula the number of trees with n nodes reads aigner and ziegler the number of markov equivalence classes of polytrees can be bounded from above by n radhakrishnan et al again the bound follows by taking the logarithm we will later use the following result lemma vc dimension of sign of correlations along a path consider the set of dags on xn that consist of a single colliderfree path as in example the sign of pairwise correlations is the determined by the permutation that aligns the graph and the sign of correlations of all adjacent pairs we thus parameterize a model by m s where the vector s sn denotes 
the signs of adjacent nodes the full model class m is obtained when runs over the entire group of permutations and s over all combinations in n let q be the property indicating the sign of the correlation of any two variables as in example then the vc dimension of qm m is at most proof defining j sj y sign corr i we obtain sign corr xi xj si sj due to therefore the signs of all can be computed from sn since there are possible assignments for these values g thus induces functions and thus the vc dimension is at most statistical properties we also want to obtain quantitative statements about the strength of dependences and therefore consider also the correlation as an example of a property lemma correlations along a path let m be the model class whose elements m are colliderfree paths where the correlations of adjacent nodes are specified see example as already explained this specification determines uniquely all pairwise correlations and we can thus define the model induced property qm xj xk corrm xj xk where the term on the right hand side denotes the correlation determined by the model m r as introduced in example then the vc dimension of qm m is in o n proof we assume for simplicity that all correlations are to specify the absolute value of the correlation between adjacent nodes we define the parameters log i to specify the sign of those correlations we define the binary values for corrm i gi otherwise for all i it will be convenient to introduce the parameters j x which are cumulative versions of the adjacent log correlations likewise we introduce the binaries j x sj gi mod which indicate whether the number of negative correlations along the chain from its beginning is odd or even this way the correlations between any two nodes can be computed from and s j k j k corrm xj xk e for technical reasons we define corr formally as a function of ordered pairs of variables although it is actually symmetric in j and we are interested in the vc dimension of the family f fm m of functions defined by fm j k corrm xj xk i j its is defined as the vc dimension of the set of classifiers c m with for j k cm j k otherwise to estimate the vc dimension of c we compose it from classifiers whose vc dimension is easier to estimate we first define the family of classifiers given by c c with for j k j k otherwise likewise we define c c with for j k j k otherwise the vc dimensions pf c and c are at most because they are given by linear functions on the space of all possible rn vapnik section example further we define a set of classifiers that classify only according to the sign of the correlations m s cm where cm j k if j k otherwise if j k otherwise likewise we set cm j k since both components of s have vc dimension n at most the vc dimension of s is in o n for j k is equivalent to j k j k log k j log therefore s u c u c for all where u denotes the intersection of concept classes van der wart and wellner given by u likewise the union of concept classes is given by t as opposed to the unions and intersections for j k is equivalent to j k j k log k j log hence s t c u c for all we then obtain c s u c u c s t c u c hence c is a finite union and intersection of concept classes and set theoertic union each having vc dimension in o n therefore c has vc dimension in o n van der wart and wellner generalization bounds binary properties after we have seen that in our scenario causal models like dags define classifiers in the sense of standard learning scenarios we can use the usual vc bounds like theorem in vapnik to 
guarantee generalization to future data sets to this end we need to assume that the data sets are sampled from some distribution of data sets an assumption that will be discussed at the end of this section theorem vc generalization bound let qt be a statistical test for some statistical binary property and m be a model class with vc dimension h defining some property qm given k data sets dk sampled from distribution pd then s k h ln h ln e d qm d dj qm sdj k k with probability it thus suffices to increase the number of data sets slightly faster than the vc dimension to illustrate how to apply theorem we recall the class of polytrees in lemma an interesting property of polytrees is that every pair of nodes can already be rendered conditional independent by one appropriate intermediate node this is because there is always at most one undirected path connecting them moreover for any two nodes x y that are not too close together in the dag there is a realistic chance that some randomly chosen z satisfies x y therefore we consider the following scenario draw k triples uniformly at random and check whether search for a polytree g that is consistent with the k observed in dependences number of independence tests number of nodes figure the red curce shows how the number of tests required by the vc bound grows with the number of variables while the blue one shows how the number of possible tests grows predict conditional independences for unobserved triples via g since the number of points in the training set should increase slightly faster than the vc dimension which is o n see lemma we know that a small fraction of the possible independence tests which grows with third power is already sufficient to predict conditional further independences the red curve in figure provides a rough estimate of how k needs to grow if we want to ensure that the term in is below for the blue curve shows how the number of possible tests grows which significantly exceeds the required ones after n for more than variables only a fraction of about of the possible tests is needed to predict that also the remaining ones will hold with high probability while conditional independences have been used for causal inference already since decades more recently it became popular to use other properties of distributions to infer causal dags in particular several methods have been proposed that distinguish between cause and effect from bivariate distributions kano and shimizu hoyer et al zhang and daniusis et al peters et al et al mooij et al it is tempting to do multivariate causal inference by finding dags that are consistent with the bivariate causal direction test this motivates the following example lemma bivariate directionality test on dags let g be the class of dags on n nodes for which there is a directed path between all pairs of nodes define a property qg by iff there is a directed path from xi to xj qg xi xj iff there is a directed path from xj to xi the of qg is at most n proof the vc dimension is the maximal number h of pairs of variables for which the causal directions can be oriented in all possible ways if we take n or more pairs the undirected graph defined by connecting each pair contains a cycle xl xl with l then however not all causal directions are possible because xl would be a directed cycle this result can be used to infer causal directions for pairs that have not been observed together apply the bivariate causality test qt to k randomly chosen ordered pairs where k needs to grow slightly faster than search 
for a dag g g that is consistent with a last fraction of the outcomes infer the outcome of further bivariate causality tests from it is remarkable that the generalization bound holds regardless of how bivariate causality is tested and whether one understands which statistical features are used to infer the causal direction solely the fact that a causal hypothesis from a class of low vc dimension matches the majority of the bivariate tests ensures that it generalizes well to future tests properties the vc bounds in subsection referred to binary statistical properties to consider also properties note that the vc dimension of class of functions with f x r is defined as the vc dimension of the set of binary functions see section vapnik r by combining with and in vapnik we obtain theorem vc bound for statistical properties let qm m be a class of a b properties with vc dimension given k data sets dk sampled from some distribution pd then s k x h ln hk ln e d qm d dj qm sdj b a k k with probability at least this bound can easily be applied to the prediction of correlations via paths due to lemma we then have h o n since correlations are in we can set for b a interpretation of the setting in learning theory in practical applications the scenario is usually somehow different because one does not choose observed and unobserved subsets randomly instead the observed sets are defined by the available data sets one may object that the above considerations are therefore inapplicable there is no formal argument against this objection however there may be reasons to believe that the observed variable sets at hand are not substantially different from the unobserved ones whose properties are supposed to be predicted apart from the fact that they are observed based on this belief one may still use the above generalization bounds as guidance on the richness of the class of causal hypotheses that is allowed to obtain good generalization properties predicting impact of interventions by merging distributions we have argued that causal hypotheses provide strong guidance on how to merge probability distributions and thus become empirically testable without resorting to interventions one may wonder whether this view on causality is completely disconnected to interventions here i argue that it is not in some sense estimating the impact of an intervention can also be phrased as the problem of inferring properties of unobserved joint distributions assume we want to test whether the causal hypothesis x y is true we would then check how the distribution of y changes under randomized interventions on x let us formally introduce a variable fx pearl that can attain all possible values x of x indicating to which value x is set to or the value idle if no intervention is made whether x influences y is then equivalent to fx y if we demand that this causal relation is unconfounded as is usually intended by the notation x y we have to test the condition py py before the intervention is made both conditions and refer to the unobserved distribution py fx inferring whether x y is true from px y thus amounts to inferring the unobserved distribution py fx from px y plus the additional background knowledge regarding the statistical and causal relation between fx and x which is just based on the knowledge that the action we made has been in fact the desired intervention in applications it can be a question why some action can be considered an intervention on a target variable at hand for instance in complex interactions if one assumes 
that it is based on purely observational data maybe earlier in the past we have reduced the problem of predicting the impact of interventions entirely to the problem of merging joint distributions conclusions we have described different scenarios where causal models can be used to infer statistical properties of joint distributions of variables that have never been observed together if the causal models are taken from a class of sufficiently low vc dimension this can be justified by generalization bounds from statistical learning theory this opens a new pragmatic and perspective on causality where the essential empirical content of a causal model may consist in its prediction regarding how to merge distributions from overlapping data sets such a pragmatic use of causal concepts may be helpful for domains where the interventional definition of causality raises difficult questions if one claims that the age of a person causally influences income as assumed in mooij et al it is unclear what it means to intervene on the variable age we have moreover argued that our pragmatic view of causal models is related to the usual concept of causality in terms of interventions it is even possible that this view on causality could also be relevant for foundational questions of physics where the language of causal models plays an increasing role recently leifer and spekkens chaves et ried et wood and spekkens janzing et references aigner and ziegler proofs from the book springer berlin chaves majenz and gross implications of quantum causal structures nat commun daniusis janzing mooij zscheischler steudel zhang and inferring deterministic causal relations in proceedings of the annual conference on uncertainty in artificial intelligence uai pages auai press hoyer shimizu kerminen and palviainen estimation of causal effects using linear causal models with hidden variables international journal of approximate reasoning hoyer janzing mooij peters and nonlinear causal discovery with additive noise models in koller schuurmans bengio and bottou editors proceedings of the conference neural information processing systems nips vancouver canada mit press janzing from the probabilistic marginal problem to the causal marginal problem talk in the open problem session of the workshop causation foundation to application of the conference on uncertainty in artificial intelligence uai janzing x sun and distinguishing cause and effect via second order exponential models http janzing chaves and algorithmic independence of initial condition and dynamical law in thermodynamics and causal inference new journal of physics kano and shimizu causal inference using nonnormality in proceedings of the international symposium on science of modeling the anniversary of the information criterion pages tokyo japan kellerer marginalprobleme math in german kelly and causal conclusions that flip repeatedly in and spirtes editors proceedings of the conference on uncertainty in artificial intelligence uai auai press leifer and spekkens towards a formulation of quantum theory as a causally neutral theory of bayesian inference phys rev a muandet and tolstikhin towards a learning theory of inference in proceedings of the international conference on machine learning volume of jmlr workshop and conference proceedings page jmlr meek causal inference and causal explanation with background knowledge in proceedings of the conference on uncertainty in artificial intelligence pages san francisco ca morgan kaufmann mooij stegle janzing zhang and 
probabilistic latent variable models for distinguishing between cause and effect in advances in neural information processing systems nips pages mooij peters janzing zscheischler and distinguishing cause from effect using observational data methods and benchmarks journal of machine learning research pearl causality models reasoning and inference cambridge university press peters janzing and identifying cause and effect on discrete data using additive noise models in proceedings of the thirteenth international conference on artificial intelligence and statistics aistats jmlr w cp chia laguna sardinia italy peters janzing and causal inference on discrete data using additive noise models ieee transactions on pattern analysis and machine intelligence peters janzing and elements of causal inference foundations and learning algorithms mit press radhakrishnan solus and uhler counting markov equivalence classes for dag models on trees arxiv richardson and spirtes ancestral graph markov models the annals of statistics ried agnew vermeyden janzing spekkens and resch a quantum advantage for inferring causal structure nature physics rubenstein weichwald bongers mooij janzing and causal consistency of structural equation models in proceedings of the conference on uncertainty in artificial intelligence uai janzing peters sgouritsa zhang and mooij on causal and anticausal learning in and pineau editors proceedings of the international conference on machine learning icml pages acm spirtes glymour and scheines causation prediction and search lecture notes in statistics new york ny x sun janzing and causal inference by choosing graphs with most plausible markov kernels in proceedings of the international symposium on artificial intelligence and mathematics pages fort lauderdale fl tetrad the tetrad homepage http tsamardinos triantafillou and lagani towards integrative causal analysis of heterogeneous data sets and studies mach learn van der wart and wellner a note on bounds for vc dimensions inst math stat collect vapnik the nature of statistical learning theory springer new york vapnik statistical learning theory john wileys sons new york vapnik estimation of dependences based on empirical data statistics for engineering and information science springer verlag new york edition vorob ev consistent families of measures and their extensions theory probab appl wood and spekkens the lesson of causal discovery algorithms for quantum correlations causal explanations of violations require new journal of physics zhang and on the identifiability of the causal model in proceedings of the conference on uncertainty in artificial intelligence montreal canada zhang peters janzing and conditional independence test and application in causal discovery in proceedings of the conference on uncertainty in artificial intelligence uai http
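The opening discussion of this paper argues that a causal graph fitted from partial correlation tests can still be useful for predicting partial correlations of unseen variable sets, even when the data are not exactly multivariate Gaussian. For concreteness, here is a minimal sketch of such a test: it estimates the partial correlation of two variables given a conditioning set from the sample covariance matrix and applies a Fisher z-test. The function names, the significance level 0.05, and the toy chain data are our own choices, not taken from the paper.

```python
import numpy as np
from scipy import stats

def partial_correlation(data, i, j, cond):
    """Partial correlation of columns i and j of `data` given the columns in `cond`,
    computed from the inverse of the covariance matrix of the involved variables."""
    idx = [i, j] + list(cond)
    cov = np.cov(data[:, idx], rowvar=False)
    prec = np.linalg.inv(cov)                     # precision matrix
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

def gaussian_ci_test(data, i, j, cond, alpha=0.05):
    """Accept 'i independent of j given cond' at level alpha using Fisher's
    z-transform of the partial correlation (Gaussian assumption)."""
    n = data.shape[0]
    r = partial_correlation(data, i, j, cond)
    z = np.arctanh(r) * np.sqrt(n - len(cond) - 3)
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    return p_value > alpha

# toy usage: x -> y -> z, so x and z should be accepted as independent given y
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * x + rng.normal(size=5000)
z = 0.7 * y + rng.normal(size=5000)
data = np.column_stack([x, y, z])
print(gaussian_ci_test(data, 0, 2, []))    # likely False: x and z are dependent
print(gaussian_ci_test(data, 0, 2, [1]))   # likely True: independent given y
```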
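The example on merging two pairs to a chain concludes that the causal information x -> y -> z, with both pairs causally sufficient, glues p(x,y) and p(y,z) to the unique joint distribution p(x,y,z) = p(z|y) p(x,y); a sketch of that gluing step is referenced in the example above. The concrete binary probability tables below are invented for illustration; the only requirement is that both tables induce the same marginal p(y).

```python
import numpy as np

# made-up marginal tables over binary variables: p_xy[x, y] and p_yz[y, z]
p_xy = np.array([[0.30, 0.10],
                 [0.15, 0.45]])
p_yz = np.array([[0.25, 0.20],
                 [0.15, 0.40]])

# consistency requirement of the probabilistic marginal problem:
# both tables must induce the same marginal p(y)
p_y_from_xy = p_xy.sum(axis=0)
p_y_from_yz = p_yz.sum(axis=1)
assert np.allclose(p_y_from_xy, p_y_from_yz)

# the causal chain x -> y -> z implies z independent of x given y, hence
# p(x, y, z) = p(x, y) * p(z | y)
p_z_given_y = p_yz / p_y_from_yz[:, None]
p_xyz = p_xy[:, :, None] * p_z_given_y[None, :, :]

assert np.allclose(p_xyz.sum(), 1.0)
assert np.allclose(p_xyz.sum(axis=2), p_xy)   # recovers p(x, y)
assert np.allclose(p_xyz.sum(axis=0), p_yz)   # recovers p(y, z)
```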
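The example defining a DAG as a model for conditional independences uses the predictor q_G, the indicator of whether the Markov condition for a graph G entails a given independence. A minimal sketch of this predictor, assuming the classical moralized-ancestral-graph formulation of d-separation; the dictionary encoding of the DAG (node mapped to its set of parents) and the helper names are ours.

```python
from itertools import combinations

def ancestors(dag, nodes):
    """All ancestors of `nodes` (including the nodes themselves); `dag` maps
    each node to the set of its parents."""
    result, stack = set(nodes), list(nodes)
    while stack:
        v = stack.pop()
        for p in dag.get(v, set()):
            if p not in result:
                result.add(p)
                stack.append(p)
    return result

def q_dag(dag, x, y, cond):
    """Return 1 if the Markov condition for `dag` entails x independent of y
    given cond (i.e. x and y are d-separated by cond), else 0."""
    keep = ancestors(dag, {x, y} | set(cond))
    # moralize the ancestral sub-DAG: connect co-parents, drop edge directions
    undirected = {v: set() for v in keep}
    for child in keep:
        parents = dag.get(child, set()) & keep
        for p in parents:
            undirected[p].add(child)
            undirected[child].add(p)
        for p1, p2 in combinations(parents, 2):
            undirected[p1].add(p2)
            undirected[p2].add(p1)
    # remove the conditioning set and check whether x can still reach y
    blocked = set(cond)
    stack, seen = [x], {x}
    while stack:
        v = stack.pop()
        if v == y:
            return 0          # still connected: no independence entailed
        for w in undirected[v]:
            if w not in seen and w not in blocked:
                seen.add(w)
                stack.append(w)
    return 1                   # d-separated: independence entailed

# chain x -> y -> z, encoded as node -> set of parents
chain = {"x": set(), "y": {"x"}, "z": {"y"}}
print(q_dag(chain, "x", "z", []))      # 0: dependence allowed
print(q_dag(chain, "x", "z", ["y"]))   # 1: independence entailed
```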
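The collider-free-path examples rely on the fact that, along such a path, the correlation of any two nodes is the product of the correlations of adjacent pairs, so a model consisting of an ordering plus the adjacent correlations predicts all pairwise correlations and, in particular, their signs. A small sketch of that model class; the class and method names are our own.

```python
import numpy as np

class PathModel:
    """Collider-free path model: an ordering of the variables plus the
    correlation of each adjacent pair along the path."""

    def __init__(self, order, adjacent_corr):
        # order: list of variable names along the path
        # adjacent_corr[i] = corr(order[i], order[i+1])
        assert len(adjacent_corr) == len(order) - 1
        self.pos = {v: i for i, v in enumerate(order)}
        self.adj = list(adjacent_corr)

    def corr(self, u, v):
        """Predicted correlation: product of adjacent correlations between u and v."""
        i, j = sorted((self.pos[u], self.pos[v]))
        return float(np.prod(self.adj[i:j]))   # empty product = 1 when u == v

    def sign(self, u, v):
        """Predicted sign of the correlation (the binary sign property)."""
        return 1 if self.corr(u, v) > 0 else -1

m = PathModel(["x1", "x2", "x3", "x4"], [0.8, -0.5, 0.6])
print(m.corr("x1", "x3"))   # 0.8 * (-0.5) = -0.4
print(m.sign("x1", "x4"))   # sign of 0.8 * (-0.5) * 0.6 -> -1
```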
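The definition of the empirical error and the VC generalization bound suggest the kind of back-of-the-envelope computation behind the figure comparing the number of tests required by the bound with the roughly cubic number of possible conditional-independence triples. The sketch below uses the standard Vapnik-style deviation term and takes h proportional to n*log2(n) as a stand-in for the O(n log n) bound for polytrees; the constant in h, the target deviation 0.1, and the confidence parameter 0.05 are our assumptions, so the printed numbers only indicate the trend, not the exact curves of the figure.

```python
import math

def empirical_error(model_predict, test_outcome, variable_sets):
    """Fraction of observed variable sets on which the model's prediction
    disagrees with the outcome of the statistical test (empirical error)."""
    errors = sum(1 for s in variable_sets if model_predict(s) != test_outcome(s))
    return errors / len(variable_sets)

def vc_bound(k, h, eta=0.05):
    """Vapnik-style deviation term for k training sets and VC dimension h."""
    return math.sqrt((h * (math.log(2 * k / h) + 1) + math.log(4 / eta)) / k)

def tests_needed(n, target=0.1):
    """Smallest k (found by doubling) with vc_bound(k, h) <= target,
    taking h ~ n * log2(n) as a proxy for the polytree bound."""
    h = max(2.0, n * math.log2(n))
    k = int(h) + 1
    while vc_bound(k, h) > target:
        k *= 2
    return k

for n in (10, 50, 100, 500):
    # number of tests of the form x_i independent of x_j given x_k
    possible = n * (n - 1) * (n - 2) // 2
    print(n, tests_needed(n), possible)   # required k drops below `possible` only
                                          # once n is large enough
```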
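The lemma on the bivariate directionality test considers DAGs in which every pair of nodes is connected by a directed path, and the subsequent procedure searches for such a DAG consistent with the outcomes of k bivariate cause-effect tests. The sketch below illustrates one simple consistency check, under our own simplifying assumption that consistency means the tested orientations can be embedded into a single chain: build a digraph from the tested orientations and check it for acyclicity. Function names are ours.

```python
from graphlib import TopologicalSorter, CycleError

def consistent_chain(oriented_pairs, nodes):
    """Try to embed bivariate direction-test outcomes into one DAG in which every
    pair of nodes is connected by a directed path (here: a single chain).

    oriented_pairs: iterable of (i, j) meaning the bivariate test said 'i causes j'.
    Returns a node ordering for the chain i1 -> i2 -> ... -> in that respects every
    tested pair, or None if the tested orientations contain a cycle.
    """
    ts = TopologicalSorter({v: set() for v in nodes})
    for i, j in oriented_pairs:
        ts.add(j, i)                  # constraint: i must precede j
    try:
        order = list(ts.static_order())
    except CycleError:
        return None
    # a chain along this order has a directed path i ->* j exactly when
    # i precedes j, which covers all tested constraints
    return order

print(consistent_chain([(0, 1), (1, 2), (0, 2)], nodes=range(3)))   # e.g. [0, 1, 2]
print(consistent_chain([(0, 1), (1, 2), (2, 0)], nodes=range(3)))   # None (cyclic)
```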
10
faster computation via rle keita kuboi yuta fujishige shunsuke inenaga hideo bannai masayuki takeda department of informatics kyushu university japan inenaga bannai takeda mar abstract the constrained lcs problem asks one to find a longest common subsequence of two input strings a and b with some constraints the problem is a variant of the constrained lcs problem where the solution must include a given constraint string c as a substring given two strings a and b of respective lengths m and n and a constraint string c of length at most min m n the best known algorithm for the problem proposed by deorowicz inf process runs in o m n time in this work we present an o mn nm solution to the problem where m and n denote the sizes of the encodings of a and b respectively since m m and n n always hold our algorithm is always as fast as deorowicz s algorithm and is faster when input strings are compressible via rle introduction longest common subsequence lcs is one of the most basic measures of similarity between strings and there is a vast amount of literature concerning its efficient computation an lcs of two strings a and b of lengths m and n respectively is a longest string that is a subsequence of both a and b there is a well known o m n time and space dynamic programming dp algorithm to compute an lcs between two strings lcs has applications in bioinformatics file comparisons pattern recognition etc recently several variants of the problem which try to find a longest common subsequence that satisfy some constraints have been considered in tsai proposed the constrained lcs clcs problem where given strings a b with respective lengths m n and a constraint string c of length k the problem is to find a longest string that contains c as a subsequence and is also a common subsequence of a and tsai gave an o m n k time solution which was improved in by chin et al to o m n k time variants of the constrained lcs problem called and were considered by chen and chao in each problem considers as input three strings a b and c and the problem is to find a longest string that includes ic or excludes ec c as a subsequence seq or substring str and is a common subsequence of a and b clcs is equivalent to the problem the best solution for each of the problems is shown in table table time complexities of best known solutions to various constrained lcs problems problem dp solution dp solution using rle o m n k o m n k min mn nm o m n k o m n o mn nm this work o m n k in order to speed up the lcs computation one direction of research that has received much attention is to apply compression namely encoding rle of strings bunke and csirik were one of the first to consider such a scenario and proposed an o mn nm time algorithm here m n are the sizes of the rle of the input strings of lengths m and n respectively notice that since rle can be computed in linear time and m m and n n the algorithm is always asymptotically faster than the standard o n m time dynamic programming algorithm especially when the strings are compressible by rle furthermore ahsan et al proposed an algorithm which runs in o m n r log log mn r log log m n time where r is the total number of pairs of runs of the same character in the two rle strings r o mn and the algorithm can be much faster when the strings are compressible by rle for the constrained lcs problems rle based solutions for only the problem have been proposed in an o k mn nm time algorithm was proposed by ann et al later in liu et al proposed a faster o m n k min mn nm time algorithm in 
this paper we present the first rle based solution for the problem that runs in o mn nm time again since rle can be computed in linear time and m m and n n the proposed algorithm is always asymptotically faster than the best known solution for the problem by deorowicz which runs in o m n time a common criticism against rle based solutions is a claim that although they are theoretically interesting since most strings in the real world are not compressible by rle their applicability is limited and they are only useful in extreme artificial cases we believe that this is not entirely true there can be cases where rle is a natural encoding of the data for example in music a melody can be expressed as a string of pitches and their duration furthermore in the data mining community there exist popular preprocessing schemes for analyzing various types of time series data which convert the time series to strings over a fairly small alphabet as an approximation of the original data after which various analyses are conducted sax symbolic aggregate approximation clipped bit representation these conversions are likely to produce strings which are compressible by rle and in fact shown to be effective in indicating that rle based solutions may have a wider range of application than commonly perceived preliminaries let be the finite set of characters and be the set of strings for any string a let be the length of a for any i let a i be the ith character of a and let a i a i a denote a substring of a especially a denotes a prefix of a and a i denotes a suffix of a a string z is a subsequence of a if z can be obtained from a by removing zero or more characters for two string a and b a string z is a longest common subsequence lcs of a b if z is a longest string that is a subsequence of both a and b for any i and j let lpref i j denote the length of an lcs of a i b j and let lsuf i j denote the length of an lcs of a i b j the lcs problem is to compute the length of an lcs of given two strings a and b a well known solution is dynamic programming which computes in o m n time a table which we will call dp table of size o m n that stores values of lpref i j for all i m j n the dp table for lsuf i j can be computed similarly for two strings a b and a constraint string c a string z is an of a b c if z is a longest string that includes c as a substring and also is a subsequence of both a and b the problem is to compute the length of an of any given three strings a b and for example if a abacab b babcaba c bb then abcab and bacab are lcss of a b and abb is an of a b the encoding rle of a string a is a kind of compressed representation of a where each maximal run of the same character is represented by a pair of the character and the length of the run let rle a denote the rle of a string a the size of rle a is the number of the runs in a and is denoted by a by definition a is always less than or equal to in the next section we consider the problem of strings a b and constraint string let m n k a m and b we assume that k min m n and c min m n since in such case there can be no solution we also assume that k because in that case the problem becomes the normal lcs problem of a b algorithm in this section we will first introduce a slightly modified version of deorowicz s o m n algorithm for the problem and then propose our o mn nm algorithm which is based on his dynamic programming approach but uses rle deorowicz s o m n algorithm we first define the notion of minimal of a string definition for any strings a and c an 
interval s f is a minimal of a if c is a subsequence of a s f and c is not a subsequence of a s f or a s f deorowicz s algorithm is based on lemma which is used implicitly in lemma implicit in if z is an of a b c then there exist minimal s f f s f m f n respectively of a and b such that z xcy where x is an lcs of a s and b and y is an lcs of a f m and b f n proof from the definition of c is a substring of z and therefore there exist possibly empty strings x y such that z xcy also since z is a common subsequence of a and b there exist monotonically increasing sequences and such that z a a b b and c a a b b now since c is a subsequence of a and b there exist minimal cintervals s f f respectively of a and b that satisfy s f and f let x be an lcs of a s and b and y an lcs of a f m and b f n since x must be a common subsequence of a s and b and y a common subsequence of a f m and b f n we have and however we can not have that or since otherwise x cy would be a string longer than z that contains c as a substring and is a common subsequence of a b contradicting that z is an of a b thus and implying that x is also an lcs of a s b and y is also an lcs of a f m b f n proving the lemma the algorithm consists of the following two steps whose correctness follows from lemma step compute all minimal of a and b step for all pairs of a minimal s f of a and a minimal f of b compute the length of an lcs of the corresponding prefixes of a and b lpref s and that of the corresponding suffixes of a and b lsuf f f the largest sum of lcs lengths plus lpref s lsuf f f is the length of an the steps can be executed in the following running times for step there are respectively at most m and n minimal of a and b which can be enumerated in o m k and o n k time for step we precompute in o m n time two dynamic programming tables which respectively contain the values of lpref i j and lsuf i j for each i m and j n using these tables the value lpref s lsuf f f can be computed in constant time for any s f and f there are o m n possible pairs of minimal so step can be done in o m n time in total since k m k n the problem can be solved in o m n time we note that in the original presentation of deorowicz s algorithm that is intervals s f where c is a subsequence of a s f but not of a s f are computed instead of minimal as defined in definition although the number of considered intervals changes this does not influence the asymptotic complexities in the case however as we will see in lemma of section this is an essential difference for the rle case since when c the number of minimal of a and b can be bounded by o m and o n but the number of of a and b can not and are only bounded by o m and o n our algorithm via rle in this subsection we propose an efficient algorithm based on deorowicz s algorithm explained in subsection extended to strings expressed in rle there are two main cases to consider when c when c consists of only one type of character and when c when c contains at least two different characters case c theorem let a b c be any strings and let m n a m and b if c we can compute the length of an of a b c in o mn nm time for step we execute the following procedure to enumerate all minimal of a and b let first find the right minimal starting at the smallest position such that c is a subsequence of a next starting from position of a search backwards to find the left minimal ending at the largest position such that c is a subsequence of a the process is then repeated find the smallest position such that c is a subsequence of 
a and then search backwards to find the largest position such that c is a subsequence of a and so on it is easy to see that the intervals obtained by repeating this procedure until reaching the end of a are all the minimal of a since each interval that is found is distinct and there can not exist another minimal between those found by the procedure the same is done for b for strings this takes o m n k time the lemma below shows that the procedure can be implemented more efficiently using rle lemma let a and c be strings where m a m and if c the number of minimal of a is o m and can be enumerated in o m mk time proof because c it is easy to see from the backward search in the procedure described above that for any minimal of a there is a unique run of a such that the last character of the first run of c corresponds to the last character of that run therefore the number of minimal of a is o m kk mm we can compute rle a am and rle c ck in o m k time what am ck remains is to show that the search procedure described above to compute all minimal of a can be implemented in o mk time the of the algorithm described is shown in algorithm in the forward search we scan rle a to find a right minimal by greedily matching the runs of rle c to rle a we maintain the character cq and exponent rest of the first run crest of q m rle c where c is the suffix of c that is not yet matched when comparing a run ap p of rle a m and crest if the characters are different ap cq we know that the entire run ap p will not match q and thus we can consider the next run of a suppose the characters are the same then if mp rest m the entire run ap p of a is matched and we can consider the next run of a also rest can be updated accordingly in constant time by simple arithmetic furthermore since cq ap we can in fact skip to the next run if mp rest the entire run crest is matched and we consider the next q k m m run in also since ap cq we can skip the rest of ap p and consider the next run of a thus we spend only constant time for each run of a that is scanned in the forward search the same holds for the backward search to finish the proof we show that the total number of times that each run of a is scanned in the procedure is bounded by o k the number of minimal of a that intersects with a given m m run ap p of a is o k since c a minimal can not be contained in ap p thus for mp a minimal to intersect with the run ap it must cross either the left boundary of the run or the right boundary of the run for a minimal to cross the left boundary of the run it must be that for some strings u v such that c uv u occurs as a subsequence in am and m m v occurs as a subsequence in ap p am m the minimal corresponds to the union of the left minimal ending at the left boundary of the run and the right minimal starting at the left boundary of the run and is thus unique for u similar arguments also hold for minimal m that cross the right boundary of ap p since there are only k choices for u v the claim holds thus proving the lemma in deorowicz s algorithm two dp tables were computed for step which took o m n time for our algorithm we use a compressed representation of the dp table for a and b proposed by bunke and csirik instead of the normal dp table we note that bunke and csirik actually solved the edit distance problem when the cost is for insertion and deletion and for substitution but this easily translates to lcs lpref i j i j ed pref i j where ed pref i j denotes the edit distance with such costs between a i and b j mm definition let a b be 
strings of length m n respectively where rle a am am nn and rle b bn the compressed dp table cdp table of a b is an o mn nm compressed representation of the dp table of a b which holds only the values of the dp table for p j and i q where i m j n p m q n p mp q nq algorithm computing all minimal of a input strings a and c output all minimal sl fl of a kk mm rle a am am rle c ck p mp p q index of run in a c respectively k rest number of rest of searching characters of cq q l number of minimal in a p q rest l while true do while p m and q k do forward search if ap cq then p p else if mp rest then q q if q k then l l fl rest else p p rest kq else rest rest mp p p if p m then break p p if rest kk then q q rest else q k rest kk rest while q do backward search if ap cq then p p else if mp rest then q q if q then sl p rest else p p rest kq else rest rest mp p p p p q rest rest return sl fl a b b b a a a a a a a a b b b b a a figure an example of a compressed lpref dp table for strings a bbbaaaa and b aaaabbbaa figure illustrates the values stored in the cdp table for strings a bbbaaaa b aaaabbbaa note that although the figure depicts a sparsely filled table of size m n the values are actually stored in two completely filled tables one of size m n holding the values of p j and another of size m n holding the values of i q for a total of o mn nm space below are results adapted from we will use lemma theorem let a and b be any strings where m n a m and b the compressed dp table of a and b can be computed in o mn nm time and space lemma lemma let and let a and b be any strings where m and n for any integer d if a m m b n n then lpref m n lpref m n lemma lemma let and let a and b be any strings where m and n for any integers d and if a m d m and b n n d then lpref m n max lpref m d n lpref m n from lemmas and we easily obtain the following lemma lemma let a and b be any strings any entry of the dp table of a and b can be retrieved in o time by using the compressed dp table of a and b from lemma we can compute in o mn nm time two cdp tables of a b which respectively hold the values of lpref p j lpref i q and lsuf p j lsuf i q each of them taking o mn nm space from lemma we can obtain lpref i j lsuf i j for any i and j in o time actually to make lemma work we also need to be able to convert the indexes between dp and cdp in constant time for any p m q n the values p and q and for any i m j n the largest p q such that p i q j this is easy to do by preparing some arrays in o m n time and space now we are ready to show the running time of our algorithm for the case c we can compute rle a rle b rle c from a b c in o m n k time in step we have from lemma that the number of all minimal of a b are respectively o m and o n and can be computed in o m n mk nk time for the preprocessing of step we build the cdp tables holding the values of lpref i j lsuf i j for i m j n which can be computed in o mn nm time and space from lemma with these tables we can obtain for any i j the values lpref i j lsuf i j in constant time from lemma since there are o mn pairs of a minimal of a and a minimal of b the total time for step computing lpref and lsuf for each of the pairs is o mn since n n m m and we can assume that k m n the total time is o mn nm thus theorem holds case c next we consider the case where c and c consists of only one run theorem let a b c be any strings and let m n a m and b if c we can compute the length of an of a b c in o mn nm time for step we compute all minimal of a and b by lemma note the difference 
from lemma in the case of c lemma if c the number of minimal of a and b are o m and o n respectively and these can be enumerated in o m and o n time respectively proof let c and let be the number of times that occurs in a then the number of minimal of a is k o m the minimal can be enumerated in o m time by checking all positions of in a the same applies to b from lemma we can see that the number of pairs of minimal of a and b can be m n and we can not afford to consider all of those pairs for step we overcome this problem as follows let u sl fl be the set of all minimal of a consider the partition g g g of u which are the equivalence classes induced by the following equivalence relation on u for any p q m and sx fx sy fy u sx fx sy fy sx sy p and fx fy q where in other words sx fx and sy fy are in the same equivalence class if they start in the same run and end in the same run noticing that minimal can not be completely contained in another we can assume that for h g sx fx g h and sy fy g we have sx sy and fx fy lemma let g g g be the partition of the set u of all minimal of a induced by the equivalence relation then g o m proof let x y l and h for any sx fx g h and sy fy g h let p q m satisfy sx p fx q since the intervals are not equivalent either p sy or q fy must hold thus g o m equivalently for b we consider the set u of all minimal of b and the partition g of u based on the analogous equivalence relation where g o n for some h let sx fx sy fy be the minimal in g h with the smallest and largest start positions since by definition a sx a sy a fx a fy we have g h sx i fx i sx fx sy fy the same can be said for of b from this observation we can show the following lemma lemma for any h g and g let s f f g h and f f for some positive integer then lpref s lsuf f f lpref s d d lsuf f d f d proof since a s s d a f f d b d b f f d c d we have from lemma lpref lpref and lsuf f f lsuf f f from lemma we can see that for any g h h g g we do not need to compute lpref s lsuf f f for all pairs of s f g h and f let gmin h and be the minimal respectively in g h and with the smallest starting position then we only need to consider the combination of gmin h with each of f and the combination of each of s f g h with therefore of all combinations of minimal in u and u we only need to consider for all h g and g the combination of gmin h with each of u and each of u with the number of such combinations is clearly o mn nm for example consider rle a rle b rle c for the minimal of a we have g g g for the minimal of b we have also gmin figure shows the lengths of the lcs of prefixes and suffixes for each combination between minimal in g and the gray part is the values that are referred to the values denoted inside parentheses are not stored in the cdp table but each of them can be computed in o time from lemma figure shows the sum of the lcs of prefixes and suffixes corresponding to the gray part due to lemma the values along the diagonal are equal thus for the combinations of minimal in g we only need to consider the six combinations now we are ready to show the running time of our algorithm for the case c we can compute rle a rle b rle c from a b c in o m n k time there are respectively o m and o n minimal of a and b and each of them can be assigned to one of the o m and o n equivalence classes g in total of o m n time the preprocessing for the cdp table is the same as for the case of c which can be done in o mn nm time by lemma we can reduce the number of combinations of minimal to consider to o mn nm finally 
from lemma the lcs lengths for each combination can be computed in o using the cdp table therefore the total running time is o mn nm proving theorem from theorems and the following theorem holds the for our proposed algorithm is shown in algorithm written in appendix theorem let a b c be any strings and let m n a m and b we can compute the length of an of a b c in o mn nm time although we only showed how to compute the length of an we note that the algorithm can be modified so as to obtain a rle of an in o m n time provided that rle c figure an example depicting the lcss of corresponding prefixes left and suffixes right of all combinations of g and for strings rle a rle b and rle c the values denoted inside parentheses are not stored in the cdp table but each of them can be computed in o time a a a a a a b a b b b b a b a a a a a a b b a a a a b b b figure sum of the lengths of lcss of corresponding prefixes and suffixes shown in figure values along the diagonal are equal each value is equal to the value to its upper right is precomputed simply by storing the minimal s f f respectively of a and b that maximizes lpref s lsuf f f from lemmas and we can simulate a standard of the dp table for obtaining lcss with the cdp table to obtain rle of the lcss in o m n time finally an rle of can be obtained by combining the three rle strings the two lcss with rle c in the middle appropriately merging the boundary runs if necessary conclusion in this work we proposed a new algorithm to solve the problem using rle representation we can compute the length of an of strings a b c in o mn nm time and space using this algorithm where m n a m and b this result is better than deorowicz s o m n time and space which doesn t use rle if we want to know not only the length but also an of a b c we can retrieve it in o m n time references shegufta bakht ahsan syeda persia aziz and sohel rahman longest common subsequence problem for strings jcp ann yang tseng and hor fast algorithms for computing the constrained lcs of encoded strings theor comput anthony bagnall chotirat ann ratanamahatana eamonn keogh stefano lonardi and gareth janacek a bit level representation for time series data mining with shape based similarity data mining and knowledge discovery horst bunke and csirik an improved algorithm for computing the edit distance of coded strings inf process chen and chao on the generalized constrained longest common subsequence problems comb francis chin alfredo de santis anna lisa ferrara ho and kim a simple algorithm for the constrained sequence problems inf process sebastian deorowicz algorithm for a string constrained lcs problem inf process paul heckel a technique for isolating differences between files communications of the acm james hunt and malcolm douglas mcilroy an algorithm for differential file comparison technical report compt sci techn dmitry korkin and lev goldfarb multiple genome rearrangement a general approach via the evolutionary genome graph bioinformatics suppl jessica lin eamonn keogh li wei and stefano lonardi experiencing sax a novel symbolic representation of time series data mining and knowledge discovery jia jie liu wang and chiu constrained longest common subsequences with strings comput helman stern merav shmueli and sigal berman most discriminating segment longest common subsequence mdslcs algorithm for dynamic hand gesture classification pattern recognition letters tsai the constrained longest common subsequence problem inf process robert wagner and michael fischer the correction 
problem acm congmao wang and dabing zhang a novel compression tool for efficient storage of genome resequencing data nucleic acids research lei wang xiaodong wang yingjie wu and daxin zhu a dynamic programming solution to a generalized lcs problem inf process a appendix here we show the for our proposed algorithm algorithm proposed o mn m n time algorithm for input strings a b and c output length of an of a b c sx fx a minimal in a a minimal in b l number of minimal in a b respectively gmin h minimum element in g h respectively g g number of sets g respectively make compressed dp tables of a and if c then compute all minimal sl fl of a and of b use algorithm lmax for x to l do for y to do lsum lpref sx lsuf fx if lmax lsum then lmax lsum else l k g gmin for p to m do if ap c then for to mp do l l p if l then fl p if l then if sl or fl then g g gmin g l k g for q to n do if bq c then for q to nq do q q if then q q if then if or then g g g gmin g l g lmax for h to g do for to g do for x gmin h to gmin h do lsum lpref sx lsuf fx fg min min h if lmax lsum then lmax lsum for y to do lsum lpref sgmin h lsuf fgmin h if lmax lsum then lmax lsum return lmax k
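to make the quantities used throughout the algorithm concrete, the following is a minimal, uncompressed sketch in python. it builds the plain o(mn) prefix and suffix lcs tables (the values that the cdp table stores only sparsely) and maximises lpref + |c| + lsuf over pairs of candidate minimal intervals, as described in the text. indexing is 0-based (unlike the 1-based pseudocode above), the function and variable names are illustrative, and the rle-based enumeration of the minimal intervals — the step that is the paper's contribution — is taken as an input rather than reimplemented.

```python
def prefix_lcs(A, B):
    # lpref[i][j] = length of an LCS of A[:i] and B[:j]  (textbook O(mn) DP)
    m, n = len(A), len(B)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            L[i][j] = (L[i - 1][j - 1] + 1 if A[i - 1] == B[j - 1]
                       else max(L[i - 1][j], L[i][j - 1]))
    return L

def str_ic_lcs_length(A, B, C, occ_A, occ_B):
    # occ_A / occ_B: candidate minimal intervals (start, end), 0-based inclusive,
    # whose substring contains C as a subsequence; enumerating them from the RLE
    # strings is the step described in the text and is simply assumed here.
    lpref = prefix_lcs(A, B)
    lsuf_rev = prefix_lcs(A[::-1], B[::-1])           # suffix LCS via reversal
    m, n, best = len(A), len(B), 0
    for (sa, fa) in occ_A:
        for (sb, fb) in occ_B:
            left = lpref[sa][sb]                      # LCS of A[:sa],  B[:sb]
            right = lsuf_rev[m - fa - 1][n - fb - 1]  # LCS of A[fa+1:], B[fb+1:]
            best = max(best, left + len(C) + right)   # |C| is common to every pair
    return best
```

this sketch runs in o(mn + |occ_a| * |occ_b|) time and is intended only as a reference for checking the output of the compressed algorithm on small inputs, not as a replacement for it.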
8
jun eine charakterisierung der moduln helmut mathematisches institut der theresienstr germany zoeschinger abstract let r m be a noetherian local ring e the injective hull of k and m homr m e the matlis dual of the if the canonical monomorphism m m is surjective m is known to be called reflexive with the help of the bass numbers p m p homr m p of m with respect to p we show m is reflexive if and only if p m p m for all p spec r from this it follows for every m if there exists a monomorphism m m or an epimorphism m m then m is already reflexive key words modules bass numbers associated prime ideals torsion modules cotorsion modules mathematics subject classification der rang eines moduls stets sei r noethersch und lokal jeden m p m p extir m p i p spec r die von m p siehe und weil wir sie im folgenden nur i brauchen schreiben wir statt p m kurz p m ist r ein mit k so ist m dimk k m offenbar der rang von ist r beliebig wird m p annm p zu einem modul dem und es folgt p m m p insbesondere ist p m mit p ass m allgemeiner gilt mit einem untermodul u von m u iq q spec r p m p u ist satz ist m ein und m m die kanonische einbettung so gilt jedes primideal p p m p m p ass kok beweis sei r ein und m ein mit rang m rang m dann ist rang m endlich zum beweis man einen freien untermodul u von m mit torsion so aus rang u rang m auch rang u rang u folgt mit u r i ist e i u also auch u ein zerfallender monomorphismus insbesondere rang r i rang ri daraus folgt die endlichkeit von i mit s r i und p ri ist auch flach so es nach theorem einen freien zwischenmodul s f p gibt mit f rein in p f mp klar ist rang s dimk k i andererseits rang f dimk dimk dimk k i so aus rang s rang f mit die behauptung folgt sei r ein und m ein beliebiger dann gilt rang m rang m kok ist torsion der beweis folgt unmittelbar aus der gleichung rang m rang m rang kok denn bei sind nach dem ersten schritt alle kardinalzahlen endlich also rang kok und ist klar sind jetzt r und m beliebig gibt es zu jedem primideal p nach lemma ein kommutatives diagramm m p m p kok p k m p m p kok p mit exakten zeilen in dem also auch ein isomorphismus ist wobei das dem lokalen ring sei genau dann ist jetzt p m p m wenn m p und m p denselben rang haben nach dem zweiten schritt kok p torsion ist kok p torsion ist p ass kok folgerung genau dann ist m reflexiv wenn p m p m gilt alle primideale folgerung gibt es einen monomorphismus m m oder einen epimorphismus m m so ist m bereits reflexiv beweis bei der ersten folgerung ist kok mit der rechten seite bei der zweiten gilt jeden monomorphismus f x m p x p m ist alle p bei x m also p m p m wie ist aber g m m ein epimorphismus wird g m m ein monomorphismus so nach eben m reflexiv ist also auch bemerkung in einigen sich sofort entscheiden wann ein primideal p die bedingung im satz alle p ass m gilt p m p m denn beide seiten sind null ist m e n mit n gilt p m p m genau dann wenn ist denn die von m m liefert ass kok ass m m und bekanntlich ist ass m koass m p spec r ist ist m e i mit i unendlich gibt es kein primideal p mit p m p m denn auch m p ist nicht reflexiv also nach beispiel im rang m p rang m p starke torsionsmoduln ist r ein so ist die klasse c aller m mit rang m rang m nach dem in untermoduln faktormoduln und gruppenerweiterungen abgeschlossen ein torsionsfreier m genau dann zu c wenn kok m reflexiv ist siehe die genauere beschreibung in ein torsionsmodul m genau dann zu c wenn m torsion ist und t haben wir keine te beschreibung nur wenn m sogar stark torsion ass m ist siehe wir die struktur von m angeben 
satz ist r ein kein so sind einen m i m ist stark torsion ii d m ist artinsch und reflexiv m ist ass m koass m ist nach voraussetzung t beweis i ii wegen koass m m im sinne von stark kotorsion so nach dem dortigen satz in der exakten folge t m m m m das erste glied und das dritte endlich erzeugt ist den divisiblen anteil d m ist dann d m torsionsfrei als faktormodul von m m also endlich erzeugt und reflexiv sogar also d m selbst artinsch und reflexiv in der exakten folge m m m t m ist dann das erste glied artinsch und das dritte also r m artinsch ein r es folgt r m artinsch r r m also auch m wie behauptet ii i zu d d m gibt es nach voraussetzung ein r m mit r jedes p ass m gilt dann p ass d oder p ass p m oder t r p also r ass m wie verlangt bemerkung ist r ein und m nur torsion so wir wenigstens zeigen stets ist d m artinsch ist r ein nagataring so ist m sogar besitzt koass m eine endliche finale teilmenge ist m gut im sinne von so ist m bereits stark torsion beispiel ist r ein m injektiv und rang m rang m so ist m bereits reflexiv beweis t m ist abgeschlossen also direkter summand in m und in m t m ist dann c torsionsfrei also reflexiv sei jetzt gleich m injektiv und torsion nach voraussetzung ist m torsion m flach also gut so m sogar stark torsion ist und mit die behauptung folgt beispiel ist r ein diskreter bewertungsring so gilt jeden m rang m rang m m mit reflexiv beweis nur ist zu zeigen und weil r jeder modul gut ist ist t m sogar stark torsion also nach d t m artinsch und reflexiv t m t m damit ist t m d t m m t m also m mit d t m reflexiv und sei r wieder beliebig und m reflexiv dann folgt aus der bijektion l m l m u annm u jeden untermodul u von m die menge v v u m ein minimales element besitzt ein sogenanntes komplement von u in allein aus dieser komplementeigenschaft folgt sehr viel die struktur von m satz ist r ein mit k r und m ein komplementierter so gilt m d m wobei endlich erzeugt und d m von der form ist mit artinsch k n n beweis sei p m der radikalvolle untermodul von weil dann jeder zwischenmodul p m u m mit radikalvoll ein komplement in m besitzt folgt bereits u m m ist koatomar ein komplement von p m in m ist dann ebenfalls koatomar also von der form mit endlich erzeugt me ein e weil auch p m komplementiert ist lemma besitzt p m m wie eben keine teilbaren faktormoduln ist also kotorsion nach theorem ist nun p m minimax ein komplement von d m in p m also sogar stark kotorsion radikalvoll nach satz schon damit ist auch und m d m bleibt d m zu zerlegen auch t d m ist minimax besitzt also einen endlich erzeugten untermodul x so artinsch ist und aus x r mit r r folgt auch r also selbst artinsch ist weil d m teilbar torsionsfrei und minimax ist also isomorph zu k n ist als reiner artinscher untermodul von d m sogar direkter summand d m und k n wie bemerkung seien r und m wie in satz falls dim r ist bekanntlich k als nicht minimax also und daher d m selbst artinsch literatur and local cohomology an algebraic introduction with geometric applications cambridge univ press commutative ring theory cambridge univ press on the structure of couniform and complemented modules j pure appl algebra gelfandringe und koabgeschlossene untermoduln bayer akad wiss starke kotorsionsmoduln arch math
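the statement of the main criterion above lost most of its mathematical symbols in extraction; the following latex block restates it in english directly from the paper's own abstract. the notation kappa(p) for the residue field and mu(p, M) for the (zeroth) bass number is a reconstruction of the stripped symbols and should be checked against the original.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Notation (partly reconstructed): $(R,\mathfrak m)$ noetherian local,
% $k = R/\mathfrak m$, $E = E_R(k)$ the injective hull of $k$,
% $M^{\circ} = \operatorname{Hom}_R(M,E)$ the Matlis dual of the $R$-module $M$, and
% $\mu(\mathfrak p, M) = \dim_{\kappa(\mathfrak p)}
%   \operatorname{Hom}_{R_{\mathfrak p}}(\kappa(\mathfrak p), M_{\mathfrak p})$
% the Bass number of $M$ at $\mathfrak p$.
\[
  M \ \text{is reflexive}
  \quad\Longleftrightarrow\quad
  \mu(\mathfrak p, M) = \mu(\mathfrak p, M^{\circ\circ})
  \ \text{ for all } \mathfrak p \in \operatorname{Spec} R,
\]
and consequently the existence of a monomorphism
$M^{\circ\circ} \hookrightarrow M$ or of an epimorphism
$M \twoheadrightarrow M^{\circ\circ}$ already forces $M$ to be reflexive.
\end{document}
```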
0
flying insect classification with inexpensive sensors yanping chen department of computer science engineering university of california riverside adena why department of entomology university of california riverside gustavo batista university of paulo usp gbatista agenor isca technologies president eamonn keogh department of computer science engineering university of california riverside eamonn abstract the ability to use inexpensive noninvasive sensors to accurately classify flying insects would have significant implications for entomological research and allow for the development of many useful applications in vector control for both medical and agricultural entomology given this the last sixty years have seen many research efforts on this task to date however none of this research has had a lasting impact in this work we explain this lack of progress we attribute the stagnation on this problem to several factors including the use of acoustic sensing devices the overreliance on the single feature of wingbeat frequency and the attempts to learn complex models with relatively little data in contrast we show that optical sensors can produce vastly superior data that we can exploit additional features both intrinsic and extrinsic to the insect s flight behavior and that a bayesian classification approach allows us to efficiently learn classification models that are very robust to overfitting we demonstrate our findings with large scale experiments that dwarf all previous works combined as measured by the number of insects and the number of species considered keywords automate insect classification insect flight sound insect wingbeat bayesian classifier flight activity circadian rhythm introduction the idea of automatically classifying insects using the incidental sound of their flight as opposed to deliberate insect sounds produced by stridulation hao et al dates back to the very dawn of computers and commercially available audio recording equipment in three researchers at the cornell university medical college kahn celestin and offenhauser used equipment donated by oliver buckley then president of the bell telephone laboratories to record and analyze mosquito sounds kahn et al the authors later wrote it is the authors considered opinion that the intensive application of such apparatus will make possible the precise rapid and simple observation of natural phenomena related to the sounds of mosquitoes and should lead to the more effective control of such mosquitoes and of the diseases that they kahn and offenhauser in retrospect given the importance of insects in human affairs it seems astonishing that more progress on this problem has not been made in the intervening decades an even earlier paper reed et al makes a similar suggestion however these authors determined the wingbeat frequencies manually aided by a stroboscope there have been sporadic efforts at flying insect classification from audio features sawedal and hall schaefer and bent unwin and ellington moore et al especially in the last decade moore and miller repasky et al however little real progress seems to have been made by lack of progress we do not mean to suggest that these pioneering research efforts have not been fruitful however we would like to have automatic classification to become as simple inexpensive and ubiquitous as current mechanical traps such as sticky traps or interception traps capinera but with all the advantages offered by a digital device higher accuracy very low cost monitoring ability and the ability to 
collect additional information time of we feel that the lack of progress in this pursuit can be attributed to three related factors most efforts to collect data have used acoustic microphones reed et al belton et al mankin et al raman et al sound attenuates according to an inverse squared law for example if an insect flies just three times further away from the microphone the sound intensity informally the loudness drops to one ninth any attempt to mitigate this by using a more sensitive microphone invariably results in extreme sensitivity to wind noise and to ambient noise in the environment moreover the difficulty of collecting data with such devices seems to have led some researchers to obtain data in unnatural conditions for example nocturnal insects have been forced to fly by tapping and prodding them under bright halogen lights insects have been recorded in confined spaces or under extreme temperatures belton et al moore and miller in some cases insects were tethered with string to confine them within the range of the microphone reed et al it is hard to imagine that such insect handling could result in data which would generalize to insects in natural conditions unsurprisingly the difficultly of obtaining data noted above has meant that many researchers have attempted to build classification models with very limited data as few as instances moore or less however it is known that for building a commercially available rotator bottle trap made by does allow researchers to measure the time of arrival at a granularity of hours however as we shall show in section additional feature circadian rhythm of flight activity we can measure the time of arrival at a granularity and exploit this to improve classification accuracy classification models more data is better halevy et al banko and brill shotton et al compounding the poor quality data issue and the sparse data issue above is the fact that many researchers have attempted to learn very complicated classification models especially neural networks moore et al moore and miller li et al however neural networks have many including the interconnection pattern between different layers of neurons the learning process for updating the weights of the interconnections the activation function that converts a neuron s weighted input to its output activation etc learning these on say a classification problem with millions of training data is not very difficult zhan et al but attempting to learn them on an insect classification problem with a mere twenty examples is a recipe for overfitting cf figure it is difficult to overstate how optimistic the results of neural network experiments can be unless rigorous protocols are followed prechelt in this work we will demonstrate that we have largely solved all these problems we show that we can use optical sensors to record the sound of insect flight from meters away with complete invariance to wind noise and ambient sounds we demonstrate that these sensors have allowed us to record on the order of millions of labeled training instances far more data than all previous efforts combined and thus allow us to avoid the overfitting that has plagued previous research efforts we introduce a principled method to incorporate additional information into the classification model this additional information can be as quotidian and as as the yet still produce significant gains in accuracy finally we demonstrate that the enormous amounts of data we collected allow us to take advantage of the unreasonable effectiveness of data 
halevy et al to produce simple accurate and robust classifiers in summary we believe that flying insect classification has moved beyond the dubious claims created in the research lab and is now ready for deployment the sensors while there is a formal framework to define the complexity of a classification model the vc dimension vapnik and chervonenkis informally we can think of a complicated or complex model as one that requires many parameters to be set or learned and software we present in this work will provide researchers worldwide robust tools to accelerate their research background and related work the vast majority of attempts to classify insects by their flight sounds have explicitly or implicitly used just the wingbeat frequency reed et al sotavalta sawedal and hall schaefer and bent unwin and ellington moore et al moore however such an approach is limited to applications in which the insects to be discriminated have very different frequencies consider figure which shows a histogram created from measuring the wingbeat frequencies of three sexed species of insects culex stigmatosoma female aedes aegypti female and culex tarsalis male we defer details of how the data was collected until later in the paper ae aegypti i cx stigmatosoma cx tarsalis ae aegypti ii cx stigmatosoma cx tarsalis figure i histograms of wingbeat frequencies of three species of insects cx stigmatosoma ae aegypti and cx tarsalis each histogram is derived based on wingbeat sound snippets ii gaussian curves that fit the wingbeat frequency histograms it is visually obvious that if asked to separate cx stigmatosoma from cx tarsalis the wingbeat frequency could produce an accurate classification as the two species have very different frequencies with minimal overlap to see this we can compute the optimal bayes error rate fukunaga which is a strict lower bound to the actual error rate obtained by any classifier that considers only this feature here the bayes error rate is half the overlapping area under both curves divided by the total area under the two curves because there is only a tiny overlap between the wingbeat frequency distributions of the two species the bayes error rate is correspondingly small if we use the raw histograms and if we use the derived gaussians however if the task is to separate cx stigmatosoma from ae aegypti the wingbeat frequency will not do as well as the frequencies of these two species overlap greatly in this case the bayes error rate is much larger if we use the raw histograms and if we use the derived gaussians this problem can only get worse if we consider more species as there will be increasing overlap among the wingbeat frequencies this phenomenon can be understood as a version of the pigeonhole principle grimaldi as a concrete example assume we limit our attention to just mosquitoes there are more than species of mosquitoes described worldwide assume the range of the mosquito wingbeat frequency is between hz and hz if each species takes up an integer wingbeat frequency then at least species must share a same wingbeat frequency with another species as there are only possible frequency values the actual overlap rate will be even higher because the typical wingbeat frequency for a species is a distribution peaking at some value rather than just a single integer value as shown in figure given this it is unsurprising that some doubt the utility of wingbeat sounds to classify the insects however we will show that the analysis above is pessimistic insect flight sounds can allow much 
higher classification rates than the above suggests because there is more information in the flight sound signal than just the wingbeat frequency by analogy humans have no problem distinguishing between middle c on a piano and middle c on a saxophone even though both are the same hz fundamental frequency the bayes error rate to classify the three species in figure using just the wingbeat frequency is however as we shall see below in the section titled flying insect classification that by using the additional features from the signal we can obtain an error rate of we believe that our experiments are the first explicit demonstration that there is actionable information in the signal beyond the wingbeat frequency we can augment the wingbeat sounds with additional features that can help to improve the classification performance for example many species may have different flight activity circadian rhythms as we shall see below in the section titled additional feature circadian rhythm of flight activity simply incorporating the information can significantly improve the performance of the classification the ability to allow the incorporation of auxiliary features is one of the reasons we argue that the bayesian classifier is ideal for this task cf section flying insect classification as it can gracefully incorporate evidence from multiple sources and in multiple formats one of the that the bayesian classifier can incorporate is the prior probability of seeing a particular insect in some cases we may be able to further improve the accuracy of classification by adjusting these prior probabilities with some intervention for example we may use attractants or repellants or we may construct the sensors with mechanical barriers that limit entry for large insects or fans that discourage weak flyers etc we leave such considerations to future work materials and methods insect colony and rearing six species of insects were studied in this work cx tarsalis cx stigmatosoma ae aegypti culex quinquefasciatus musca domestica and drosophila simulans all adult insects were reared from laboratory colonies derived from wild individuals collected at various locations cx tarsalis colony was derived from wild individuals collected at the eastern municipal water district s demonstration constructed treatment wetland san jacinto ca in cx quinquefasciatus colony was derived from wild individuals collected in southern california in georghiou and wirth cx stigmatosoma colony was derived from wild individuals collected at the university of california riverside aquatic research facility in riverside ca in ae aegypti colony was started in with eggs from thailand van dam and walton musca domestica colony was derived from wild individuals collected in san jacinto ca in and drosophila simulans colony were derived from wild individuals caught in riverside ca in the larvae of cx tarsalis cx quinquefasciatus cx stigmatosoma and ae aegypti were reared in enamel pans under standard laboratory conditions h light dark ld cycle with hour periods and fed ad libitum on a mixture of ground rodent chow and brewer s yeast v v musca domestica larvae were kept under standard laboratory conditions h light dark ld cycle rh and reared in a mixture of water bran meal alfalfa yeast and powdered milk drosophila simulans larvae were fed ad libitum on a mixture of rotting fruit mosquito pupae were collected into cups solo cup chicago il and placed into experimental chambers alternatively adults were aspirated into experimental chambers within week of 
emergence the adult mosquitoes were allowed to feed ad libitum on a sucrose and water mixture food was replaced weekly cotton towels were moistened twice a week and placed on top of the experimental chambers and a cup of tap water solo cup chicago il was kept in the chamber at all times to maintain a higher level of humidity within the cage musca domestica adults were fed ad libitum on a mixture of sugar and dried milk with free access to water drosophila simulans adults were fed ad libitum on a mixture of rotting fruit experimental chambers consisted of kritter keepers lee s aquarium and pet products san marcos ca that were modified to include the sensor apparatus as well as a sleeve bug dorm sleeve bioquip rancho dominguez ca attached to a piece of pvc piping to allow access to the insects two different sizes of experimental chambers were used the larger cm l x cm w x cm h and the smaller cm l x cm w x cm the lids of the experimental chambers were modified with a piece of mesh cloth affixed to the inside in order to prevent escape of the insects as shown in figure experimental chambers were maintained on a h light dark ld cycle and rh for the duration of the experiment each experimental chamber contained to individuals of a same species in order to capture as many flying sounds as possible while limiting the possibility of capturing more than one sound at a same time i ii insect handling portal phototransistor array lid circuit board laser source laser beam power supply recording device figure i one of the cages used to gather data for this project ii a logical version of the setup with the components annotated instruments to record flying sounds we used the sensor described in batista to capture the insect flying sounds the logic design of the sensor consists of a phototransistor array which is connected to an electronic board and a laser line pointing at the phototransistor array when an insect flies across the laser beam its wings partially occlude the light causing small light fluctuations the light fluctuations are captured by the phototransistor array as changes in current and the signal is filtered and amplified by the custom designed electronic board the physical version of the sensor is shown in figure the output of the electronic board feeds into a digital sound recorder zoom handy recorder and is recorded as audio data in the format each file is hours long and a new file starts recording immediately after a file has recorded for hours so the data is continuous the length of the file is limited by the device firmware rather than the disk space the standard is a lossy format and optimized for human perception of speech and music however most flying insects produce sounds that are well within the range of human hearing and careful comparisons to lossless recordings suggest that we lose no exploitable or indeed detectable information sensor data processing we downloaded the sound files to a pc twice a week and used a detection algorithm to automatically extract the brief insect flight sounds from the raw recording data the detection algorithm used a sliding window to slide through the raw data at each data point a is used to decide whether the audio segment contains an insect flying sound it is important to note that the classifier used at this stage is solving the relatively simple task differentiating between we will discuss the more sophisticated classifier which attempts to differentiate species and sex in the next section the used for the problem is a nearest neighbor 
classifier based on the frequency spectrum for ground truth data we used ten flying sounds extracted from early experiments as the training data for the insect sounds and ten segments of raw recording background noise as the training data for the sounds the number of training data was limited to ten because more training data would slow down the algorithm while fewer data would not represent variability observed note that the training data for background sounds can be different from minute to minute this is because while the frequency spectrum of the background sound has little variance within a short time interval it can change greatly and unpredictably in the long run this variability called concept drift in the machine learning community tsymbal widmer and kubat may be due to the effects of temperature change on the electronics and the slow decline of battery output power etc fortunately given the high ratio in the audio the high variation of the sounds does not cause a significant problem figure shows an example of a audio clip containing a flying insect generated by our sensor as we can see the signal of insects flying across the laser is well distinguished from the background signal as the amplitude is much higher and the range of frequency is quite different from that of background sound the length of the sliding window in the detection algorithm was set to be ms which is about the average length of a flying sound each detected insect sound is saved into a onesecond long wav format audio file by centering the insect flying signal and padding with zeros elsewhere this makes all flying sounds the same length and simplifies the future archiving and processing of the data note that we converted the audio format from to wav at this stage this is simply because we publicly release all our data so that the community can confirm and extend our results because the vast majority of the signal processing community uses matlab and matlab provides native functions for working with wav files this is the obvious choice for an archiving format figure shows the saved audio of the insect sound shown in figure flying sounds detected in the raw recordings may be contaminated by the background noise such as the hz noise from the american domestic electricity which bleeds into the recording due to the inadequate filtering in power transformers to obtain a cleaner signal we applied the spectral subtraction technique boll ephraim and malah to each detected flying sound to reduce noise flying insect classification in the section above we showed how a simple nearest neighbor classifier can detect the sound of insects and pass the sound snippet on for further inspection here we discuss algorithms to actually classify the snippets down to species and in some cases sex level while there are a host of classification algorithms in the literature decision trees neural networks nearest neighbor etc the bayes classifier is optimal in minimizing the probability of misclassification devroye under the assumption of independence of features the bayes classifier is a simple probabilistic classifier that predicts class membership probabilities based on bayes theorem in addition to its excellent classification performance the bayesian classifier has several properties that make it extremely useful in practice and particularly suitable to the task at hand the bayes classifier is undemanding in both cpu and memory requirements any devices to be deployed in the field in large quantities will typically be small devices with 
limited resources such as limited memory cpu power and battery life the bayesian classifier once constructed offline in the lab requires time and space resources that are just linear in the number of features the bayes classifier is very easy to implement unlike neural networks moore and miller li et al the bayes classifier does not have many parameters that must be carefully tuned in addition the model is fast to build and it requires only a small amount of training data to estimate the distribution parameters necessary for accurate classification such as the means and variances of gaussian distributions unlike other classification methods that are essentially black box the bayesian classifier allows for the graceful introduction of user knowledge for example if we have external to the training data set knowledge that given the particular location of a deployed insect sensor we should expect to be twice as likely to encounter a cx tarsalis as an ae aegypti we can tell the algorithm this and the algorithm can use this information to improve its accuracy this means that in some cases we can augment our classifier with information gleaned from the text of journal papers or simply the experiences of field technicians in section a tentative additional feature geographic distribution we give a concrete example of this the bayesian classifier simplifies the task flagging anomalies most classifiers must make a classification decision even if the object being classified is vastly different to anything observed in the training phase in contrast we can slightly modify the bayesian classifier to produce an unknown classification one or two such classifications per day could be ignored but a spate of them could be investigated in case it is indicative of an infestation of a completely unexpected invasive species when there are multiple features used for classification we need to consider the possibility of missing values which happens when some features are not observed for example as we discuss below we use as a feature however a dead clock battery could deny us this feature even when the rest of the system is working perfectly missing values are a problem for any learner and may cause serious difficulties however the bayesian classifier can trivially handle this problem simply by dynamically ignoring the feature in question at classification time because of the considerations listed above we argue that the bayesian classifier is the best for our problem at hand note that our decision to use bayesian classifier while informed by the above advantages was also informed by an extensive empirical comparison of the accuracy achievable by other methods given that in some situations accuracy trumps all other considerations while we omit exhaustive results for brevity in figure we show a comparison with the neural network classifier as it is the most frequently used technique in the literature moore and miller we considered only the frequency spectrum of wingbeat snippets for the three species discussed in figure the training data was randomly sampled from a pool of objects and the test data was a completely disjoint set of objects and we tested over random resamplings for the neural network we used a single hidden layer of size ten which seemed to be approximately the default parameters in the literature mean performance of bayesian classifier mean performance of neural network worst performance of bayesian classifier worst performance of neural network number of items in the training set figure a 
comparison of the mean and worst performance of the bayesian versus neural networks classifiers for datasets ranging in size from five to fifty the results show that while the neural network classifier eventually converges on the performance of the bayesian classifier it is significantly worse for smaller datasets moreover for any dataset size in the range examined it can occasionally produce pathologically poor results doing worse than the default rate of note that our concern about performance on small datasets is only apparently in conflict with our claim that our sensors can produce massive datasets in some cases when dealing with new insect species it may be necessary to bootstrap the modeling of the species by using just a handful of annotated examples to find more unannotated examples in the archives a process known as learning chen et al the intuition behind bayesian classification is to find the mostly likely class given the data observed the probability that an observed data x belongs to a class is computed using the bayes rule as where is a prior probability of class that can be estimated from frequencies in the database is the probability of observing the data x in class and is the probability of occurrence of the observed data x the probability is usually unknown but since it does not depend on the class it is usually understood as a normalization factor and thus only the numerator is considered for classification the probability is proportional to the numerator the is called the posterior probability the bayesian classifier assigns the data to the class which has the highest posterior probability that is argmax where is the set of classes cx stigmatosoma ae aegypti cn an gambiae a bayesian classifier can be represented using a graph called bayesian network the bayesian network that uses a single feature for classification is shown in figure the direction of the arrow in the graph encodes the fact that the probability of an insect to be a member of class c depends on the value of the feature observed etc c figure a bayesian network that uses a single feature for classification when the classifier is based on a single feature the posterior probability that an observed data belongs to a class is calculated as where is the probability of observing feature in class for insect classification the primary data we observed are the flight sounds as illustrated in figure the flying sound signal is the amplitude section in the center of the audio and can be represented by a sequence s where si is the signal sampled in the instance i and n is the total number of samples of the signal this sequence contains a lot of acoustic information and features can be extracted from it i a mosquito flying across the laser our sensor captured the flying sound background noise ii the flying signal is extracted and centered paddings elsewhere to make each sound long iii wingbeat frequency at amplitude spectrum of the flying sound harmonics figure i an example of a audio clip containing a flying sound generated by the sensor the sound was produced by a female cx stigmatosoma the insect sound is highlighted in ii the insect sound that is cleaned and saved into a long audio clip by centering the insect signal and padding with elsewhere iii the frequency spectrum of the insect sound obtained using dft the most obvious feature to extract from the sound snippet is the wingbeat frequency to compute the wingbeat frequency we first transform the audio signal into a frequency spectrum using the discrete fourier 
transform dft bracewell and bracewell as shown in figure the frequency spectrum of the sound in figure has a peak in the fundamental frequency at hz and some harmonics in integer multiples of the fundamental frequency the highest peak represents the frequency of interest the insect wingbeat frequency the wingbeat frequency distribution is a univariate density function that can be easily estimated using a histogram figure shows a wingbeat frequency histogram plot for three species of insects each for a single sex only we can observe that the histogram for each species is well modeled by gaussian distribution hence we fit a gaussian for each distribution and estimated the means and variances using the frequency data the fitted gaussian distributions are shown in figure note that as hinted at in the introduction the bayesian classifier does not have to use the idealized gaussian distribution it could use the raw histograms to estimate the probabilities instead however using the gaussian distributions is computationally cheaper at classification time and helps guard against overfitting with these distributions we can calculate the probability of observing the flying sound from a class given the wingbeat frequency for example suppose the class of the insect shown in figure is unknown but we have measured its wingbeat frequency to be hz further suppose that we have previously measured the mean and standard deviation of female cx stigmatosoma wingbeat frequency to be and respectively we can then calculate the probability of observing this wingbeat frequency from a female cx stigmatosoma insect using the gaussian distribution function we can calculate the probabilities for the other classes in a similar way and predict the unknown insect as the most likely class using equation in this example the prior probability is equal for each class and the unknown insect is about times more likely to be a female cx stigmatosoma than be a female ae aegypti the second most likely class and thus is in this case correctly classified as a female cx stigmatosoma note that the wingbeat frequency is a scalar and learning the density functions for a feature typically no more than dimensions is easy because they can either be fitted using some distribution models such as the gaussian distribution or be approximated using a histogram which can be constructed with a small amount of training data however if a feature is we need other multivariate density function estimation methods because we usually do not have any idea of what distribution model should fit the distributions of the features and building a histogram of a feature requires a prohibitively large size of training dataset the size of training dataset grows exponentially with the increase of dimensionality the literature has offered some multivariate density function estimation methods for highdimensional variables such as the window method rosenblatt parzen and the knn approach mack and rosenblatt the knn approach is very simple it leads to an approximation of the optimal bayes classifier and hence we use it in this work to estimate the density functions for features the knn approach does not require a training phase to learn the density function it directly uses the training data to estimate the probability of observing an unknown data in a class specifically given an observed data x the knn approach first searches the training data to find the top k nearest neighbors of x it then computes the probability of observing x in class as the fraction of the top k 
nearest neighbors which are labeled as class that is where is a parameter specifying the number of nearest neighbors and is the number of neighbors that are labeled as class among the top nearest neighbors with equation we can calculate the probability and plug this into the bayesian classifier as an example imagine that we use the entire spectrum as a feature for the insect sound cf figure given an unknown insect we first transform its flight sound into the spectrum representation and then search the entire training data to find the top k nearest neighbors suppose we set and among the eight nearest neighbors three of them belong to female cx stigmatosoma one belongs to female ae aegypti and four belong to male cx tarsalis we can then calculate the conditional probability using equation as cx stigmatosoma these conditional probabilities are then multiplied by the class prior probability to calculate the posterior probability and the observed insect is predicted to be the class which has the highest posterior probability as such we are able to estimate the probability for features in any format including the feature of distance returned from an opaque similarity function and thus generalize the bayesian classifier to subsume some of the advantages of the nearest neighbor classifier table outlines the bayesian classification algorithm using the nearest neighbor distance feature the algorithm begins in lines by estimating the prior probability for each class this is done by counting the number of occurrences of each class in the training data set it then estimates the conditional probability for each unknown data using the knn approach outlined above specifically given an unknown insect sound the algorithm first searches the entire training data to find the top k nearest neighbors using some distance measure lines it then counts for each class the number of neighbors which belong to that class and calculates the probability using equation with the prior probability and the probability known for each class the algorithm calculates the posterior probability for each class lines and predicts the unknown data to belong to the class that has the highest posterior probability line table the bayesian classification algorithm using a feature notation k the number of nearest neighbors in knn approach disfunc a distance function to calculate the distance between two data c a set of classes train the training dataset tci number of training data that belong to class ci for i p ci tci prior probability end for each unknown data for j d j disfunc trainj distance of to each training data end d sort d ascend the distance in ascending order to k find the nearest neighbors for i kci number of data in that are labeled as class ci p kci k calculate the conditional probability with knn approach p p ci p calculate posterior probability end p normalize the posterior probability for i p p end argmax assign the unknown data to the class end the algorithm outlined in table requires two inputs including the parameter the goal is to choose a value of k that minimizes the probability estimation error one way to do this is to use validation kohavi the idea is to keep part of the training data apart as validation data and evaluate different values of k based on the estimation accuracy on the validation data the value of k which achieves the best estimation accuracy is chosen and used in classification this leaves only the question of which distance measure to use that is how to decide the distance between any two insect 
sounds to find a good distance measure for the flying sounds we turned to crowdsourcing by organizing a contest from july to november in chen et al that asked participants to create the best distance measure for the insect sounds more than fifteen teams worldwide participated in the contest and we received more than eighty submissions a team is allowed to have multiple submissions and we evaluated each submission but for the final score only one submission was scored for a team the result of the contest suggested that the best distance measure is a simple algorithm which computes the euclidean distance between the frequency spectrums of the insect sounds building on our crowdsourcing efforts we found that if we truncated the frequency spectrums to exclude data outside the range of possible wingbeat frequencies cf table we could further improve accuracy note that some of our crowdsourcing participants had somewhat similar but less explicit ideas our distance measure is further explained in table given two flying sounds we first transform each sound into frequency spectrums using dft lines the spectrums are then truncated to include only those corresponding to the frequency range from to lines the frequency range is thus chosen because according to entomological all other frequencies are unlikely to be the result of insect activity and probably reflect noise in the sensor we then compute the euclidean distance between the two truncated spectrums line and return it as the distance between the two flying sounds table our distance measure for two insect flight sounds notation two sound sequences dis the distance between the two sounds function dis disfunc dft dft frequency frequency dis our insect classification algorithm is obtained by plugging the distance measure explained in table into the bayesian classification framework outlined in table to demonstrate the effectiveness of the algorithm we considered the data that was used to generate the plot in figure these data were randomly sampled from a dataset with over sounds generated by our sensor we sampled in total flying sounds sounds for each species so the prior probability for each class is using our insect classification algorithm with k set to eight which was selected based on the validation result we achieved an error rate of using we then compared our algorithm to the optimal result possible using only the wingbeat frequency which is the most commonly used approach in previous research efforts the optimal bayes to classify the insects using wingbeat frequency is which is the lower bound for any algorithm that uses just that feature this means that using the truncated frequency spectrum is able to reduce the error rate by almost a third to the best of our knowledge this is the first explicit demonstration that there is exploitable information in the flight sounds beyond the wingbeat frequency it is important to note that we do not claim that the distance measure we used in this work is optimal there may be better measures which could be discovered by additional research or many large insects most members of odonata lepidoptera have wingbeat frequencies that are significantly slower than hz our choice of truncation level reflects our special interest in culicidae by revisiting crowdsourcing etc moreover it is possible that there may be better distance measures if we are confining our attention to just culicidae or just tipulidae etc however if and when a better distance measure is found we can simply plug the distance measure in the 
bayesian classification framework to get a better classification performance additional feature circadian rhythm of flight activity in addition to the insect flight sounds there are other features that can be used to reduce the error rate the features can be very cheap to obtain as simple as noting the yet the improvement can be significant it has long been noted that different insects often have different circadian flight activity patterns taylor and thus the time when a flying sound is intercepted can be used to help classify insects for example house flies musca domestica are more active in the daytime than at night whereas cx tarsalis are more active at dawn and dusk if an unknown insect sound is captured at noon it is more probable to be produced by a house fly than by cx tarsalis based on this information given an additional feature we must consider how to incorporate it into the classification algorithm one of the advantages of using the bayesian classifier is that it offers a principled way to gracefully combine multiple features thus the solution is straightforward here for simplicity we temporarily assume the two features the and the are conditionally independent they are independent given the class we will revisit the reasonableness of this assumption later with such an independence of feature assumption the bayesian classifier is called bayesian classifier and is illustrated in figure the two arrows in the graph encode the fact that the probability of an unknown data to be in a class depends on the features and whereas the lack of arrows between and means the two features are independent c figure a bayesian network that uses two independent features for classification an observed object x now should include two values and and the posterior probability of x belonging to a class is calculated as p p with the independence assumption this probability is proportional to p p p p where p fj is the probability of observing the pair fj in class for concreteness the feature in our algorithm is the insect sound and is the time when the sound was produced in the previous section we have shown how to calculate the probability of the using the knn estimation method and the prior probability to incorporate our additional feature we only need to calculate the probability note that the is a scalar as we discussed in the section above learning the distributions of a feature can be easily done by constructing a histogram this histogram of the for a species is simply the insect s flight activity circadian rhythm in the literature there have been many attempts to quantify these patterns for various insects however due to the difficulty in obtaining such data most attempts were made by counting if there is activity observed in a time period such as a window and otherwise taylor and jones rowland and lindsay without distinguishing the number of observations in each period the resulting patterns have a course granularity in contrast by using our sensors we have been able to collect on the order of hundreds of thousands of observations per species and count the exact number of observations at sub second granularity producing what we believe are the densest circadian rhythms ever recorded figure shows the flight activity circadian rhythms of cx stigmatosoma female cx tarsalis male and ae agypti female those circadian rhythms were learned based on observations collected over one month the results are consistent with the report in mian et al taylor and jones but with a much finer temporal resolution down 
to minutes note that although all three species are most active at dawn and dusk ae aegypti females are significantly more active during daylight hours dusk dawn cx stigmatosoma cx tarsalis ae aegypti figure the flight activity circadian rhythms of cx stigmatosoma female cx tarsalis male and ae aegypti female learned based on observations generated by our sensor that were collected over one month for the insects discussed in this work we constructed circadian rhythms based on many hundreds of thousands of individual observations however it is obvious that as our sensors become more broadly used by the community we can not always expect to have such finegrained data for example there are approximately mosquito species worldwide it is unlikely that high quality circadian rhythms for all of them will be collected in the coming years however the absence of gold standard circadian rhythms should not hinder us from using this useful additional feature instead we may consider using approximate rhythms one such idea is to use the circadian rhythms of the most closely related insect species taxonomically for which we do have data for example suppose we do not have the circadian rhythm for cx stigmatosoma we can use the rhythm of cx tarsalis as an approximation if the latter is available in the cases where we do not have the circadian rhythms for any insects we can construct approximate rhythms simply based on text descriptions in the entomological literature some frequently encountered descriptions of periods when insects are active include diurnal nocturnal crepuscular etc offline we can build a dictionary of templates based on these descriptions this process of converting text to probabilities is of course subjective however as we will show below it does lead to improvements over using no information our simple first attempt at this work is by quantifying different levels of activities with numbers from to representing low medium and high for example if an insect is described as diurnal and most active at dawn and dusk we can use these three degrees to quantify the activities highest degree at dawn and dusk second in the daytime and low activity during the night the resulting template is shown in figure note that a circadian rhythm is a probability distribution that tells how likely we are to capture a certain insect species flights at a certain time and thus each template is normalized such that the area under the template sums to one in figure we show an approximate circadian rhythm for cx stigmatosoma that we constructed this way we spend two minutes searching the web for an academic paper that describes cx stigmatosoma flight activity discovering that according to mian et al cx stigmatosoma is active at dawn and dusk crepuscular dawn dusk i diurnal ii crepuscular iii nocturne diurnal most active at dawn and dusk iv figure examples of approximation templates for insects flight activity circadian rhythm no markings are shown on the all templates are normalized such that the area under the curve is one and the smallest value is an epsilon greater than zero in the worst case if we can not glean any knowledge about the insect s circadian rhythm a species we can simply use a constant line as an approximation the constant line encodes no information about the activity hours of that insect but it enables the incorporation of the more familiar insects circadian rhythms into the classifier to improve the lowest level of flight activity we represent is low but not zero in a bayesian classifier we never 
want to assign zero probabilities to an event unless we are sure it is logically impossible to occur in practice a technique called laplacian correction is typically used to prevent any probability estimate from being exactly zero performance in the pathological case where all the circadian rhythms are approximated using a constant line the classifier degenerates to a bayesian classifier that does not use this additional feature given the above we can almost always incorporate some of the circadian rhythm information into our classifier given a observation we can calculate the probability of observing the activity from a species simply by looking up the flight activity circadian rhythms of that species for example suppose an insect sound was detected at the probability of observing this activity from a cx tarsalis male is three times the probability of observing it from an ae aegypti female according to the circadian rhythms shown in figure for concreteness the insect classification algorithm that uses two features is outlined in table it is similar to the algorithm outlined in table that uses a single feature only five modifications are made table the insect classification algorithm using two features this algorithm is similar to the one outlined in table only three modifications are needed which are listed below the modifications are highlighted in line for each unknown data line p p ci p calculate the posterior probability line normalize the posterior probability line line argmax assign the unknown data x to the class to demonstrate the benefit of incorporating the additional feature in classification we again revisit the toy example in figure with the feature incorporated and the accurate flight activity circadian rhythms learned using our sensor data we achieve a classification accuracy of recall that the classification accuracy using just the is cf the paragraph right below table simply by incorporating this feature we reduce the classification error rate by about from to only to test the effect of using proxies of the learned flight activity circadian rhythm we imagine that we do not have the flight activity circadian rhythm for cx stigmatosoma female and that we must use one of the approximate rhythms discussed above the results are shown in table as we can see even with a constant line approximation the classification accuracy is slightly better than not using the feature this is because although the algorithm has no knowledge about the circadian rhythm for cx stigmatosoma females it does have knowledge of the other two species circadian rhythms with the approximation created based on the text description we achieve an accuracy of which is better than using the constant line this is as we hoped as the approximate rhythm carries some useful information about the insects activity pattern even though it is at a very coarse granularity an even better classification accuracy of is achieved by using the circadian rhythm of cx tarsalis males as the approximation as can be seen from figure the circadian rhythm of cx tarsalis males is quite close to that of cx stigmatosoma females table classification performance using different approximations for cx stigmatosoma female flight activity circadian rhythm flight activity circadian no rhythm constant line description using the learned rhythm approximations used approximation based using our approximation insect s rhythm sensor data classification accuracy note that the classification accuracy with any approximation is worse than that of using 
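A hedged sketch of the two-feature Bayes rule outlined in the passage above: the posterior is proportional to the spectrum likelihood times the circadian-rhythm value at the capture hour times the class prior, after which the posteriors are normalized and the largest one wins. The spectrum likelihoods are assumed to come from whatever density model the single-feature classifier already uses; all names are illustrative.

```python
def classify_two_features(spectrum_likelihoods, hour, rhythms, priors):
    # spectrum_likelihoods[c]: P(observed spectrum | class c), from the existing model
    # rhythms[c][hour]:        P(capture hour | class c), the circadian template
    # priors[c]:               P(class c)
    post = {c: spectrum_likelihoods[c] * rhythms[c][hour] * priors[c] for c in priors}
    z = sum(post.values())
    post = {c: p / z for c, p in post.items()}       # normalize so posteriors sum to 1
    return max(post, key=post.get), post             # predicted class and full posterior
```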
the accurate circadian rhythm accuracy this is not surprising as the more accurate the estimated distribution is the more accurate the classification will be this reveals the great utility of our sensor it allows the inexpensive collection of massive amounts of training data which can be used to learn accurate distributions a tentative additional feature geographic distribution in addition to the we can also use the as an additional feature to reduce classification error rate the is also very it is simply the location where the sensor is deployed we must preempt a possible confusion on behalf of the reader one application of our sensors is estimating the relative abundance of various species of insects at some particular location however we are suggesting here that we can use estimates of the relative abundance of the species of insects at that location to do this more accurately this appears to be a chicken and egg however there is no contradiction the classifier is attempting to optimize the accuracy of its individual decisions about each particular insect observed and knowing even approximately the expected prevalence of each species can improve accuracy the carries useful information for classification because insects are rarely evenly distributed at any spatial granularity we consider for example cx tarsalis is relatively rare east of the mississippi river reisen whereas aedes albopictus the asian tiger mosquito has now become established in most states in that area novak if an insect is captured in some state east of the mississippi river it is more probable to be an ae albopictus than a cx tarsalis at a finer spatial granularity we may leverage knowledge such as since this trap is next to a dairy farm an animal manure source we are five times more likely to see a sylvicola fenestralis window gnat than an anopheles a bayesian classifier that uses three features is illustrated in figure here we again assume that all the three features and are independent c figure a bayesian network that uses three independent features for classification based on figure the probability of an observed object belonging to a class is calculated as p p p p p where is the probability of observing an insect from class at location this probability reflects the geographic distribution of the insects for classification we do not need the true absolute densities of insect prevalence we just need the ratio of densities for the different species at the observation location this is because with the ratio of we can calculate the ratio of the posterior probability p of each species and predict an observation to belong to the species which has the highest posterior probability in the case where we do need the actual posterior probability values we can always calculate them from the posterior probability ratio based on the constraint that the sum of all posterior probabilities over different classes should be one as shown in lines in table to obtain the ratio of p at a given a location is simple we can glean this information from the text of relevant journal papers or simply from the experiences of local field technicians for example suppose we deploy our insect sensor at a location where we should expect to be twice as likely to encounter cx tarsalis as cx stigmatosoma the ratio of cx tarsalis to cx stigmatosoma is in the case where we can not glean any such knowledge about the local insect population we can temporarily augment our sensor with various insect traps that physically capture insects cdc trap for 
mosquitoes yellow sticky cards traps for sharpshooters etc and use these manually counted number of observations of each species to estimate the ratios to demonstrate the utility of incorporating this feature we did a simple simulation experiment note that the and features are real data only the was simulated by assuming there are two species of insects cx stigmatosoma female and ae aegypti female which are geographically distributed as shown in figure we further assumed our sensors are deployed at three different locations and where is about the same distance from both centers and and are each close to one of the centers here we model the location distributions as gaussian density bumps for simplicity however this is not a necessary assumption but we can use any density distribution distribution of cx stigmatosoma insect sensor locations distribution of ae aegypti figure the assumptions of geographic distributions of each insect species and sensor locations in our simulation to demonstrate the effectiveness of using feature in classification to simulate the data captured by the three sensors we project ten thousand insect exemplars of each species onto the map according to the geographic distribution assumption we then sample these insects that are within the capture range of the sensors which is assumed to be a square region centering at the sensor as shown in figure each sampled insect is assumed to fly across our sensor once and have its data captured in our experiment we sampled cx stigmatosoma females and ae aegypti females at location and at location and and at location using just the frequency spectrum and the to classify those sampled insects we achieved an error rate of however by incorporating the we reduced the error rate to this is impressive as the information is very yet it reduced the error rate by more than half a general framework for adding features in the previous two sections we have shown how to extend our insect flight sound classifier to incorporate two additional features however there may be dozens of additional features that could help improve the classification performance these potential features are so domain and application specific that we can not give any more than a few representative examples here it has long been known that some species of insects have a preferred height at which they fly for example njie et al noted that anopheline mosquitoes are much likely to be observed than culicine flying at a height two meters which is the approximate height required to enter through the eave of a house here we imagine using a feature by placing two of our sensors at a known distance apart and observing the lag between the sensor observations we can obtain an approximation of the speed at which the insect is flying only an approximation as the insect may fly at an angle to the light beam this feature may help to discriminate between speedy members of the genus culicoides flying at and the relatively sluggish members of the family culicidae which max out at about bidlingmayer et al in this section we generalize our classifier to a framework that is easily extendable to incorporate arbitrarily many such specialized features if we compare figure figure and figure which show the bayesian networks with an increasing number of features we can see that adding a feature to the classifier is represented by adding a node to the bayesian network the bayesian network that uses independent features for classification is shown in figure c fn figure the general bayesian network 
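The geographic simulation described in this passage can be mimicked with a short Monte Carlo sketch, assuming NumPy; the coordinates, exemplar counts and capture-range width below are made-up placeholders (the paper's actual values are elided in this copy of the text).

```python
import numpy as np
rng = np.random.default_rng(0)

def simulate_catches(center, n, sensor_xy, half_width):
    """Scatter n exemplars of one species around a Gaussian 'density bump' and
    count how many fall inside a square capture region centred on the sensor."""
    pts = rng.normal(loc=center, scale=1.0, size=(n, 2))
    inside = np.all(np.abs(pts - sensor_xy) <= half_width, axis=1)
    return int(inside.sum())

# Two species with different density centres and one sensor placed between them.
print(simulate_catches(center=[0.0, 0.0], n=10_000, sensor_xy=[1.0, 0.0], half_width=0.5),
      simulate_catches(center=[2.0, 0.0], n=10_000, sensor_xy=[1.0, 0.0], half_width=0.5))
```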
that uses n features for classification where n is a positive integer with independent features the posterior probability that an observation belongs to a class is calculated as p fn p p fj where p fj is the probability of observing in class note that the posterior probability can be calculated incrementally as the number of features increases that is if we have used some features to classify the objects and later on we have discovered more useful features and would like to add those new features to the classifier to the objects we do not have to the entire classification from scratch instead we can keep the posterior probability obtained from the previous classification based on the old features update each posterior probability by multiplying it with the corresponding probability of the new features and the objects using the new posterior probabilities for example suppose we first used the and to classify an observation the posterior probability of belonging to ae aegypti is and that of belonging to cx tarsalis is later on we find that is useful and would like to incorporate this new feature to suppose the probability of observing ae aegypti at the location where x was intercepted was and the probability of observing a cx tarsalis at that location was in that case we can update the posterior probability of belonging to ae agypti as and that of belonging to cx tarsalis as and in this case to belong to ae agypti the advantage of incremental calculation is that incorporating a new feature is fast only a simple multiplication is required to calculate a posterior probability in our discussions thus far we have assumed that all the features are independent given the class in practice features are seldom independent given the class however as shown in domingos and pazzani even when the independence assumption does not hold the bayesian classifier may still be optimal in minimizing the misclassification error empirical evidence in recent years also showed that the bayesian classifier works quite well even in the domains where clear feature dependences exist in this work we will not prove that the three features used are conditionally independent however as we shall show below the independence assumption of the features used in our classifier should be reasonable for the bayesian classifier to work well revisiting the independent assumption recall that the naive bayesian classifier is optimal only under the assumption that the features are independent domingos and pazzani the majority of experiments in this work consider two features frequency spectrum and in order to test if these two features and are conditionally independent we can check if they satisfy the constraint concretely for our task this constraint is p p given a certain insect species or equivalently the properties of frequency spectrum of a given species must be the same at any timestamp if the spectrum was a scalar value such as mass or length we could use a standard test such as a test to see if the properties observed at two different time windows are from the same distribution however the spectrums are vectors and this complicates this issue greatly thus to see if this constraint is satisfied we did the following experiment which indirectly but forcefully tests the constraint we sampled insect sounds captured at dawn between and sounds captured at dusk between all the sounds were generated by ae aegypti females we then classified the sounds captured in different time periods using the frequency spectrum our hypothesis is that if 
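The incremental update described above amounts to a multiply-and-renormalize step over the stored posteriors; earlier features never need to be recomputed. A small numeric sketch with hypothetical probabilities (the worked example's actual numbers are elided in this copy of the text):

```python
# Hypothetical numbers standing in for the worked example above.
post = {"ae_aegypti": 0.6, "cx_tarsalis": 0.4}   # posteriors from spectrum + time features
loc  = {"ae_aegypti": 0.2, "cx_tarsalis": 0.8}   # P(location | class), the new feature

# Incremental update: multiply stored posteriors by the new likelihoods, renormalize.
unnorm = {c: post[c] * loc[c] for c in post}
z = sum(unnorm.values())
post = {c: p / z for c, p in unnorm.items()}
print(post)    # the class with the larger posterior wins the final prediction
```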
the distribution of frequency spectrum of a same species is the same at dawn and at dusk then it should be impossible to distinguish between sounds captured in the two different periods and thus the classification error rate would be around in our experiment the sounds were sampled from a pool of objects and we averaged over ten samplings with replacement the average classification error rate was which suggests that there is no perceptible difference in the frequency spectrum of the insect sounds captured at dawn or at dusk note that this experiment was conducted for insects observed under constant temperature and humidity in an insectary it may not generalize to insects observed in the field however this experiment increases our confidence that the conditional independence assumption of the two features is at least reasonable nevertheless it is clear that in our general framework it is possible that users may wish to use features that clearly violate this assumption for example if the sensor was augmented to obtain insect mass a generally useful feature it is clear from basic principles of allometric scaling that the frequency spectrum feature would not be independent deakin the good news is that as shown in figure the bayesian network can be generalized to encode the dependencies among the features in the cases where there is clear dependence between some features we can consider adding an arrow between the dependent features to represent this dependence for example suppose there is dependence between features and we can add an arrow between them as shown by the red arrow in figure the direction of the arrow represents causality for example an insect s larger mass causes it to have a slower wingbeat the only drawback to this augmented bayesian classifier keogh and pazzani is that more training data is required to learn the classification model if there are feature dependences as more distribution parameters need to be estimated the covariance matrix is required instead of just the standard deviation c fn figure the bayesian network that uses n features for classification with feature and being conditionally dependent a case study sexing mosquitoes sexing mosquitoes is required in some entomological applications for example sterile insect technique a method which eliminates large populations of breeding insects by releasing only sterile males into the wild has to separate the male mosquitoes from the females before being released papathanos et al here we conducted an experiment to see how well it is possible to distinguish female and male mosquitoes from a single species using our proposed classifier in this experiment we would like to distinguish male ae aegypti mosquitoes from females the only feature used in this experiment is the frequency spectrum we did not use the as there is no obvious difference between the flight activity circadian rhythms of the males and the females that belong to a same species a recent paper offers evidence of minor but measurable differences for the related species anopheles gambiae rund et al however we ignore this possibility here for simplicity the data used were randomly sampled from a pool of over exemplars we varied the number of exemplars from each sex from to and averaged over runs each time using random sampling with replacement the average classification performance using cross validation is shown in figure average accuracy spectrum wingbeat number of each sex in training data figure the classification accuracy of sex discrimination of ae agypti 
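The dawn-versus-dusk check described in this passage can be reproduced schematically as follows. The sketch assumes scikit-learn and a 1-nearest-neighbour classifier, neither of which the paper necessarily uses, and the "spectra" below are random placeholders; with real recordings of a single species, accuracy near 0.5 is what one expects exactly when the spectrum carries no information about capture time.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Placeholder "spectra": in the real experiment these would be truncated frequency
# spectra of Ae. aegypti female flights recorded at dawn and at dusk.
dawn_spectra = rng.normal(size=(200, 40))
dusk_spectra = rng.normal(size=(200, 40))

X = np.vstack([dawn_spectra, dusk_spectra])
y = np.array([0] * len(dawn_spectra) + [1] * len(dusk_spectra))   # 0 = dawn, 1 = dusk

# If the spectrum is conditionally independent of capture time, no classifier should
# beat chance at telling dawn recordings from dusk recordings.
acc = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=10).mean()
print(f"dawn-vs-dusk accuracy: {acc:.2f}  (about 0.5 supports independence)")
```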
mosquitoes with different numbers of training data using our proposed classifier and the classifier we can see that our classifier is quite accurate in sex separation with training data for each sex we achieved a classification accuracy of using just the truncated frequency spectrum that is if our classifier is used to separate mosquitoes we will make about eight misclassifications note that as the amount of training data increases the classification accuracy increases this is an additional confirmation of the claim that more data improves classification halevy et al we compared our classifier to the classifier using just the wingbeat frequency as shown in figure our classifier consistently outperforms the wingbeat frequency classifier across the entire range of the number of training data the classification accuracy using the wingbeat classifier was if there are training data for each sex recall that the accuracy using our proposed classifier was by using the frequency spectrum instead of the wingbeat frequency we reduced the error rate by more than from to it is important to recall that in this comparison the data and the basic classifier were identical thus all the improvement can be attributed to the additional information available in the frequency spectrum beyond just the wingbeat frequency this offers additional evidence for our claim that wingbeat frequency by itself is insufficient for accurate classification in this experiment we assume the cost of female misclassification misclassifying a female as a male is the same as the cost of male misclassification misclassifying a male as a female the confusion matrix of classifying mosquitoes equal size for each sex with the same cost assumption from one experiment is shown in table i table i the confusion matrix for sex discrimination of ae aegypti mosquitoes with the decision threshold for female being same cost assumption ii the confusion matrix of sexing the same mosquitoes with the decision threshold for female being predicted class i balanced cost actual female class male female male predicted class ii asymmetric cost female male actual female class male however there are cases in which the misclassification costs are asymmetric for example when the sterile insect technique is applied to mosquito control failing to release an occasional male mosquito because we mistakenly thought it was a female does not matter too much in contrast releasing a female into the wild is a more serious mistake as it is only the females that pose a threat to human health in the cases where we have to deal with asymmetric misclassification costs we can change the decision boundary of our classifier to lower the number of misclassifications in a principled manner of course there is no free lunch and a reduction in the number of misclassifications will be accompanied by an increase in the number of misclassifications in the previous experiment with equal misclassification costs an unknown insect is predicted to belong to the class that has the higher posterior probability this is the equivalent of saying the threshold to predict an unknown insect as female is that is only when the posterior probability of belonging to the class of females is larger than will an unknown insect be predicted as a female equivalently we can replace line in table with the code in table by setting the threshold to table the decision making policy for the sex separation experiment if p threshold is a female else is a male end we can change the threshold to minimize the total cost 
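The decision policy in the passage above reduces to a single threshold comparison on the female-class posterior; a minimal sketch follows, where the threshold values are illustrative rather than the paper's, and where an application with asymmetric costs (such as a sterile-insect programme) would pick its own, much lower threshold from the stated cost preference.

```python
def predict_sex(p_female, threshold=0.5):
    """Decision policy from the sexing experiment: predict 'female' only when the
    posterior probability of the female class reaches `threshold`.  Lowering the
    threshold trades extra male misclassifications for fewer missed females."""
    return "female" if p_female >= threshold else "male"

# Symmetric costs use the usual 0.5 threshold; a lower threshold is stricter
# about letting a possible female slip through as "male".
for p in (0.10, 0.45, 0.80):
    print(p, predict_sex(p, threshold=0.5), predict_sex(p, threshold=0.2))
```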
when the costs of different misclassifications are different in sterile insect technique the goal is to reduce the number of female misclassifications this can be achieved by lowering the threshold required to predict an exemplar to be female for example we can set the threshold to be so that if the probability of an unknown exemplar belonging to a female is no less than this value it is predicted as a female while changing the threshold may result in a lower overall accuracy as more males will be misclassified as females it reduces the number of females that are misclassified as male by examining the experiment summarized in table i we can predict that by setting the threshold to be we reduce the female misclassification rate to with the male misclassification rate rising to we chose this threshold value because it gives us an approximately one in a thousand chance of releasing a female however any domain specific threshold value can be used the practitioner simply needs to state her preference in one of two intuitive and equivalent ways what is the threshold that gives me a one in some value chance of misclassifying a female as a male or for my problem misclassifying a male as a female is some value times worse than the other type of mistake what should the threshold be elkan we applied our threshold to the data which was used to produce the confusion matrix shown in table and obtained the confusion matrix shown in table as we can see of insects in this experiment males and zero females where misclassified numbers in close agreement to theory experiment insect classification with increasing number of species when discussing our we are invariably asked how accurate is it the answer to this depends on the insects to be classified for example if the classifier is used to distinguish cx stigmatosoma female from cx tarsalis male it can achieve near perfect accuracy as the two classes are radically different in their wingbeat sounds whereas when it is used to separate cx stigmatosoma female from ae aegypti female the classification accuracy will be much lower given that the two species have quite similar sounds as hinted at in figure therefore a single absolute value for classification accuracy will not give the reader a good intuition about the performance of our system instead in this section rather than reporting our classifier s accuracy on a fixed set of insects we applied our classifier to datasets with an incrementally increasing number of species and therefore increasing classification difficulty we began by classifying just two species of insects then at each step we added one more species or a single sex of a sexually dimorphic species and used our classifier to classify the increased number of species we considered a total of ten classes of insects different sexes from the same species counting as different classes exemplars in each class our classifier used both frequency spectrum and for classification the classification accuracy measured at each step and the relevant class added is shown in table note that the classification accuracy at each step is the accuracy of classifying all the species that come at and before that step for example the classification accuracy at the last step is the accuracy of classifying all ten classes of insects table classification accuracy with increasing number of classes step species added classification classification step species added cx quinquefasciatus accuracy accuracy ae aegypti musca domestica cx stigmatosoma ae aegypti cx tarsalis cx 
stigmatosoma cx cx tarsalis drosophila simulans as we can see our classifier achieves more than accuracy when classifying no more than five species of insects significantly higher than the default rate of accuracy even when the number of classes considered increases to ten the classification accuracy is never lower than again significantly higher than the default rate of note that the ten classes are not easy to separate even by human inspection among the ten species eight of them are mosquitoes six of them are from the same genus the utility of automatic insect classification the reader may already appreciate the utility of automatic insect classification however for completeness we give some examples of how the technology may be used electrical discharge insect control systems edics bug zappers are insect traps that attract and then electrocute mosquitoes they are very popular with consumers who are presumably gratified when hearing the characteristic buzz sound produced when an insect is electrocuted while most commercial devices are sold as mosquito deterrents studies have shown that as little as of the insects killed are mosquitoes frick and tallamy this is not surprising since the attractant is typically just an ultraviolet light augmenting the traps with or other chemical attractants helps but still allows the needless electrocution of beneficial insects isca technologies owned by author mn is experimenting with building a smart trap that classifies insects as they approach the trap selectively killing the target insects but blowing the insects away with compressed air as noted above sterile insect technique has been used to reduce the populations of certain target insects most notably with screwworm flies cochliomyia hominovorax and the mediterranean fruit fly ceratitis capitata the basic idea is to release sterile males into the wild to mate with wild females because the males are sterile the females will lay eggs that are either unfertilized or produce a smaller proportion of fertilized eggs leading to population declines and eventual eradication in certain areas benedict and robinson note that it is important not to release females and sexing mosquitoes is notoriously difficult researchers at the university of kentucky are experimenting with our sensors to create insectaries from which only male hatchlings can escape the idea is to use a modified edics or a high powered laser that selectively turns on and off to allow males to pass through but kills the females much of the research on insect behavior with regard to color odor is done by having human observers count insects as they move in dual choice olfactometer or on landing strips etc for example cooperband et al notes virgin female wasps were individually released downwind and the color on which they landed was recorded by a human observer there are several problems with this human time becomes a bottleneck in research human error is a possibility and for some host seeking insects the presence of a human nearby may affect the outcome of the experiment unless costly isolation is used we envision our sensor can be used to accelerate such research by making it significantly cheaper to conduct these types of experiments moreover the unique abilities of our system will allow researchers to conduct experiments that are currently impossible for example a recent paper rund et al attempted to see if there are differences in the daily flight activity patterns of anopheles gambiae mosquitoes to do this the authors placed individual 
sexed mosquitoes in small glass tubes to record their behavior however it is possible that both the small size of the glass tubes and the fact that the insects were in isolation affected the result moreover even the act of physically sexing the mosquitoes may affect them due to metabolic stress etc in contrast by using our sensors we can allow unsexed pupae to hatch out and the adults fly in cages with order of magnitude larger volumes in this way we can automatically and noninvasively sex them to produce daily flight activity plots conclusion and future work in this work we have introduced a framework that allows the inexpensive and scalable classification of flying insects we have shown experimentally that the accuracies achievable by our system are good enough to allow the development of commercial products and to be a useful tool for entomological research to encourage the adoption and extension of our ideas we are making all code data and sensor schematics freely available at the ucr computational entomology page chen moreover within the limits of our budget we will continue our practice of giving a complete system as shown in figure to any research entomologist who requests one acknowledgements we would like to thank the vodafone americas foundation the bill and melinda gates foundation and paulo research foundation fapesp for funding this research and the many faculties from the department of entomology at ucr that offered advice and expertise references banko m brill e mitigating the problem exploring the effect of training corpus size on classifier performance for natural language processing proceedings of the first international conference on human language technology research pp association for computational linguistics batista ge keogh ej a rowton e sigkdd demo sensors and software to allow computational entomology an emerging application of data mining in proceedings of the acm sigkdd international conference on knowledge discovery and data mining pp belton p costello ra flight sounds of the females of some mosquitoes of western canada entomologia experimentalis et applicata benedict m robinson a the first releases of transgenic mosquitoes an argument for the sterile insect technique trends in parasitology accessed march bidlingmayer wl day jf evans dg effect of wind velocity on suction trap catches of some florida mosquitoes journal of the american mosquito control association boll s suppression of acoustic noise in speech using spectral subtraction acoustics speech and signal processing ieee transactions bracewell rn bracewell rn the fourier transform and its applications new york vol capinera jl encyclopedia of entomology springer epsky nd morrill wl mankin r traps for capturing insects in encyclopedia of entomology pp springer netherlands chen y supporting materials https chen y hu b keogh e batista ge time series learning from a single example in proceedings of the acm sigkdd international conference on knowledge discovery and data mining pp chen y keogh e batista g http ucr insect classification contest cooperband mf hartness a lelito jp cosse aa landing surface color preferences of spathius agrili hymenoptera braconidae a parasitoid of emerald ash borer agrilus planipennis coleoptera buprestidae journal of insect behavior deakin ma formulae for insect wingbeat frequency journal of insect devroye l a probabilistic theory of pattern recognition springer vol domingos p pazzani m on the optimality of the simple bayesian classifier under loss machine learning elkan c the 
foundations of learning in international joint conference on artificial intelligence vol no pp lawrence erlbaum associates ephraim y malah d speech enhancement using a square error spectral amplitude estimator acoustics speech and signal processing ieee transactions on frick tb tallamy dw density and diversity of insects killed by suburban electric insect traps entomological news fukunaga k introduction to statistical pattern recognition access online via elsevier grimaldi rp discrete and combinatoral mathematics an applied introduction ed addisonwesley longman publishing halevy a norvig p pereira f the unreasonable effectiveness of data ieee intelligent systems hao y campana b and keogh ej monitoring and mining animal sounds in visual space journal of insect behavior kahn mc celestin w offenhauser w recording of sounds produced by certain mosquitoes science kahn mc offenhauser w the identification of certain west african mosquitos by sound amer trop ivied keogh e pazzani m learning augmented bayesian classifiers a comparison of and approaches in proceedings of the seventh international workshop on artificial intelligence and statistics pp kohavi r a study of and bootstrap for accuracy estimation and model selection in ijcai vol no pp li z zhou z shen z yao q automated of mosquito diptera culicidae wingbeat waveform by neural network intelligence applications and innovations mack yp rosenblatt m multivariate neighbor density estimates journal of multivariate analysis mankin rw machan r jones r field testing of a prototype acoustic device for detection of mediterranean fruit flies flying into a trap proc int symp fruit flies of economic importance pp mermelstein p distance measures for speech recognition psychological and instrumental pattern recognition and artificial intelligence mian ls mulla ms axelrod h chaney jd dhillon ms studies on the bioecological aspects of adult mosquitoes in the prado basin of southern california journal of the american mosquito control association moore a artificial neural network trained to identify mosquitoes in flight journal of insect behavior moore a miller rh automated identification of optically sensed aphid homoptera aphidae wingbeat waveforms ann entomol soc am moore a miller jr tabashnik be gage sh automated identification of flying insects by analysis of wingbeat frequencies journal of economic entomology njie m dilger e lindsay sw kirby mj importance of eaves to house entry by anopheline but not culicine mosquitoes j med entomol novak r the asian tiger mosquito aedes albopictus wing beats vol papathanos pa bossin hc benedict mq catteruccia f malcolm ca alphey l crisanti a sex separation strategies past experience and new approaches malar j suppl parzen e on estimation of a probability density function and annals of mathematical statistics prechelt l a quantitative study of neural network learning algorithm evaluation practices in proceedings of the int l conference on artificial neural networks pp raman dr gerhardt rr wilkerson jb detecting insect flight sounds in the field implications for acoustical counting of of the asabe reed sc williams c m chadwick l e frequency of as a character for separating species races and geographic varieties of drosophila genetics reisen w the western encephalitis mosquito culex tarsalis wing beats vol repasky ks shaw ja scheppele r melton c carsten jl spangler lh optical detection of honeybees by use of modulation of scattered laser light for locating explosives and land mines appl rosenblatt m remarks on some 
nonparametric estimates of a density function the annals of mathematical statistics rowland mw lindsay sw the circadian flight activity of aedes aegypti parasitized with the filarial nematode brugia entomology rund ssc lee sj bush br duffield ge and differences in daily flight activity and the circadian clock of anopheles gambiae mosquitoes journal of insect physiology sawedal l hall r flight tone as a taxonomic character in chironomidae diptera entomol scand suppl schaefer gw bent ga an remote sensing system for the active detection and automatic determination of insect flight trajectories iradit bull entomol res shotton j sharp t kipman a fitzgibbon a finocchio m blake a cook m moore r human pose recognition in parts from single depth images communications of the acm sotavalta o the frequency of insects contributions to the problem of insect flight acta entomol fenn taylor b geographical range and circadian rhythm nature taylor b jones mdr the circadian rhythm of flight activity in the mosquito aedes aegypti the effects of and journal of experimental biology no tsymbal a the problem of concept drift definitions and related work computer science department trinity college dublin unwin dm ellington cp an optical tachometer for measurement of the frequency of freeflying insects journal of experimental biology vapnik vn chervonenkis ay on the uniform convergence of relative frequencies of events to their probabilities theory of probability and its applications widmer g kubat m learning in the presence of concept drift and hidden contexts machine learning zhan c lu x hou m zhou x a neural network email approach acm sigops oper syst rev issn
5
polylogarithmic approximation algorithms for akanksha daniel pranabendu saket meirav zehavik abstract jul let f be a family of graphs a canonical vertex deletion problem corresponding to f is defined as follows given an undirected graph g and a weight function w v g r find a minimum weight subset s v g such that g s belongs to this is known as weighted f vertex deletion problem in this paper we devise a recursive scheme to obtain o logo n algorithms for such problems building upon the classic technique of finding balanced separators in a graph roughly speaking our scheme applies to those problems where an optimum solution s together with a set x form a balanced separator of the input graph in this paper we obtain the first o logo n approximation algorithms for the following vertex deletion problems we give an o n approximation algorithm for weighted chordal vertex deletion wcvd the vertex deletion problem to the family of chordal graphs on the way to this algorithm we also obtain a constant factor approximation algorithm for multicut on chordal graphs we give an o n approximation algorithm for weighted distance hereditary vertex deletion wdhvd also known as weighted vertex deletion this is the vertex deletion problem to the family of distance hereditary graphs or equivalently the family of graphs of rankwidth our methods also allow us to obtain in a clean fashion a o n algorithm for the weighted f vertex deletion problem when f is a minor closed family excluding at least one planar graph for the unweighted version of the problem constant factor approximation algorithms are were known fomin et focs while for the weighted version considered here an o log n log log n algorithm follows from bansal et al soda we believe that our recursive scheme can be applied to obtain o logo n algorithms for many other problems as well the research leading to these results received funding from the european research council under the european unions seventh framework programme erc grant agreement no university of bergen bergen norway university of bergen bergen norway daniello institute of mathematical sciences chennai india pranabendu university of bergen bergen norway the institute of mathematical sciences hbni chennai india saket k university of bergen bergen norway introduction let f be a family of undirected graphs then a natural optimization problem is as follows weighted f vertex deletion input an undirected graph g and a weight function w v g question find a minimum weight subset s v g such that g s belongs to the weighted f vertex deletion problem captures a wide class of node or vertex deletion problems that have been studied from the for example when f is the family of independent sets forests bipartite graphs planar graphs and chordal graphs then the corresponding vertex deletion problem corresponds to weighted vertex cover weighted feedback vertex set weighted vertex bipartization also called weighted odd cycle transversal weighted planar vertex deletion and weighted chordal vertex deletion respectively by a classic theorem of lewis and yannakakis the decision version of the weighted f vertex deletion whether there exists a set s weight at most k such that removing s from g results in a graph with property for every hereditary characterizing the graph properties for which the corresponding vertex deletion problems can be approximated within a bounded factor in polynomial time is a long standing open problem in approximation algorithms in spite of a long history of research we are still far from a 
complete characterization constant factor approximation algorithms for weighted vertex cover are known since lund and yannakakis observed that the vertex deletion problem for any hereditary property with a finite number of minimal forbidden induced subgraphs can be approximated within a constant ratio they conjectured that for every nontrivial hereditary property with an infinite forbidden set the corresponding vertex deletion problem can not be approximated within a constant ratio however it was later shown that weighted feedback vertex set which doesn t have a finite forbidden set admits a constant factor approximation thus disproving their conjecture on the other hand a result by yannakakis shows that for a wide range of graph properties approximating the minimum number of vertices to delete in order to obtain a connected graph with the property within a factor is we refer to for the precise list of graph properties to which this result applies to but it is worth mentioning the list includes the class of acyclic graphs and the class of outerplanar graphs in this paper we explore the approximability of weighted f vertex deletion for several different families f and design o logo n approximation algorithms for these problems more precisely our results are as follows let f be a finite set of graphs that includes a planar graph let f g f be the family of graphs such that every graph h g f does not contain a graph from f as a minor the vertex deletion problem corresponding to f g f is known as the weighted planar f deletion wpf the wpf problem is a very generic problem and by selecting different sets of forbidden minors f one can obtain various fundamental problems such as weighted vertex cover weighted feedback vertex set or weighted treewidth our first result is a randomized o n deterministic o n approximation algorithm for wpf for any finite f that contains a planar graph we remark that a different approximation algorithm for the same class of problems with a a graph property is simply a family of graphs and it is called if there exists an infinite number of graphs that are in as well as an infinite number of graphs that are not in a graph property is called hereditary if g implies that every induced subgraph of g is also in slightly better approximation ratio of o log n log log n follows from recent work of bansal reichman and umboh see also the discussion following theorem therefore our first result should be interpreted as a clean and gentle introduction to our methods we give an o n approximation algorithm for weighted chordal vertex deletion wcvd the vertex deletion problem corresponding to the family of chordal graphs on the way to this algorithm we also obtain a constant factor approximation algorithm for weighted multicut in chordal graphs we give an o n approximation algorithm for weighted distance hereditary vertex deletion wdhvd this is also known as the weighted vertex deletion problem this is the vertex deletion problem corresponding to the family of distance hereditary graphs or equivalently graphs of rankwidth all our algorithms follow the same recursive scheme that find well structured balanced separators in the graph by exploiting the properties of the family in the following we first describe the methodology by which we design all these approximation algorithms then we give a brief overview consisting of known results and our contributions for each problem we study our methods multicommodity theorems are a classical technique in designing approximation algorithms which 
was pioneered by leighton and rao in their seminal paper this approach can be viewed as using balanced vertex or edge in a graph to obtain a approximation algorithm in a typical application the optimum solution s forms a balanced separator of the graph thus the idea is to find an minimum cost balanced separator w of the graph and add it to the solution and then recursively solve the problem on each of the connected components this leads to an o logo n approximation algorithm for the problem in question our recursive scheme is a strengthening of this approach which exploits the structural properties of the family here the optimum solution s need not be a balanced separator of the graph indeed a balanced separator of the graph could be much larger than s rather s along with a possibly large but subset of vertices x forms a balanced separator of the graph we then exploit the presence of such a balanced separator in the graph to compute an approximate solution consider a family f for which weighted f vertex deletion is amenable to our approach and let g be an instance of this problem let s be the approximate solution that we will compute our approximation algorithm has the following steps find a set x such that g x has a balanced separator w which is not too costly next compute the balanced separator w of g x using the known factor o log n approximation algorithm or deterministic o log n algorithm for weighted vertex separators then add w into the solution set s and recursively solve the problem on each connected component of g x s let s be the solutions returned by the recursive calls we add s to the solution finally we add x back into the graph and consider the instance g s x observe that v g s can be partitioned into v x where g v belongs to f and x is a set we call such instances the special case of weighted f vertex deletion we apply an approximation algorithm that exploits the structural properties of the special case to compute a solution now consider the problem of finding the structure x one way is to enumerate all the candidates for x and then pick the one where g x has a balanced vertex separator of least cost this a balanced vertex separator is a set of vertices w such that every connected component of g w contains at most half of the vertices of separator plays the role of w however the number of candidates for x in a graph could be too many to enumerate in polynomial time for example in the case of weighted chordal vertex deletion the set x will be a clique in the graph and the number of maximal cliques n in a graph on n vertices could be as many as hence we can not enumerate and test every candidate structure in polynomial time however we can exploit certain structural properties of family f to reduce the number of candidates for x in the graph in our problems we tidy up the graph by removing short obstructions that forbid the graph from belonging to the family then one can obtain an upper bound on the number of candidate structures in the above example recall that a graph g is chordal if and only if there are no induced cycles of length or more it is known that a graph g without any induced cycle of length has at most o maximal cliques observe that we can greedily compute a set of vertices which intersects all induced cycles of length in the graph therefore at the cost of factor in the approximation ratio we can ensure that the graph has only polynomially many maximal cliques hence one can enumerate all maximal cliques in the remaining graph to test for x next consider the task 
of solving an instance of the special case of the problem we again apply a recursive scheme but now with the advantage of a much more structured graph by a careful modification of an lp solution to the instance we eventually reduce it to instances of weighted multicut in the above example for weighted chordal vertex deletion we obtain instances of weighted multicut on a chordal graph we follow this approach for all three problems that we study in this paper we believe our recursive scheme can be applied to obtain o logo n algorithms for weighted f vertex edge deletion corresponding to several other graph families weighted planar f deletion let f be a finite set of graphs containing a planar graph formally weighted planar f deletion is defined as follows weighted planar f deletion wpf input an undirected graph g and a weight function w v g question find a minimum weight subset s v g such that g s does not contain any graph in f as a minor the wpf problem is a very generic problem that encompasses several known problems to explain the versatility of the problem we require a few definitions a graph h is called a minor of a graph g if we can obtain h from g by a sequence of vertex deletions edge deletions and edge contractions and a family of graphs f is called minor closed if g f implies that every minor of g is also in given a graph family f by forbidminor f we denote the family of graphs such that g f if and only if g does not contain any graph in forbidminor f as a minor by the celebrated graph minor theorem of robertson and seymour every minor closed family f is characterized by a finite family of forbidden minors that is forbidminor f has finite size indeed the size of forbidminor f depends on the family now for a finite collection of graphs f as above we may define the weighted f deletion problem and observe that even though the definition of weighted f deletion we only consider finite sized f this problem actually encompasses deletion to every minor closed family of graphs let g be the set of all finite undirected graphs and let l be the family of all finite subsets of g thus every element f l is a finite set of graphs and throughout the paper we assume that f is explicitly given in this paper we show that when f l contains at least one planar graph then it is possible to obtain an o logo n approximation algorithm for wpf the case where f contains a planar graph while being considerably more restricted than the general case already encompasses a number of the instances of wpf for example when f a complete graph on two vertices this is the weighted vertex cover problem when f a cycle on three vertices this is the weighted feedback vertex set problem another fundamental problem which is also a special case of wpf mfd is weighted vertex deletion or weighted here the task is to delete a minimum weight vertex subset to obtain a graph of treewidth at most since any graph of treewidth excludes a grid as a minor we have that the set f of forbidden minors of treewidth graphs contains a planar graph vertex deletion plays an important role in generic efficient polynomial time approximation schemes based on bidimensionality theory among other examples of planar f deletion problems that can be found in the literature on approximation and parameterized algorithms are the cases of f being and which correspond to removing vertices to obtain an outerplanar graph a graph a diamond graph and a graph of pathwidth respectively apart from the case of weighted vertex cover and weighted feedback vertex set 
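The recursive scheme spelled out in the preceding steps can be summarized as schematic pseudocode (Python syntax, with every problem-specific subroutine left abstract); this is an outline of the strategy described above, not a runnable implementation, and all helper names are placeholders.

```python
def approximate_deletion_set(G, w):
    """Schematic outline of the recursive scheme.  Placeholders:
       in_target_family(G)            -- membership test for the family F
       guess_structure(G)             -- the set X whose removal leaves a cheap balanced separator
       approx_balanced_separator(H,w) -- the known O(log n)-approximate weighted balanced separator
       solve_special_case(H, X, w)    -- the structured instance whose vertex set splits into X
                                         and a part already inducing a graph in F"""
    if in_target_family(G):
        return set()
    X = guess_structure(G)                        # step 1: find the structure X
    W = approx_balanced_separator(G - X, w)       # step 2: balanced separator of G - X
    S = set(W)                                    # step 3: W joins the solution ...
    for C in connected_components((G - X) - W):
        S |= approximate_deletion_set(C, w)       # ... and we recurse on each component
    # step 4: G - S now decomposes into X plus graphs in F, i.e. the special case.
    S |= solve_special_case(G - S, X, w)
    return S
```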
there was not much progress on of wpf until the work of fiorini joret and pietropaoli which gave a constant factor approximation algorithm for the case of wpf where f is a diamond graph a graph with two vertices and three parallel edges in fomin et al considered planar f deletion the unweighted version of wpf in full generality and designed a randomized deterministic o n o n approximation algorithm for it later fomin et al gave a randomized constant factor approximation algorithm for planar f deletion our algorithm for wpf extends this result to the weighted setting at the cost of increasing the approximation factor to logo theorem for every set f l wpf admits a randomized deterministic o n factor o n approximation algorithm we mention that theorem is subsumed by a recent related result of bansal reichman and umboh they studied the edge deletion version of the vertex deletion problem under the name bounded treewidth interdiction problem and gave a bicriteria approximation algorithm in particular for a graph g and an integer they gave a polynomial time algorithm that finds a subset of edges f of g such that o log n log log n opt and the treewidth of g f is o log with some additional effort their algorithm can be made to work for the weighted vertex deletion problem as well in our setting where is a fixed constant this immediately implies a factor o log n log log n approximation algorithm for wpf while the statement of theorem is subsumed by the proof gives a simple and clean introduction to our methods weighted chordal vertex deletion formally the weighted chordal vertex deletion problem is defined as follows weighted chordal vertex deletion wcvd input an undirected graph g and a weight function w v g question find a minimum weight subset s v g such that g s is a chordal graph the class of chordal graphs is a natural class of graphs that has been extensively studied from the viewpoints of graph theory and algorithm design many important problems that are on general graphs such as independent set and graph coloring are solvable in polynomial time once restricted to the class of chordal graphs recall that a graph is chordal if and only if it does not have any induced cycle of length or more thus chordal vertex deletion cvd can be viewed as a natural variant of the classic feedback vertex set one can run their algorithm first and remove the solution output by their algorithm to obtain a graph of treewidth at most o log then one can find an optimal solution using standard dynamic programming fvs indeed while the objective of fvs is to eliminate all cycles the cvd problem only asks us to eliminate induced cycles of length or more despite the apparent similarity between the objectives of these two problems the design of approximation algorithms for wcvd is very challenging in particular chordal graphs can be a clique is a chordal graph as we can not rely on the sparsity of output our approach must deviate from those employed by approximation algorithms from fvs that being said chordal graphs still retain some properties that resemble those of trees and these properties are utilized by our algorithm prior to our work only two approximation algorithms for cvd were known the first one by jansen and pilipczuk is a deterministic o log opt log n approximation algorithm and the second one by agrawal et al is a deterministic o opt n approximation algorithm the second result implies that cvd admits an o n log n approximation in this paper we obtain the first o logo n algorithm for wcvd theorem cvd admits 
a deterministic o n approximation algorithm while this approximation algorithm follows our general scheme it also requires us to incorporate several new ideas in particular to implement the third step of the scheme we need to design a different o log n approximation algorithm for the special case of wcvd where the of the input graph g can be partitioned into two sets x and v g x such that g x is a clique and g v g x is a chordal graph this approximation algorithm is again based on recursion but it is more involved at each recursive call it carefully manipulates a fractional solution of a special form moreover to ensure that its current problem instance is divided into two subinstances that are independent and simpler than their origin we introduce multicut constraints in addition to these constraints we keep track of the complexity of the subinstances which is measured via the cardinality of the maximum independent set in the graph our multicut constraints result in an instance of weighted multicut which we ensure is on a chordal graph formally the weighted multicut problem is defined as follows weighted multicut input an undirected graph g a weight function w v g r and a set t sk tk of k pairs of vertices of question find a minimum weight subset s v g such that for any pair si ti t g s does not have any path between si and ti for weighted multicut on chordal graphs no approximation algorithm was previously known we remark that weighted multicut is on trees and hence it is also on chordal graphs we design the first such algorithm which our main algorithm employs as a black box theorem weighted multicut admits a approximation algorithm on chordal graphs this algorithm is inspired by the work of garg vazirani and yannakakis on weighted multicut on trees here we carefully exploit the characterization of the class of chordal graphs as the class of graphs that admit clique forests we believe that this result is of independent interest the algorithm by garg vazirani and yannakakis is a classic algorithm a more recent algorithm by golovin nagarajan and singh uses total modularity to obtain a different algorithm for multicut on trees if opt log n we output a greedy solution to the input graph and otherwise we have that opt n n log n hence we call the o opt n approximation algorithm weighted distance hereditary vertex deletion we start by formally defining the weighted distance hereditary vertex deletion problem weighted distance hereditary vertex deletion wdhvd input an undirected graph g and a weight function w v g question find a minimum weight subset s v g such that g s is a distance hereditary graph a graph g is a distance hereditary graph also called a completely separable graph if the distances between vertices in every connected induced subgraph of g are the same as in the graph distance hereditary graphs were named and first studied by hworka however an equivalent family of graphs was earlier studied by olaru and sachs and shown to be perfect it was later discovered that these graphs are precisely the graphs of rankwidth rankwidth is a graph parameter introduced by oum and seymour to approximate yet another graph parameter called cliquewidth the notion of cliquewidth was defined by courcelle and olariu as a measure of how the input graph is this is similar to the notion of treewidth which measures how the input graph is one of the main motivations was that several problems become tractable on the family of cliques complete graphs the assumption was that these algorithmic properties extend 
to graphs however computing cliquewidth and the corresponding cliquewidth decomposition seems to be computationally intractable this then motivated the notion of rankwidth which is a graph parameter that approximates cliquewidth well while also being algorithmically tractable for more information on cliquewidth and rankwidth we refer to the surveys by et al and oum as algorithms for vertex deletion are applied as subroutines to solve many graph problems we believe that algorithms for weighted vertex deletion will be useful in this respect in particular vertex deletion has been considered in designing efficient approximation kernelization and fixed parameter tractable algorithms for wpf and its unweighted counterpart planar f deletion along similar lines we believe that and its unweighted counterpart will be useful in designing efficient approximation kernelization and fixed parameter tractable algorithms for weighted f vertex deletion where f is characterized by a finite family of forbidden vertex minors recently kim and kwon designed an o log n approximation algorithm for distance hereditary vertex deletion dhvd this result implies that dhvd admits an o log n approximation algorithm in this paper we take first step towards obtaining good approximation algorithm for by designing a o logo n approximation algorithm for wdhvd theorem wdhvd or admits an o n approximation algorithm we note that several steps of our approximation algorithm for can be generalized for an approximation algorithm for and thus we believe that our approach should yield an o logo n approximation algorithm for we leave that as an interesting open problem for the future preliminaries for a positive integer k we use k as a shorthand for k given a function f a b and a subset a we let f denote the function f restricted to the domain graphs given a graph g we let v g and e g denote its and respectively in this paper we only consider undirected graphs we let n g denote the number of vertices in the graph g where g will be clear from context the open neighborhood or simply the neighborhood of a vertex v v g is defined as ng v w v w e g the closed neighborhood of v is defined as ng v ng v v the degree of v is defined as dg v v we can extend the definition of the to a set of vertices as follows given s neighborhood of a vertexs a subset u v g ng u ng u and ng u ng u the induced subgraph g u is the graph with u and u u u and u e g moreover we define g u as the induced subgraph g v g u we omit subscripts when the graph g is clear from context for graphs g and h by g h we denote the graph with vertex set as v g v h and edge set as e g e h an independent set in g is a set of vertices such that there is no edge in g between any pair of vertices in this set the independence number of g denoted by g is defined as the cardinality of the largest independent set in a clique in g is a set of vertices such that there is an edge in g between every pair of vertices in this set a path p x in g is a subgraph of g where v p x v g and e p x x e g where n the vertices and x are called the endpoints of the path p and the remaining vertices in v p are called the internal vertices of p we also say that p is a path between and x or connects and x a cycle c x in g is a subgraph of g where v c x v g and e c x x x e g it is a path with an additional edge between and x the graph g is connected if there is a path between every pair of vertices in g otherwise g is disconnected a connected graph without any cycles is a tree and a collection of trees is a 
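The graph notation just introduced translates directly into code. The following helpers are a minimal illustration (our own, with graphs stored as dictionaries mapping each vertex to the set of its neighbors); later sketches use the same representation.

```python
def open_neighborhood(adj, v):
    """N_G(v): the neighbors of v."""
    return set(adj[v])

def closed_neighborhood(adj, v):
    """N_G[v] = N_G(v) together with v itself."""
    return set(adj[v]) | {v}

def induced_subgraph(adj, U):
    """G[U]: the subgraph induced by the vertex set U."""
    U = set(U)
    return {v: adj[v] & U for v in U}

def delete_vertices(adj, U):
    """G - U: the subgraph induced by V(G) minus U."""
    return induced_subgraph(adj, set(adj) - set(U))

def is_independent_set(adj, S):
    """True iff no edge of G joins two vertices of S."""
    S = set(S)
    return all(not (adj[v] & (S - {v})) for v in S)

def is_clique(adj, S):
    """True iff every two distinct vertices of S are adjacent."""
    S = set(S)
    return all((S - {v}) <= adj[v] for v in S)
```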
forest. A maximal connected subgraph of G is called a connected component of G. Given a function f : V(G) → R and a subset U ⊆ V(G), we denote f(U) = Σ_{v ∈ U} f(v). Moreover, we say that a subset U ⊆ V(G) is a balanced separator for G if for each connected component C of G − U it holds that |V(C)| ≤ |V(G)|/2. We refer the reader to standard texts for details on graph-theoretic notation and terminology that are not explicitly defined here.

Forest decompositions. A forest decomposition of a graph G is a pair (F, β), where F is a forest and β is a function assigning to every node of F a subset of V(G), its bag, that satisfies the following: (i) the union of all bags is V(G); (ii) for any edge {v, u} ∈ E(G) there is a node w ∈ V(F) such that v, u ∈ β(w); and (iii) for any v ∈ V(G), the collection of nodes T_v = {u ∈ V(F) : v ∈ β(u)} induces a subtree of F. For v ∈ V(F) we call β(v) the bag of v, and for the sake of clarity of presentation we sometimes use v and β(v) interchangeably. We refer to the vertices of F as nodes. A tree decomposition is a forest decomposition in which F is a tree. For a graph G, by tw(G) we denote the minimum, over all tree decompositions of G, of the maximum bag size minus one in that decomposition; this quantity is the treewidth of G.

Minors. Given a graph G and an edge {u, v} ∈ E(G), the graph G/{u, v} denotes the graph obtained from G by contracting the edge {u, v}; that is, the vertices u and v are deleted from G, and a new vertex uv is added that is adjacent to all the vertices that were neighbors of u or v in G (other than u and v themselves). A graph H obtained from G by a sequence of edge contractions is said to be a contraction of G. A graph H is a minor of a graph G if H is a contraction of some subgraph of G. We say that a graph G is F-minor-free when F is not a minor of G, and given a family F of graphs, we say that G is F-minor-free if no F ∈ F is a minor of G. It is well known that if H is a minor of G, then tw(H) ≤ tw(G). A graph is planar if and only if it is {K5, K3,3}-minor-free; here K5 is a clique on five vertices and K3,3 is a complete bipartite graph with both sides of the bipartition having three vertices.

Chordal graphs. Let G be a graph. For a cycle C on at least four vertices, we say that {u, v} ∈ E(G) is a chord of C if u, v ∈ V(C) but {u, v} ∉ E(C). A cycle C is chordless if it contains at least four vertices and has no chords. The graph G is a chordal graph if it has no chordless cycle as an induced subgraph. A clique forest of G is a forest decomposition of G in which every bag is a maximal clique. The following lemma shows that the class of chordal graphs is exactly the class of graphs that admit a clique forest.

Lemma. A graph G is a chordal graph if and only if G has a clique forest. Moreover, a clique forest of a chordal graph can be constructed in polynomial time.

Given a subset U ⊆ V(G), we say that U intersects a chordless cycle C in G if U ∩ V(C) ≠ ∅. Observe that if U intersects every chordless cycle of G, then G − U is a chordal graph.

Approximation algorithm for WPF. In this section we prove the theorem for WPF stated in the introduction. We can assume that the weight w(v) of each vertex v ∈ V(G) is positive, since otherwise we can insert v into any solution. Below we state a known result which will be useful in our algorithm.

Proposition. Let F be a finite set of graphs such that F contains a planar graph. Then any graph G that excludes every graph in F as a minor satisfies tw(G) ≤ c', where c' = c'(F) is a constant depending only on F.

We let c' = c'(F) be the constant given by this proposition. The approximation algorithm for WPF comprises two components. The first component handles the special case where the vertex set of the input graph G can be partitioned into two sets X and V(G) \ X such that |X| is bounded by a constant depending only on F and H = G[V(G) \ X] is an F-minor-free graph. We note that there can be edges between vertices in X and vertices in H. We show that for these special instances, in polynomial time, we can compute the weight of an optimum solution and a set realizing it. The
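The clique-forest lemma above rests on the classical fact that chordal graphs are exactly the graphs admitting a perfect elimination ordering, which maximum cardinality search finds in polynomial time. The sketch below is our own rendering, not the paper's construction (adjacency is a dictionary of neighbor sets); the maximal cliques needed for a clique forest can then be read off as the sets consisting of a vertex together with its later neighbors, keeping only the inclusion-maximal ones.

```python
def maximum_cardinality_search(adj):
    """Return a visit order of the vertices by maximum cardinality search."""
    visited, order = set(), []
    weight = {v: 0 for v in adj}
    while len(order) < len(adj):
        # pick an unvisited vertex with the most already-visited neighbors
        v = max((u for u in adj if u not in visited), key=lambda u: weight[u])
        visited.add(v)
        order.append(v)
        for u in adj[v]:
            if u not in visited:
                weight[u] += 1
    return order

def is_chordal(adj):
    """Chordality test: the reverse MCS order must be a perfect elimination
    ordering, i.e. every vertex's later neighbors must form a clique."""
    peo = list(reversed(maximum_cardinality_search(adj)))
    position = {v: i for i, v in enumerate(peo)}
    for v in peo:
        later = [u for u in adj[v] if position[u] > position[v]]
        for i in range(len(later)):
            for j in range(i + 1, len(later)):
                if later[j] not in adj[later[i]]:
                    return False  # witnesses a chordless cycle
    return True
```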
second component is a recursive algorithm that solves general instances of the problem here we gradually disintegrate the general instance until it becomes an instance of the special type where we can resolve it in polynomial time more precisely for each guess of c sized subgraph m of g we find a small separator s using an approximation algorithm that together with m breaks the input graph into two graphs significantly smaller than their origin it first removes m s and solves each of the two resulting subinstances by calling itself recursively then it inserts m back into the graph and uses the solutions it obtained from the recursive calls to construct an instance of the special case which is then solved by the first component constant sized graph f free graph we first handle the special case where the input graph g consists of a graph m of size at most c and an f free graph we refer to this algorithm as more precisely along with the input graph g and the weight function w we are also given a graph m with at most c vertices and an f free graph h such that v g v m v h where the v m and v h are disjoint note that the e g may contain edges between vertices in m and vertices in we will show that such instances may be solved optimally in polynomial time we start with the following easy observation observation let g be a graph with v g x y such that c and g y is an f free graph then the treewidth of g is at most lemma let g be a graph of treewidth t with a weight function w on the vertices and let f be a finite family of graphs then one can compute a minimum weight vertex set s such that g s is f free in time f q t n where n is the number of vertices in g and q is a constant that depends only on f proof this follows from the fact that finding such a set s is expressible as an formula whose length q depends only on the family f then by theorem we can compute an optimal sized set s in time f q t now we apply the above lemma to the graph g and the family f and obtain a minimum weight set s such that g s is f free general graphs we proceed to handle general instances by developing a d approximation algorithm for wpf thus proving the correctness of theorem the exact value of the constant d will be determined later recursion we define each call to our algorithm to be of the form where is an instance of wpf such that is an induced subgraph of g and we denote goal for each recursive call we aim to prove the following lemma returns a solution that is at least opt and at most moreover it returns a subset u v that realizes the solution d opt at each recursive call the size of the graph becomes smaller thus when we prove that lemma is true for the current call we assume that the approximation factor is bounded by d b opt for any call where the size n b of the of its graph is strictly smaller than log n termination in polynomial time we can test whether has a minor f f furthermore for each m v g of size at most c we can check if g m has a minor f f if g m is f free then we are in a special instance where g m is f minor free and m is a constant sized graph we optimally resolve this instance in polynomial time using the algorithm since we output an optimal sized solution in the base cases we thus ensure that at the base case of our induction lemma holds recursive call for the analysis of a recursive call let s denote a hypothetical set that realizes the optimal solution opt of the current instance let f be a forest decomposition of s of width at most c whose existence is guaranteed by proposition using 
standard arguments on forests we have the following observation observation there exists a node v v f such that v is a balanced separator for from observation we know that there exists a node v v f such that v is a balanced separator for s this together with the fact that s has treewidth at most c results in the following observation observation there exist a subset m v of size at most and a subset s v of weight at most opt such that m s is a balanced separator for this gives us a polynomial time algorithm as stated in the following lemma lemma there is a deterministic randomized algorithm which in finds m of size at most c and a subset s v m of weight at most q log opt q log opt for some fixed constant q q such that m s is a balanced separator for of size at most c in time o nc for proof note that we can enumerate every m v each such m we can either run the randomized q log approximation algorithm by feige et al or the deterministic q log approximation algorithm by leighton and rao to find a balanced separator sm of m here q and q are fixed constants by observation is a set s in sm m v and m such that w s w s q log opt thus the desired output is a pair m s where m is one of the vertex subset of size at most c such that sm we call the algorithm in lemma to obtain a pair m s since m s is a balanced separator for we can partitionsthe set of connected components of m s into two sets s and such that for v a and v a it holds that where and we remark that we use different algorithms for finding a balanced separator in lemma based on whether we are looking for a randomized algorithm or a deterministic algorithm next we define two inputs of the general case of wpf and let and denote the optimal solutions to and respectively observe that since it holds that opt we solve each of the subinstances by recursively calling algorithm by the inductive hypothesis we thus obtain two sets and such that and are f free graphs and and we proceed by defining an input of the special case of wpf j m observe that and are f free graphs and there are no edges between vertices in and vertices in in m and m is of constant size therefore we resolve this instance by calling algorithm we thus b such that m s b is a f graph and s b opt obtain a set s since m n and the optimal solution of each of the special subinstances is at most opt observe that any obstruction in s is either completely contained in or completely contained in or it contains at least one vertex from m this observation along with b is a f free graph implies that t the fact that m s b thus it is now sufficient to show that is a f free graph where t s d w t log n opt by the discussion above we have that b t s s d q log n opt log log opt recall that and opt thus we have that t q log opt log opt opt log opt log opt q log d log opt it is sufficient to ensure that q q log log which can be done by fixing d log log if we use the o log n approximation algorithm by feige et al for finding a balance separator in lemma then we can do the analysis similar to the deterministic case and obtain a randomized n approximation algorithm for wpf overall we conclude that to ensure that t d weighted chordal vertex deletion on general graphs in this section we prove theorem clearly we can assume that the weight w v of each vertex v v g is positive else we can insert v into any solution roughly speaking our approximation algorithm consists of two components the first component handles the special case where the input graph g consists of a clique c and a chordal graph here we also 
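The recursive call just analyzed can be summarized as the following template. This is only our high-level sketch of the divide-and-conquer scheme under stated assumptions: approx_balanced_separator (returning the pair (M, S) from the lemma), solve_special_instance (the polynomial-time algorithm for a constant-size part plus an F-minor-free part), and is_special_instance are assumed black boxes, the greedy balancing of components stands in for the partition into two parts described in the text, and graphs are adjacency dictionaries.

```python
from collections import deque

def induced(G, U):
    U = set(U)
    return {v: G[v] & U for v in U}

def connected_components(G):
    seen, comps = set(), []
    for s in G:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            v = queue.popleft()
            for u in G[v]:
                if u not in seen:
                    seen.add(u)
                    comp.add(u)
                    queue.append(u)
        comps.append(comp)
    return comps

def solve_wpf(G, w, approx_balanced_separator, solve_special_instance,
              is_special_instance):
    """Divide-and-conquer template for WPF (a sketch, not the paper's code)."""
    special_part = is_special_instance(G)
    if special_part is not None:
        # Base case: G is a constant-size part plus an F-minor-free part.
        return solve_special_instance(G, w, special_part)

    # A small part M and a low-weight set S whose joint removal is balanced.
    M, S = approx_balanced_separator(G, w)

    # Split the components of G - (M ∪ S) into two roughly equal halves.
    A1, A2 = set(), set()
    for comp in sorted(connected_components(induced(G, set(G) - M - S)),
                       key=len, reverse=True):
        (A1 if len(A1) <= len(A2) else A2).update(comp)

    # Recurse on the two halves.
    S1 = solve_wpf(induced(G, A1), w, approx_balanced_separator,
                   solve_special_instance, is_special_instance)
    S2 = solve_wpf(induced(G, A2), w, approx_balanced_separator,
                   solve_special_instance, is_special_instance)

    # Reinsert M and resolve the remaining special instance.
    rest = (A1 - S1) | (A2 - S2) | M
    S3 = solve_special_instance(induced(G, rest), w, M)
    return S | S1 | S2 | S3
```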
assume that the input graph has no short chordless cycle this component is comprised of a recursive algorithm that is based on the method of divide and conquer the algorithm keeps track of a fractional solution x of a special form that it carefully manipulated at each recursive call and which is used to analyze the approximation ratio in particular we ensure that x does not assign high values and that it assigns to vertices of the clique c as well as vertices of some other cliques to divide a problem instance into two instances we find a maximal clique m of the chordal graph h that breaks h into two simpler chordal graphs the clique c remains intact at each recursive call and the maximal clique m is also a part of both of the resulting instances thus to ensure that we have simplified the problem we measure the complexity of instances by examining the maximum size of an independent set of their graphs since the input graph has no short chordless cycle the maximum depth of the recursion tree is bounded by o log n moreover to guarantee that we obtain instances that are independent we incorporate multicut constraints while ensuring that we have sufficient budget to satisfy them we ensure that these multicut constraints are associated with chordal graphs which allows us to utilize the algorithm we design in section the second component is a recursive algorithm that solves general instances of the problem initially it easily handles short chordless cycles then it gradually disintegrates a general instance until it becomes an instance of the special form that can be solved using the first component more precisely given a problem instance the algorithm divides it by finding a maximal clique m using an exhaustive search which relies on the guarantee that g has no short chordless cycle and a small separator s using an approximation algorithm that together break the input graph into two graphs significantly smaller than their origin it first removes m s and solves each of the two resulting subinstances by calling itself recursively then it inserts m back into the graph and uses the solutions it obtained from the recursive calls to construct an instance of the special case solved by the first component graphs in this subsection we handle the special case where the input graph g consists of a clique c and a chordal graph more precisely along with the input graph g and the weight function w we are also given a clique c an a chordal graph h such that v g v c v h where the v c and v h are disjoint here we also assume that g has no chordless cycle on at most vertices note that the e g may contain edges between vertices in c and vertices in we call this special case the special case our objective is to prove the following result lemma the special case of wcvd admits an o log n approximation algorithm we assume that n else the input instance can be solve by let c be a fixed constant to be determined in the rest of this subsection we design a c log approximation algorithm for the special case of wcvd recursion our approximation algorithm is a recursive algorithm we call our algorithm and define each call to be of the form c h x here is an induced this assumption simplifies some of the calculations ahead subgraph of g such that v c v and h is an induced subgraph of the argument x is discussed below we remark that we continue to use n to refer to the size of the of the input graph g rather than the current graph arguments while the execution of our algorithm progresses we keep track of two arguments the size of 
a maximum independent set of the current graph denoted by and a fractional solution x due to the special structure of the computation of is simple observation the measure can be computed in polynomial time proof any maximum independent set of consists of at most one vertex from c and an independent set of h it is well known that the computation of the size of a maximum independent set of a chordal graph can be performed in polynomial time thus we can compute h in polynomial time next we iterate over every vertex v v c and we b for the graph h b h v in polynomial time since h b is a chordal compute h graph overall we return max h c the necessity of tracking stems from the fact that our recursive algorithm is based on the method of and to ensure that when we divide the current instance into two instances we obtain two simpler instances we need to argue that some aspect of these instances has indeed been simplified although this aspect can not be the size of the instance since the two instances can share many common vertices we show that it can be the size of a maximum independent set a fractional solution x is a function x v such that for every chordless cycle q of that x v q an optimal fractional solution minimizes the weight pg it holds w x v x v clearly the solution to the instance of wcvd is at least as large as the weight of an optimal fractional solution although we initially compute an optimal fractional solution x at the initialization phase that is described below during the execution of our algorithm we manipulate this solution so it may no longer be optimal prior to any call to with the exception of the first call we ensure that x satisfies the following invariants invariant for any v v it holds that x v n is the depth of the current recursive call in the recursion n here invariant for any v v c it holds that x v goal the depth of the recursion tree will be bounded by q log n for some fixed constant q the correctness of this claim is proved when we explain how to perform a recursive call for each recursive call c h x with the exception of the first call we aim to prove the following lemma for any q log n each recursive call to of depth n returns a solution that is at least opt and at most c log x moreover it returns a subset u v that realizes the solution at the initialization phase we see that in order to prove lemma it is sufficient to prove lemma initialization initially the graphs and h are simply set to be the input graphs g and h and the weight function is simply set to be input weight function moreover we compute an optimal fractional solution x xinit by using the ellipsoid method recall that the following claim holds the depth of the first call is defined to be observation the solution of the instance of wcvd is lower bounded by xinit thus to prove lemma it is sufficient to return a solution that is at least opt and at most c log n x we would like to proceed by calling our algorithm recursively for this purpose we first need to ensure that x satisfies the and invariants to which end we use the following notation we let h x v v x v c log n denote the set of vertices to which x assigns high values moreover given a clique m in we let x m v denote the function that assigns to any vertex in m and max x u x v to any other vertex v v now to adjust x to be of the desired form both at this phase and at later recursive calls we rely on the two following lemmata x b h x w b g lemma define g b g b and x b then c log n w b h x c log n x proof by the definition of h x it holds that b x 
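The observation above reduces computing the measure to maximum-independent-set computations in chordal graphs, where the classical greedy over a perfect elimination ordering (due to Gavril) is exact. The sketch below is our own illustration; it reuses maximum_cardinality_search from the chordality sketch earlier, and C and H_vertices denote the clique part and the chordal part of the current graph.

```python
def max_independent_set_chordal(adj, peo):
    """Greedy over a perfect elimination ordering: take each vertex whose
    neighborhood has not been blocked yet; exact on chordal graphs."""
    taken, blocked = set(), set()
    for v in peo:
        if v not in blocked:
            taken.add(v)
            blocked |= adj[v]
    return taken

def alpha_clique_plus_chordal(adj, C, H_vertices):
    """Independence number of a graph split into a clique C and a chordal
    part H: either the best independent set avoids C entirely, or it uses
    exactly one v in C together with an independent set of H - N[v]."""
    def alpha_chordal(U):
        sub = {v: adj[v] & U for v in U}
        peo = list(reversed(maximum_cardinality_search(sub)))
        return len(max_independent_set_chordal(sub, peo))

    best = alpha_chordal(set(H_vertices))
    for v in C:
        best = max(best, 1 + alpha_chordal(set(H_vertices) - adj[v] - {v}))
    return best
```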
x b x h x c log n x n h x thus c log n thus it is safe to update to h x to g b h to h h x and x to g b where we ensure that once we obtain a solution to the new instance we add w h x to this solution and h x to the set realizing it lemma given a clique m in the function x m is a valid fractional solution such that x m x v x proof to prove that x m is a valid fractional solution let q be some chordless cycle in we need to show that x m v q since m is a clique q can contain at most two vertices from m thus since x is a valid fractional solution it holds that x v q v m x u by the definition x m this fact implies that v q v q m x u x u min n n min c log n c log n where the last inequality relies on the assumption n for the proof of the second part of the claim note that x m x v m x v x next it is possible to call recursively with the fractional solution x c in the context of the invariant observe that indeed for any v v it now holds that x c v x u x v n n n n for similarly by lemma x c n w x for it is also clear that g thus if lemma is true we return a solution that is at least opt and at most c log n w x as desired in other words to prove lemma it is sufficient that we next focus only on the proof of lemma the proof of this lemma is done by induction when we consider some recursive call we assume that the solutions returned by the additional recursive calls that it performs e such that g e comply with the demands of the which are associated with graphs g lemma termination once becomes a chordal graph we return as our solution and as the set that realizes it clearly we thus satisfy the demands of lemma in fact we thus also ensure that the execution of our algorithm terminates once lemma if then is a chordal graph c m figure subinstances created by a recursive call proof suppose by way of contradiction that is not a chordal graph then it contains a chordless cycle q since is an induced subgraph of g where g is assumed to exclude any chordless cycle on at most vertices we have that q note that if we traverse q in some direction and insert every second vertex on q into a set excluding the last vertex in case q is odd we obtain an independent set thus we have that g which is a contradiction thus since we will ensure that each recursive calls is associated with a graph whose independence number is at most the independence number of the current graph we have the following observation observation the maximum depth of the recursion tree is bounded by q log n for some fixed constant recursive call since h is a chordal graph it admits a clique forest lemma in particular it contains only o n maximal cliques and one can find the set of these maximal cliques in polynomial time by standard arguments on trees we deduce that h has a maximal clique b and m such that after we remove m from we obtain two not necessarily connected graphs h b such that h b h b h and that the clique m can be found in polynomial time h b v m v c h v h b v m v h b v m v c let g v h b and h v v m and observe that g here the last inequality holds because else by lemma the execution should have already terminated we proceed by replacing x by for the sake of clarity we denote by lemmata and to prove lemma it is now sufficient to return a solution that is at least opt and c log n c log n at most log w along with a c log n c log n c log n c log n set that realizes it moreover for any v v it holds that v c log n c log n note that by observation by setting c we have c log n c log n c log n c log n c log n c log n that e and therefore in c 
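The "standard arguments on trees" invoked for the clique forest here (and in the analogous observations elsewhere in the paper) amount to the weighted centroid fact: in any tree with nonnegative node weights there is a node whose removal leaves components of at most half the total weight. The sketch below is our own illustration of how such a node can be located; for a clique forest one would run it on each tree with an appropriate measure on the nodes, as in the text.

```python
def balanced_tree_node(tree, weight):
    """Find a node whose removal leaves components of weight <= W/2, where W
    is the total node weight.  `tree` maps node -> set of neighbors of an
    (unrooted, connected) tree; `weight` maps node -> nonnegative number."""
    total = sum(weight.values())
    root = next(iter(tree))
    parent, order, stack = {root: None}, [], [root]
    while stack:                       # iterative DFS from an arbitrary root
        v = stack.pop()
        order.append(v)
        for u in tree[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)
    sub = dict(weight)
    for v in reversed(order):          # accumulate child subtree weights
        if parent[v] is not None:
            sub[parent[v]] += sub[v]
    for v in order:
        pieces = [sub[u] for u in tree[v] if u != parent[v]]
        pieces.append(total - sub[v])  # the part of the tree above v
        if all(p <= total / 2 for p in pieces):
            return v
    return None  # unreachable for a connected tree with nonnegative weights
```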
log n c log n c log n c log n particular to prove lemma it is sufficient to return a solution that is at least opt and at c log n most log w c log n c m q figure an illustration of a bad cycle next we define two subinstances c and c see figure we solve each of these subinstances by a recursive call to by the above discussion these calls are valid we satisfy the and invariants thus we obtain two solutions to and to and two sets that realize these solutions and by the inductive hypothesis we have the following observations observation intersects any chordless cycle in that lies entirely in either or n observation given i si c log gi w xi moreover since v c v m we also have the following observation observation w w w we say that a cycle of is bad if it is a chordless cycle that belongs entirely to neither nor see figure next we show how to intersect bad cycles bad cycles for any pair v u of vertices v v c and u v m we let v u denote the set of any simple path between v and u whose internal vertices belong only to and which does not contain a vertex v c and a vertex m such that v e symmetrically we let v u denote the set of any path between v and u whose internal vertices belong only to and which does not contain a vertex v c and a vertex m such that v e we note here that when v u e then v u v u we first examine the relation between bad cycles and pairs v u of vertices v v c and u v m lemma for any bad cycle q there exist a pair v u of vertices v v c u v m a path v u such that v v q and a path v u such that v v q proof let q be some bad cycle by the definition of a bad cycle q must contain at least one vertex a from v m and at least one vertex b from v m since c and m are cliques q can contain at most two vertices from c and at most two vertices from m and if it contains two vertices from c resp m then these two vertices are neighbors moreover since the set v c v m contains all vertices common to and q must contain at least one vertex v v c and at least one vertex u v m with v u e overall we conclude that the subpath of q between v and u that contains a belongs to v u while the subpath of q between v and u that contains b belongs to v u in light lemma to intersect bad cycles we now examine how the fractional solution handles pairs v u of vertices v v c and u v m lemma for each pair v u of vertices v v c and u v m with v u e there exists i such that for any path p pi v u x v p proof suppose by way of contradiction that the lemma is incorrect thus there exist a pair v u of vertices v v c and u v m with v u e a path v u such that x v and a path v u such that v since is a valid fractional solution we deduce that v v does not contain any chordless cycle consider a shortest subpath of between a vertex v c and a vertex v m and a shortest subpath of between a vertex v c and a vertex v m since neither nor contains any edge such that one of its endpoints belongs to v c while the other endpoint belongs to v m we have that furthermore since vertices common in and must belong to v c v m we have that does not contain internal vertices that belong to or adjacent to internal vertices on overall since c and m are cliques we deduce that v v contains a chordless cycle to see this let a be the vertex closest to on that is a neighbor of observe that a exists as and are neighbors and a moreover we assume without loss of generality that if a then has no neighbor on apart from now let b be the vertex closest to a on the subpath of between a and that is a neighbor of if b then the of and the subpath b of between a and 
b together induce a chordless cycle else let be the vertex closest to on that is a neighbor of then the of the subpath of between and and the subpath of between a and together induce a chordless cycle since v v is an induced subgraph of v v we have reached a contradiction given i let denote the fractional solution that assigns to each vertex the value b v c v m and g b v c v m assigned by times moreover let g b b observe that and are chordal graphs now for every pair v u such that v v c u v m we perform the following operation we initialize v u next we consider every pair v such that v v c v m v v and u v b b ng v g b g b has a path between and insert each pair in a b a v v g a and b into v u we remark that the vertices in a pair in v u are not necessarily distinct the definition of v u is symmetric to the one of v u the following lemma translates lemma into an algorithm lemma for each pair v u of vertices v v c u v m and v u e one can compute in polynomial time an index i v u such that for any path p pi v u v p proof let v u be a pair of vertices such that v v c u v m and v u e if there is i such that p pi v u then we have trivially obtained the required index which is i v u i otherwise we proceed as follows for any index j we perform the following procedure for each pair a b ti v u we use dijkstra s algorithm to b i where the weights are compute the minimum weight of a path between a and b in the graph g given by in case for every pair a b the minimum weight is at least we have found the desired index i v u moreover by lemma and since for all v v c v m it holds that v v for at least one index j the maximum weight among the minimum weights associated with the pairs a b should be at least if this value is at least for both indices we arbitrarily decide to fix i v u at this point we need to rely on approximate solutions to weighted multicut in chordal graphs in this context we will employ the algorithm given by theorem in section here a fractional solution y is a function y v such that for every pair si ti t and any path p between si and pti it holds that y v p an optimal fractional solution minimizes the weight w y w v y v let fopt denote the weight of an optimal fractional solution by first employing the algorithm given by lemma we next construct two instances b and the second instance is of weighted multicut the first instance is g b where the sets and are defined as follows we initialize now g for every pair v u such that v v c u v m i v u and v u we insert each pair in v u into the definition of is symmetric to the one of by lemma and since for all v v c v m it holds that v v we deduce that and are valid solutions to and respectively thus by calling the algorithm given by theorem with each instance we obtain a solution to the first instance along with a set that realizes it such that w and we also obtain a solution to the second instance along with a set that realizes it such that w for some fixed constant by observation and lemma we obtained a set s for which we have the following observation observation s intersects any chordless cycle in and it holds that w s n recall that to prove lemma we need to show that n log x and we have furthermore we have c log g w x n c log x this together with lemma implies that it is enough n to show c log g w x recall that for any i ri w thus by observation and since for any i gi we have that w s c log n c log w w w w c log n by observation we further deduce that c log n s c log w c log n c log n c log n now it only remains to show that c log c log g c 
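The index-selection lemma above repeatedly needs the minimum x-weight of a path between two given vertices inside one of the graphs under consideration. A sketch of the vertex-weighted Dijkstra computation it relies on follows (our rendering; whether the endpoints' values are counted, and how the fractional solution is scaled before the computation, follow the surrounding text and are easy to adjust).

```python
import heapq

def min_vertex_weight_path(adj, x, source, target):
    """Dijkstra adapted to vertex weights: the weight of a path is the sum of
    x(v) over all vertices on it (endpoints included -- adjust if the
    convention excludes them).  Returns the minimum weight of a source-target
    path, or infinity if none exists.  Requires x(v) >= 0 for all v."""
    dist = {source: x[source]}
    heap = [(x[source], source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float('inf')):
            continue  # stale heap entry
        if v == target:
            return d
        for u in adj[v]:
            nd = d + x[u]
            if nd < dist.get(u, float('inf')):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return float('inf')
```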
log c log g c log n which is equivalent to c log c log recall that q log n observation thus c log n n c log n n it is sufficient that we show that c log however the term c log is lower bounded by in other words it is sufficient that we fix c e d log general graphs in this subsection we handle general instances by developing a d approximation algorithm for wcvd thus proving the correctness of theorem the exact value of the constant d max is determined this algorithm is based on recursion and during its execution we often encounter instances that are of the form of the special case of wcvd which will be dealt with using the algorithm of section recall that c is the constant we fixed to ensure that the approximation ratio of is bounded by c log recursion we define each call to our algorithm to be of the form where is an instance of wcvd such that is an induced subgraph of g and we denote we ensure that after the initialization phase the graph never contains chordless cycles on at most vertices we call this invariant the invariant in particular this guarantee ensures that the graph always contains only a small number of maximal cliques lemma the number of maximal cliques of a graph that has no chordless cycles on four vertices is bounded by o and they can be enumerated in polynomial time using a polynomial delay algorithm goal for each recursive call we aim to prove the following lemma returns a solution that is at least opt and at most moreover it returns a subset u v that realizes the solution d log n opt at each recursive call the size of the graph becomes smaller thus when we prove that lemma is true for the current call we assume that the approximation factor is bounded by d b opt for any call where the size n b of the of its graph is strictly smaller than log n initialization initially we set g w however we need to ensure that the invariant is satisfied for this purpose we update as follows first we let denote the set of all chordless cycles on at most vertices of clearly can be computed in polynomial time and it holds that now we construct an instance of weighted set where the universe is v the family of is and the weight function is since each chordless cycle must be intersected it is clear that the optimal solution to our weighted set instance is at most opt by using the standard algorithm for weighted set which is suitable for any fixed constant we obtain a set s v that intersects all cycles in and whose weight is at most opt having the set s we remove its vertices from now the invariant is satisfied which implies that we can recursively call our algorithm to the outputted solution we add w s and if lemma is true we obtain a solution that is at most n opt opt d n opt which allows us to conclude the correctness of theorem we remark that during the execution of our algorithm we only update by removing vertices from it and thus it will always be safe to assume that the invariant is satisfied termination observe that due to lemma we can test in polynomial time whether consists of a clique and a chordal graph we examine each maximal clique of and check whether after its removal we obtain a chordal graph once becomes such a graph that consists of a chordal graph and a clique we solve the instance by calling algorithm since c log we thus ensure that at the base case of our induction lemma holds recursive call for the analysis of a recursive call let s denote a hypothetical set that realizes the optimal solution opt of the current instance moreover let f be a clique forest of s whose 
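The initialization step above hits every short chordless cycle with the standard primal-dual (local-ratio) algorithm for weighted hitting set, whose approximation factor equals the maximum set size and is therefore a constant here. A sketch follows (ours; enumerating the short chordless cycles that populate `short_cycles` is assumed to be done separately, which is possible in polynomial time since their length is bounded by a constant).

```python
def hitting_set_local_ratio(short_cycles, w):
    """Primal-dual / local-ratio d-approximation for weighted hitting set,
    where d is the maximum size of a set to be hit.  `short_cycles` is a list
    of vertex collections (here: the short chordless cycles), `w` a dict of
    positive vertex weights.  Returns a set hitting every listed cycle."""
    residual = dict(w)
    solution = set()
    for cycle in short_cycles:
        cycle = set(cycle)
        if cycle & solution:
            continue                           # already hit
        eps = min(residual[v] for v in cycle)  # pay eps on every vertex of it
        for v in cycle:
            residual[v] -= eps
            if residual[v] == 0:
                solution.add(v)
    return solution
```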
existence is guaranteed by lemma using standard arguments on forests we have the following observation observation there exist a maximal clique m of and a subset s v m of weight at most opt such that m s is a balanced separator for the following lemma translates this observation into an algorithm lemma there is a algorithm that finds a maximal clique m of and a subset s v m of weight at most q log opt for some fixed constant q such that m s is a balanced separator for proof we examine every maximal clique of by lemma we need only consider o maximal cliques and these cliques can be enumerated in polynomial time for each such clique m we run the q log approximation algorithm by leighton and rao to find a balanced separator sm of m here q is some fixed constant we let s denote some set of minimum weight among the sets in sm m is a maximal clique of by observation w s q log opt thus the desired output is a pair m s where m is one of the examined maximal cliques such that sm we call the algorithm in lemma to obtain a pair m s since m s is a balanced separator for we can partitionsthe set of connected components of m s into two sets s and such that for v a and v a it holds that where and we remark that we used the o log n approximation algorithm by leighton and rao in lemma to find the balanced separator instead of the o log n approximation algorithm by feige et al as the algorithm by feige et al is randomized next we define two inputs of the general case of wcvd and g let and denote the optimal solutions to and respectively observe that since it holds that opt we solve each of the subinstances by recursively calling algorithm by the inductive hypothesis we thus obtain two sets and such that and are chordal graphs and and we proceed by defining an input of the special case of wcvd j g observe that since and are chordal graphs and m is a clique this is indeed an instance of the special case of wcvd we solve this instance by calling algorithm we thus obtain a b such that m s b is a chordal graphs and s b c log opt set s since m n and the optimal solution of each of the subinstances is at most opt observe that since m is a clique and there is no edge in e between a vertex in and a vertex in any chordless cycle of s entirely belongs to either m b or m this observation along with the fact that m s b is a chordal graphs implies that g t is a chordal graphs where t s thus it is now sufficient to show that t opt by the discussion above we have that b t s s q log opt log c log opt recall that and opt thus we have that t q log opt opt c log opt opt q c log log opt opt it is sufficient to ensure which can be done by fixing d q c log overall we conclude that to ensure that t that q c d log d weighted multicut in chordal graphs in this section we prove theorem let us denote c recall that for weighted multicut a fractional solution x is a function x v g such that for every pair s t t and any path p between s and t it holds that x v p an optimal fractional solution minimizes p the weight w x g w v x v let fopt denote the weight of an optimal fractional solution theorem follows from the next result whose proof is the focus of this section lemma given an instance of weighted multicut in chordal graphs one can find in polynomial time a solution that is at least opt and at most fopt along with a set that realizes it preprocessing by using the ellipsoid method we may next assume that we have optimal fractional solution x at hand we say that x is nice if for all v v g there exists i n such that x v ni let h x v v g 
x v denote the set of vertices to which x assigns high values b v g as follows for all v v g if x v lemma define a function x b v and otherwise x b v is the smallest value of the form for some i n that is then x b is a fractional solution such that w b at least v then x x x b is a fractional solution consider some path p between s and t such that proof to show that x p b v p p x x b v s t t let x v v g x v we have that x p b v p it is sufficient to show that p x x v thus to show that x p p x x v since x is a fractional solution it holds that x v p p x x v p p x x v x v thus p since p x n p x p x p we conclude that p x x v b v the second part of the claim follows from the observation that for all v v g x v b our preprocessing step also relies on the following standard accordingly we update x to x lemma b g h x w b g lemma define g b g x w h x b and x b then c w b c w x proof by the definition of h x it holds that w b x w x w h x thus b x h x c w x c w h x w h x c w x b w to w b where we ensure that once we obtain a we thus further update g to g b and x to x solution to the new instance we add w h x to this solution and h x to the set realizing it overall we may next focus only on the proof of the following lemma lemma let g w be an instance of weighted multicut in chordal graphs and x be a nice fractional solution such that h x then one can find in polynomial time a solution that is at least opt and at most c w x along with a set that realizes it the algorithm since g is a chordal graph we can first construct in polynomial time a clique forest f of g lemma without loss of generality we may assume that f is a tree else g is not a connected graph and we can handle each of its connected components separately now we arbitrarily root f at some node rf and we arbitrarily choose a vertex rg rf we then use dijkstra s algorithm to compute in polynomial time for each vertex v v g the value d v min x v p where p v is the set of paths in g between rg and p v we define n bins for all i n the bin bi contains every vertex v v g for which there exists j such that d v v ni d v d v ni x v let n be a bin that minimizes w the output consists of w and br be the set that contains every vertex v v g approximation factor given r let b for which there exists j n such that d v r x v we start with the following claim c w x lemma there exists such that w b proof for any d observe that there exists exactly one j n for which there exists r such that d r and denote it by j d suppose that we choose r uniformly at random consider some vertex v v g then since h x the probability that there exists j n such that d v r x v is equal to the probability that d v r d v x v now the probability that d v r d v x v br is c p is equal to c x v the expected weight w b g x v w v c w x thus there c w x exists r such that w b now the proof of the approximation factor follows from the next claim lemma there exists i n such that bi b proof let i be the smallest index in n such that ni consider some vertex v bi then for some j n d v x v ni d v since ni we have that d v since x is nice it holds that there exists t n such that d v x v nt thus for any p it holds that d v x v ni p by the choice of i ni and therefore d v x v which implies that v b feasibility we need to prove that for any pair s t t g does not have any path between s and consider some path p v between s and here s and v suppose by way of contradiction that v p then for all vi v p it holds that there is no j n such that d vi in x vi let s v f be the closest node to rf that 
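The rounding step described here is a level-cut (region-growing) argument in the spirit of Garg, Vazirani and Yannakakis: every vertex occupies the interval [d(v) - x(v), d(v)] along the distance-from-the-root axis, and some level can be cut cheaply. The sketch below only illustrates this idea under our own simplifying assumptions (a connected graph, a fractional solution already preprocessed into its nice form with high values removed, and an explicit candidate list of levels); it is not the paper's exact rule for forming the bins.

```python
import heapq

def distance_levels_cut(adj, x, w, root, levels):
    """Illustrative threshold rounding: compute d(v), the minimum x-weight of
    a root-to-v path, and for each candidate level r cut every vertex whose
    interval [d(v) - x(v), d(v)] straddles r; return the cheapest such cut."""
    # vertex-weighted Dijkstra from the root
    d = {root: x[root]}
    heap = [(x[root], root)]
    while heap:
        dv, v = heapq.heappop(heap)
        if dv > d.get(v, float('inf')):
            continue
        for u in adj[v]:
            nd = dv + x[u]
            if nd < d.get(u, float('inf')):
                d[u] = nd
                heapq.heappush(heap, (nd, u))

    best_cut, best_weight = None, float('inf')
    for r in levels:
        cut = {v for v in d if d[v] - x[v] <= r < d[v]}
        weight = sum(w[v] for v in cut)
        if weight < best_weight:
            best_cut, best_weight = cut, weight
    return best_cut
```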
satisfies s v p since f is a clique tree and p is a path the node s is uniquely defined let vbi be some vertex in s v p for the sake of clarity let us denote the subpath of p between vbi and v by q ut where vbi and ut v let j be the smallest value in n that satisfies d x in note that d in it is thus well defined to let p denote the largest index in t such that d up in first suppose that p t we then have that in d for all i t it holds that d ui d x ui we thus obtain that d x d up in this statement implies that which is a contradiction now we suppose that p note that in d x by the minimality of j and d ut in we get that d ut d x in other words d ut d x let des s denote the set s consisting of s and its descendants in f since f is a clique tree we have that v q s thus any path from rg to ut that realizes d ut contains a vertex from s since there exists a path from rg to ut that realizes d ut we deduce that there exists a path pt from rg to ut that realizes d ut and contains a vertex x ng let denote the subpath of pt between x and ut and let p denote the path that starts at and then traverses then x v p x x v x d ut d x x x note that d d x x and therefore x v p x d ut d x x x x x x d ut d x since h x and d ut d x we get that x v p the symmetric analysis of house gem domino cycle on at least vertices figure obstruction set for distance hereditary graphs the subpath of p between vbi and shows that there exists a path p between and such that x v p overall we get that there exists a path p between s and v u t such that x v p since c we reach a contradiction to the assumption that x is a fractional solution vertex deletion in this section we prove theorem we start with preliminaries preliminaries a graph g is distance hereditary if every connected induced subgraph h of g for all u v v h the number of vertices in shortest path between u and v in g is same as the number of vertices in shortest path between u and v in another characterization of distance hereditary graphs is the graph not containing an induced isomorphic to a house a gem a domino or an induced cycle on or more vertices refer figure we refer to a house a gem a domino or an induced cycle on at least vertices as a a on at most vertices is a small a biclique is a graph g with vertex bipartition x y each of them being such that for each x x and y y we have x y e g we note here that x and y need not be independent sets in a biclique clearly we can assume that the weight w v of each vertex v v g is positive else we can insert v into any solution our approximation algorithm for wdhvd comprises of two components the first component handles the special case where the input graph g consists of a biclique c and a distance hereditary here we also assume that the input graph has no small we show that when input restricted to these special instances wdhvd admits an o n approximation algorithm the second component is a recursive algorithm that solves general instances of the problem initially it easily handles small then it gradually disintegrates a general instance until it becomes an instance of the special form that can be solved in polynomial time more precisely given a problem instance the algorithm divides it by finding a maximal biclique m using an exhaustive search which relies on the guarantee that g has no small and a small separator s using an approximation algorithm that together break the input graph into two graphs significantly smaller than their origin distance hereditary graph in this subsection we handle the special case where the 
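For intuition about the class being targeted: by the one-vertex-extension characterization of Bandelt and Mulder, a graph is distance hereditary exactly when it can be reduced to a single vertex by repeatedly deleting a pendant vertex or one vertex of a twin pair (true or false twins). The simple polynomial-time recognition test below is our own illustration and plays no role in the algorithm of this paper.

```python
def is_distance_hereditary(adj):
    """Recognition via pruning: repeatedly remove a pendant vertex or one
    vertex of a twin pair; the graph is distance hereditary iff this reduces
    it to a single vertex.  Straightforward polynomial-time sketch."""
    g = {v: set(adj[v]) for v in adj}
    while len(g) > 1:
        victim = None
        verts = list(g)
        for v in verts:
            if len(g[v]) == 1:                     # pendant vertex
                victim = v
                break
        if victim is None:
            for i, v in enumerate(verts):          # look for a twin pair
                for u in verts[i + 1:]:
                    if g[v] - {u} == g[u] - {v}:   # true or false twins
                        victim = v
                        break
                if victim is not None:
                    break
        if victim is None:
            return False         # no pendant vertex and no twins exist
        for u in g[victim]:
            g[u].discard(victim)
        del g[victim]
    return True
```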
input graph g consists of a biclique c and a distance hereditary graph more precisely along with the input graph g and the weight function w we are also given a biclique c and a distance hereditary graph h such that v g v c v h where the v c and v h are disjoint here we also assume that g has no on at most vertices which means that every g is a chordless cycle of strictly more than vertices note that the e g may contain edges between vertices in c and vertices in we call this special case the biclique distance hereditary special case our objective is to prove the following result lemma the biclique distance hereditary special case of wdhvd admits an o n factor approximation algorithm we assume that n else the input instance can be solve by let c be a fixed constant to be determined later in the rest of this subsection we design a c log approximation algorithm for the biclique distance hereditary special case of wdhvd recursion our approximation algorithm is a recursive algorithm we call our algorithm and define each call to be of the form c h x here is an induced subgraph of g such that v c v and h is an induced subgraph of the argument x is discussed below we remark that we continue to use n to refer to the size of the of the input graph g rather than the current graph arguments while the execution of our algorithm progresses we keep track of two arguments the number of vertices in the current distance hereditary graph h that are assigned a value by x which we denote by and the fractional solution x observation the measure can be computed in polynomial time a fractional solution x is a function x v such that for every chordless cycle q of on at least vertices it holds that x v q an optimal fractional solution p minimizes the weight x v x v clearly the solution to the instance of wdhvd is at least as large as the weight of an optimal fractional solution although we initially compute an optimal fractional solution x at the initialization phase that is described below during the execution of our algorithm we manipulate this solution so it may no longer be optimal prior to any call to with the exception of the first call we ensure that x satisfies the following invariants invariant for any v v it holds that x v log invariant for any v v c it holds that x v we note that the invariant used here is simpler than the one used in section since it is enough for the purpose of this section goal the depth of the recursion tree will be bounded by o log n where the depth of initial call is the correctness of this claim is proved when we explain how to perform a recursive call for each recursive call to c h x we aim to prove the following lemma for any each recursive call to of depth n returns a solution that is at least opt and at most c log n log x moreover it returns a subset u v that realizes the solution at the initialization phase we see that in order to prove lemma it is sufficient to prove lemma initialization initially the graphs and h are simply set to be the input graphs g and h and the weight function is simply set to be input weight function moreover we compute an optimal fractional solution x xinit by using the ellipsoid method recall that the following claim holds this assumption simplifies some of the calculations ahead observation the solution of the instance of wdhvd is lower bounded by xinit moreover it holds that n and therefore to prove lemma it is sufficient to return a solution that is at least opt and at most c log n log g w x along with a subset that realizes the solution part of 
the necessity of the stronger claim given by lemma will become clear at the end of the initialization phase we would like to proceed by calling our algorithm recursively for this purpose we first need to ensure that x satisfies the and invariants to which end we use the following notation we let h x v v x v log n denote the set of vertices to which x assigns high values note that we can assume for each v h x we have x v moreover given a biclique m in we let x m v denote the function that assigns to any vertex in m and n x v to any other vertex v v now to adjust x to be of the desired form both at this phase and at later recursive calls we rely on the following lemmata b h x w b b g lemma define g b g b and x b then c log n log g b x h x log n log g x where b is an x x n h x since g proof by the definition of h x it holds that b b thus log n log g b b induced subgraph of it also holds that g x w h x c g w x log n h x h x c g x thus it is safe to update to h x to g b h to h h x and x to g b where we ensure that once we obtain a solution to the new instance we add w h x to this solution and h x to the set realizing it lemma let q be a chordless cycle on at least vertices and m be a biclique in with vertex partitions as v m such that v q m then there is a chordless cycle on at least vertices that intersects m in at most vertices such that e m e q m furthermore is of one of the following three types m is a single vertex m is an edge in g m m is an induced path on vertices in m proof observe that no chordless cycle on or more vertices may contain two vertices from each of and as that would imply a chord in it now if the chordless cycle q already satisfies the required conditions we output it as first consider the case when q m contains exactly two vertices that don t have an edge between them then the two vertices say are both either in or in suppose that they are both in and consider some vertex u let is the longer of the two path segments of q between and and note that it must length at least then observe that u contains a as have different distances depending on if u is included in an induced subgraph or not and further it is easy to see that this contains the induced path u however as all small obstructions have been removed from the graph we have that is a chordless cycle in on at least vertices furthermore m is the induced path u in and e m e q m now consider the case when q m contains exactly three vertices observe that it can not contain two vertices of and one vertex of or vice versa as q doesn t satisfy the required conditions therefore q m contains exactly three vertices from or from which again don t form an induced path of length so there is an independent set of size in q m and now as before we can again obtain the chordless cycle on at least vertices with e m e q m before we consider the other cases we have the following claim claim let m be a biclique in with vertex partition as v m then m has no induced proof let p be any induced path of length in m then either v p or v p now consider any such path p in and some vertex u then p u contains a of size which is a contradiction to the fact that has no small obstructions next let q m contain or more vertices note that in this case all these vertices are all either in or in since otherwise q would not be a chordless cycle in on at least vertices let us assume these vertices lie in other case is symmetric let v q be the sequence of vertices obtained when we traverse q starting from an arbitrary vertex where by claim they can not form an 
induced path on vertices v q consists of at least two connected components without loss of generality we may assume that and v are in different components observe that the only possible edges between these vertices may be at most two of the edges and v hence we conclude that either or v are a distance of at least in q let us assume that v are at distance or more in q and the other case is symmetric and be the paths not containing in q between and and and v respectively notice that for any u the graph u v v contains a since the graph is free of all small obstruction this denoted must be a chordless cycle on at least vertices furthermore this obstruction can contain at most vertices from v as otherwise there would be a chord in it hence m contains strictly fewer vertices than q m moreover we have e m e q m now by a recursive application of this lemma to we obtain the required a consequence of the above lemma is that whenever m is a biclique in we may safely ignore any intersects m in more than vertices this leads us to the following lemma lemma given a biclique m in the function x m is a valid fractional solution such that x m n x proof to prove that x m is a valid fractional solution let q be some chordless cycle not on vertices in we need to show that x m v q by our assumption q can contain at most vertices from m thus since x is a valid fractional solution it holds that x v q v m n by the definition of x m this fact implies that x m v q x m v q v m n n n where the last inequality relies on the assumption n for the proof of the second part of the claim note that n m n x we call recursively with the fractional solution x c and by lemma c n x if lemma were true we return a solution that is at least opt x n and at most c log n log g w x m c log n log g w x as desired in other words to prove lemma it is sufficient that we next focus only on the proof of lemma the proof of this lemma is done by induction when we consider some recursive call we assume that the solutions returned by the additional recursive calls that it performs which are e such that g e complies with the conclusion of the lemma associated with graphs g termination once becomes a distance hereditary graph we return as our solution and as the set that realizes it clearly we thus satisfy the demands of lemma in fact we thus also ensure that the execution of our algorithm terminates once log lemma if log n then is a distance hereditary graph proof suppose that is not a distance hereditary graph then it contains an obstruction q since x is a valid fractional solution it holds that x v q but x satisfies the invariant therefore it holds that x v q q log these two observations imply that q log furthermore at least log n of these vertices are assigned a value by x log therefore if log n then must be a distance hereditary graph the fact that the recursive calls are made onto graphs where the distance hereditary subgraph contains at most the number of vertices in the current distance hereditary subgraph we observe the following observation the maximum depth of the recursion tree is bounded by q log n for some fixed constant recursive call since h is a distance hereditary graph it has a decomposition t where t is a binary tree and is a bijection from v to the leaves of t furthermore of t is which means that for any edge of the tree by deleting it we obtain a partition of the leaves in t this partition induces a cut of the graph where the set of edges crossing this cut forms a biclique m with vertex partition as v m in the graph by standard 
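The edge of the decomposition tree used for the split is found by the usual balancing argument on trees: pick the edge whose removal partitions the leaves most evenly; for a binary tree the standard argument shows that neither side then exceeds two thirds of the leaves. A sketch follows (our own; `tree` is the unrooted decomposition tree as an adjacency dictionary and `leaves` is the set of its leaves, which correspond to the vertices of the distance-hereditary part).

```python
def most_balanced_edge(tree, leaves):
    """Return the tree edge whose removal splits the given leaf set most
    evenly, measured by the size of the larger side."""
    leaves = set(leaves)
    root = next(iter(tree))
    parent, order, stack = {root: None}, [], [root]
    while stack:                         # iterative DFS to orient the tree
        v = stack.pop()
        order.append(v)
        for u in tree[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)
    below = {v: (1 if v in leaves else 0) for v in tree}
    for v in reversed(order):            # children are processed before parents
        if parent[v] is not None:
            below[parent[v]] += below[v]
    total = len(leaves)
    best_edge, best_max_side = None, float('inf')
    for v in tree:
        if parent[v] is None:
            continue
        larger = max(below[v], total - below[v])
        if larger < best_max_side:
            best_edge, best_max_side = (parent[v], v), larger
    return best_edge
```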
arguments on trees we deduce that t has an edge that defines a partition such that after we remove the biclique edges between and from we obtain two not necessarily connected graphs and such that h and note that the bicliques m and c are vertex disjoint we proceed by replacing the fractional solution x by x m for the sake of clarity we denote x m let v v c v m v v c v m we adjust the current instance by relying on lemma so that satisfies the invariant in the same manner as it is adjusted in the initialization phase in particular we remove h from h and and we let c h and denote the resulting instance and graphs observe that now we have n we will return a solution that is at least opt and at most c log n log along with a set that realizes in the analysis we will argue this it is enough for our purposes next we define two subinstances c and c we solve each of these subinstances by a recursive call to and thus we obtain two solutions of sizes to and to and two sets that realize these solutions and by the inductive hypothesis we have the following observations observation intersects any chordless cycle on at least vertices in that lies entirely in either or n observation given i c log n log w moreover since v c v m we also have the following observation observation log n log n here the coefficient log has been replaced by the smaller coefficient log we say that a cycle in is bad if it is a chordless cycle not on four vertices that belongs entirely to neither nor next we show how to intersect bad cycles bad cycles let us recall the current state of the graph is partitioned into a biclique c and a distance hereditary graph h furthermore there is a biclique m with vertex bipartition as and so that deleting the edges between and gives a balanced partition of h into and now by lemma we may ignore any chordless cycle that intersects either of the two bicliques c and m in more than three vertices each and this allows us to update our fractional feasible solution to then we recursively solve the instances and and remove the returned solution now consider the remaining graph and any obstructions that are left as the graph no longer contains small obstructions it is clear that any remaining obstruction is a chordless cycle on at least vertices and is a bad cycle we first examine the relation between bad cycles and pairs v u of vertices v v c and u v m lemma if a bad cycle exists then there must also be a bad cycle q such that q m c is a union of two internally vertex disjoint and and such that and and each of them connect a pair of vertices in m proof let be a bad cycle let us recall that the input graph can be partitioned into the biclique c and a distance hereditary graph h hence c furthermore if m then is preserved in m this means that is either present in or in and hence it can not be a bad cycle which is a contradiction hence m as well finally contains vertices from both and which implies q and q are both as well now by applying lemma to and c we obtain a bad cycle such that c is either a single vertex or an edge or an induced path of length three since m c we can again apply lemma to and m and obtain a bad cycle q such that each of q c and q m is either a single vertex or an edge or an induced path of length three hence q v m v c is a pair of internally disjoint paths whose endpoints are in m furthermore one of these paths denoted is contained in and the other denoted is contained in the above lemma lemma implies that it is safe to ignore all the bad cycles that don t satisfy the conclusion of this 
lemma we proceed to enumerate some helpful properties of those bad cycles that satisfy the above lemma we call the path segments of the bad cycle lemma suppose are path segments of a bad cycle q where and where and are a solution to and respectively then for any which is an induced path in with the same endpoints as we have that q m c v v is also a bad cycle proof observe that and are paths between the same endpoints in which is a distance hereditary graph therefore is an induced path of the same length as furthermore no vertex in is adjacent to a vertex in q hence is also a bad cycle the above lemma allows us to reduce the problem of computing a solution that intersects all to computing a solution for an instance of weighted multicut more formally let q be a bad cycle with path segments and the feasible fractional solution assigns a total value of at least to the vertices in q as assigns to every vertex in m c we have that at least one of or is assigned a total value of at least suppose that it were then assigns a total value to in this fractional solution is a solution to the weighted multicut problem defined on the pairs of vertices in c m which are separated by in whose description is given below given i let denote the fractional solution that assigns to each vertex the value assigned by times for a pair v u of vertices such that v v c and u v m we call v u an important pair if there is a bad cycle q with path segments and that connects v and u let and be a solution to and respectively obtained recursively for an important pair v u we let v u denote the set of any simple path between v and u whose internal vertices belong only to and which does not contain any edge such that one of its endpoints belongs to v c while the other endpoint belongs to v m symmetrically we let v u denote the set of any path between v and u whose internal vertices belong only to and which does not contain any edge such that one of its endpoints belongs to v c while the other endpoint belongs to v m lemma for an important pair v u of vertices where v v c and u v m in polynomial time we can compute an index i v u such that for any path p pi v u v p proof let v u be an important pair of vertices with v v c and u v m we start by arguing that such an index exists assuming a contradiction suppose there exists v u and v u such that v and v recall that we have a bad cycle bad cycle q in with paths segments as and which connects v and u but this implies that q contradicting that was a feasible solution to therefore such an index always exists for any index j we use dijkstra s algorithm to compute the minimum weight of a b where the weights are given by in case the minimum path between v and u in the graph g i i weight is at least we have found the desired index i v u moreover we know that for at least one index j the minimum weight should be at least if the minimum weight is at least for both induces we arbitrarily decide to fix i v u we say that an important pair u v is separated in gi if the index assigned by lemma assigns i to pi u v now for every important pair v u such that v v c u v m and v u e we perform the following operation we check if this pair is separated in and if so then we initialize v u then for each pair of neighbors of x of v and y of u we add the pair x y to u v the set u v is similarly defined at this point we need to rely on approximate solutions to the weighted multicut problem which is given by theorem below theorem theorem given an instance of weighted multicut one can find in polynomial time 
a solution that is at least opt and at most d log n fopt for some fixed constant d along with a set that realizes it here a fractional solution y is a function y v g such that for every pair si ti t and any path p between si and p ti it holds that y v p an optimal fractional solution minimizes the weight w y g w v y v let fopt denote the weight of an optimal fractional solution by employing the algorithm given by lemma we next construct two instances of b v u v v c u weighted multicut the first instance is g b v m i v u and v u is an important pair and the second instance is g v u v v c u v m i v u and v u is an important pair by lemma and are valid solutions to and respectively thus by calling the algorithm given by theorem with each instance we obtain a solution to the first instance along with a set that realizes it such that log and we also obtain a solution to the second instance along with a set that realizes it such that log now by observation and lemma we have obtained a set s for which we have the following observation observation s intersects any chordless cycle in and it holds that s n we start by showing that h x recall that for any i ri log thus by observation and since for any i n and we have that s log n c log n log log n log n by observation we further deduce that log n w s c log g log n log n n n c log c log now it only remains to show that n which is equivalent to c log observe that q log n for some fixed constant q indeed it initially holds that g n at each recursive call the number of vertices assigned a value by decreases to at most a factor of of its previous value and the execution terminates once this value drops below log n thus it is sufficient to choose n n n n c log as the term the constant c so that is lower bounded by e it is sufficient that we fix c e d log n n n n note that c log where d therefore c this n together with lemma and implies that s h x c log x which proves lemma general graphs in this section we handle general instances by developing a d approximation algorithm for wdhvd thus proving the correctness of theorem the recursive algorithm we define each call to our algorithm to be of the form where is an instance of wdhvd such that is an induced subgraph of g and we denote we ensure that after the initialization phase the graph never contains a on at most vertices we call this invariant the invariant in particular this guarantee ensures that the graph always contains only a small number of maximal bicliques as stated in the following lemma lemma lemma let g be a graph on n vertices with no on at most vertices then g contains at most maximal bicliques and they can be enumerated in polynomial time goal for each recursive call we aim to prove the following lemma returns a solution that is at least opt and at most moreover it returns a subset u v that realizes the solution here d is a constant which will be determined later at each recursive call the size of the graph becomes smaller thus when we prove that lemma is true for the current call we assume that the approximation factor is bounded by d b opt for any call where the size n b of the of its graph is strictly smaller than log n initialization we are given g w as input and first we need to ensure that the invariant is satisfied for this purpose we update g as follows first we let denote the set of all on at most vertices of clearly can be computed in polynomial time and it holds that no now we construct an instance of weighted set where the universe is v g the family of all setsof size at most in 
and the weight function is since each must be intersected therefore the optimal solution to our weighted set instance is at most opt by using the standard algorithm for weighted set which is suitable for any fixed constant we obtain a set s v g that intersects all the in and whose weight is at most opt having the set s we remove its vertices from g to obtain the graph and now that the invariant is satisfied we can call on and to the outputted solution we add w s and we note that during the execution of the algorithm we update only by removing vertices from it and thus it will always be safe to assume that the invariant is satisfied now by lemma we obtain a solution of weight at most n opt opt d n opt then combined with s it allows us to conclude the correctness of theorem termination observe that due to lemma we can test in polynomial time if our current graph is of the special kind that can be partitioned into a biclique and a distance hereditary graph we examine each maximal biclique of and check whether after its removal we obtain a distance hereditary graph once becomes such a graph that consists of a biclique and a distance hereditary graph we solve the instance by calling algorithm observe that this returns a solution of value o n opt which is also o n opt recursive call similar to the case for wcvd instead computing a balanced separators with a maximal clique and some additional vertices here we find a balanced separator that comprises of a biclique and some additional but small number of vertices existence of such a separator is guaranteed by lemma from lemma it follows that the graph with no of size at most contains at most o maximal bicliques and they can enumerated in polynomial time we use the weighted variant of lemma from in lemma the proof of lemma remains exactly the same as that in lemma of lemma lemma let be a connected graph on vertices not containing any of size at most and w v g r be a weight function then in polynomial time we can find a balanced vertex separator k x such that the following conditions are satisfied k is a biclique in g or an empty set w x q log opt where q is some fixed constant here opt is the weight of the optimum solution to wdhvd of we note that we used the o log approximation algorithm by leighton and rao in lemma to find the balanced separator instead of the o log approximation algorithm by feige et al as the algorithm by feige et al is randomized let us also remark that if k is a biclique then there is a bipartition of the vertices in k into a b where both a and b are which will be crucially required in later arguments next we apply in lemma to to obtain a pair k x since k x is a balanced separator for we can partitionsthe set of connected components of m s into two sets s and such that for v a and v a it holds that where and we then define two inputs of the general case wdhvd and let and denote the optimal solutions to and respectively observe that since it holds that opt we solve each of the two by recursively calling algorithm by the inductive hypothesis we obtain two sets and such that and are both distance hereditary graphs and and now if k were an empty set then it is easy to see that x is a feasible solution to the instance now let us bound the total weight of this subset x x q log opt recall that and opt q log opt opt d opt the more interesting case is when k is a biclique then we first remove x from the graph and note that the above bound also holds for this subset of vertices now observe that the graph x can be partitioned into a 
biclique k and a distance hereditary graph h g along with the weight function thus we have an instance of the biclique distance hereditary graph spacial case of wdhvd furthermore note that we retained a fractional feasible solution x to the lp of the initial input which upperbounds the value of a fractional feasible solution to the lp of the instance we apply the algorithm on k h which outputs a solution s such that s s o n opt observe that any obstruction in s is either completely contained in s or completely contained in s or it contains at least one vertex from this observation b is a distance hereditary graph implies along with the fact that k s b thus it is now sufficient that g t is a distance hereditary graph where t x d to show that w t log n opt by the discussion above we have that returns a solution of value c n opt where c is some constant t s q log opt c opt recall that and opt thus we have that t q log opt opt c opt opt c d log opt overall we conclude that to ensure that t that c d log which can be done by fixing d d opt it is sufficient to ensure c log conclusion in this paper we designed o logo n algorithms for weighted planar f deletion weighted chordal vertex deletion and weighted distance hereditary vertex deletion or weighted vertex deletion these algorithms are the first ones for these problems whose approximation factors are bounded by o logo n along the way we also obtained a approximation algorithm for weighted multicut on chordal graphs all our algorithms are based on the same recursive scheme we believe that the scope of applicability of our approach is very wide we would like to conclude our paper with the following concrete open problems does weighted planar f deletion admit a approximation algorithm furthermore studying families f that do not necessarily contain a planar graph is another direction for further research does weighted chordal vertex deletion admit a approximation algorithm does weighted vertex deletion admit a o logo n approximation algorithm on which other graph classes weighted multicut admits a approximation references agrawal lokshtanov misra saurabh and zehavi feedback vertex set inspired kernel for chordal vertex deletion in proceedings of the symposium on discrete algorithms soda pp bafna berman and fujito a algorithm for the undirected feedback vertex set problem siam journal on discrete mathematics pp bansal reichman and umboh robust algorithms for noisy and bounded treewidth graphs in proceedings of the symposium on discrete algorithms soda pp bansal and umboh personal and even a approximation algorithm for the weighted vertex cover problem journal of algorithms pp geiger naor and roth approximation algorithms for the feedback vertex set problem with applications to constraint satisfaction and bayesian inference siam journal on computing pp borie parker and tovey automatic generation of algorithms from predicate calculus descriptions of problems on recursively constructed graph families algorithmica pp courcelle j makowsky and rotics linear time solvable optimization problems on graphs of bounded theory of computing systems pp courcelle and olariu upper bounds to the clique width of graphs discrete applied mathematics pp diestel graph theory edition vol of graduate texts in mathematics springer farber on diameters and radii of bridged graphs discrete mathematics pp feige hajiaghayi and lee improved approximation algorithms for minimum weight vertex separators siam journal on computing pp fiorini joret and pietropaoli hitting diamonds and 
growing cacti. In Proceedings of the Conference on Integer Programming and Combinatorial Optimization (IPCO).
Fomin, Lokshtanov, Misra, Philip and Saurabh. Hitting forbidden minors: approximation and kernelization. SIAM Journal on Discrete Mathematics.
Fomin, Lokshtanov, Misra and Saurabh. Planar F-Deletion: approximation, kernelization and optimal FPT algorithms. In Proceedings of the IEEE Annual Symposium on Foundations of Computer Science (FOCS).
Fomin, Lokshtanov, Raman and Saurabh. Bidimensionality and EPTAS. In Proceedings of the Symposium on Discrete Algorithms (SODA).
Fomin, Lokshtanov and Saurabh. Bidimensionality and geometric graphs. In Proceedings of the Symposium on Discrete Algorithms (SODA).
Fomin, Lokshtanov, Saurabh and Thilikos. Bidimensionality and kernels. In Proceedings of the Symposium on Discrete Algorithms (SODA).
Garg, Vazirani and Yannakakis. Approximate max-flow min-(multi)cut theorems and their applications. SIAM Journal on Computing.
Golovin, Nagarajan and Singh. Approximating the k-multicut problem. In Proceedings of the Symposium on Discrete Algorithms (SODA).
Golumbic. Algorithmic graph theory and perfect graphs. Academic Press, New York.
Hammer and Maffray. Completely separable graphs. Discrete Applied Mathematics.
Oum, Seese and Gottlob. Width parameters beyond tree-width and their applications. The Computer Journal.
Howorka. A characterization of distance-hereditary graphs. The Quarterly Journal of Mathematics.
Jansen and Pilipczuk. Approximation and kernelization for chordal vertex deletion. In Proceedings of the Symposium on Discrete Algorithms (SODA).
Kim and Kwon. A polynomial kernel for distance-hereditary vertex deletion. arXiv.
Kleinberg and Tardos. Algorithm Design.
Leighton and Rao. Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms. Journal of the ACM.
Lewis and Yannakakis. The node-deletion problem for hereditary properties is NP-complete. Journal of Computer and System Sciences.
Lund and Yannakakis. The approximation of maximum subgraph problems. In Proceedings of the International Colloquium on Automata, Languages and Programming (ICALP).
Moon and Moser. On cliques in graphs. Israel Journal of Mathematics.
Nemhauser and Trotter. Properties of vertex packing and independence system polyhedra. Mathematical Programming.
Oum. Rank-width and vertex-minors. Journal of Combinatorial Theory, Series B.
Oum. Approximating rank-width and clique-width quickly. ACM Transactions on Algorithms.
Oum. Rank-width: algorithmic and structural results. CoRR.
Oum and Seymour. Approximating clique-width and branch-width. Journal of Combinatorial Theory, Series B.
Robertson and Seymour. Graph minors: excluding a planar graph. Journal of Combinatorial Theory, Series B.
Robertson and Seymour. Graph minors: the disjoint paths problem. Journal of Combinatorial Theory, Series B.
Robertson and Seymour. Graph minors XX: Wagner's conjecture. Journal of Combinatorial Theory, Series B.
Sachs. On the Berge conjecture concerning perfect graphs. In Combinatorial Structures and their Applications.
Tsukiyama, Ide, Ariyoshi and Shirakawa. A new algorithm for generating all the maximal independent sets. SIAM Journal on Computing.
Yannakakis. The effect of a connectivity requirement on the complexity of maximum subgraph problems. Journal of the ACM.
Some open problems in approximation. In Proceedings of the Second Italian Conference on Algorithms and Complexity (CIAC).
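As a small postscript to the bad-cycle section above: the index-selection lemma for an important pair (v, u) — pick, via Dijkstra's algorithm with vertex weights derived from the fractional LP solution, a side i in {1, 2} all of whose v–u paths carry fractional weight at least 1/2 — can be illustrated with the following schematic sketch. The graph representation (a dict-of-sets adjacency list), the weight map (twice the LP value of each vertex), and the function names min_vertex_weight_path and choose_side are assumptions made for the example; this is a sketch of the idea, not the paper's implementation.

```python
import heapq

def min_vertex_weight_path(adj, weight, source, target, allowed):
    """Smallest total vertex weight of a source-target path whose vertices all
    lie in `allowed`; weight[v] models 2*x(v) for the fractional LP solution x."""
    if source not in allowed or target not in allowed:
        return float("inf")
    dist = {source: weight[source]}
    heap = [(weight[source], source)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == target:
            return d
        if d > dist.get(v, float("inf")):
            continue
        for w in adj[v]:
            if w not in allowed:
                continue
            nd = d + weight[w]          # vertex-weighted relaxation step
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return float("inf")

def choose_side(adj, weight, v, u, side1, side2):
    """Pick an index i in {1, 2} such that every v-u path through side i has
    scaled weight at least 1 (i.e. fractional weight at least 1/2)."""
    for i, side in enumerate((side1, side2), start=1):
        allowed = set(side) | {v, u}
        if min_vertex_weight_path(adj, weight, v, u, allowed) >= 1.0:
            return i
    # By the lemma at least one side qualifies; if both do, the choice is arbitrary.
    return 1
```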
8
streaming data from hdd to gpus for sustained peak performance lucas beyer paolo bientinesi aachen institute for advanced study in computational engineering science financial support from the deutsche forschungsgemeinschaft german research foundation through grant gsc is gratefully acknowledged streaming data from hdd to gpus for sustained peak performance lucas beyer and paolo bientinesi rwth aachen university aachen institute for advanced study in computational engineering science germany beyer pauldj abstract in the context of the association studies gwas one has to solve long sequences of generalized problems such a task has two limiting factors execution time in the range of days or and data management sets in the order of terabytes we present an algorithm that obviates both issues by pipelining the computation and thanks to a sophisticated transfer strategy we stream data from hard disk to main memory to gpus and achieve sustained peak performance with respect to a cpu implementation our algorithm shows a speedup of moreover the approach lends itself to multiple gpus and attains almost perfect scalability when using gpus we observe speedups of over the aforementioned implementation and over a widespread biology library keywords gwas generalized computational biology computation multiple gpus data transfer multibuffering streaming big data gwas their importance and current implementations in a nutshell the goal of a association study gwas is to find an association between genetic variants and a specific trait such as a disease since there is a tremendous amount of such genetic variants the computation involved in gwas takes a long time ranging from days to weeks and even months in this paper we look at currently the fastest algorithm available and show how it is possible to speed it up by exploiting the computational power offered by modern graphics accelerators the solution of gwas boils down to a sequence of generalized least squares gls problems involving huge amounts of data in the order of terabytes the challenge lies in sustaining gpu s performance avoiding idle time due to data transfers from hard disk hdd and main memory our solution cugwas combines three ideas the computation is pipelined through gpu and cpu the transfers are executed asynchronously and the data is streamed from hdd to main memory to gpus by means of a buffering strategy combined these mechanisms allow cugwas to attain almost perfect scalability with respect to the number of gpus when compared to and another widespread gwas library our code is respectively and times faster in the first section of this paper we introduce the reader to gwas and the computations involved therein we then give an overview of upon which we build cugwas whose key techniques we explain in section and which we time in section we provide some closing remarks in section biological introduction to gwas the segments of the dna that contain information about protein synthesis are called genes they encode traits which are features of physical appearance of the organism eye or hair as well as internal features of the organism blood type or resistances to diseases the hereditary information of a species consists of all the genes in the dna and is called genome this can be visualized as a book containing instructions for our body following this analogy the letters in this book are called nucleotides and determining their order is referred to as sequencing the genome even though the genome sequence of every individual is different within one 
species most of it for humans stays the same when a single nucleotide of the dna differs between two individuals of the same species this difference is called a polymorphism snp pronounced snip and the two variants of the snp are referred to as its alleles association studies compare the dna of two groups of individuals all the individuals in the case group have a same trait for example a specific disease while all the individuals in the control group do not have this trait the snps of the individuals in these groups are compared if one variant of a snp is more frequent in the case group than in the control group it is said that the snp is associated with the trait disease in contrast with other methods for linking traits to snps such as inheritance studies or genetic association studies gwas consider the whole genome the importance of gwas we gathered insightful statistics about all published gwas since the first gwas started to appear in and the amount of yearly published studies has constantly increased reaching more than studies in this trend is summarized in the left panel of fig showing the median of each year s studies along with for the first and second quartiles one can observe that while gwa studies started out relatively small since the amount of analyzed snps is growing tremendously besides the number of snps the other parameter relevant to the implementation of an algorithm is the sample size that is the total number of individuals of both the case and the control group what can be seen in fig is that while it has grown at first in the past four years the median sample size seems to have settled around individuals it is apparent that in contrast to the snp count the growth of the sample size is negligible this data as well as discussions with biologists confirm the need for algorithms and software that can compute a gwas with even more snps and faster than currently possible the mathematics of gwas the gwas can be expressed as a variance component model whose solution ri can be formulated as ri xit m xi xit m y i m where m is in the millions and all variables on the side are known this sequence of equations is used to compute in ri the relations between variations in y the phenotype and a phenotype is the observed value of a certain trait of an individual for example if the studied trait was the hair color the phenotype of an individual would be the one of blonde brown black or red a median snp count b median sample size fig the median first and second quartile of a the and b the sample size of the studies each year variations in xi the genotype each equation is responsible for one snp meaning that the number m of equations corresponds to the number of snps considered in the study figure captures the dimensions of the objects involved in one such equation the height n of the p ri xit xit xi y n n fig the dimensions of a single instance of matrices xi and m and of the vector y corresponds to the number of samples thus each row in the xi corresponds to a piece of each individual s genetic makeup information about one snp and each entry in y rn corresponds to an individual s m models the relations amongst the individuals two individuals being in the same family finally an important feature of the matrices xi is that they can be partitioned as xl where xl contains fixed covariates such as age and sex and thus stays the same for any i while xri is a single column vector containing the genotypes of the snp of all considered individuals in the example of the body height as a trait 
the entries of y would then be the heights of the individuals even though has to be computed for every single snp only the right part of the designmatrix xri changes while xl m and y stay the same the amount of data and computation involved we analyze the storage size requirements for the data involved in gwas typical values for p range between and but only one entry varies with according to our analysis in section we consider n as the size of a study as of june the snp database dbsnp lists known snps for humans so we consider m with these numbers assuming that all data is stored as double precision floating point therefore the size of y and m is about mb and mb respectively both fit in main memory and in the gpu memory the output r reaches gb coming close to the limit of current systems main memory and is too big to fit in a gpu s gb of memory weighting in at tb x is too big to fit into the memory of any system in the foreseeable future and has to be streamed from disk in the field of bioinformatics the probabel library is frequently used for association studies on a sun fire server with an intel xeon cpu ghz the authors report a runtime of almost hours for a problem with p n and m and estimate the runtime with m to be roughly two days compared to the current demand m million is a reasonable amount of snps but a population size of only n individuals is clearly much smaller than the present median fig the authors state that the runtime grows more than linearly with n and in fact tripling up the sample size from to increased their runtime by a factor of coupling this fact with the median sample size of about individuals the computation time is bound to reach weeks or even months prior work the algorithm presently the fastest available algorithm for solving is since our work builds upon this algorithm we describe its salient features algorithmic features exploits the the symmetry and the positive definiteness of the matrix m by decomposing it through a cholesky factorization llt m since m does not depend on i this decomposition can be computed once as a preprocessing step and reused for every instance of substituting llt m into and rearranging we obtain t t ri xi xi xi y for i m effectively replacing the inversion and multiplication of m with the solution of a triangular linear system trsv which may or may not be the optimal storage type more discussion with biologists and analysis of the operations is necessary in order to find out whether float is precise enough if that was the case the sizes should be halved we only consider what the authors called the linear model with the mmscore option as this solves the exact problem we tackle the second piece of knowledge that is exploited by is the structure of x xl xl stays constant for any i while xr varies plugging xi xl into and moving the constant parts out of the loop leads to an algorithm that takes advantage of the structure of the sequence of gls shown in listing the acronyms correspond to blas calls listing solution of the sequence of gls l xl y rt stl for potrf m trsm l xl trsv l y gemv xl y syrk xl i in m xri trsv l xri sbl dot xri xl sbr syrk xri rb dot xri y r posv s r llt m xl y st l xri t sbli i t xbri i t ri implementation features two implementation features allow to attain efficiency first by packing multiple vectors xri into a matrix xrb the slow routine to solve a triangular linear system trsv at line can be transformed into a fast trsm then listing is an algorithm that can not deal with an xr which does not fit into main memory 
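The model above reduces, for each SNP i, to a generalized least-squares solve; reconstructing the garbled formula from the surrounding definitions, it reads \( r_i = (X_i^{T} M^{-1} X_i)^{-1} X_i^{T} M^{-1} y \) for \( i = 1, \dots, m \), with \( X_i = [\, X_L \mid x_{R_i} \,] \) and M symmetric positive definite, so only the last column of each design matrix changes across SNPs. The structure of the listing above — factor M once, "whiten" X_L and y once, then run a cheap per-SNP update — can be sketched in NumPy/SciPy as follows. This is a schematic reconstruction, not the authors' code: the function name gls_gwas, the use of scipy.linalg.cholesky and solve_triangular, and the dense in-memory X_R are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def gls_gwas(M, X_L, X_R, y):
    """Solve r_i = (X_i^T M^-1 X_i)^-1 X_i^T M^-1 y for every SNP,
    where X_i = [X_L | X_R[:, i]]; M is n x n SPD, X_L is n x (p-1),
    X_R is n x m (one column per SNP), y has length n."""
    n, p_fixed = X_L.shape
    m = X_R.shape[1]

    # Preprocessing, done once: M = L L^T, then "whiten" the fixed data.
    L = cholesky(M, lower=True)                    # POTRF
    XL_t = solve_triangular(L, X_L, lower=True)    # TRSM: L^-1 X_L
    y_t = solve_triangular(L, y, lower=True)       # TRSV: L^-1 y
    S_TL = XL_t.T @ XL_t                           # SYRK
    b_T = XL_t.T @ y_t                             # GEMV

    # One large triangular solve for all SNP columns at once (blocked TRSM);
    # this is the step that dominates the runtime and is later moved to the GPU.
    XR_t = solve_triangular(L, X_R, lower=True)

    R = np.empty((m, p_fixed + 1))
    for i in range(m):
        xr = XR_t[:, i]
        S_BL = xr @ XL_t                           # cross term
        S_BR = xr @ xr                             # scalar SYRK
        b_B = xr @ y_t
        # Assemble the small (p x p) SPD system S r = b and solve it (POSV).
        S = np.empty((p_fixed + 1, p_fixed + 1))
        S[:p_fixed, :p_fixed] = S_TL
        S[p_fixed, :p_fixed] = S_BL
        S[:p_fixed, p_fixed] = S_BL
        S[p_fixed, p_fixed] = S_BR
        b = np.concatenate([b_T, [b_B]])
        R[i] = np.linalg.solve(S, b)
    return R
```

A toy call such as gls_gwas(np.eye(4), np.ones((4, 1)), np.random.rand(4, 3), np.random.rand(4)) exercises the whole path; the out-of-core and GPU variants discussed next keep exactly this structure and only change where the large triangular solve runs and how the columns of X_R reach it.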
this limitation is overcome by turning the algorithm into an one in this case using a doublebuffering technique while the cpu is busy computing the block b of xr in a primary buffer the next block b can already be loaded into a secondary buffer through asynchronous the full algorithm is shown in listing this algorithm attains more than efficiency listing the full algorithm l xl y rt stl aio for potrf m trsm l xl trsv l y gemv xl y syrk xl read xr b in blockcount aio read xr aio wait xr b xrb trsm l xrb for xri in xr b sbl gemm xri xl sbr syrk xri rb gemv xri y llt m xl y st l xb t sbli i t xbri i t i r posv s r aio wait r aio write r b aio wait r blockcount ri increasing performance by using gpus while the efficiency of the algorithm is satisfactory the computations can be sped up even more by leveraging multiple gpus with the help of a profiler we determined confirming the intuition that the trsm at line in listing is the bottleneck since cublas provides a implementation of routines trsm it is the best candidate to be executed on gpus in this section we introduce cugwas an algorithm for a single gpu and then extend it to an arbitrary number of gpus before the trsm can be executed on a gpu the algorithm has to transfer the necessary data since the size of l is around mb the matrix can be sent once during the preprocessing step and kept on the gpu throughout the entire computation unfortunately the whole xr matrix weights in at several tb way more than the gb per buffer limit of a modern gpu the same holds true for the result of the trsm which needs to be sent back to main memory thus there is no other choice than to send it in a fashion each block xrb weighting at most gb when profiled a implementation of the algorithm displays a pattern fig typical for applications in which is an both gpu green and cpu gray need to wait for the data transfer orange furthermore the cpu is idle while the gpu is busy and fig profiled timings of the implementation our first objective is to make use of the cpu while the gpu computes the trsm regrettably all operations following the trsm the at lines in listing which we will call the are dependent on its result and thus can not be executed in parallel a way to break out of this dependency is to delay the by one block in a pipeline fashion so that the relative to the block of xr is delayed and executed on the cpu while the gpu executes the trsm with the block thanks to this pipeline we have broken the dependency and introduced more parallelism completely removing the gray part of fig streaming data from hdd to gpu the second problem with the aforementioned implementation is the time wasted due to data transfers modern gpus are capable of overlapping data transfers with computation if properly exploited this feature allows us to eliminate any overhead and thus attain sustained peak performance on the gpu the major obstacle is that the data is already being from the to the main memory a quick analysis shows that when targeting two layers of one layer for disk main memory transfers and another layer for main memory gpu transfers two buffers on each layer are not sufficient anymore the idea here is to have two buffers on the gpu and three buffers on the cpu the buffering can be illustrated from two perspectives the tasks executed and the buffers involved the former is presented in fig we refer the reader to for a thorough description here we only discuss the technique in terms of buffers gpu gpu trsm b gpu trsm send cpu hdd recv send cpu comp recv b read cpu b t 
read write cpu gpu transfer gpu computation data dependencies hdd cpu transfer cpu computation asynchronous dispatch fig a of the algorithm sizes are unrelated to runtime in this scenario the size of the blocks xrb used in the gpu s computation is equal to that on the cpu when using multiple gpus this will not be the case anymore as the cpu loads one large block and distributes portions of it to the gpus the gpu s buffers are used in the same way as the cpu s buffers in the simple algorithm while one buffer is used for the computation the data is transferred to and from the other buffer but on the cpu s level in ram three buffers are now necessary for the sake of simplicity we avoid the explanation of the initial and final iterations and start with iteration b with reference to fig assume that the and blocks already reside in the gpu buffers and in the cpu buffer c respectively the block b buffer contains the solution of the trsm of block b at this point the algorithm proceeds by dispatching both the read of the block b from disk into buffer a and the computation of the trsm on the gpu on buffer and by receiving the result from buffer into buffer b the first two operations are dispatched they are executed asynchronously by the memory system and the gpu while the last one is executed synchronously because these results are needed immediately in the following step as soon as the synchronous transfer b completes the transfer of the next block b from cpu buffer c to gpu buffer is dispatched and the is executed on the cpu for the previous block b in buffer b on the cpu see fig trsm trsm gpu b gpus hdd data x results r c b a b b hdd data x results r a retrieve the previous result b from gpu and the block b of data from disk computation c b a b b send the next block b from ram to the gpu execute the on b on the cpu trsm gpus hdd data x results r b gpus c b a b c write the results b to disk hdd data x results r b c b a b d switch buffers at both levels for the next iteration fig the algorithm as seen from a buffer perspective as soon as the cpu is done computing the its results are written to disk fig finally once all transfers are done buffers are rotated through pointer or index rotations not copies according to fig and the loop continues with b b using multiple gpus this technique achieves sustained peak performance on one gpu since boards with many gpus are becoming more and more common in computing we explain here how our algorithm is adapted to take advantage of all the available parallelism the idea is to increase the size of the xrb blocks by a factor as big as the number of available gpus and then split the trsm among these gpus as long as solving a trsm on the gpu takes longer than loading a large enough block xrb from hdd to cpu this parallelization strategy holds up to any number of gpus since in our systems loading the data from hdd was an order of magnitude faster than the computation of the trsm the algorithm scales up to more gpus than were available listing shows the final version of the conditions for the first and last pair of iterations are provided in parentheses on the right results in order to show the speedups obtained with a single gpu we compare the hybrid algorithm presented in listing using one gpu with the then to determine the scalability of cugwas we compare its runtimes when leveraging and gpus in all of the timings the time to initialize the gpu and the preprocessing lines in listing both in the order of seconds have not been measured the gpu usually takes s to 
fully initialize and the preprocessing takes a few seconds too but depends only on n and this omission is irrelevant for computations that run for hours listing cugwas the black bullet is a placeholder for all gpus l potrf m llt m cublas send l xl trsm l xl xl y trsv l y y rt gemv xl y stl syrk xl st l gpubs for b in cu trsm wait if b in blockcount cu send wait if b in xb cu trsm async if b in blockcount aio read xr a if b in for gpu in ngpus if b in cu recv b gpu gpubs gpubs aio wait xr c if b in for gpu in ngpus if b in cu send async c gpu gpubs gpubs for xri in b if b in t sbl gemm xri xl sbli i t sbr syrk xri xbri t rb gemv xri y i r posv s r ri si aio wait r if b in aio write r if b in results the experiments with a were performed on the quadro cluster at the rwth aachen university the cluster is equipped with two nvidia quadro gpus and two intel xeon cpus per node the gpus which are powered by fermi chips have gb of ram and a theoretical computational power of gflops each in total the cluster has a gpu peak of tflops the cpus which have six cores each amount to a total of gflops and are supported by gb of ram the cost of the combined gpus is estimated to about while the combined cpus cost around figure shows the runtime of along with that of cugwas using one gpu thanks to our strategy we can leverage the gpu s peak performance and achieve a speedup over a implementation cublas trsm implementation attains about of the gpu s peak performance about gflops the peak performance of the cpu in this system amounts to gflops if the whole computation were performed on the gpu at trsm s rate the largest speedup possible would be we achieve because the computation is pipelined the is executed on the cpu in perfect overlap with the gpu this means that the performance of cugwas is perfectly in line with the theoretical peak in addition the figure indicates that the algorithm has linear runtime in m and allows us to cope with an arbitrary the red vertical line in the figure marks the largest value of m for which two blocks of xr fit into the gpu memory for n without the presented multibuffering technique it would not be possible to compute gwas with more than m snps while cugwas allows the solution of gwas with any given amount of snps scalability with multiple gpus to experiment with multiple gpus we used the tesla cluster at the universitat jaume i in spain since it is equipped with an nvidia tesla which contains four fermi chips same model as the quadro system for a combined gpu compute power of tflops but with only gb of ram each the host cpu is an intel xeon delivering approximately gflops in order to evaluate the scalability of cugwas we solved a gwas with p n and m on the tesla cluster varying the number of gpus as it can be seen in fig the scalability of the algorithm with respect to the number of gpus is almost ideal doubling the amount of gpus reduces the runtime by a factor of a runtime with respect to snp count m b scalability with respect to gpu count runtime s runtime s m snp count cugwas number of gpus cugwas ideal scalability fig the runtime of our cugwas algorithm a using compared to using and b using a varying amount of gpus conclusion and future work we have presented a strategy which makes it possible to sustain peak performance on a gpu not only when the data is too big for the gpu s memory but also for main memory in addition we have shown how well this strategy lends itself to exploit an arbitrary number of gpus as described by the developers of probabel the solution of a 
problem of the size described in the section above by the GWFGLS algorithm took hours. In contrast, with cuGWAS we solved the same problem in a fraction of that time, even accounting for the few seconds needed for initialization and for Moore's law doubling the runtime (ProbABEL's timings are from an earlier year); the difference is dramatic. We believe that the contribution of cuGWAS is an important step towards making GWAS practical.

Software. The code implementing the strategy explained in this paper is freely available at http and http .

Acknowledgements. Financial support from the Deutsche Forschungsgemeinschaft (German Research Foundation) through grant GSC is gratefully acknowledged. The authors thank Diego for providing us with the , the Center for Computing and Communication at RWTH Aachen for the resources, Enrique for granting us access to the Tesla system, as well as Yurii Aulchenko for introducing us to the computational challenges of GWAS.

References
Genome-wide association studies. http
Bientinesi. Computing petaflops over terabytes of data: the case of genome-wide association studies. CoRR.
A catalog of published genome-wide association studies.
Aulchenko, Bientinesi. Solving sequences of generalized least-squares problems on multi-threaded architectures. CoRR.
Announcement and corrections: NCBI dbSNP build for human. http
Aulchenko, Struchalin and van Duijn. ProbABEL package for genome-wide association analysis of imputed data. BMC Bioinformatics.
Beyer. Exploiting graphics accelerators for computational biology. Diploma thesis.
Volkov and Demmel. Benchmarking GPUs to tune dense linear algebra. In Proceedings of the Conference on Supercomputing, IEEE Press, Piscataway.
Igual and van de Geijn. A system for programming matrix computations on multithreaded architectures. ACM Transactions on Mathematical Software, ACM, New York.
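The buffering scheme at the heart of cuGWAS — read block b+1 from disk, compute on block b, write block b-1 back, all overlapped — can be illustrated with a small, self-contained sketch. Threads stand in for the asynchronous I/O layer and for the GPU stream, and a CPU-side triangular solve stands in for the cuBLAS TRSM; the names stream_blocks, load_block, compute_block and store_block are illustrative assumptions, not the cuGWAS API.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.linalg import solve_triangular

def stream_blocks(load_block, compute_block, store_block, num_blocks):
    """Process blocks 0..num_blocks-1 so that loading block b+1 and storing
    block b-1 overlap with the computation on block b (double buffering)."""
    io = ThreadPoolExecutor(max_workers=2)          # stands in for async I/O
    next_load = io.submit(load_block, 0)            # prefetch the first block
    pending_store = None
    for b in range(num_blocks):
        current = next_load.result()                # wait until block b is in memory
        if b + 1 < num_blocks:                      # start reading block b+1
            next_load = io.submit(load_block, b + 1)
        result = compute_block(b, current)          # the "GPU" work on block b
        if pending_store is not None:               # block b-1 must be on disk
            pending_store.result()                  # before its buffer is reused
        pending_store = io.submit(store_block, b, result)
    pending_store.result()
    io.shutdown()

# Toy usage: the "computation" is the blocked triangular solve from the GWAS algorithm.
if __name__ == "__main__":
    n, block_cols, num_blocks = 500, 200, 5
    L = np.tril(np.random.rand(n, n)) + n * np.eye(n)
    blocks = [np.random.rand(n, block_cols) for _ in range(num_blocks)]
    out = [None] * num_blocks

    def load(b):            # in the real code: asynchronous read from HDD
        return blocks[b]

    def compute(b, X_Rb):   # in the real code: TRSM on the GPU via cuBLAS
        return solve_triangular(L, X_Rb, lower=True)

    def store(b, R_b):      # in the real code: asynchronous write to HDD
        out[b] = R_b

    stream_blocks(load, compute, store, num_blocks)
```

The point of the structure is that the two submit calls return immediately, so in the steady state waiting only occurs when either the disk or the compute step is genuinely the bottleneck — which is exactly the sustained-peak-performance argument made in the paper.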
5